In the olden days of programming, back when you had to code by candlelight, it was considered acceptable to release beta versions that contained tons of bugs. For the most part, people avoided using beta software because they knew it could bite them. The only people who tried beta versions were hard-core geeks who knew what to expect. They expected show-stopping bugs, and plenty of 'em.
Those days are over, though. The mainstreaming of software, coupled with the increasing prevalence of beta versions, has resulted in the widespread use of pre-release software. People no longer consider "beta" to mean "use at your own risk." And given the years-long beta status of web applications like Gmail, who could blame them?
These days, a lot of people get their first impression of your product by trying out a beta version. It doesn't matter how much you warn them about problems: if your beta isn't polished, they'll forever think of your work as low-grade crap. And if they run into any annoying bugs, they may steer clear of your software in the future.
So unlike the carefree code-slinging days of my youth, today I've got to make sure my beta versions are as solid as possible. I have to treat public betas with almost as much care as the final release.
In some ways I view this as a good thing, because it forces me to pay more attention to potential problems as I'm coding. But it's also a bad thing, because it's flat-out impossible to test desktop software against every combination of hardware and software that exists in the wild. I have to release a public beta in order to uncover bugs tied to specific configurations, but quite often the users who encounter these problems never report them - they simply uninstall the software and move on.