Discussion: AW: Open Source Database Routs Competition in New Benchmark Tests
> > Given all this performance discussion, has anyone seen any
> > numbers regarding the speed of PostgreSQL vs Oracle?
>
> Oracle and MS SQL Server must have been the two
> "leading commercial RDBMSes" mentioned in the
> article.

They mention Linux as one of the OSes tested. Don't tell me they compared
numbers under different OSes, like PostgreSQL on RedHat and M$Sql on NT.
Thus my conclusion would be that it can't be M$sql.

Andreas
On Wed, 16 Aug 2000, Zeugswetter Andreas SB wrote:

> > > Given all this performance discussion, has anyone seen any
> > > numbers regarding the speed of PostgreSQL vs Oracle?
> >
> > Oracle and MS SQL Server must have been the two
> > "leading commercial RDBMSes" mentioned in the
> > article.
>
> They mention Linux as one of the OSes tested. Don't tell me they compared
> numbers under different OSes, like PostgreSQL on RedHat and M$Sql on NT.
> Thus my conclusion would be that it can't be M$sql.

IMHO, and I think this is pretty common across most people in the computer
field, *any* benchmark generated by *anyone* has to be taken with a very,
very large grain of salt. I don't care if it's Progress benchmarking MySQL
against the rest, or Great Bridge benchmarking PostgreSQL against the rest,
or Oracle doing their own against the rest ... the results of any benchmark
are going to favor whom the benchmarker wants to favor, period. Not because
of any malicious act on the benchmarker's side, but because the results
that are presented generally don't show the whole picture, or the
benchmarker spent a bit more time tweaking the server that they care about,
or ... 101 other reasons ...

From what I've gathered in the threads, the tests that GB did were mainly
SELECT based ... fill a table, vacuum it and then run SELECTs against that
... but if GB were to release their exact tests, could the MySQL folks
re-run those same tests and have them come out in their favor? My guess is
probably ... same with Oracle ... same with Informix ...

Now, a *good* benchmark would be for all the various vendors to get
together, agree on a set of benchmark tests as well as on the environment
(ie. everyone runs on a Dual-PIII 500 with 512 MB of RAM, and these drives,
this OS, etc.) and then each run their own tests ... each vendor could sit
down and optimize their software as only they really know how, and *then*
see how each compares against the other ...
*that* is a test that I'd love to see the results of ... that's just my
opinion ... it's nice to finally see some tests out there that do show us
in front, and I thank GB for going through the trouble of doing this, but
I'm more a believer in what I can *see* in a real-life environment vs what
a test environment shows, which is why I started in with PostgreSQL in the
first place, and why I've stuck with it all these years *shrug*
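[Editor's note: the "fill a table, vacuum it and then run SELECTs against
it" procedure described above can be sketched as a toy harness. This is an
illustrative reconstruction only, not Great Bridge's actual test suite;
the table name, row count, and query count are made up, and Python's
bundled sqlite3 stands in for a real server purely so the sketch is
self-contained.]

```python
import sqlite3
import time

# Toy single-user benchmark in the spirit described above:
# fill a table, analyze it, then time a run of point SELECTs.
# (Illustrative only -- not Great Bridge's actual suite.)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
cur.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [(i, i * 10) for i in range(10_000)],
)
conn.commit()
# Rough analogue of PostgreSQL's VACUUM ANALYZE step: refresh statistics.
cur.execute("ANALYZE")

n_queries = 1_000
start = time.perf_counter()
for i in range(n_queries):
    cur.execute("SELECT balance FROM accounts WHERE id = ?", (i,))
    cur.fetchone()
elapsed = time.perf_counter() - start
print(f"{n_queries} indexed SELECTs in {elapsed:.3f}s "
      f"({n_queries / elapsed:.0f} queries/sec)")
```

As the thread notes, a harness this shape measures only simple read
throughput; which engine "wins" depends entirely on the query mix chosen.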
At 10:05 AM 8/16/00 -0300, The Hermit Hacker wrote:

> From what I've gathered in the threads, the tests that GB did were mainly
> SELECT based ... fill a table, vacuum it and then run SELECTs against that
> ... but if GB were to release their exact tests, could the MySQL folks
> re-run those same tests and have them come out in their favor? My guess
> is probably ...

I wouldn't bet on it. Even MySQL's own benchmark page shows Postgres 6.5
beating MySQL for SELECTs with JOIN, quite handily. Since real-world
database usage depends heavily on JOINs, I wouldn't be surprised if the
standard benchmark used by Xperts contains lots and lots of joins, which
would tend to make MySQL run slow.

MySQL is good at one thing, and one thing only: running simple queries in
single-user mode.

> Now, a *good* benchmark would be for all the various vendors getting
> together, agreeing on a set of benchmark tests as well as agreeing on the
> environment (ie. everyone runs on a Dual-PIII 500 with 512Meg of RAM, and
> these drives, this OS, etc) and they each run their own tests ... then
> each vendor could sit down and optimize their software as only they really
> know how and *then* see how each compares against the other ...
> *that* is a test that I'd love to see the results of ...

That's how the normal TPC testing is done, I believe. Except on huge
honkin' hardware.

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at http://donb.photo.net.
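[Editor's note: the point above is that the query mix decides the winner:
an engine tuned for simple single-table lookups can look very different
once joins and aggregates enter the picture. A minimal sketch of that
contrast follows; the schema, row counts, and repetition counts are all
invented for illustration, and sqlite3 is used only because it ships with
Python -- the shape of the comparison, not the numbers, is the point.]

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER, total INTEGER)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, f"cust{i}") for i in range(1_000)])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 1_000, i) for i in range(20_000)])
cur.execute("CREATE INDEX idx_orders_cust ON orders(customer_id)")
conn.commit()

def timed(sql, params=(), reps=100):
    """Total wall-clock time for `reps` executions of one query."""
    start = time.perf_counter()
    for _ in range(reps):
        cur.execute(sql, params).fetchall()
    return time.perf_counter() - start

# The kind of query a simple-SELECT benchmark rewards ...
t_simple = timed("SELECT total FROM orders WHERE id = ?", (123,))
# ... versus the join-plus-aggregate shape real workloads lean on.
t_join = timed(
    "SELECT c.name, SUM(o.total) FROM customers c "
    "JOIN orders o ON o.customer_id = c.id GROUP BY c.id")
print(f"simple lookup: {t_simple:.3f}s   join+aggregate: {t_join:.3f}s")
```

A benchmark weighted toward the first query flatters one kind of engine;
one weighted toward the second flatters another, which is exactly why the
chosen mix matters.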
> That's how the normal TPC testing is done, I believe. Except on huge
> honkin' hardware.

Right. And it's on huge honkin' hardware because you won't see a number
published which doesn't *win* in the commercial wars. That said, I suppose
you wouldn't have seen the GB results if they had turned out sucky for
Postgres. You probably wouldn't see GB anywhere if Postgres hadn't been
competitive in performance during the evaluation phase of the company
startup :)

otoh, GB *did* do the tests on hardware representative of equipment small-
and medium-sized companies would be using, and has been (afaik) forthcoming
about the test setup (at least to the extent that they can, given the
restrictive licensing of some of the tested products).

- Thomas
On Wed, 16 Aug 2000, Don Baccus wrote:

> > *that* is a test that I'd love to see the results of ...
>
> That's how the normal TPC testing is done, I believe. Except on huge
> honkin' hardware.

Okay, my understanding of the 'normal TPC testing' is that Oracle goes out,
buys this major system to run their tests on and submits those ... then
MicroSloth goes out and buys one that happens to be bigger and faster to
run theirs on and submits those ...

... the idea being that you basically invest the money into the hardware
required to make yours look good, but all run the same test suite ...
At 03:26 PM 8/16/00 +0000, Thomas Lockhart wrote:

> > That's how the normal TPC testing is done, I believe. Except on huge
> > honkin' hardware.
>
> Right. And it's on huge honkin' hardware because you won't see a number
> published which doesn't *win* in the commercial wars. That said, I
> suppose you wouldn't have seen the GB results if they turned out sucky
> for Postgres. You probably wouldn't see GB anywhere if Postgres wasn't
> competitive in performance during their evaluation phase of the company
> startup :)

Exactly! Little Stick Over River, maybe, but not Great Bridge with $25M of
funding!

> otoh, GB *did* do the tests on hardware representative of equipment
> small- and medium-sized companies would be using, and has been (afaik)
> forthcoming about the test setup (at least to the extent that they can
> given the restrictive licensing of some of the tested products).

They even split index and data files onto different platters in Ora...oops,
"Proprietary 1, V8.1.5". Seems more than fair ...

- Don Baccus, Portland OR <dhogaza@pacifier.com>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at http://donb.photo.net.