Jim C. Nasby wrote:
> On Mon, May 15, 2006 at 02:18:03PM -0400, Tom Lane wrote:
>
>> "Jim C. Nasby" <jnasby@pervasive.com> writes:
>>
>>> A recent post Tom made in -bugs about how bad performance would be if we
>>> spilled after-commit triggers to disk got me thinking... There are
>>> several operations the database performs that potentially spill to disk.
>>> Given that any time that happens we end up caring much less about CPU
>>> usage and much more about disk IO, for any of these cases that use
>>> non-random access, compressing the data before sending it to disk would
>>> potentially be a sizeable win.
>>>
>> Note however that what the code thinks is a spill to disk and what
>> actually involves disk I/O are two different things. If you think
>> of it as a spill to kernel disk cache then the attraction is a lot
>> weaker...
>>
>
> I'm really starting to see why other databases want the OS out of their
> way...
>
Some of it is pure NIH syndrome. I recently heard of some tests done by
a major DB team that showed their finely crafted raw file system stuff
performing at best a few percent better than a standard file system, and
sometimes worse. I have often heard of the supposed benefits of
bypassing the OS and managing storage ourselves, but I am very dubious
about it. What makes people think that we could do any better than the
OS guys?
cheers
andrew
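
[Editor's note: a minimal sketch of the compression-before-spill idea raised
upthread. This is illustrative only, not PostgreSQL's actual spill code; the
record format and function names are hypothetical. The point is that for
sequential-access spill data, a fast compression level trades cheap CPU for
reduced disk (or kernel-cache) traffic.]

```python
import io
import zlib

def spill_compressed(records, f):
    """Serialize records and write them zlib-compressed (hypothetical spill format)."""
    raw = "\n".join(records).encode()
    # Use a fast compression level: the premise upthread is that once we
    # spill, I/O volume matters far more than CPU cycles.
    f.write(zlib.compress(raw, level=1))
    return len(raw), f.tell()

def read_spilled(f):
    """Read the whole spill file back and recover the original records."""
    f.seek(0)
    return zlib.decompress(f.read()).decode().split("\n")

# Simulate spilling a batch of repetitive after-commit trigger events.
buf = io.BytesIO()
records = ["trigger event %d on table foo" % i for i in range(1000)]
raw_size, compressed_size = spill_compressed(records, buf)
assert read_spilled(buf) == records
assert compressed_size < raw_size  # repetitive spill data compresses well
```

Note Tom's caveat still applies: if the "spill" lands in the kernel's page
cache rather than on the platters, the I/O savings shrink and the CPU cost
of compressing may not pay for itself.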