On Fri, 31 May 2002, Tom Lane wrote:
> On a pure random-chance basis, you'd not expect that fetching 5k rows
> out of 100m would hit the same table block twice --- but I'm wondering
> if the data was somewhat clustered.
I don't think so. I generated the data myself, and the data are
entirely pseudo-random. (Unless perl's PRNG is quite screwed,
which seems unlikely.)
> Do the system usage stats on your machine reflect the difference
> between physical reads and reads satisfied from kernel buffer cache?
Well, not that I've looked at yet, but it would definitely
be cool if I could figure out a way to do this.
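One crude way to tell them apart without kernel instrumentation is
latency: a read satisfied from the kernel buffer cache returns in
microseconds, while one that goes to the platter pays a seek, which
costs milliseconds. A rough sketch of that idea (Python shown for
brevity; the 1 ms threshold is a guess to tune per machine, not a
measured constant):

```python
import os
import time

BLOCK = 8192  # read unit, matching Postgres's default block size

def classify_reads(path, offsets, threshold_s=0.001):
    """Time one BLOCK-sized pread per offset; count reads that finish
    faster than threshold_s as cache hits, slower ones as physical."""
    cached = physical = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        for off in offsets:
            t0 = time.perf_counter()
            os.pread(fd, BLOCK, off)
            if time.perf_counter() - t0 < threshold_s:
                cached += 1
            else:
                physical += 1
    finally:
        os.close(fd)
    return cached, physical
```

After warming the file (e.g. reading it once), nearly every read should
land in the "cached" bucket; on a cold cache the split should roughly
track the physical-transfer counts that iostat(8) reports.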
> Or maybe your idea about extra seek time is correct.
If I can get a spare couple of hours, I'll cons up a little benchmark
program to measure seek times on different parts of a disk, and play
a bit.
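For what it's worth, the core of such a benchmark is just timing
block-aligned random reads in different regions of a file or raw
device. A minimal sketch (Python for brevity; the target must be much
larger than RAM, or a raw device, for the numbers to reflect real
seeks rather than buffer-cache hits):

```python
import os
import random
import time

BLOCK = 8192
NREADS = 200

def region_latency(path, lo, hi, nreads=NREADS):
    """Mean seconds per block-aligned random read in byte range [lo, hi)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        t0 = time.perf_counter()
        for _ in range(nreads):
            off = random.randrange(lo, hi - BLOCK)
            off -= off % BLOCK  # align the seek to a block boundary
            os.pread(fd, BLOCK, off)
        return (time.perf_counter() - t0) / nreads
    finally:
        os.close(fd)

def seek_profile(path):
    """Compare average read latency in the first and last thirds of the
    target, where outer vs. inner disk zones (and longer seeks between
    them) should show up as different per-read times."""
    size = os.path.getsize(path)
    third = size // 3
    return (region_latency(path, 0, third),
            region_latency(path, size - third, size))
```

Run against a raw disk device, the two numbers should differ by the
zone transfer-rate and short-seek effects being speculated about here.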
But on the other hand, it's not like I can do all that much about this
sort of problem, anyway, and I'm kind of doubting that the fault here
lies with postgres.
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC