Re: Estimating seq_page_fetch and random_page_fetch
From: Jim C. Nasby
Subject: Re: Estimating seq_page_fetch and random_page_fetch
Date:
Msg-id: 20070308210913.GX24979@nasby.net
In reply to: Re: Estimating seq_page_fetch and random_page_fetch (Gregory Stark <stark@enterprisedb.com>)
List: pgsql-hackers
On Thu, Mar 08, 2007 at 05:35:03PM +0000, Gregory Stark wrote:
>
> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>
> > "Umar Farooq Minhas" <umarfm13@hotmail.com> writes:
> >> How can we accurately estimate the "seq_page_fetch" and
> >> "random_page_fetch" costs from outside Postgres using, for example, a
> >> C routine?
> >
> > Use a test case larger than memory. Repeat many times to average out
> > noise. IIRC, when I did the experiments that led to the current
> > random_page_cost of 4.0, it took about a week before I had numbers I
> > trusted.
>
> When I was running tests I did it on a filesystem where nothing else was
> running. Between tests I unmounted and remounted it. As I understand it,
> Linux associates the cache with the filesystem and not the block device,
> and discards all pages from cache when the filesystem is unmounted.
>
> That doesn't contradict anything Tom said; it might be useful as an
> additional tool though.

Another trick I've used in the past is to just run the machine out of
memory, using the following:

/*
** $Id: clearmem.c,v 1.1 2003/06/29 20:41:33 decibel Exp $
**
** Utility to allocate and zero a chunk of memory. Useful for
** flushing disk buffers.
*/
#include <stdio.h>   /* printf */
#include <stdlib.h>  /* calloc, atoi */

int main(int argc, char *argv[])
{
    /* Allocate argv[1] megabytes; calloc zeroes the pages, forcing the
     * kernel to actually back them with physical memory. */
    if (!calloc(atoi(argv[1]), 1024 * 1024))
        printf("Error allocating memory.\n");
    return 0;
}

I'll monitor top while that's running to ensure that some stuff gets
swapped out to disk. I believe this might still leave some cached data in
other areas of the kernel, but it's probably not enough to worry about.
--
Jim Nasby                                    jim@nasby.net
EnterpriseDB                      http://enterprisedb.com
512.569.9461 (cell)