Re: reducing random_page_cost from 4 to 2 to force index scan
From: Jesper Krogh
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Msg-id: 4DD0ACD0.1090801@krogh.cc
In reply to: Re: reducing random_page_cost from 4 to 2 to force index scan (Jesper Krogh <jesper@krogh.cc>)
Responses: Re: reducing random_page_cost from 4 to 2 to force index scan
           Re: reducing random_page_cost from 4 to 2 to force index scan
List: pgsql-performance
On 2011-05-16 06:41, Jesper Krogh wrote:
> On 2011-05-16 03:18, Greg Smith wrote:
>> You can't do it in real-time. You don't necessarily want that even
>> if it were possible; too many possibilities for nasty feedback
>> loops where you always favor using some marginal index that happens
>> to be in memory, and therefore never page in things that would be
>> faster once they're read. The only reasonable implementation that
>> avoids completely unstable plans is to scan this data periodically
>> and save some statistics on it--the way ANALYZE does--and then have
>> that turn into a planner input.
>
> Would that be feasible? Have a process collect the data every now and
> then, probably picking some conservative-average function, and feed
> it into pg_stats for each index/relation?
>
> To me it seems like a robust and fairly trivial way to get better
> numbers. The fear is that the OS cache is too much in flux to get any
> stable numbers out of it.

Ok, it may not work as well with indexes, since having 1% in cache may
very well mean that 90% of all requested blocks are there; for tables
it should be more trivial.

--
Jesper
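The periodic-collection idea discussed above could be sketched roughly as follows. This is a hypothetical illustration, not anything PostgreSQL actually implements: it assumes some external source of cached-fraction samples per relation (e.g. from pg_buffercache or OS-level mincore), applies an invented conservative smoothing so a briefly hot cache never inflates the estimate, and interpolates an effective page cost between the default seq_page_cost and random_page_cost. All function names and parameters here are made up for the sketch.

```python
# Hypothetical sketch of a "conservative-average" over periodic
# cache-residency samples, feeding an effective random-page cost.
# Not actual PostgreSQL behavior; an illustration of the idea only.

SEQ_PAGE_COST = 1.0      # PostgreSQL default seq_page_cost
RANDOM_PAGE_COST = 4.0   # PostgreSQL default random_page_cost

def conservative_average(samples, alpha=0.2):
    """Smooth cached-fraction samples asymmetrically: drops in
    residency are trusted immediately (the cache really got colder),
    while rises are absorbed slowly, so a momentarily warm cache
    cannot flip plans back and forth (an invented policy)."""
    est = samples[0]
    for s in samples[1:]:
        if s < est:
            est = s                        # trust eviction right away
        else:
            est = est + alpha * (s - est)  # absorb warming gradually
    return est

def effective_random_page_cost(cached_fraction):
    """Interpolate: a fully cached relation costs about the same as a
    sequential fetch; a fully uncached one costs the full random cost."""
    return RANDOM_PAGE_COST - cached_fraction * (RANDOM_PAGE_COST - SEQ_PAGE_COST)

# Example: a relation whose cached fraction fluctuates between samples.
samples = [0.90, 0.95, 0.40, 0.85, 0.88]
frac = conservative_average(samples)
cost = effective_random_page_cost(frac)
```

The asymmetric smoothing is one way to address the stability worry quoted above: even if the OS cache is in flux, the estimate only climbs slowly, so the planner is biased toward the pessimistic (uncached) cost rather than chasing a transient.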