Re: per table random-page-cost?
From: Greg Stark
Subject: Re: per table random-page-cost?
Date:
Msg-id: 407d949e0910221101te247c85me4e1027d8090d405@mail.gmail.com
In reply to: Re: per table random-page-cost? (Cédric Villemain <cedric.villemain@dalibo.com>)
Responses:
  Re: per table random-page-cost?
  Re: per table random-page-cost?
  Re: per table random-page-cost?
List: pgsql-hackers
On Thu, Oct 22, 2009 at 8:16 AM, Cédric Villemain
<cedric.villemain@dalibo.com> wrote:
> You can have situation where you don't want some tables go to OS memory

I don't think this is a configuration we want to cater for. The sysadmin shouldn't be required to understand the I/O pattern of Postgres. He or she cannot know whether the database will want to access the same blocks twice for internal algorithms, in ways that aren't visible from the user's point of view.

The scenarios where you might want to do this are ones where you know there are tables which are accessed very randomly, with no locality and very low cache hit rates. I think the direction we want to head is towards making sure the cache manager is automatically resistant to such data.

There is another use case which perhaps needs to be addressed: the user has some queries which are very latency-sensitive and others which are not. In that case it might be very important to keep the pages of data used by the high-priority queries in the cache. That's something we should have a high-level abstract interface for, not something that depends on low-level system features.

--
greg
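For concreteness, here is a minimal sketch of the page-cost knobs this thread is circling around. The tablespace name fast_ssd, its path, and the table hot_lookup_table are hypothetical; note also that the per-tablespace form only appeared later than this thread, in PostgreSQL 9.0.

    -- Session/global setting that already exists: affects every table.
    SET random_page_cost = 2.0;

    -- Per-tablespace override (PostgreSQL 9.0 and later);
    -- "fast_ssd" is a hypothetical tablespace used for illustration.
    CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';
    ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1, seq_page_cost = 1.0);

    -- Moving a table onto that tablespace gives it, in effect,
    -- a per-table random_page_cost.
    ALTER TABLE hot_lookup_table SET TABLESPACE fast_ssd;

As far as I know, a true per-relation random_page_cost storage parameter never materialized; the per-tablespace setting is the closest equivalent.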