Re: How to keep a table in memory?
From | Gregory Stark |
---|---|
Subject | Re: How to keep a table in memory? |
Date | |
Msg-id | 87mytiqytt.fsf@oxford.xeocode.com |
In response to | Re: How to keep a table in memory? (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: How to keep a table in memory? |
List | pgsql-hackers |
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> I'd be inclined to think instead about a scheme that lets references made
> by higher-priority queries bump buffers' use-counts by more than 1, or
> some other way of making the priority considerations visible to an
> automatic cache management algorithm.

I don't think that really solves the problem. Consider a case where you have a few dozen queries, all of which use indexes to access only a few pages per call (but spread across a large enough table), except for just one query which uses a sequential scan of a moderately sized table. In such a circumstance the best average performance might come from keeping the pages used by the index scans in memory and forcing most of the sequential scan to go to disk, especially if the sequential scan is fairly rare and random_page_cost is fairly high.

However, if your concern is response time rather than average performance, that would be disastrous. In exchange for a slight improvement to already-fast queries you would get an unsatisfactory response time for the sequential scan.

I'm not sure what the solution is. This scenario is going to be a problem for any system which tries to judge future usage based on past usage. If the infrequent query with a strict response-time requirement is infrequent enough, any automatic algorithm will evict its pages.

Some brainstorming ideas: what if a prepared query which previously ran under some specified response-time guarantee didn't bump the usage counts at all? That way, frequently run queries which are fast enough even with disk accesses don't evict pages needed by slower queries.

Or better yet, tag each prepared query with its average (or 90th-percentile, or something like that) response time from the past, and tag every buffer it touches with that response time if it's greater than what the buffer is already tagged with. When scanning for a page to evict, we ignore any buffer with a response time larger than ours.
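As a rough illustration of that tagging idea, here is a toy Python model of a clock-sweep-style eviction pass over tagged buffers. This is a sketch under stated assumptions, not PostgreSQL's actual C buffer manager; the names `Buffer`, `touch`, and `pick_victim` are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Buffer:
    page: str
    usage_count: int = 0
    response_tag: float = 0.0  # worst response time of any query that touched it

def touch(buf, query_response_time):
    """Record a query's access: bump the usage count and raise the
    buffer's response-time tag if this query is slower."""
    buf.usage_count += 1
    buf.response_tag = max(buf.response_tag, query_response_time)

def pick_victim(buffers, query_response_time):
    """Clock-sweep-style scan. A query may only evict buffers whose tag
    does not exceed its own response time, so fast queries never push
    out pages needed by slower ones."""
    hand = 0
    for _ in range(2 * len(buffers)):  # bounded sweep
        buf = buffers[hand]
        hand = (hand + 1) % len(buffers)
        if buf.response_tag > query_response_time:
            continue  # protected by a slower query's tag
        if buf.usage_count > 0:
            buf.usage_count -= 1  # second chance
        else:
            return buf
    return None  # every buffer is protected or still in use
```

In this toy model, a buffer touched by a 5-second query is simply invisible to the eviction scan of a 200 ms query; only a query at least as slow, or an ad-hoc one carrying no tag limit, would be allowed to reclaim it.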
I.e., queries which respond quickly are not allowed to evict buffers needed by queries which respond more slowly than them. Only a slower or ad-hoc non-prepared query is allowed to evict those pages.

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!