Possible performance improvement: buffer replacement policy

From: Tom Lane
Subject: Possible performance improvement: buffer replacement policy
Msg-id: 1601.967421129@sss.pgh.pa.us
Replies: Re: Possible performance improvement: buffer replacement policy  (Bruce Momjian <pgman@candle.pha.pa.us>)
         Re: Possible performance improvement: buffer replacement policy  (Bruce Momjian <pgman@candle.pha.pa.us>)
List: pgsql-hackers
Those of you with long memories may recall a benchmark that Edmund Mergl
drew our attention to back in May '99.  That test showed extremely slow
performance for updating a table with many indexes (about 20).  At the
time, it seemed the problem was due to bad performance of btree with
many equal keys, so I thought I'd go back and retry the benchmark after
this latest round of btree hackery.

The good news is that btree itself seems to be pretty well fixed; the
bad news is that the benchmark is still slow for large numbers of rows.
The problem is I/O: the CPU mostly sits idle waiting for the disk.
As best I can tell, the difficulty is that the working set of pages
needed to update this many indexes is too large compared to the number
of disk buffers Postgres is using.  (I was running with -B 1000 and
looking at behavior for a 100000-row test table.  This gave me a table
size of 3876 pages, plus 11526 pages in 20 indexes.)

Of course, there's only so much we can do when the number of buffers
is too small, but I still started to wonder if we are using the buffers
as effectively as we can.  Some tracing showed that most of the pages
of the indexes were being read and written multiple times within a
single UPDATE query, while most of the pages of the table proper were
fetched and written only once.  That says we're not using the buffers
as well as we could; the index pages are not being kept in memory when
they should be.  In a query like this, we should displace main-table
pages sooner to allow keeping more index pages in cache --- but with
the simple LRU replacement method we use, once a page has been loaded
it will stay in cache for at least the next NBuffers (-B) page
references, no matter what.  With a large NBuffers that's a long time.
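
(To illustrate the point, here is a minimal standalone sketch of strict-LRU
replacement -- a toy with a logical clock, not the real bufmgr code -- showing
why a page that has just been read in cannot become the replacement victim
until NBuffers further page references have gone by.)

#include <stdio.h>

#define NBUFFERS 4              /* toy stand-in for -B */

static int  page_in[NBUFFERS];  /* page held by each buffer, -1 = empty */
static long lastuse[NBUFFERS];  /* logical time of most recent reference */
static long now = 0;

/* Reference a page, replacing the least-recently-used buffer on a miss. */
static void
fetch(int page)
{
    int     i,
            victim = 0;

    now++;
    for (i = 0; i < NBUFFERS; i++)
    {
        if (page_in[i] == page)
        {
            lastuse[i] = now;   /* hit: just bump recency */
            return;
        }
    }
    for (i = 1; i < NBUFFERS; i++)      /* miss: pick oldest reference */
    {
        if (lastuse[i] < lastuse[victim])
            victim = i;
    }
    if (page_in[victim] >= 0)
        printf("t=%ld: page %d replaces page %d\n",
               now, page, page_in[victim]);
    page_in[victim] = page;
    lastuse[victim] = now;
}

int
main(void)
{
    int     i;

    for (i = 0; i < NBUFFERS; i++)
    {
        page_in[i] = -1;
        lastuse[i] = 0;
    }
    fetch(999);                 /* read one heap page once... */
    for (i = 0; i < 8; i++)     /* ...then a stream of other pages */
        fetch(i);
    /* page 999 is not evicted until NBUFFERS further references have
       occurred, even though it is never touched again */
    return 0;
}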

I've come across an interesting article:

	The LRU-K Page Replacement Algorithm For Database Disk Buffering
	Elizabeth J. O'Neil, Patrick E. O'Neil, Gerhard Weikum
	Proceedings of the 1993 ACM SIGMOD International Conference
	on Management of Data, May 1993

(If you subscribe to the ACM digital library, you can get a PDF of this
from there.)  This article argues that standard LRU buffer management is
inherently not great for database caches, and that it's much better to
replace pages on the basis of time since the K'th most recent reference,
not just time since the most recent one.  K=2 is enough to get most of
the benefit.  The big win is that you are measuring an actual page
interreference time (between the last two references) and not just
dealing with a lower-bound guess on the interreference time.  Frequently
used pages are thus much more likely to stay in cache.
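
Here's an equally minimal sketch of what LRU-2 victim selection looks like
(again a toy, not a patch against bufmgr.c): each buffer remembers its two
most recent reference times, and the victim is the buffer whose second-most-
recent reference is oldest, falling back to plain LRU among pages that have
been referenced only once.  The effect is that a page touched twice in short
order, like a busy index page, won't be displaced by a parade of once-read
heap pages.

#include <stdio.h>

#define NBUFFERS 4

typedef struct
{
    int     page;               /* page held, -1 if empty */
    long    last;               /* time of most recent reference */
    long    prev;               /* time of second-most-recent reference,
                                 * 0 if there has been only one */
} Buffer;

static Buffer buffers[NBUFFERS];
static long now = 0;

static void
fetch(int page)
{
    int     i,
            victim = 0;

    now++;
    for (i = 0; i < NBUFFERS; i++)
    {
        if (buffers[i].page == page)
        {
            buffers[i].prev = buffers[i].last;  /* hit: shift the history */
            buffers[i].last = now;
            return;
        }
    }

    /*
     * Miss: evict the buffer whose penultimate reference is oldest; break
     * ties (e.g. among pages referenced only once) by last reference.
     */
    for (i = 1; i < NBUFFERS; i++)
    {
        if (buffers[i].prev < buffers[victim].prev ||
            (buffers[i].prev == buffers[victim].prev &&
             buffers[i].last < buffers[victim].last))
            victim = i;
    }
    if (buffers[victim].page >= 0)
        printf("t=%ld: page %d replaces page %d\n",
               now, page, buffers[victim].page);
    buffers[victim].page = page;
    buffers[victim].prev = 0;   /* only one reference known so far */
    buffers[victim].last = now;
}

int
main(void)
{
    int     i;

    for (i = 0; i < NBUFFERS; i++)
    {
        buffers[i].page = -1;
        buffers[i].last = buffers[i].prev = 0;
    }
    fetch(7);
    fetch(7);                   /* an "index" page referenced twice */
    for (i = 100; i < 110; i++) /* once-referenced "heap" pages */
        fetch(i);
    /* the singly-referenced pages displace each other; page 7 stays put */
    return 0;
}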

It looks like it wouldn't take too much work to replace shared buffers
on the basis of LRU-2 instead of LRU, so I'm thinking about trying it.

Has anyone looked into this area?  Is there a better method to try?
        regards, tom lane

