Re: Lock problem with autovacuum truncating heap
From:        Jan Wieck
Subject:     Re: Lock problem with autovacuum truncating heap
Date:
Msg-id:      4D8E4725.5000908@Yahoo.com
In reply to: Re: Lock problem with autovacuum truncating heap  (Simon Riggs <simon@2ndQuadrant.com>)
Responses:   Re: Lock problem with autovacuum truncating heap
List:        pgsql-hackers
On 3/26/2011 12:12 PM, Simon Riggs wrote:
> On Sat, Mar 26, 2011 at 2:30 PM, Jan Wieck <JanWieck@yahoo.com> wrote:
>
>> My current idea for a fix is to modify lazy_truncate_heap(). It does
>> acquire and release the exclusive lock, so it should be possible to do
>> this in smaller chunks, releasing and reacquiring the lock so that
>> client transactions can get their work done as well.
>
> Agreed, presumably with vacuum delay in there as well?

Not sure about that. My theory is that unless somebody needs access to
that table, we should just have at it like we do now. The current
implementation seems to assume that the blocks being checked for
emptiness are still found in memory (vacuum just scanned them). That
seems to be correct most of the time, in which case adding vacuum delay
would only give the blocks more time to get evicted and have to be read
back in.

>> At the same time I would change count_nondeletable_pages() so that it
>> uses a forward scan direction (if that leads to a speedup).
>
> Do we need that? Linux readahead works in both directions doesn't it?
> Guess it wouldn't hurt too much.
>
> BTW does it read the blocks at that point using a buffer strategy?

Is reading a file backwards "in 8K blocks" actually an access pattern
that may confuse buffer strategies? I don't know. I also don't know if
what I am suggesting is much better; if you think about it, I merely
suggested trying the same access pattern with larger chunks. We need to
run some tests to find out.


Jan

--
Anyone who trades liberty for security deserves neither liberty nor security.
-- Benjamin Franklin
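
A minimal sketch, in C against the PostgreSQL backend, of the chunked
truncation Jan describes above. This is not the actual patch: the name
lazy_truncate_heap_chunked, the TRUNCATE_CHUNK_PAGES knob, and the
clamping logic are assumptions for illustration. ConditionalLockRelation(),
RelationGetNumberOfBlocks(), and RelationTruncate() are real backend
calls; count_nondeletable_pages() is the existing static helper in
vacuumlazy.c.

    /* Hypothetical sketch; assumes the includes and types of vacuumlazy.c. */
    #define TRUNCATE_CHUNK_PAGES 1024   /* assumed tuning knob */

    static void
    lazy_truncate_heap_chunked(Relation onerel, LVRelStats *vacrelstats)
    {
        BlockNumber old_rel_pages = vacrelstats->rel_pages;
        BlockNumber new_rel_pages;

        while (old_rel_pages > vacrelstats->nonempty_pages)
        {
            /*
             * Take the exclusive lock only briefly.  If somebody else
             * wants the table, give up this round; a later vacuum retries.
             */
            if (!ConditionalLockRelation(onerel, AccessExclusiveLock))
                return;

            /*
             * Re-check the length under the lock: if the relation was
             * extended since we last looked, someone is using the tail.
             */
            if (RelationGetNumberOfBlocks(onerel) != old_rel_pages)
            {
                UnlockRelation(onerel, AccessExclusiveLock);
                return;
            }

            /* Verify the tail pages are still empty (scans backwards today). */
            new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);
            if (new_rel_pages >= old_rel_pages)
            {
                /* nothing left to truncate */
                UnlockRelation(onerel, AccessExclusiveLock);
                break;
            }

            /* Truncate at most one chunk per lock acquisition. */
            if (old_rel_pages - new_rel_pages > TRUNCATE_CHUNK_PAGES)
                new_rel_pages = old_rel_pages - TRUNCATE_CHUNK_PAGES;

            RelationTruncate(onerel, new_rel_pages);
            UnlockRelation(onerel, AccessExclusiveLock);

            old_rel_pages = new_rel_pages;
        }

        vacrelstats->rel_pages = old_rel_pages;
    }

Note that for the lock hold time to really shrink, the emptiness check
itself would also have to be bounded to the current chunk: as written,
count_nondeletable_pages() rescans the whole deletable tail under each
lock acquisition.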
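On the forward-scan question, a sketch of what a forward-reading
emptiness check over a single candidate chunk might look like. The
helper name and the [start, end) chunk bounds are assumptions; the real
count_nondeletable_pages() scans backwards from the end of the relation
and applies a stricter page test than PageIsEmpty(). vac_strategy is
vacuumlazy.c's file-static BAS_VACUUM buffer access strategy, which is
also the answer to Simon's aside: these reads do go through a buffer
strategy.

    /*
     * Hypothetical forward-scan variant, checking one chunk of candidate
     * tail pages in ascending block order so OS readahead can help.
     * Returns the block number the relation could be truncated to.
     */
    static BlockNumber
    count_nondeletable_pages_forward(Relation onerel,
                                     BlockNumber start, BlockNumber end)
    {
        BlockNumber blkno;
        BlockNumber result = start;

        for (blkno = start; blkno < end; blkno++)
        {
            Buffer      buf;

            /* vac_strategy: vacuum's BAS_VACUUM ring, as in vacuumlazy.c */
            buf = ReadBufferExtended(onerel, MAIN_FORKNUM, blkno,
                                     RBM_NORMAL, vac_strategy);
            LockBuffer(buf, BUFFER_LOCK_SHARE);

            /* simplified test; the real code also inspects line pointers */
            if (!PageIsEmpty(BufferGetPage(buf)))
                result = blkno + 1;     /* truncation must stop above here */

            UnlockReleaseBuffer(buf);
        }

        return result;
    }

Whether this actually beats the backward scan is exactly the open
question in the mail; as Jan says, it needs tests.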