Re: autovacuum_work_mem
From | Simon Riggs |
---|---|
Subject | Re: autovacuum_work_mem |
Date | |
Msg-id | CA+U5nMJiTwk=u0_2AVt+3SgkQ065CRKoqFuiLKUkRFeXasaNYg@mail.gmail.com |
In reply to | Re: autovacuum_work_mem (Peter Geoghegan <pg@heroku.com>) |
Responses | Re: autovacuum_work_mem |
List | pgsql-hackers |
On 25 November 2013 21:51, Peter Geoghegan <pg@heroku.com> wrote:
> On Sun, Nov 24, 2013 at 9:06 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> VACUUM uses 6 bytes per dead tuple. And autovacuum regularly removes
>> dead tuples, limiting their numbers.
>>
>> In what circumstances will the memory usage from multiple concurrent
>> VACUUMs become a problem? In those circumstances, reducing
>> autovacuum_work_mem will cause more passes through indexes, dirtying
>> more pages and elongating the problem workload.
>
> Yes, of course, but if we presume that the memory for autovacuum
> workers to do everything in one pass simply isn't there, it's still
> better to do multiple passes.

That isn't clear to me. It seems better to wait until we have the memory.

My feeling is that this parameter is a fairly blunt approach to the problems of memory pressure on autovacuum and other maintenance tasks. I am worried that it will not effectively solve the problem. I don't wish to block the patch; I wish to get to an effective solution to the problem.

A better approach to handling memory pressure would be to globally coordinate workers so that we don't oversubscribe memory, allocating memory from a global pool.

--
Simon Riggs   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
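The tradeoff being debated above reduces to simple arithmetic: VACUUM's dead-tuple array holds work_mem / 6 entries, and every time it overflows, VACUUM must make another full scan of each index on the table. Below is a minimal sketch of that arithmetic in C. Only the 6-bytes-per-dead-tuple figure comes from the thread; the 64MB budget and the 50-million dead-tuple count are hypothetical values chosen for illustration.

```c
/*
 * Back-of-the-envelope model of the index-pass tradeoff discussed above.
 * The 6-bytes-per-dead-tuple figure is from the thread; the 64MB memory
 * budget and the 50M dead-tuple count are hypothetical illustration values.
 */
#include <stdio.h>

int main(void)
{
    const long bytes_per_dead_tuple = 6;      /* one dead-tuple TID */
    long work_mem_bytes = 64L * 1024 * 1024;  /* e.g. a 64MB memory budget */
    long dead_tuples = 50L * 1000 * 1000;     /* hypothetical dead tuples */

    /* Dead-tuple TIDs that fit in memory before the array overflows. */
    long tuples_per_pass = work_mem_bytes / bytes_per_dead_tuple;

    /* Each overflow forces another full scan of every index on the table. */
    long index_passes = (dead_tuples + tuples_per_pass - 1) / tuples_per_pass;

    printf("dead-tuple TIDs per pass: %ld\n", tuples_per_pass); /* 11184810 */
    printf("index scan passes:        %ld\n", index_passes);    /* 5 */
    return 0;
}
```

Under these assumed numbers, halving the memory budget roughly doubles the number of index passes, which is the page-dirtying cost Simon points to when arguing against simply lowering autovacuum_work_mem.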