RE: Is the work done by vacuum in the current pass lost when interrupted?
From | Jean-Marc Lessard |
---|---|
Subject | RE: Is the work done by vacuum in the current pass lost when interrupted? |
Date | |
Msg-id | QB1PR01MB36334C6FE1F518EDE242867E94030@QB1PR01MB3633.CANPRD01.PROD.OUTLOOK.COM |
In reply to | Re: Is the work done by vacuum in the current pass lost when interrupted? (Michael Lewis <mlewis@entrata.com>) |
List | pgsql-general |
Thank you, Michael, for your suggestion.
I will reduce vacuum_cost_delay and increase vacuum_cost_limit to speed up the vacuum.
However, the I/O system is heavily loaded on the PROD system, and I do not want to impact end-user performance.
If aggressive vacuum_cost_delay and vacuum_cost_limit settings significantly slow down end-user performance, I may have to kill the vacuum, but I would like to keep as much of the vacuum's work as possible.
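For reference, here is roughly what I intend to try in a psql session before the manual vacuum (a sketch only: the values are illustrative, big_table is a placeholder, and the '2ms' unit syntax assumes PostgreSQL 12 or later):

    -- loosen cost-based throttling for this session only
    SET vacuum_cost_delay = '2ms';  -- or 0 to disable throttling entirely
    SET vacuum_cost_limit = 2000;
    VACUUM (VERBOSE) big_table;     -- big_table is a placeholder name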
My concern is about the size of the max_dead_tuples buffer, which is determined by autovacuum_work_mem:
- autovacuum_work_mem is currently 1GB (it defaults to maintenance_work_mem)
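For what it is worth, my understanding is that a manual VACUUM sizes this buffer from maintenance_work_mem (autovacuum_work_mem applies only to autovacuum workers), that each dead tuple takes 6 bytes in the array, and that the array is capped at 1GB regardless. A quick capacity check (pg_size_bytes assumes 9.6+):

    -- dead-TID capacity of the array at the current 1GB setting
    SELECT pg_size_bytes('1GB') / 6 AS dead_tids_per_pass;  -- ≈ 179 million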
When I ran a vacuum on a DEV system, it made a single "scanning heap / vacuuming indexes / vacuuming heap" cycle of about 12 hours.
The vacuuming-indexes phase took half the total time (6 hrs).
Q1: I guess that if I have to kill the vacuum midway, I will lose all the work it did during the scanning-heap step. Right?
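Before killing it, I suppose I can at least check which phase the vacuum is in and how far it got (assuming PostgreSQL 9.6 or later for the progress view):

    SELECT relid::regclass AS table_name, phase,
           heap_blks_scanned, heap_blks_total,
           num_dead_tuples, max_dead_tuples
    FROM pg_stat_progress_vacuum;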
Q2: If I reduce autovacuum_work_mem to 32MB, are there any performance side effects on the vacuum?
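My back-of-the-envelope estimate of the side effect, on the same 6-bytes-per-dead-TID assumption as above:

    -- dead-TID capacity at 32MB vs 1GB
    SELECT pg_size_bytes('32MB') / 6 AS tids_32mb,  -- ≈ 5.6 million
           pg_size_bytes('1GB')  / 6 AS tids_1gb;   -- ≈ 179 million

So with 32MB the array fills roughly 32x sooner, and any table accumulating more than ~5.6 million dead tuples would need multiple "vacuuming indexes / vacuuming heap" passes, each of which rescans every index on the table.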
Thanks