Re: Add index scan progress to pg_stat_progress_vacuum
From | Imseih (AWS), Sami
---|---
Subject | Re: Add index scan progress to pg_stat_progress_vacuum
Date |
Msg-id | F5F2CD8C-49D4-4B9A-983F-E3FF0E8CF048@amazon.com
In reply to | Re: Add index scan progress to pg_stat_progress_vacuum (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses | Re: Add index scan progress to pg_stat_progress_vacuum
List | pgsql-hackers
> > +/*
> > + * vacuum_worker_init --- initialize this module's shared memory hash
> > + * to track the progress of a vacuum worker
> > + */
> > +void
> > +vacuum_worker_init(void)
> > +{
> > +    HASHCTL     info;
> > +    long        max_table_size = GetMaxBackends();
> > +
> > +    VacuumWorkerProgressHash = NULL;
> > +
> > +    info.keysize = sizeof(pid_t);
> > +    info.entrysize = sizeof(VacProgressEntry);
> > +
> > +    VacuumWorkerProgressHash = ShmemInitHash("Vacuum Progress Hash",
> > +                                             max_table_size,
> > +                                             max_table_size,
> > +                                             &info,
> > +                                             HASH_ELEM | HASH_BLOBS);
> > +}
>
> It seems to me that creating a shmem hash with max_table_size entries
> for parallel vacuum process tracking is too much. IIRC an old patch
> had parallel vacuum workers advertise their progress and changed the
> pg_stat_progress_vacuum view so that it aggregates the results,
> including the workers' stats. I think that is better than the current
> approach. Why did you change that?
>
> Regards,

I was trying to avoid a shared memory hash to track completed indexes,
but aggregating stats does not work with parallel vacuum. This is
because a parallel worker will exit before the vacuum completes,
causing the aggregated total to be wrong.

For example:

The leader pid advertises it completed 2 indexes.
The parallel worker advertises it completed 2 indexes.

When aggregating, we see 4 indexes completed. After the parallel
worker exits, the aggregation will show only 2 indexes completed.

--
Sami Imseih
Amazon Web Services