Re: New GUC autovacuum_max_threshold ?

From: Frédéric Yhuel
Subject: Re: New GUC autovacuum_max_threshold ?
Date:
Msg-id: cc23c226-e0be-47a7-bf6f-bcedd097a239@dalibo.com
In response to: Re: New GUC autovacuum_max_threshold ?  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: New GUC autovacuum_max_threshold ?  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers

On 09/05/2024 at 16:58, Robert Haas wrote:
> As I see it, a lot of the lack of agreement up until now is people
> just not understanding the math. Since I think I've got the right idea
> about the math, I attribute this to other people being confused about
> what is going to happen and would tend to phrase it as: some people
> don't understand how catastrophically bad it will be if you set this
> value too low.

FWIW, I do agree with your math; I found your demonstration convincing.
500000 was just a finger-in-the-wind guess.

Using the formula I suggested earlier:

vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
                vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);

your table of 2.56 billion tuples would be vacuumed once it accumulates
more than about 10 million dead tuples (i.e. every 28 minutes).
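
To make the arithmetic concrete, here is a minimal standalone sketch of
that formula (just an illustration, not a patch: the variable names only
mirror those in autovacuum.c, and 50 / 0.2 are the current defaults for
autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor):

#include <math.h>
#include <stdio.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

int
main(void)
{
    double  vac_base_thresh = 50;    /* autovacuum_vacuum_threshold */
    double  vac_scale_factor = 0.2;  /* autovacuum_vacuum_scale_factor */
    double  reltuples = 2.56e9;      /* your example table */

    double  vacthresh =
        Min(vac_base_thresh + vac_scale_factor * reltuples,
            vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);

    /* the sqrt() branch wins here: prints roughly 10.1 million */
    printf("vacthresh = %.0f\n", vacthresh);
    return 0;
}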

If we want to stick with the simple formula, we should probably choose a 
very high default, maybe 100 million, as you suggested earlier.
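
For reference, the simple capped formula (assuming the cap is exposed as
the autovacuum_max_threshold GUC from the subject line) would just be:

vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples,
                autovacuum_max_threshold);

With a default of 100 million, the 2.56 billion tuple table above would
then be vacuumed every ~100 million dead tuples instead of every ~10
million.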

However, it would be nice to have the visibility map updated more 
frequently than every 100 million dead tuples. I wonder if this could be 
decoupled from the vacuum process?


