Re: autovacuum truncate exclusive lock round two
From | Robert Haas
Subject | Re: autovacuum truncate exclusive lock round two
Date |
Msg-id | CA+TgmoZHXQGW6O-HZyaLj17rJazmRhv68WZ7QdEc7MnYwQCfpQ@mail.gmail.com
In reply to | Re: autovacuum truncate exclusive lock round two (Jan Wieck <JanWieck@Yahoo.com>)
Responses | Re: autovacuum truncate exclusive lock round two
List | pgsql-hackers
On Wed, Dec 5, 2012 at 10:16 PM, Jan Wieck <JanWieck@yahoo.com> wrote:
> On 12/5/2012 2:00 PM, Robert Haas wrote:
>>
>> Maybe it'd be sensible to relate the retry time to the time spent
>> vacuuming the table. Say, if the amount of time spent retrying
>> exceeds 10% of the time spent vacuuming the table, with a minimum of
>> 1s and a maximum of 1min, give up. That way, big tables will get a
>> little more leeway than small tables, which is probably appropriate.
>
> That sort of "dynamic" approach would indeed be interesting. But I fear that
> it is going to be complex at best. The amount of time spent in scanning
> heavily depends on the visibility map. The initial vacuum scan of a table
> can take hours or more, but it does update the visibility map even if the
> vacuum itself is aborted later. The next vacuum may scan that table in
> almost no time at all, because it skips all blocks that are marked "all
> visible".

Well, if that's true, then there's little reason to worry about giving up
quickly, because the next autovacuum a minute later won't consume many
resources.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company