Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise
| From | Claudio Freire |
|---|---|
| Subject | Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise |
| Date | |
| Msg-id | CAGTBQpbjmHqBVA=bApBcx_8pfkE1tr_ErKPGGUG7Cf+YohwMew@mail.gmail.com |
| In reply to | Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise (Peter Geoghegan <pg@bowt.ie>) |
| Responses | Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise |
| List | pgsql-hackers |
On Thu, Jul 20, 2017 at 12:08 AM, Peter Geoghegan <pg@bowt.ie> wrote:
>> The traditional wisdom about btrees, for instance, is that no matter
>> how full you pack them to start with, the steady state is going to
>> involve something like 1/3rd free space. You can call that bloat if
>> you want, but it's not likely that you'll be able to reduce the number
>> significantly without paying exorbitant costs.
>
> For the purposes of this discussion, I'm mostly talking about
> duplicates within a page on a unique index. If the keyspace owned by
> an int4 unique index page only covers 20 distinct values, it will only
> ever cover 20 distinct values, now and forever, despite the fact that
> there is room for about 400 (a 90/10 split leaves you with 366 items +
> 1 high key).

Microvacuum could also help. If a scan finds index pointers that lead to
dead (in vacuum terms) tuples, those pointers can be deleted from the
index. That could be done during an insert into a unique index, before a
page split, to avoid the split. Chances are, if there are duplicates, at
least a few of them will be dead.
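To make the idea concrete, here is a minimal sketch in C of what such a
microvacuum step could look like on the insert path. All of the types and
names below (LeafPage, IndexItem, microvacuum_page, split_page, the
known_dead flag) are hypothetical simplifications for illustration, not
PostgreSQL's actual nbtree code: the point is only that an insert into a
full unique-index leaf page can first try to reclaim entries already known
to point at dead tuples, and split only if that frees nothing.

```c
#include <stdbool.h>

/* Hypothetical, simplified view of a unique-index leaf page; not
 * PostgreSQL's real on-disk structures. */
typedef struct IndexItem
{
    int     key;            /* indexed key value */
    bool    known_dead;     /* a prior scan saw that the heap tuple is dead */
} IndexItem;

typedef struct LeafPage
{
    IndexItem   items[366]; /* capacity taken from the quoted example */
    int         nitems;
} LeafPage;

/* "Microvacuum": drop entries already known to be dead.
 * Returns true if any space was reclaimed. */
static bool
microvacuum_page(LeafPage *page)
{
    int     kept = 0;

    for (int i = 0; i < page->nitems; i++)
    {
        if (!page->items[i].known_dead)
            page->items[kept++] = page->items[i];
    }
    if (kept == page->nitems)
        return false;
    page->nitems = kept;
    return true;
}

/* Insert path: when the page is full, try microvacuum before splitting. */
static void
insert_into_leaf(LeafPage *page, IndexItem item)
{
    const int   capacity = (int) (sizeof(page->items) / sizeof(page->items[0]));

    if (page->nitems == capacity && !microvacuum_page(page))
    {
        /* Nothing dead to reclaim: a real implementation would split here. */
        /* split_page(page, item);   hypothetical */
        return;
    }
    page->items[page->nitems++] = item;
}
```

The bet, as in the message above, is that when a unique-index page fills up
with duplicates, at least some of them point to dead tuples, so the reclaim
step frequently succeeds and the split is avoided.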