Re: BUG #5946: Long exclusive lock taken by vacuum (not full)
From | Greg Stark |
---|---|
Subject | Re: BUG #5946: Long exclusive lock taken by vacuum (not full) |
Date | |
Msg-id | AANLkTik11YkL2Otst7Uf0f-_3+YmTh6O8tFyg8CnQ5o2@mail.gmail.com |
In response to | Re: BUG #5946: Long exclusive lock taken by vacuum (not full) (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: BUG #5946: Long exclusive lock taken by vacuum (not full) |
List | pgsql-bugs |
On Fri, Mar 25, 2011 at 8:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Interesting, but I don't understand/believe your argument as to why this
> is a bad idea or fixed-size extents are better. It sounds to me just
> like the typical Oracle DBA compulsion to have a knob to twiddle. A
> self-adjusting enlargement behavior seems smarter all round.

So is it ok for inserting one row to cause my table to grow by 90GB? Or should there be some maximum size increment at which it stops growing? What should that maximum be? What if I'm on a big RAID system where that size doesn't even add a block to every stripe element?

Say you start with 64k (8 pg blocks). That means your growth increments will be 64k, 70k, 77k, 85k, 94k, 103k, 113k, 125k, 137k, ... I'm having trouble imagining a set of hardware and filesystem where growing a table by 125k will be optimal. The next allocation will have to do some or all of: a) go back and edit the previous one to round it up, then b) add 128k more, then c) still have 6k more to allocate in a new allocation.

--
greg
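[Editor's note: the increment sequence above can be reproduced with a quick sketch. This assumes each extension is 10% larger than the last and is rounded to the nearest whole kilobyte; the 10% factor and the rounding rule are inferred from the numbers in the mail, not taken from any actual patch.]

```python
# Sketch of the self-adjusting growth increments quoted above:
# start at 64 kB and grow each increment by 10%, rounding to the
# nearest kB (an assumption made to match the quoted sequence).
def increments(start_kb=64.0, factor=1.1, n=9):
    vals = []
    x = start_kb
    for _ in range(n):
        vals.append(round(x))  # nearest whole kB
        x *= factor
    return vals

print(increments())
# → [64, 70, 77, 85, 94, 103, 113, 125, 137]
```

Note that none of the later increments is a multiple of the 8 kB PostgreSQL block size, let alone of a typical RAID stripe width, which is exactly the misalignment Greg is objecting to.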