Re: DropRelFileNodeBuffers API change (was Re: [BUGS] BUG #5599: Vacuum fails due to index corruption issues)
From: Robert Haas
Subject: Re: DropRelFileNodeBuffers API change (was Re: [BUGS] BUG #5599: Vacuum fails due to index corruption issues)
Date:
Msg-id: AANLkTinOSX8-EUr4P7EkY_oqeWeCaC2NRAj+f2J8=KWH@mail.gmail.com
In reply to: Re: DropRelFileNodeBuffers API change (was Re: [BUGS] BUG #5599: Vacuum fails due to index corruption issues) (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: DropRelFileNodeBuffers API change (was Re: [BUGS] BUG #5599: Vacuum fails due to index corruption issues)
List: pgsql-hackers
On Sun, Aug 15, 2010 at 5:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Could we avoid this altogether by allocating a new relfilenode on
>> truncate?
>
> Then we'd have to copy all the data we *didn't* truncate, which is
> hardly likely to be a win.

Oh, sorry.  I was thinking we were talking about complete truncation
rather than partial truncation.

I'm still pretty unhappy with the proposed fix, though, because it
gives up performance in a broad range of cases to cater to an
extremely narrow failure case.  Considering the rarity of the proposed
problem, are we sure that it isn't better to adopt a solution like
what Heikki proposed?  If truncation fails, try to zero the pages; if
that also fails, PANIC.  I'm really reluctant to back-patch a
performance regression.  Perhaps, as Greg Stark says, there are a
variety of ways that this can happen - but they're all pretty rare,
and seem to require a fairly substantial amount of broken-ness.  If
we're in a situation where we can't reliably update our disk files, it
seems optimistic to assume that keeping on running is going to be a
whole lot better than PANICing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company