Re: 8.3.0 Core with concurrent vacuum fulls
From | Heikki Linnakangas
---|---
Subject | Re: 8.3.0 Core with concurrent vacuum fulls
Date |
Msg-id | 47D03DC5.8010301@enterprisedb.com
In response to | Re: 8.3.0 Core with concurrent vacuum fulls (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: 8.3.0 Core with concurrent vacuum fulls
List | pgsql-hackers
Tom Lane wrote:
> "Pavan Deolasee" <pavan.deolasee@gmail.com> writes:
>> On Wed, Mar 5, 2008 at 9:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> [ thinks some more... ] I guess we could use a flag array dimensioned
>>> MaxHeapTuplesPerPage to mark already-processed tuples, so that you
>>> wouldn't need to search the existing arrays but just index into the flag
>>> array with the tuple's offsetnumber.
>
>> We can actually combine this and the page copying ideas. Instead of copying
>> the entire page, we can just copy the line pointers array and work on the copy.
>
> I think that just makes things more complex and fragile. I like
> Heikki's idea, in part because it makes the normal path and the WAL
> recovery path guaranteed to work alike. I'll attach my work-in-progress
> patch for this --- it doesn't do anything about the invalidation
> semantics problem but it does fix the critical-section-too-big problem.

FWIW, the patch looks fine to me. By inspection; I didn't test it.

I'm glad we got away with a single "marked" array. I was afraid we would
need to consult the unused/redirected/dead arrays separately.

Do you have a plan for the invalidation problem? I think we could just
not remove the redirection line pointers in catalog tables.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com