Re: eliminate xl_heap_visible to reduce WAL (and eventually set VM on-access)
From | Andres Freund
---|---
Subject | Re: eliminate xl_heap_visible to reduce WAL (and eventually set VM on-access)
Date |
Msg-id | yn4zp35kkdsjx6wf47zcfmxgexxt4h2og47pvnw2x5ifyrs3qc@7uw6jyyxuyf7
In reply to | Re: eliminate xl_heap_visible to reduce WAL (and eventually set VM on-access) (Melanie Plageman <melanieplageman@gmail.com>)
Responses | Re: eliminate xl_heap_visible to reduce WAL (and eventually set VM on-access)
List | pgsql-hackers
Hi,

On 2025-09-17 20:10:07 -0400, Melanie Plageman wrote:
> 0001 is RFC but waiting on one other reviewer

> From cacff6c95e38d370b87148bc48cf6ac5f086ed07 Mon Sep 17 00:00:00 2001
> From: Melanie Plageman <melanieplageman@gmail.com>
> Date: Tue, 17 Jun 2025 17:22:10 -0400
> Subject: [PATCH v14 01/24] Eliminate COPY FREEZE use of XLOG_HEAP2_VISIBLE

> diff --git a/src/backend/access/heap/heapam_xlog.c b/src/backend/access/heap/heapam_xlog.c
> index cf843277938..faa7c561a8a 100644
> --- a/src/backend/access/heap/heapam_xlog.c
> +++ b/src/backend/access/heap/heapam_xlog.c
> @@ -551,6 +551,7 @@ heap_xlog_multi_insert(XLogReaderState *record)
> 	int			i;
> 	bool		isinit = (XLogRecGetInfo(record) & XLOG_HEAP_INIT_PAGE) != 0;
> 	XLogRedoAction action;
> +	Buffer		vmbuffer = InvalidBuffer;
>
> 	/*
> 	 * Insertion doesn't overwrite MVCC data, so no conflict processing is
> @@ -571,11 +572,11 @@ heap_xlog_multi_insert(XLogReaderState *record)
> 	if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED)
> 	{
> 		Relation	reln = CreateFakeRelcacheEntry(rlocator);
> -		Buffer		vmbuffer = InvalidBuffer;
>
> 		visibilitymap_pin(reln, blkno, &vmbuffer);
> 		visibilitymap_clear(reln, blkno, vmbuffer, VISIBILITYMAP_VALID_BITS);
> 		ReleaseBuffer(vmbuffer);
> +		vmbuffer = InvalidBuffer;
> 		FreeFakeRelcacheEntry(reln);
> 	}
>
> @@ -662,6 +663,57 @@ heap_xlog_multi_insert(XLogReaderState *record)
> 	if (BufferIsValid(buffer))
> 		UnlockReleaseBuffer(buffer);
>
> +	buffer = InvalidBuffer;
> +
> +	/*
> +	 * Now read and update the VM block.
> +	 *
> +	 * Note that the heap relation may have been dropped or truncated, leading
> +	 * us to skip updating the heap block due to the LSN interlock.

I don't fully understand this - how does dropping/truncating the relation
lead to skipping due to the LSN interlock?

> +	 * even in that case, it's still safe to update the visibility map. Any
> +	 * WAL record that clears the visibility map bit does so before checking
> +	 * the page LSN, so any bits that need to be cleared will still be
> +	 * cleared.
> +	 *
> +	 * Note that the lock on the heap page was dropped above. In normal
> +	 * operation this would never be safe because a concurrent query could
> +	 * modify the heap page and clear PD_ALL_VISIBLE -- violating the
> +	 * invariant that PD_ALL_VISIBLE must be set if the corresponding bit in
> +	 * the VM is set.
> +	 *
> +	 * In recovery, we expect no other writers, so writing to the VM page
> +	 * without holding a lock on the heap page is considered safe enough. It
> +	 * is done this way when replaying xl_heap_visible records (see
> +	 * heap_xlog_visible()).
> +	 */
> +	if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET &&
> +		XLogReadBufferForRedoExtended(record, 1, RBM_ZERO_ON_ERROR, false,
> +									  &vmbuffer) == BLK_NEEDS_REDO)
> +	{

Why are we using RBM_ZERO_ON_ERROR here? I know it's copied from
heap_xlog_visible(), but I don't immediately understand (or remember) why we
do so there either?

> +		Page		vmpage = BufferGetPage(vmbuffer);
> +		Relation	reln = CreateFakeRelcacheEntry(rlocator);

Hm. Do we really need to continue doing this ugly fake relcache stuff? I'd
really like to eventually get rid of that, and given that the new "code
shape" delegates a lot more responsibility to the redo routines, they should
have a fairly easy time not needing a fake relcache? Afaict the relation
already is not used outside of debugging paths?

> +		/* initialize the page if it was read as zeros */
> +		if (PageIsNew(vmpage))
> +			PageInit(vmpage, BLCKSZ, 0);
> +
> +		visibilitymap_set_vmbits(reln, blkno,
> +								 vmbuffer,
> +								 VISIBILITYMAP_ALL_VISIBLE |
> +								 VISIBILITYMAP_ALL_FROZEN);
> +
> +		/*
> +		 * It is not possible that the VM was already set for this heap page,
> +		 * so the vmbuffer must have been modified and marked dirty.
> +		 */

I assume that's because we a) checked the LSN interlock b) are replaying
something that needed to newly set the bit?

Except for the above comments, this looks pretty good to me.

Seems 0002 should just be applied...

Re 0003: I wonder if it's getting to the point that a struct should be used
as the argument.

Greetings,

Andres Freund