Re: Emit fewer vacuum records by reaping removable tuples during pruning

From: Andres Freund
Subject: Re: Emit fewer vacuum records by reaping removable tuples during pruning
Date:
Msg-id: 20240104192420.thovl34sf7wu6llh@awork3.anarazel.de
In response to: Re: Emit fewer vacuum records by reaping removable tuples during pruning  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Hi,

On 2024-01-04 12:31:36 -0500, Robert Haas wrote:
> On Wed, Dec 27, 2023 at 11:27 AM Melanie Plageman
> <melanieplageman@gmail.com> wrote:
> > Do you have specific concerns about its correctness? I understand it
> > is an area where we have to be sure we are correct. But, to be fair,
> > that is true of all the pruning and vacuuming code.
> 
> I'm kind of concerned that 0002 might be a performance regression. It
> pushes more branches down into the heap-pruning code, which I think
> can sometimes be quite hot, for the sake of a case that rarely occurs
> in practice.

I was wondering the same when talking about this with Melanie. But I guess
there are some scenarios that aren't unrealistic; consider e.g. bulk data
loading with COPY, followed by an UPDATE to massage the data, before creating
the indexes.

Where I could see this becoming more interesting / powerful is if we were able
to do 'pronto reaping' not just from vacuum but also during on-access
pruning. For ETL workloads VACUUM might often not run between modifications of
the same page, but on-access pruning will. Being able to reclaim dead items at
that point seems like it could be a pretty sizable improvement?


> I take your point about it improving things when there are no indexes, but
> what about when there are?

I suspect that branch prediction should take care of the additional
branches. heap_prune_chain() indeed can be very hot, but I think we're
primarily bottlenecked on data cache misses.

For a single VACUUM, whether we'd do pronto reaping is constant, i.e. very
well predictable. We could add an unlikely() so that the cold-branch-predictor
case is optimized for the much more common case of having indexes.  Falsely
predicting that we won't pronto reap wouldn't be particularly bad either, as
the wins in the no-index case are so much bigger anyway.
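
To make that concrete, here's a self-contained toy (emphatically not the
patch or any PostgreSQL code; gcc/clang only, since unlikely() here is just
the usual __builtin_expect wrapper; all names are made up): the flag is
loop-invariant, so the predictor learns it after a handful of items, and the
hint only biases the not-yet-trained case towards the indexed path.

    #include <stdbool.h>
    #include <stdio.h>

    #define unlikely(x) __builtin_expect((x) != 0, 0)

    typedef struct
    {
        bool    no_indexes;     /* constant for one relation / one VACUUM */
        long    marked_unused;
        long    marked_dead;
    } PruneDemoState;

    /* stand-in for the per-item decision inside heap_prune_chain() */
    static void
    prune_item(PruneDemoState *state)
    {
        if (unlikely(state->no_indexes))
            state->marked_unused++; /* reap right away, item becomes LP_UNUSED */
        else
            state->marked_dead++;   /* leave LP_DEAD for index vacuuming */
    }

    int
    main(void)
    {
        PruneDemoState state = {.no_indexes = false};

        /*
         * The flag never changes within one scan, so the branch predictor
         * gets this right after the first few iterations; the data cache
         * misses on the real tuples are what wouldn't go away.
         */
        for (long i = 0; i < 1000000; i++)
            prune_item(&state);

        printf("dead=%ld unused=%ld\n", state.marked_dead, state.marked_unused);
        return 0;
    }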

If we were to use pronto reaping for on-access pruning, it's perhaps a bit
less predictable, as pruning of pages of a relation with indexes could be
interleaved with pruning of relations without. But even there I suspect it'd
not be the primary bottleneck: we call heap_prune_chain() in a loop for every
tuple on a page, so the branch predictor should quickly learn whether we're
using pronto reaping, whereas the cache misses don't get any cheaper for
subsequent tuples.


> And even if there are no adverse performance consequences, is it really
> worth complicating the logic at such a low level?

Yes, I think this is the main question here. It's not clear though that the
state after the patch is meaningfully more complicated? It removes nontrivial
code from lazy_scan_heap() and pays for that with a bit more complexity in
heap_prune_chain().


> Also, I find "pronto_reap" to be a poor choice of name. "pronto" is an
> informal word that seems to have no advantage over something like
> "immediate" or "now,"

I didn't like that either :)


> and I don't think "reap" has a precise, universally-understood meaning.

Less concerned about that.


> You could call this "mark_unused_now" or "immediately_mark_unused" or
> something and it would be far more self-documenting, IMHO.

How about 'no_indexes' or such?
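
Purely as an illustration of how the candidates read (not code from the
patch; the struct here is just a stand-in):

    #include <stdbool.h>

    typedef struct PruneStateSketch
    {
        /* names the precondition that makes the shortcut safe */
        bool    no_indexes;

        /* alternative, naming the effect on dead items instead: */
        /* bool mark_unused_now; */
    } PruneStateSketch;

I.e. one names why the shortcut is legal, the other what it does to the item
pointers.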

Greetings,

Andres Freund


