Re: Improve eviction algorithm in ReorderBuffer

From: Masahiko Sawada
Subject: Re: Improve eviction algorithm in ReorderBuffer
Msg-id: CAD21AoBYjJmz7q_=Z+eXJgm0FScyu3_iGFshPAvnq78B2KL3qQ@mail.gmail.com
In reply to: Re: Improve eviction algorithm in ReorderBuffer (vignesh C <vignesh21@gmail.com>)
Responses: Re: Improve eviction algorithm in ReorderBuffer (Peter Smith <smithpb2250@gmail.com>)
List: pgsql-hackers
On Tue, Mar 5, 2024 at 12:20 PM vignesh C <vignesh21@gmail.com> wrote:
>
> On Wed, 28 Feb 2024 at 11:40, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > >
> >
> > A few comments on 0003:
> > ===================
> > 1.
> > +/*
> > + * Threshold of the total number of top-level and sub transactions that
> > + * controls whether we switch the memory track state. While the
> > + * MAINTAIN_HEAP state is effective when there are many transactions being
> > + * decoded, in many systems there is generally no need to use it as long as
> > + * all transactions being decoded are top-level transactions. Therefore, we
> > + * use MaxConnections as the threshold so we can prevent switching to that
> > + * state unless subtransactions are used.
> > + */
> > +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections
> >
> > The comment seems to imply that MAINTAIN_HEAP is useful for a large
> > number of transactions, but ReorderBufferLargestTXN() switches to this
> > state even when there is one transaction. So, basically we use the
> > binary_heap technique to get the largest transaction even when we have
> > only one transaction, but we don't maintain that heap unless
> > REORDER_BUFFER_MEM_TRACK_THRESHOLD transactions are in progress. This
> > means there is some additional work (building and resetting the heap
> > each time we pick the largest xact) when we have fewer transactions in
> > the system, but that may not impact us because of other costs involved,
> > like serializing all the changes. I think we can try to stress test
> > this by setting debug_logical_replication_streaming to 'immediate' to
> > see if the new mechanism has any overhead.
>
> I ran the test with a transaction having many inserts:
>
>          |  5000 | 10000 | 20000 | 100000 | 1000000 | 10000000
> ---------|-------|-------|-------|--------|---------|---------
> Head     | 26.31 | 48.84 | 93.65 | 480.05 | 4808.29 | 47020.16
> Patch    | 26.35 | 50.8  | 97.99 | 484.8  | 4856.95 | 48108.89
>
> The same test with debug_logical_replication_streaming = 'immediate':
>
>          |  5000 |  10000 |  20000 |  100000 |  1000000 |  10000000
> ---------|-------|--------|--------|---------|----------|----------
> Head     | 59.29 | 115.84 | 227.21 | 1156.08 | 11367.42 | 113986.14
> Patch    | 62.45 | 120.48 | 240.56 | 1185.12 | 11855.37 | 119921.81
>
> The execution times are in milliseconds, and the column headers indicate
> the number of inserts in the transaction.
> In this case I noticed that the test execution with the patch was taking
> slightly more time.
>

Thank you for testing! With 10M records, I can see a 2% regression in
the 'buffered' case and a 5% regression in the 'immediate' case.

I think that in general it makes sense to postpone using a max-heap
until the number of transactions exceeds the threshold. I've
implemented this idea, and here are the results in my environment (with
10M records and debug_logical_replication_streaming = 'immediate'):

HEAD:
68937.887 ms
69450.174 ms
68808.248 ms

v7 patch:
71280.783 ms
71673.101 ms
71330.898 ms

v8 patch:
68918.259 ms
68822.330 ms
68972.452 ms
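
To make the v8 approach concrete, here is a toy, self-contained sketch
of the switching logic (this is not the patch itself; all names, the
fixed threshold value, the fixed-size arrays, and the rebuild-on-update
shortcut are purely illustrative):

/*
 * Toy sketch (not the actual ReorderBuffer code): track per-transaction
 * memory sizes in a plain array and pick the largest by a linear scan
 * while few transactions are tracked; only once the transaction count
 * exceeds a threshold do we switch to maintaining a max-heap, so the
 * largest transaction can be read off the top without a full scan.
 */
#include <stdio.h>
#include <string.h>

#define MEM_TRACK_THRESHOLD 4   /* stands in for MaxConnections */

typedef struct Txn
{
    int         id;
    size_t      size;           /* bytes of decoded changes */
} Txn;

typedef struct Tracker
{
    Txn         txns[64];       /* fixed capacity keeps the sketch short */
    int         ntxns;
    Txn        *heap[64];       /* max-heap over txns, used above threshold */
    int         heap_valid;     /* are we in the "maintain heap" state? */
} Tracker;

static void
sift_up(Tracker *t, int i)
{
    while (i > 0)
    {
        int     p = (i - 1) / 2;
        Txn    *tmp;

        if (t->heap[p]->size >= t->heap[i]->size)
            break;
        tmp = t->heap[p];
        t->heap[p] = t->heap[i];
        t->heap[i] = tmp;
        i = p;
    }
}

/* (Re)build the heap from scratch; used only once we are in the heap state. */
static void
build_heap(Tracker *t)
{
    for (int i = 0; i < t->ntxns; i++)
    {
        t->heap[i] = &t->txns[i];
        sift_up(t, i);
    }
    t->heap_valid = 1;
}

/* Account for 'bytes' more decoded changes in transaction slot 'i'. */
static void
track_change(Tracker *t, int i, size_t bytes)
{
    t->txns[i].size += bytes;

    if (t->ntxns > MEM_TRACK_THRESHOLD)
        build_heap(t);          /* shortcut: the real patch adjusts in place */
}

/* Return the transaction currently using the most memory. */
static Txn *
largest_txn(Tracker *t)
{
    if (t->heap_valid)
        return t->heap[0];      /* heap root, no scan needed */

    /* Few transactions: a linear scan is cheap and avoids heap overhead. */
    Txn *largest = &t->txns[0];

    for (int i = 1; i < t->ntxns; i++)
        if (t->txns[i].size > largest->size)
            largest = &t->txns[i];
    return largest;
}

int
main(void)
{
    Tracker     t;

    memset(&t, 0, sizeof(t));
    t.ntxns = 6;
    for (int i = 0; i < t.ntxns; i++)
        t.txns[i].id = i + 1;

    track_change(&t, 1, 4000);
    track_change(&t, 5, 7300);
    track_change(&t, 2, 250);

    printf("largest txn: %d (%zu bytes)\n",
           largest_txn(&t)->id, largest_txn(&t)->size);
    return 0;
}

The point of the change is only about when the heap work starts: below
the threshold the behavior stays a simple scan, so single-transaction
workloads like the test above should not pay for the heap. The actual
patch works on ReorderBufferTXN entries with the binaryheap and keeps
it up to date rather than rebuilding it on every update.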

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
