Re: Reduce useless changes before reassembly during logical replication

From: Amit Kapila
Subject: Re: Reduce useless changes before reassembly during logical replication
Date:
Msg-id: CAA4eK1+qVztg-noRzsHnAqrPgwNLb=YZC4Ri9EeUS6sdBdkfJw@mail.gmail.com
In reply to: Re: Reduce useless changes before reassembly during logical replication  (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
Responses: Re: Reduce useless changes before reassembly during logical replication  (Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>)
Re: Reduce useless changes before reassembly during logical replication  (Andy Fan <zhihuifan1213@163.com>)
List: pgsql-hackers
On Thu, Jan 18, 2024 at 12:12 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:
>
> On Wed, Jan 17, 2024 at 11:45 AM li jie <ggysxcq@gmail.com> wrote:
> >
> > Hi hackers,
> >
> > During logical replication, if there is a large write transaction, some
> > spill files will be written to disk, depending on the setting of
> > logical_decoding_work_mem.
> >
> > This behavior effectively avoids OOM, but if the transaction
> > generates a lot of changes before commit, a large number of spill
> > files can fill the disk, for example when updating a TB-scale table.
> >
> > However, I noticed an inelegant behavior: even if the modified large
> > table is not published, its changes are still decoded and written out
> > as a large number of spill files. Look at an example below:
>
> Thanks. I agree that decoding the changes of unpublished tables and
> queuing them into the reorder buffer is unnecessary work for the
> walsender. It takes processing effort (CPU overhead), consumes disk
> space, and uses the memory configured via logical_decoding_work_mem
> for a replication connection inefficiently.
>
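[To make the trimmed example above concrete, here is a minimal sketch of
the scenario under discussion; it is a reconstruction under stated
assumptions, not li jie's original script. Object names are illustrative,
the pgoutput plugin is driven through the SQL interface rather than a real
walsender, and pg_stat_replication_slots requires PostgreSQL 14 or later.]

CREATE TABLE big_unpublished (id int PRIMARY KEY, payload text);
CREATE TABLE small_published (id int PRIMARY KEY);
CREATE PUBLICATION pub FOR TABLE small_published;

-- A slot using the pgoutput plugin, as a walsender would.
SELECT pg_create_logical_replication_slot('spill_demo', 'pgoutput');

-- One large transaction touching only the unpublished table; with the
-- default logical_decoding_work_mem = 64MB, its changes spill to disk
-- under pg_replslot/spill_demo/ even though pub will filter them out.
INSERT INTO big_unpublished
SELECT g, repeat('x', 100) FROM generate_series(1, 5000000) g;

-- Decode through pgoutput; the spilling happens during this decode pass.
SELECT count(*)
FROM pg_logical_slot_get_binary_changes('spill_demo', NULL, NULL,
       'proto_version', '1', 'publication_names', 'pub');

-- spill_txns / spill_count / spill_bytes show the wasted work.
SELECT slot_name, spill_txns, spill_count, spill_bytes
FROM pg_stat_replication_slots;

SELECT pg_drop_replication_slot('spill_demo');

[Even though pub does not contain big_unpublished, the decode pass still
queues and spills all of that table's changes; pgoutput only discards them
later, in pgoutput_change().]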

This is all true, but note that in successful cases (where the table is
published) all the work done by FilterByTable (accessing caches,
transaction-related stuff) can add noticeable overhead, as we do that
anyway later in pgoutput_change(). I think I gave the same comment
earlier as well but didn't see any satisfactory answer or performance
data for successful cases to back this proposal. Note that users can
configure streaming of in-progress transactions, in which case they
shouldn't see such a big problem. However, I agree that if we can find
a solution with no noticeable overhead, that would be worth
considering.
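[For reference, the streaming option mentioned above is configured on the
subscriber; a minimal sketch, with an illustrative connection string and
object names, requiring PostgreSQL 14 or later for streaming = on:]

-- Enable streaming of in-progress transactions: once a transaction
-- exceeds logical_decoding_work_mem on the publisher, its changes are
-- streamed to the subscriber instead of being spilled to disk there.
CREATE SUBSCRIPTION sub
    CONNECTION 'host=publisher port=5432 dbname=postgres'
    PUBLICATION pub
    WITH (streaming = on);

-- Or enable it on an existing subscription:
ALTER SUBSCRIPTION sub SET (streaming = on);

[With streaming enabled, oversized transactions are handed to the output
plugin, and filtered there, as memory fills up, so the disk-filling
problem is largely avoided, though the decoding cost itself remains.]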

--
With Regards,
Amit Kapila.


