Re: Make reorder buffer max_changes_in_memory adjustable?
From: Jingtang Zhang
Subject: Re: Make reorder buffer max_changes_in_memory adjustable?
Date:
Msg-id: CAPsk3_BcmRTMTSnicUj6dyns2AtoipkqjM2Xp6uDgQj9n4kJ6g@mail.gmail.com
In response to: Re: Make reorder buffer max_changes_in_memory adjustable? (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: Make reorder buffer max_changes_in_memory adjustable?
List: pgsql-hackers
Thanks, Tomas.
> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's
> not clear to me how would that help 12/13, because there's ~0% chance
> we'd backpatch that ...
What I mean is not about back-patching. The problem happens on the publisher
side.
Consider the case where the publisher is PostgreSQL v14+~master (with streaming
support) and the subscriber is 12/13, where streaming is not supported: the publisher
still has the risk of OOM. The same applies when we use v14+~master as the
publisher and any open-source CDC tool as the subscriber.
> Wouldn't it be better to adjust the value automatically, somehow?
> For example, before restoring the changes, we could count the number of
> transactions, and set it to 4096/ntransactions or something like that.
> Or do something smarter by estimating tuple size, to count it in the
> logical_decoding_work_mem budget.
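For illustration, a minimal sketch of the per-transaction scaling idea quoted above might look like the following. The helper name compute_max_changes_in_memory() and the way the transaction count is passed in are hypothetical, not actual reorderbuffer.c code; only the 4096 default and the general idea come from the discussion.

/*
 * Hypothetical sketch only: divide the default restore batch size
 * (4096 changes) by the number of transactions being restored, so that
 * restoring many transactions at once does not blow up memory.
 *
 * The function name and the ntransactions argument are illustrative,
 * not part of the existing reorderbuffer.c API.
 */
static Size
compute_max_changes_in_memory(Size ntransactions)
{
	Size		per_txn;

	if (ntransactions == 0)
		return 4096;

	per_txn = 4096 / ntransactions;

	/* always keep at least one change per transaction in memory */
	return Max(per_txn, 1);
}

A smarter variant, as the quote suggests, would estimate tuple sizes and charge the restored changes against the logical_decoding_work_mem budget instead of using a fixed change count.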
Yes, I think this issue should have been solved when logical_decoding_work_mem
was initially introduced, but it wasn't. There could be reasons, such as the
sub-transaction handling that is commented on in the header of reorderbuffer.c.
regards, Jingtang