Re: [BUG] Logical replication failure "ERROR: could not map filenode "base/13237/442428" to relation OID" with catalog modifying txns
From | Amit Kapila
---|---
Subject | Re: [BUG] Logical replication failure "ERROR: could not map filenode "base/13237/442428" to relation OID" with catalog modifying txns
Date |
Msg-id | CAA4eK1KZHnHhZdL1-Fa-6+sw4JHEPjLjZ=+wTwRHnkooXoxp2A@mail.gmail.com
In reply to | Re: [BUG] Logical replication failure "ERROR: could not map filenode "base/13237/442428" to relation OID" with catalog modifying txns (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses | Re: [BUG] Logical replication failure "ERROR: could not map filenode "base/13237/442428" to relation OID" with catalog modifying txns
List | pgsql-hackers
On Tue, Jul 12, 2022 at 11:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > I'm doing benchmark tests and will share the results.
>
> I've done benchmark tests to measure the overhead introduced by doing
> bsearch() every time when decoding a commit record. I've simulated a
> very intense situation where we decode 1M commit records while
> keeping the builder->catchange.xip array, but the overhead is negligible:
>
> HEAD: 584 ms
> Patched: 614 ms
>
> I've attached the benchmark script I used. With LOG_SNAPSHOT_INTERVAL_MS
> increased to 90000, the last decoding by pg_logical_slot_get_changes()
> decodes 1M commit records while keeping catalog modifying transactions.
>

Thanks for the test. We should also see how it performs when (a) we
don't change LOG_SNAPSHOT_INTERVAL_MS, and (b) we have more DDL xacts
so that the array to search is somewhat bigger.

--
With Regards,
Amit Kapila.