Thread: BF mamba failure


BF mamba failure

From: Amit Kapila
Date:
Hi,

Peter Smith has recently reported a BF failure [1]. AFAICS, the call
stack of failure [2] is as follows:

0x1e66644 <ExceptionalCondition+0x8c> at postgres
0x1d0143c <pgstat_release_entry_ref+0x4c0> at postgres
0x1d02534 <pgstat_get_entry_ref+0x780> at postgres
0x1cfb120 <pgstat_prep_pending_entry+0x8c> at postgres
0x1cfd590 <pgstat_report_disconnect+0x34> at postgres
0x1cfbfc0 <pgstat_shutdown_hook+0xd4> at postgres
0x1ca7b08 <shmem_exit+0x7c> at postgres
0x1ca7c74 <proc_exit_prepare+0x70> at postgres
0x1ca7d2c <proc_exit+0x18> at postgres
0x1cdf060 <PostgresMain+0x584> at postgres
0x1c203f4 <ServerLoop.isra.0+0x1e88> at postgres
0x1c2161c <PostmasterMain+0xfa4> at postgres
0x1edcf94 <main+0x254> at postgres

I couldn't correlate it to the recent commits. Any thoughts?

[1] - https://www.postgresql.org/message-id/CAHut%2BPsHdWFjU43VEX%2BR-8de6dFQ-_JWrsqs%3DvWek1hULexP4Q%40mail.gmail.com
[2] -
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-17%2005%3A36%3A10

-- 
With Regards,
Amit Kapila.



Re: BF mamba failure

From: Tom Lane
Date:
Amit Kapila <amit.kapila16@gmail.com> writes:
> Peter Smith has recently reported a BF failure [1]. AFAICS, the call
> stack of failure [2] is as follows:

Note the assertion report a few lines further up:

TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 560, PID: 25004

            regards, tom lane



Re: BF mamba failure

From: Alexander Lakhin
Date:
Hi,

18.03.2023 07:26, Tom Lane wrote:
Amit Kapila <amit.kapila16@gmail.com> writes:
Peter Smith has recently reported a BF failure [1]. AFAICS, the call
stack of failure [2] is as follows:
Note the assertion report a few lines further up:

TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 560, PID: 25004

This assertion failure can be reproduced easily with the attached patch:
============== running regression test queries        ==============
test oldest_xmin                  ... ok           55 ms
test oldest_xmin                  ... FAILED (test process exited with exit code 1)      107 ms
test oldest_xmin                  ... FAILED (test process exited with exit code 1)        8 ms
============== shutting down postmaster               ==============

contrib/test_decoding/output_iso/log/postmaster.log contains:
TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 561, PID: 456844

With the sleep placed above Assert(entry_ref->shared_entry->dropped) this Assert fails too.
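
For readers without the attachment: conceptually, the injected delay sits in
pgstat_release_entry_ref() in pgstat_shmem.c, right after the refcount has been
decremented. The sketch below is only illustrative; the exact placement and
duration are in the attached patch:

    /* excerpt from pgstat_release_entry_ref(), with the injected delay */
    if (pg_atomic_fetch_sub_u32(&entry_ref->shared_entry->refcount, 1) == 1)
    {
        PgStatShared_HashEntry *shent;

        pg_usleep(1000000L);    /* injected: give another backend time to
                                 * drop and recreate the entry */

        Assert(entry_ref->shared_entry->dropped);

        shent = dshash_find(pgStatLocal.shared_hash,
                            &entry_ref->shared_entry->key,
                            true);
        /* original cleanup of the shared entry continues here */
    }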

Best regards,
Alexander
Attachments

Re: BF mamba failure

From: Peter Smith
Date:
On Sun, Mar 19, 2023 at 2:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:
>
> Hi,
>
> 18.03.2023 07:26, Tom Lane wrote:
>
> Amit Kapila <amit.kapila16@gmail.com> writes:
>
> Peter Smith has recently reported a BF failure [1]. AFAICS, the call
> stack of failure [2] is as follows:
>
> Note the assertion report a few lines further up:
>
> TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 560, PID: 25004
>
>
> This assertion failure can be reproduced easily with the attached patch:
> ============== running regression test queries        ==============
> test oldest_xmin                  ... ok           55 ms
> test oldest_xmin                  ... FAILED (test process exited with exit code 1)      107 ms
> test oldest_xmin                  ... FAILED (test process exited with exit code 1)        8 ms
> ============== shutting down postmaster               ==============
>
> contrib/test_decoding/output_iso/log/postmaster.log contains:
> TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 561, PID: 456844
>
> With the sleep placed above Assert(entry_ref->shared_entry->dropped) this Assert fails too.
>
> Best regards,
> Alexander

I used a slightly modified* patch of Alexander's [1] applied to the
latest HEAD code (but with my "toptxn" patch reverted).
* the patch was modified in that I injected a 'sleep' both above and
below the Assert(entry_ref->shared_entry->dropped).

Using this I was also able to reproduce the problem. But test failures
were rare. The make check-world seemed OK, and indeed the
test_decoding tests would also appear to PASS around 14 out of 15
times.

============== running regression test queries        ==============
test oldest_xmin                  ... ok          342 ms
test oldest_xmin                  ... ok          121 ms
test oldest_xmin                  ... ok          283 ms
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 3 tests passed.
=====================

~~

Often (but not always), despite test_decoding reporting all 3 tests as
"ok", I still observed a TRAP in the logfile
(contrib/test_decoding/output_iso/log/postmaster.log):
TRAP: failed Assert("entry_ref->shared_entry->dropped")

~~

Occasionally (about 1 in 15 test runs) the test would fail the same
way as described by Alexander [1], with the accompanying TRAP:
TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 562, PID: 32013

============== running regression test queries        ==============
test oldest_xmin                  ... ok          331 ms
test oldest_xmin                  ... ok           91 ms
test oldest_xmin                  ... FAILED      702 ms
============== shutting down postmaster               ==============

======================
 1 of 3 tests failed.
======================


~~


FWIW, the "toptxn" patch. whose push coincided with the build-farm
error I first reported [2], turns out to be an innocent party in this
TRAP. We know this because all of the above results were running using
HEAD code but with that "toptxn" patch reverted.

------
[1] https://www.postgresql.org/message-id/1941b7e2-be7c-9c4c-8505-c0fd05910e9a%40gmail.com
[2] https://www.postgresql.org/message-id/CAHut%2BPsHdWFjU43VEX%2BR-8de6dFQ-_JWrsqs%3DvWek1hULexP4Q%40mail.gmail.com

Kind Regards,
Peter Smith.
Fujitsu Australia



Re: BF mamba failure

From: Alexander Lakhin
Date:
Hello hackers,

20.03.2023 09:10, Peter Smith wrote:
>
> Using this I was also able to reproduce the problem. But test failures
> were rare. The make check-world seemed OK, and indeed the
> test_decoding tests would also appear to PASS around 14 out of 15
> times.

I've stumbled upon this assertion failure again during testing following cd312adc5.

This time I've simplified the reproducer to the attached modification.
With this patch applied, `make -s check -C contrib/test_decoding` fails on master as below:
ok 1         - pgstat_rc_1                                14 ms
not ok 2     - pgstat_rc_2                              1351 ms


contrib/test_decoding/output_iso/log/postmaster.log contains:
TRAP: failed Assert("pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0"), File: "pgstat_shmem.c", Line: 562, PID: 1130928

With extra logging added, I see the following events happening:
1) pgstat_rc_1.setup calls pgstat_create_replslot(), gets
   ReplicationSlotIndex(slot) = 0 and calls
   pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0).

2) pgstat_rc_1.s0_get_changes executes pg_logical_slot_get_changes(...)
   and then calls pgstat_gc_entry_refs on shmem_exit() ->
   pgstat_shutdown_hook() ...;
   with the sleep added inside pgstat_release_entry_ref, this backend waits
   after decreasing entry_ref->shared_entry->refcount to 0.

3) pgstat_rc_1.stop removes the replication slot.

4) pgstat_rc_2.setup calls pgstat_create_replslot(), gets
   ReplicationSlotIndex(slot) = 0 and calls
   pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0),
   which leads to the call pgstat_reinit_entry(), which increases refcount
   for the same shared_entry as in (1) and (2), and then to the call
   pgstat_acquire_entry_ref(), which increases refcount once more.

5) the backend 2 reaches
Assert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0),
   which fails due to refcount = 2.
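
The collision between (1) and (4) follows from how the replication slot stats
key is built: it carries nothing slot-specific beyond the slot's index, so a
slot that is dropped and recreated at index 0 maps to the very same shared
hash entry. A simplified sketch paraphrasing pgstat_create_replslot() (the
wrapper name below is made up for illustration):

    static PgStat_EntryRef *
    replslot_stats_ref(ReplicationSlot *slot)
    {
        /* key = (PGSTAT_KIND_REPLSLOT, InvalidOid, slot index) */
        return pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT,
                                           InvalidOid,
                                           ReplicationSlotIndex(slot), /* 0 */
                                           false);
    }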

Best regards,
Alexander
Attachments

Re: BF mamba failure

From: Michael Paquier
Date:
On Wed, Jun 12, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:
> With extra logging added, I see the following events happening:
> 1) pgstat_rc_1.setup calls pgstat_create_replslot(), gets
>   ReplicationSlotIndex(slot) = 0 and calls
>   pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0).
>
> 2) pgstat_rc_1.s0_get_changes executes pg_logical_slot_get_changes(...)
>   and then calls pgstat_gc_entry_refs on shmem_exit() ->
>   pgstat_shutdown_hook() ...;
>   with the sleep added inside pgstat_release_entry_ref, this backend waits
>   after decreasing entry_ref->shared_entry->refcount to 0.
>
> 3) pgstat_rc_1.stop removes the replication slot.
>
> 4) pgstat_rc_2.setup calls pgstat_create_replslot(), gets
>   ReplicationSlotIndex(slot) = 0 and calls
>   pgstat_get_entry_ref_locked(PGSTAT_KIND_REPLSLOT, InvalidOid, 0, 0),
>   which leads to the call pgstat_reinit_entry(), which increases refcount
>   for the same shared_entry as in (1) and (2), and then to the call
>   pgstat_acquire_entry_ref(), which increases refcount once more.
>
> 5) the backend 2 reaches
> Assert(pg_atomic_read_u32(&entry_ref->shared_entry->refcount) == 0),
>   which fails due to refcount = 2.

Thanks for the details.

So this comes down to the point that we offer no guarantee that the
stats entry a backend sees at shutdown is the same as the one it wants
to clean up.  That's the same problem as the one Floris has reported
here, where an OID wraparound causes tables to get the same hash key.
That can happen for all stats kinds:
https://www.postgresql.org/message-id/b14ae28029f64757bb64613be2549a74@Optiver.com

I don't think that this is going to fly far except if we introduce a
concept of "generation" or "age" in the stats entries.  The idea is
simple: when a stats entry is reinitialized because of a drop&create,
increment a counter to tell that this is a new generation, and keep
track of it in *both* PgStat_EntryRef (local backend reference to the
shmem stats entry) *and* PgStatShared_HashEntry (the central one).
When releasing an entry, if we know that the shared entry we are
pointing at is not of the same generation as the local reference, it
means that the entry has been reused for something else with the same
hash key, so give up.  It should not be that invasive, still it means
ABI breakage in the two pgstats internal structures I am mentioning,
which is OK for a backpatch as this stuff is internal.  On top of
that, this problem means that we can silently and randomly lose stats,
which is not cool.
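
As a rough sketch of that idea (function names and the exact field layout are
illustrative only, not a final patch):

    /* reusing a dropped-but-still-referenced shared entry */
    static void
    sketch_reinit_entry(PgStatShared_HashEntry *shent)
    {
        pg_atomic_fetch_add_u32(&shent->generation, 1);   /* new identity */
        shent->dropped = false;
    }

    /* taking a local reference to a shared entry */
    static void
    sketch_attach_ref(PgStat_EntryRef *entry_ref, PgStatShared_HashEntry *shent)
    {
        entry_ref->shared_entry = shent;
        entry_ref->generation = pg_atomic_read_u32(&shent->generation);
    }

    /* deciding whether a release may drop the shared entry */
    static bool
    sketch_can_drop(PgStat_EntryRef *entry_ref)
    {
        /*
         * A mismatch means the shared entry has been reused for another
         * object with the same hash key: give up on the drop.
         */
        return pg_atomic_read_u32(&entry_ref->shared_entry->generation) ==
            entry_ref->generation;
    }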

Note that Noah has been working on a better integration of injection
points with isolation tests.  We could reuse that here to have a test
case, with an injection point waiting around the pg_usleep() you have
hardcoded:
https://www.postgresql.org/message-id/20240614003549.c2.nmisch@google.com

I'll try to give it a go on Monday.
--
Michael

Attachments

Re: BF mamba failure

From: Michael Paquier
Date:
On Fri, Jun 14, 2024 at 02:31:37PM +0900, Michael Paquier wrote:
> I don't think that this is going to fly far except if we introduce a
> concept of "generation" or "age" in the stats entries.  The idea is
> simple: when a stats entry is reinitialized because of a drop&create,
> increment a counter to tell that this is a new generation, and keep
> track of it in *both* PgStat_EntryRef (local backend reference to the
> shmem stats entry) *and* PgStatShared_HashEntry (the central one).
> When releasing an entry, if we know that the shared entry we are
> pointing at is not of the same generation as the local reference, it
> means that the entry has been reused for something else with the same
> hash key, so give up.  It should not be that invasive, still it means
> ABI breakage in the two pgstats internal structures I am mentioning,
> which is OK for a backpatch as this stuff is internal.  On top of
> that, this problem means that we can silently and randomly lose stats,
> which is not cool.
>
> I'll try to give it a go on Monday.

Here you go, the patch introduces what I've named an "age" counter
attached to the shared entry references, and copied over to the local
references.  The counter is initialized at 0 and incremented each
time an entry is reused; when attempting to drop an entry, we
cross-check the version held locally with the shared one.

While looking at the whole, I noticed today that this is close to a
concept patch sent previously, where a counter is used in the shared
entry with a cross-check done against the local reference; it was
posted here:
https://www.postgresql.org/message-id/20230603.063418.767871221863527769.horikyota.ntt@gmail.com

The logic is different though: we don't need to care about the
contents of the local cache when cross-checking the "age" count while
retrieving the contents, only about the case where a backend attempts
to drop an entry it thinks is OK to operate on but that got reused
because other backends did creates and drops with the same hash key.

This idea needs more eyes, so I am adding that to the next CF for now.
I've played with it for a few hours, running concurrent replication slot
drops/creates, without breaking it.  I have not implemented an
isolation test for this case, as it depends on where we are going with
their integration with injection points.
--
Michael

Attachments

Re: BF mamba failure

From: Kouber Saparev
Date:
Am I correct to believe this patch is fixing the "can only drop stats once" issue?

It just happened to us: one of our streaming replicas decided to shut down.

Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Thu, Oct 10, 2024 at 06:19:30AM +0000, Kouber Saparev wrote:
> Am I correct to believe this patch is fixing the "can only drop stats once" issue?
> 
> It just happened to us, one of our streaming replicas decided to shut down.

Does the error message look like this (added in [1]):

"
trying to drop stats entry already dropped: kind=%s dboid=%u objoid=%u refcount=%u",
"

by any chance? If so, would you mind sharing it?

Regards,

[1]: https://postgr.es/c/d92573adcb

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: BF mamba failure

From: Kouber Saparev
Date:
On Thu, Oct 10, 2024 at 17:42 Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote:

Does the error message look like this (added in [1]):

"
trying to drop stats entry already dropped: kind=%s dboid=%u objoid=%u refcount=%u",
"

by any chance? If so, would you mind sharing it?

Regards,

[1]: https://postgr.es/c/d92573adcb

Unfortunately not, we are running 15.4 and planning to upgrade very soon. Is the patch mentioned already merged in PostgreSQL 16?

The last lines in the log are:

2024-09-22 07:33:53.913 UTC,,,223214,,64e79dc1.367ee,3255151,,2023-08-24 18:13:21 UTC,1/0,0,FATAL,XX000,"can only drop stats once",,,,,"WAL redo at 45F8F/DB943818 for Transaction/COMMIT: 2024-09-22 07:24:49.603408+00; dropped stats: 2/16420/3237787065 2/16420/3237787064 2/16420/3237787061; inval msgs: catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 relcache 3237787065 relcache 3237787064 snapshot 2608 relcache 3237787064 snapshot 2608 snapshot 2608 snapshot 2608 relcache 3237787061 snapshot 2608 snapshot 1214",,,,"","startup",,0
2024-09-22 07:33:54.501 UTC,,,223207,,64e79dc0.367e7,1423,,2023-08-24 18:13:20 UTC,,0,LOG,00000,"startup process (PID 223214) exited with exit code 1",,,,,,,,,"","postmaster",,0
2024-09-22 07:33:54.501 UTC,,,223207,,64e79dc0.367e7,1424,,2023-08-24 18:13:20 UTC,,0,LOG,00000,"terminating any other active server processes",,,,,,,,,"","postmaster",,0
2024-09-22 07:33:55.398 UTC,,,223207,,64e79dc0.367e7,1425,,2023-08-24 18:13:20 UTC,,0,LOG,00000,"shutting down due to startup process failure",,,,,,,,,"","postmaster",,0
2024-09-22 07:33:56.697 UTC,,,223207,,64e79dc0.367e7,1426,,2023-08-24 18:13:20 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0

--
Kouber

 

Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Fri, Oct 11, 2024 at 11:07:29AM +0300, Kouber Saparev wrote:
> On Thu, Oct 10, 2024 at 17:42 Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
> wrote:
> 
> >
> > Does the error message look like this (added in [1]):
> >
> > "
> > trying to drop stats entry already dropped: kind=%s dboid=%u objoid=%u
> > refcount=%u",
> > "
> >
> > by any chance? If so, would you mind sharing it?
> >
> > Regards,
> >
> > [1]: https://postgr.es/c/d92573adcb
> 
> 
> Unfortunately not, we are running 15.4 and planning to upgrade very soon.
> Is the patch mentioned already merged in PostgreSQL 16?

Yes, as of 16.4.

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Wed, Oct 16, 2024 at 09:43:48AM +0900, Michael Paquier wrote:
> On Fri, Oct 11, 2024 at 08:18:58AM +0000, Bertrand Drouvot wrote:
> > On Fri, Oct 11, 2024 at 11:07:29AM +0300, Kouber Saparev wrote:
> >> Unfortunately not, we are running 15.4 and planning to upgrade very soon.
> >> Is the patch mentioned already merged in PostgreSQL 16?
> > 
> > Yes, as of 16.4.
> 
> Right.  I'd surely welcome more eyes on what I have posted here:
> https://www.postgresql.org/message-id/Zm-8Xo93K9yD9fy7@paquier.xyz

I applied your patch and confirm that it fixes the issue. While that works, I
wonder whether there is another way to fix the issue.

Indeed, in pgstat_release_entry_ref(), we're doing:

"
if (pg_atomic_fetch_sub_u32(&entry_ref->shared_entry->refcount, 1) == 1)
.
.
    shent = dshash_find(pgStatLocal.shared_hash,
                        &entry_ref->shared_entry->key,
                        true);
"

I wonder if we are not decrementing &entry_ref->shared_entry->refcount too early.

I mean, wouldn't it make sense to decrement it after the dshash_find() call?
(to ensure "proper" locking of the whole entry, done in dshash_find(), first)
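
A rough sketch of that reordering (purely illustrative, not a committed change;
the cleanup of the entry's dsa-allocated body is omitted):

    static void
    sketch_release_entry_ref(PgStat_EntryRef *entry_ref)
    {
        PgStatShared_HashEntry *shent;

        /* lock the shared entry's partition before touching the refcount */
        shent = dshash_find(pgStatLocal.shared_hash,
                            &entry_ref->shared_entry->key,
                            true);

        if (pg_atomic_fetch_sub_u32(&entry_ref->shared_entry->refcount, 1) == 1)
        {
            /*
             * Last reference gone: remove the entry while still holding the
             * partition lock, so a concurrent create with the same hash key
             * cannot reinitialize it in between.
             */
            if (shent)
                dshash_delete_entry(pgStatLocal.shared_hash, shent);
        }
        else if (shent)
            dshash_release_lock(pgStatLocal.shared_hash, shent);
    }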

> I am a bit annoyed by the fact of making PgStatShared_HashEntry
> slightly larger to track the age of the entries,

Yeah, FWIW, we would be going from 32 bytes:

(gdb)  ptype /o struct PgStatShared_HashEntry
/* offset      |    size */  type = struct PgStatShared_HashEntry {
/*      0      |      16 */    PgStat_HashKey key;
/*     16      |       1 */    _Bool dropped;
/* XXX  3-byte hole      */
/*     20      |       4 */    pg_atomic_uint32 refcount;
/*     24      |       8 */    dsa_pointer body;

                               /* total size (bytes):   32 */
                             }

to 40 bytes (with the patch applied):

(gdb) ptype /o struct PgStatShared_HashEntry
/* offset      |    size */  type = struct PgStatShared_HashEntry {
/*      0      |      16 */    PgStat_HashKey key;
/*     16      |       1 */    _Bool dropped;
/* XXX  3-byte hole      */
/*     20      |       4 */    pg_atomic_uint32 refcount;
/*     24      |       4 */    pg_atomic_uint32 agecount;
/* XXX  4-byte hole      */
/*     32      |       8 */    dsa_pointer body;

                               /* total size (bytes):   40 */
                             }

Due to the padding, that's an 8-byte increase while we're adding "only" 4 bytes.

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Thu, Nov 14, 2024 at 04:55:23PM +0900, Michael Paquier wrote:
> I've been going through this patch again, and the failures that could
> be seen because of this, like standbys failing in an
> unpredictable way, are not cool, so I am planning to apply the attached
> down to 15 now that the release has cooled down.  At the end I am not
> really stressing about this addition in the structure for the sake of
> making the stats entries safe to handle.

I don't think the addition is a concern either.

> I did not like much the name "agecount" though, used to cross-check
> how many times an entry is reused in shared memory and in the local
> copy we keep in a process, so I've renamed it to "generation".

"generation" sounds more appropriate to me too.

The approach to avoid the error sounds reasonable to me.

Just 2 comments about the patch:

=== 1

Maybe use "generation" instead of generation in the comments (where it's not done,
some comments do it already).

=== 2

We could think about adding a test. That should be doable with replication slots
or custom pgstats kinds and probably injection points. But I'm not sure that's
worth it, as the code in the patch looks "simple" enough. Thoughts?

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: BF mamba failure

From: Kouber Saparev
Date:
We just had the same sudden replica shutdown, this time with version 17.3.

2025-08-02 22:10:02.229 UTC,,,473966,,67b3d76c.73b6e,14,,2025-02-18 00:42:20 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already dropped: kind=relation dboid=16420 objoid=4169049057 refcount=1",,,,,"WAL redo at 6337E/9FF275E8 for Transaction/COMMIT: 2025-08-02
22:10:02.163756+00; dropped stats: 2/16420/4169049057 2/16420/4169049056 2/16420/4169049053 2/16420/4169049071 2/16420/4169049070 2/16420/4169049068 2/16420/4169049064 2/16420/4169049154 2/16420/4169049153 2/16420/4169049152 2/16420/4169049149; inval msgs: catcache 80 c
atcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7
catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 32 cat
cache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7
catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7
catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache
7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 ca
tcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 cat
cache 32 catcache 7 catcache 6 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 cat
cache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 cat
cache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 c
atcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catca
che 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 32 catca
che 7 catcache 6 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcac
he 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcac
he 6 catcache 7 catcache 6 catcache 55 catcache 54 snapshot 2608 relcache 4169049053 snapshot 1214 relcache 4169049056 relcache 4169049057 relcache 4169049056 snapshot 2608 relcache 4169049056 relcache 4169049057 relcache 4169049053 snapshot 2608 relcache 4169049057 rel
cache 4169049056 snapshot 2608 relcache 4169049056 snapshot 2608 snapshot 2608 snapshot 2608 relcache 4169049053 snapshot 2608 snapshot 1214 snapshot 2608 relcache 4169049064 snapshot 1214 relcache 4169049068 relcache 4169049070 relcache 4169049068 snapshot 2608 relcach
e 4169049068 relcache 4169049070 relcache 4169049064 snapshot 2608 relcache 4169049071 relcache 4169049064 snapshot 2608 relcache 4169049064 relcache 4169049071 relcache 4169049071 relcache 4169049064 snapshot 2608 relcache 4169049070 relcache 4169049068 snapshot 2608 r
elcache 4169049068 snapshot 2608 snapshot 2608 snapshot 2608 relcache 4169049064 snapshot 2608 snapshot 1214 snapshot 2608 relcache 4169049149 snapshot 1214 relcache 4169049152 relcache 4169049153 relcache 4169049152 snapshot 2608 relcache 4169049152 relcache 4169049153
 relcache 4169049149 snapshot 2608 relcache 4169049154 relcache 4169049149 snapshot 2608 relcache 4169049149 relcache 4169049154 relcache 4169049154 relcache 4169049149 snapshot 2608 relcache 4169049153 relcache 4169049152 snapshot 2608 relcache 4169049152 snapshot 2608
 snapshot 2608 snapshot 2608 relcache 4169049149 snapshot 2608 snapshot 1214",,,,"","startup",,0
2025-08-02 22:10:02.389 UTC,,,473962,,67b3d76b.73b6a,27,,2025-02-18 00:42:19 UTC,,0,LOG,00000,"startup process (PID 473966) exited with exit code 1",,,,,,,,,"","postmaster",,0
2025-08-02 22:10:02.389 UTC,,,473962,,67b3d76b.73b6a,28,,2025-02-18 00:42:19 UTC,,0,LOG,00000,"terminating any other active server processes",,,,,,,,,"","postmaster",,0
2025-08-02 22:10:02.516 UTC,,,473962,,67b3d76b.73b6a,29,,2025-02-18 00:42:19 UTC,,0,LOG,00000,"shutting down due to startup process failure",,,,,,,,,"","postmaster",,0
2025-08-02 22:10:02.563 UTC,,,473962,,67b3d76b.73b6a,30,,2025-02-18 00:42:19 UTC,,0,LOG,00000,"database system is shut down",,,,,,,,,"","postmaster",,0

On Fri, Nov 15, 2024 at 6:45 Michael Paquier <michael@paquier.xyz> wrote:
On Thu, Nov 14, 2024 at 10:07:41AM +0000, Bertrand Drouvot wrote:
> === 1
>
> Maybe use "generation" (in quotes) instead of generation in the comments where
> it's not done yet (some comments already do it).

I've tweaked things to be more consistent, and applied that down to 15.

> === 2
>
> We could think about adding a test. That should be doable with replication slots
> or custom pgstats kinds and probably injection points. But I'm not sure that's
> worth it, as the code in the patch looks "simple" enough. Thoughts?

The tricky part is that this path involves a backend shutdown.  We
should be able to do something with a wait before the dshash_find() in
pgstat_release_entry_ref() with a BackgroundPsql object that gets
stopped, but I'm not sure if we can guarantee a lookup of
pg_stat_activity at this stage.  Let me see..
--
Michael

Re: BF mamba failure

From: Michael Paquier
Date:
On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
> We just had the same sudden replica shutdown, this time with version 17.3.
>
> 2025-08-02 22:10:02.229 UTC,,,473966,,67b3d76c.73b6e,14,,2025-02-18
> 00:42:20 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already
> dropped: kind=relation dboid=16420 objoid=4169049057 refcount=1",,,,,"WAL
> redo at 6337E/9FF275E8 for Transaction/COMMIT: 2025-08-02

You are close to wraparound for this relation.  Is that vanilla
streaming replication with heap?
--
Michael

Attachments

Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Thu, Aug 07, 2025 at 08:18:26AM +0900, Michael Paquier wrote:
> On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
> > We just had the same sudden replica shutdown, this time with version 17.3.
> > 
> > 2025-08-02 22:10:02.229 UTC,,,473966,,67b3d76c.73b6e,14,,2025-02-18
> > 00:42:20 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already
> > dropped: kind=relation dboid=16420 objoid=4169049057 refcount=1",,,,,"WAL
> > redo at 6337E/9FF275E8 for Transaction/COMMIT: 2025-08-02
> 
> You are close to wraparound for this relation.  Is that vanilla
> streaming replication with heap?

I was wondering about adding the "generation" (introduced in 818119afccd3) to
the above error log message, thoughts?

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: BF mamba failure

From: Michael Paquier
Date:
On Thu, Aug 07, 2025 at 05:28:17AM +0000, Bertrand Drouvot wrote:
> On Thu, Aug 07, 2025 at 08:18:26AM +0900, Michael Paquier wrote:
>> On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
>> You are close to wraparound for this relation.  Is that vanilla
>> streaming replication with heap?
>
> I was wondering about adding the "generation" (introduced in 818119afccd3) to
> the above error log message, thoughts?

Yes, that could prove to be useful when debugging such issues.  We do
not have a lot of information here.
--
Michael

Attachments

Re: BF mamba failure

From: Bertrand Drouvot
Date:
Hi,

On Thu, Aug 07, 2025 at 04:17:29PM +0900, Michael Paquier wrote:
> On Thu, Aug 07, 2025 at 05:28:17AM +0000, Bertrand Drouvot wrote:
> > On Thu, Aug 07, 2025 at 08:18:26AM +0900, Michael Paquier wrote:
> >> On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
> >> You are close to wraparound for this relation.  Is that vanilla
> >> streaming replication with heap?
> > 
> > I was wondering about adding the "generation" (introduced in 818119afccd3) to
> > the above error log message, thoughts?
> 
> Yes, that could prove to be useful when debugging such issues.  We do
> not have a lot of information here.

Yes, done in the attached.

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com

Attachments

Re: BF mamba failure

From: Michael Paquier
Date:
On Thu, Aug 07, 2025 at 08:35:17AM +0000, Bertrand Drouvot wrote:
> On Thu, Aug 07, 2025 at 04:17:29PM +0900, Michael Paquier wrote:
>> Yes, that could prove to be useful when debugging such issues.  We do
>> not have a lot of information here.
>
> Yes, done in the attached.

Thanks.  With the next minor release happening next week, I've done that
now so that we will be able to gather more information from the field.
--
Michael

Attachments

Re: BF mamba failure

From: Kouber Saparev
Date:
Indeed, nothing exotic about our replication.

As for the object 4169049057, I am not able to find it anywhere in the catalogs. Perhaps it was something that was dropped in the meantime.

On Thu, Aug 7, 2025 at 2:18 Michael Paquier <michael@paquier.xyz> wrote:
On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
> We just had the same sudden replica shutdown, this time with version 17.3.
>
> 2025-08-02 22:10:02.229 UTC,,,473966,,67b3d76c.73b6e,14,,2025-02-18
> 00:42:20 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already
> dropped: kind=relation dboid=16420 objoid=4169049057 refcount=1",,,,,"WAL
> redo at 6337E/9FF275E8 for Transaction/COMMIT: 2025-08-02

You are close to wraparound for this relation.  Is that vanilla
streaming replication with heap?
--
Michael

Re: BF mamba failure

From: Kouber Saparev
Date:
Yet again one of our replicas died. Should I file a bug report or something? What should we do in order to prevent it? Restart the database every month/week or so?...

2025-09-09 12:08:10.688 UTC,,,1510554,,67b3d8bc.170c9a,337,,2025-02-18 00:47:56 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already dropped: kind=relation dboid=16420 objoid=767325170 refcount=2"
,,,,,"WAL redo at 66E4F/74A0F210 for Transaction/COMMIT: 2025-09-09 12:08:10.657171+00; dropped stats: 2/16420/767325174 2/16420/767325173 2/16420/767325170 2/16420/767325192 2/16420/767325191 2/16420/767
325189 2/16420/767325186 2/16420/767325198 2/16420/767325197 2/16420/767325196 2/16420/767325193; inval msgs: catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6
catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catc
ache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache
7 catcache 6 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catc
ache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 5
5 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7
catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache
7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 c
atcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 32 catc
ache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 32 catcache 7 catcach
e 6 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache
7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache
7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 8
0 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 c
atcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catca
che 6 catcache 7 catcache 6 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54
catcache 7 catcache 6 catcache 32 catcache 55 catcache 54 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 55 catcache 54 catcache 32 catcache 7 catcache 6 catcache 7 catcache 6 catcache
55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 c
atcache 6 catcache 55 catcache 54 catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 c
atcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 snapshot 2608 relcache 767325170 snapshot 1214 relcache 767325173 relcache 767325174 relcache 767325173 snapshot 2608 relcache
767325173 relcache 767325174 relcache 767325170 snapshot 2608 relcache 767325174 relcache 767325173 snapshot 2608 relcache 767325173 snapshot 2608 snapshot 2608 snapshot 2608 relcache 767325170 snapshot
2608 snapshot 1214 snapshot 2608 relcache 767325186 snapshot 1214 relcache 767325189 relcache 767325191 relcache 767325189 snapshot 2608 relcache 767325189 relcache 767325191 relcache 767325186 snapshot 2
608 relcache 767325192 relcache 767325186 snapshot 2608 relcache 767325186 relcache 767325192 relcache 767325192 relcache 767325186 snapshot 2608 relcache 767325191 relcache 767325189 snapshot 2608 relcac
he 767325189 snapshot 2608 snapshot 2608 snapshot 2608 relcache 767325186 snapshot 2608 snapshot 1214 snapshot 2608 relcache 767325193 snapshot 1214 relcache 767325196 relcache 767325197 relcache 76732519
6 snapshot 2608 relcache 767325196 relcache 767325197 relcache 767325193 snapshot 2608 relcache 767325198 relcache 767325193 snapshot 2608 relcache 767325193 relcache 767325198 relcache 767325198 relcache
767325193 snapshot 2608 relcache 767325197 relcache 767325196 snapshot 2608 relcache 767325196 snapshot 2608 snapshot 2608 snapshot 2608 relcache 767325193 snapshot 2608 snapshot 1214",,,,"","startup",,0

On Fri, Aug 8, 2025 at 12:04 Kouber Saparev <kouber@gmail.com> wrote:
Indeed, nothing exotic about our replication.

As for the object 4169049057, I am not able to find it anywhere in the catalogs. Perhaps it was something that was dropped in the meantime.

On Thu, Aug 7, 2025 at 2:18 Michael Paquier <michael@paquier.xyz> wrote:
On Mon, Aug 04, 2025 at 01:00:35PM +0300, Kouber Saparev wrote:
> We just had the same sudden replica shutdown, this time with version 17.3.
>
> 2025-08-02 22:10:02.229 UTC,,,473966,,67b3d76c.73b6e,14,,2025-02-18
> 00:42:20 UTC,8111/0,0,FATAL,XX000,"trying to drop stats entry already
> dropped: kind=relation dboid=16420 objoid=4169049057 refcount=1",,,,,"WAL
> redo at 6337E/9FF275E8 for Transaction/COMMIT: 2025-08-02

You are close to wraparound for this relation.  Is that vanilla
streaming replication with heap?
--
Michael

Re: BF mamba failure

From: Michael Paquier
Date:
On Tue, Sep 09, 2025 at 04:07:45PM +0300, Kouber Saparev wrote:
> Yet again one of our replicas died. Should I file a bug report or
> something, what should we do in order to prevent it? Restart the database
> every month/week or so?...

I don't think we need another bug report for the same problem.  We are
aware that this may be an issue and that it is hard to track; the
problem, at this stage, is to find room to be able to investigate it.

I may be able to come back to it soon-ishly, looking at how to trigger
any race condition.  The difficulty is figuring out how the current code
is able to reach this state, because we have a race condition at hand
in standbys.

As a start, are these failures only in the startup process?  Has the
startup process reached a consistent state when the problem happens
because the replay code is too eager at removing the stats entries?
Or has it not reached a consistent state?  These could be useful hints to
extract a reproducible test case, looking for common patterns.

I'll ask around if I have seen cases like that in the user pool I have
an access to.
--
Michael

Attachments

Re: BF mamba failure

From: Kouber Saparev
Date:
On Thu, Sep 11, 2025 at 2:28 Michael Paquier <michael@paquier.xyz> wrote:
As a start, are these failures only in the startup process?  Has the
startup process reached a consistent state when the problem happens
because the replay code is too eager at removing the stats entries?
Or has it not reached a consistent state?  These could be useful hints to
extract a reproducible test case, looking for common patterns.

The pattern is the same, although I am not 100% sure that the same replica is having this - it is a cascaded streaming replica, i.e. a replica of another replica. Once we had this in October 2024, with version 15.4, then in August 2025 with 17.3, and now in September again (17.3). The database is working for month(s) perfectly fine in a heavy production workload (lots of WALs etc.), and then all of a sudden it shuts down.

Thanks for the feedback, and let me know if I could provide any additional info. 

--
Kouber

Re: BF mamba failure

From: Michael Paquier
Date:
On Thu, Sep 11, 2025 at 04:35:01PM +0300, Kouber Saparev wrote:
> The pattern is the same, although I am not 100% sure that the same replica
> is having this - it is a cascaded streaming replica, i.e. a replica of
> another replica. Once we had this in October 2024, with version 15.4, then
> in August 2025 with 17.3, and now in September again (17.3). The database
> is working for month(s) perfectly fine in a heavy production workload (lots
> of WALs etc.), and then all of a sudden it shuts down.

The shutdown is caused by the startup process choking on redo.  FWIW

> Thanks for the feedback, and let me know if I could provide any additional
> info.

Okay, the bit about the cascading standby is a useful piece of
information.  Do you have some data about the relation reported in the
error message that this is choking on, based on its OID?  Is this actively
used in read-only workloads, with the relation looked at in the
cascading standby?  Is hot_standby_feedback enabled in the cascading
standby?  With which process has this cascading standby been created?
Does the workload of the primary involve a high consumption of OIDs
for relations, say many temporary tables?

Another thing that may help is the WAL record history.  Are you, for
example, seeing attempts to drop the same pgstats entry twice in WAL
records?  Perhaps the origin of the problem is in this area.  A
refcount of 2 is relevant, of course.

I have looked a bit around but nothing has popped up here, so as far
as I know you seem to be the only one impacted by that.

1d6a03ea4146 and dc5f9054186a are in 17.3, so perhaps something is
still off with the drop when applied to cascading standbys.  A vital
piece of information may also be with "generation", which would show
up in the error report thanks to bdda6ba30cbe, and that's included in
17.6.  A first thing would be to update to 17.6 and see how things
go for these cascading setups.  If it takes a couple of weeks to have
one report, we have a hunt that may take a few months at least, unless
somebody (me or someone else) is able to find out the race condition
here.
--
Michael

Attachments

Re: BF mamba failure

From: Kouber Saparev
Date:


On Fri, Sep 12, 2025 at 3:37 Michael Paquier <michael@paquier.xyz> wrote:
Okay, the bit about the cascading standby is a useful piece of
information.  Do you have some data about the relation reported in the
error message that this is choking on, based on its OID?  Is this actively
used in read-only workloads, with the relation looked at in the
cascading standby?

This objoid=767325170 is non-existent, nor was it present in the previous shutdown (objoid=4169049057). So I guess it is something quasi-temporary that has been dropped afterwards.
 
  Is hot_standby_feedback enabled in the cascading
standby?

Yes, hot_standby_feedback = on.
 
With which process has this cascading standby been created?
Does the workload of the primary involve a high consumption of OIDs
for relations, say many temporary tables?

Yes, we have around 150 entries added and deleted per second in pg_class, and around 800 in pg_attribute. So something is actively creating and dropping tables all the time.
 

Another thing that may help is the WAL record history.  Are you, for
example, seeing attempts to drop the same pgstats entry twice in WAL
records?  Perhaps the origin of the problem is in this area.  A
refcount of 2 is relevant, of course.

How could we dig into this, i.e. inspecting such attempts in the WAL records?
 

I have looked a bit around but nothing has popped up here, so as far
as I know you seem to be the only one impacted by that.

1d6a03ea4146 and dc5f9054186a are in 17.3, so perhaps something is
still off with the drop when applied to cascading standbys.  A vital
piece of information may also be with "generation", which would show
up in the error report thanks to bdda6ba30cbe, and that's included in
17.6.  A first thing would be to update to 17.6 and see how things
go for these cascading setups.  If it takes a couple of weeks to have
one report, we have a hunt that may take a few months at least, except
if somebody is able to find out the race condition here, me or someone
else.


Is it enough to upgrade the replicas, or do we need to upgrade the primary as well?

--
Kouber 

Re: BF mamba failure

From: Michael Paquier
Date:
On Tue, Sep 16, 2025 at 02:45:03PM +0300, Kouber Saparev wrote:
> On Fri, Sep 12, 2025 at 3:37 Michael Paquier <michael@paquier.xyz> wrote:
>> With which process has this cascading standby been created?
>> Does the workload of the primary involve a high consumption of OIDs
>> for relations, say many temporary tables?
>
> Yes, we have around 150 entries added and deleted per second in pg_class,
> and around 800 in pg_attribute. So something is actively creating and
> dropping tables all the time.

The number of relations would most likely matter; I doubt that
pg_attribute contributes much to the friction.  One of the test
scenarios I am running is exactly that: a bunch of CREATE/DROP tables
running concurrently with backends in a cascading standby accessing
them, while tweaking the OID generated in the control file to force
wraparounds.  I am failing to re-create your problem, for now at
least.

> > Another thing that may help is the WAL record history.  Are you for
> > example seeing attempts to drop twice the same pgstats entry in WAL
> > records?  Perhaps the origin of the problem is in this area.  A
> > refcount of 2 is relevant, of course.
> >
>
> How could we dig into this, i.e. inspecting such attempts in the WAL
> records?

Yeah, with pg_walinspect or pg_waldump, but I doubt that you retain
enough WAL history to be able to pin down something specific.  One
pattern I am looking for is duplicated drops initiated from the WAL
records when wraparound hits, or at least whether there's a window small
enough that two WAL records are generated and attempt the same entry
drop.  That's just an assumption at this stage.


> Is it enough to upgrade the replicas, or do we need to upgrade the primary as
> well?

That would be both.  There are always reasons to update to the latest
minor version of Postgres.  See the release notes for a bunch of them;
that's usually convincing enough, especially with CVEs addressed.  :)
--
Michael

Attachments