Thread: Change checkpoint‑record‑missing PANIC to FATAL


Change checkpoint‑record‑missing PANIC to FATAL

From:
Nitin Jadhav
Date:
Hi,

While working on [1], we discussed whether the redo-record-missing error should be a PANIC or a FATAL. We concluded that FATAL is the better fit: it achieves the intended behavior, and it is consistent with the backup_label path, which already reports FATAL in the same scenario.

However, when the checkpoint record is missing, the behavior remains inconsistent: without a backup_label we currently raise a PANIC, while with a backup_label the same code path reports a FATAL. Since the redo-record-missing case was already lowered to FATAL in 15f68ce, it seems reasonable to align the checkpoint‑record‑missing case as well. The existing PANIC dates back to an era before online backups and archive recovery existed, when external manipulation of WAL was not expected and such conditions were treated as internal faults. With all such features, it is much more realistic for WAL segments to go missing due to operational issues, and such cases are often recoverable. So switching this to FATAL appears appropriate.
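[Editor's note: for readers less familiar with the severities involved, here is a minimal, illustrative Python sketch — not PostgreSQL code — of the practical difference between the two levels. The level values mirror current src/include/utils/elog.h; the outcome strings summarize what elog.c effectively does after logging the report.]

```python
# Illustrative sketch of PostgreSQL error-severity semantics.
# FATAL ends only the failing process; PANIC calls abort(), which
# produces a core/backtrace and forces the postmaster to reset the
# whole cluster.  The logged message itself is identical either way.
FATAL, PANIC = 22, 23  # values as in current elog.h

def severity_outcome(elevel: int) -> str:
    if elevel >= PANIC:
        return "abort(): core dump, postmaster restarts all backends"
    return "proc_exit(): the startup process exits on its own"

print("PANIC ->", severity_outcome(PANIC))
print("FATAL ->", severity_outcome(FATAL))
```

This is why lowering the level loses nothing but the backtrace: the same error string still reaches the server log.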

Please share your thoughts.

I am happy to share a patch including a TAP test to cover this behavior once we agree to proceed.

[1]: https://www.postgresql.org/message-id/flat/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m%3DdaGqiOuVdizYWYaA%40mail.gmail.com

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Michael Paquier
Date:
On Tue, Dec 16, 2025 at 04:25:37PM +0530, Nitin Jadhav wrote:
> it seems reasonable to align the checkpoint‑record‑missing case as well.
> The existing PANIC dates back to an era before online backups and archive
> recovery existed, when external manipulation of WAL was not expected and
> such conditions were treated as internal faults. With all such features, it
> is much more realistic for WAL segments to go missing due to operational
> issues, and such cases are often recoverable. So switching this to FATAL
> appears appropriate.
>
> Please share your thoughts.

FWIW, I think that we should lift the PANIC pattern in this case, at
least to be able to provide more tests around the manipulation of WAL
segments when triggering recovery, with or without a backup_label as
much as with or without a recovery/standby.signal defined in the tree.
The PANIC pattern to blow up the backend when missing a checkpoint
record at the beginning of recovery is a historical artifact of
4d14fe0048cf.  The backend has evolved a lot since, particularly with
WAL archives that came much later than that.  Lowering that to a FATAL
does not imply a loss of information, just the lack of a backtrace
that can be triggered depending on how one has set up a cluster to
start (say a recovery.signal was forgotten and pg_wal/ has no
contents, etc.).  And IMO I doubt that a trace is really useful anyway
in this specific code path.

I'd love to hear the opinion of others on the matter, so if anybody
has comments, feel free.

I'd be curious to look at the amount of tests related to recovery
startup you have in mind anyway, Nitin.
--
Michael

Attachments

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Nitin Jadhav
Date:
> I'd be curious to look at the amount of tests related to recovery
> startup you have in mind anyway, Nitin.

Apologies for the delay.
At a high level, the recovery startup cases we want to test fall into
two main buckets:
(1) with a backup_label file and (2) without a backup_label file.

From these two situations, we can cover the following scenarios:
1) Primary crash recovery without a backup_label – Delete the WAL
segment containing the checkpoint record and try starting the server.
2) Primary crash recovery with a backup_label – Take a base backup
(which creates the backup_label), remove the checkpoint WAL segment,
and start the server with that backup directory.
3) Standby crash recovery – Stop the standby, delete the checkpoint
WAL segment, and start it again to see how standby recovery behaves.
4) PITR / archive‑recovery – Remove the checkpoint WAL segment and
start the server with a valid restore_command so it enters archive
recovery.
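[Editor's note: all four scenarios share one mechanical step — finding the WAL segment file that holds the checkpoint record. A hypothetical helper for that step, mirroring PostgreSQL's XLogFileName() macro with the default 16 MB segment size (the LSN would come from pg_controldata's "Latest checkpoint location" or from backup_label):]

```python
# Map a (timeline, LSN) pair to the on-disk WAL segment file name,
# following PostgreSQL's XLogFileName() macro.
def wal_segment_name(tli: int, lsn: str,
                     wal_segment_size: int = 16 * 1024 * 1024) -> str:
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    segno = ((hi << 32) | lo) // wal_segment_size
    segs_per_xlogid = 0x1_0000_0000 // wal_segment_size
    return "%08X%08X%08X" % (tli, segno // segs_per_xlogid,
                             segno % segs_per_xlogid)

# e.g. the segment holding a checkpoint record at 0/2000060 on timeline 1:
print(wal_segment_name(1, "0/2000060"))  # -> 000000010000000000000002
```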

Tests (2) and (4) are fairly similar, so we can merge them if they
turn out to be redundant.
These are the scenarios I have in mind so far. Please let me know if
you think anything else should be added.

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft



Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Michael Paquier
Date:
On Mon, Dec 29, 2025 at 08:39:08PM +0530, Nitin Jadhav wrote:
> Apologies for the delay.
> At a high level, the recovery startup cases we want to test fall into
> two main buckets:
> (1) with a backup_label file and (2) without a backup_label file.

For clarity's sake, we are talking about lowering this one in
xlogrecovery.c, which relates to the code path where there is no
backup_label file:
ereport(PANIC,
        errmsg("could not locate a valid checkpoint record at %X/%08X",
               LSN_FORMAT_ARGS(CheckPointLoc)));

> From these two situations, we can cover the following scenarios:
> 1) Primary crash recovery without a backup_label – Delete the WAL
> segment containing the checkpoint record and try starting the server.

Yeah, let's add a test for that.  It would be enough to remove the
segment that includes the checkpoint record.  There should be no need
to be fancy with injection points like the other test case from
15f68cebdcec.

> 2) Primary crash recovery with a backup_label – Take a base backup
> (which creates the backup_label), remove the checkpoint WAL segment,
> and start the server with that backup directory.

Okay.  I don't mind something here, for the two FATAL cases in the
code path where the backup_label exists:
- REDO record missing with checkpoint record found.  This is similar
to 15f68cebdcec.
- Checkpoint record missing.
Both should be cross-checked with the FATAL errors generated in the
server logs.

> 3) Standby crash recovery – Stop the standby, delete the checkpoint
> WAL segment, and start it again to see how standby recovery behaves.

In this case, we need to have a restore_command set anyway, no,
meaning that we should never fail?  I don't recall that we currently
have a test for that, where we could look at the server logs to check
that a segment has been retrieved because the segment that includes
the checkpoint record is missing.

> 4) PITR / archive‑recovery – Remove the checkpoint WAL segment and
> start the server with a valid restore_command so it enters archive
> recovery.

Same as 3) to me, standby mode cannot be activated without a
restore_command, and the recovery GUC checks are done in accordance with
the signal files before we attempt to read the initial checkpoint
record.

> Tests (2) and (4) are fairly similar, so we can merge them if they
> turn out to be redundant.
> These are the scenarios I have in mind so far. Please let me know if
> you think anything else should be added.

For the sake of the change from the PANIC to FATAL mentioned at the
top of this message, (1) would be enough.

The two cases of (2) I'm mentioning would be nice bonuses.  I would
recommend double-checking first whether we already trigger these
errors in some of the existing tests; perhaps we don't need to add
anything except a check in some node's logs for the wanted error
string patterns.
--
Michael

Attachments

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Nitin Jadhav
Date:
Hi Michael,

Thanks for the detailed feedback.

> For clarity's sake, we are talking about lowering this one in
> xlogrecovery.c, which relates to the code path where there is no
> backup_label file:
> ereport(PANIC,
>         errmsg("could not locate a valid checkpoint record at %X/%08X",
>                LSN_FORMAT_ARGS(CheckPointLoc)));

I agree that case (1) is sufficient for the purpose of this change. I
mentioned the scenarios where a backup_label file exists mainly to
consider additional coverage in this area, but I agree those would
only be bonuses, as you note later.

> For the sake of the change from the PANIC to FATAL mentioned at the
> top of this message, (1) would be enough.
>
> The two cases of (2) I'm mentioning would be nice bonuses.  I would
> recommend to double-check first if we trigger these errors in some
> tests of the existing tests, actually, perhaps we don't need to add
> anything except a check in some node's logs for the error string
> patterns wanted.

I agree with your assessment. Case (1) is enough for this change, and
the cases in (2) would be nice bonuses. I’m fine with dropping cases
(3), (4) for now.

I had a quick look at the existing recovery TAP tests and didn’t
immediately find a case where simply adding log checks would cover
these error paths, but I’ll double‑check once more before sending the
patch. I’ll work on this and share the patch soon.

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft



Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Michael Paquier
Date:
On Thu, Feb 19, 2026 at 08:24:02AM +0530, Nitin Jadhav wrote:
> I had a quick look at the existing recovery TAP tests and didn’t
> immediately find a case where simply adding log checks would cover
> these error paths, but I’ll double‑check once more before sending the
> patch. I’ll work on this and share the patch soon.

Thanks, Nitin.  Perhaps it would be a better approach to split the
patch into multiple pieces, with the most relevant PANIC->FATAL
switches and the most critical tests on top of the rest.  It would be
nice to get most of that by the end of the release cycle, or a rather
"good" chunk of it.
--
Michael

Attachments

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Nitin Jadhav
Date:
> Thanks, Nitin.  Perhaps it would be a better approach to split the
> patch into multiple pieces, with the most relevant PANIC->FATAL
> switches and the most critical tests on top of the rest.  It would be
> nice to get most of that by the end of the release cycle, or a rather
> "good" chunk of it.

Thanks for the suggestion, Michael, and apologies for the delay. I had
intended to send this sooner, but it took a bit more time.

I’m sharing the two patches below.

Patch 0001 adjusts the error severity during crash recovery when the
checkpoint record referenced by pg_control cannot be located and no
backup_label file is present. The error is lowered from PANIC to
FATAL. This patch also adds a new TAP test that verifies startup fails
with a clear FATAL error. The test is straightforward: it removes the
WAL segment containing the checkpoint record and confirms that the
server reports the expected error.

Patch 0002 adds two TAP tests that exercise similar missing‑WAL
scenarios during backup recovery, i.e., when a backup_label file is
present:

Missing checkpoint WAL segment referenced by backup_label:
This test uses an online backup to create a backup_label file,
extracts the checkpoint record information from it, removes the
corresponding WAL segment, and verifies that the server reports the
expected error.
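[Editor's note: a sketch of the extraction step described above, assuming the standard backup_label format; the sample content is illustrative.]

```python
import re

# Illustrative backup_label content (field layout as written by a base backup).
SAMPLE = """\
START WAL LOCATION: 0/2000028 (file 000000010000000000000002)
CHECKPOINT LOCATION: 0/2000060
BACKUP METHOD: streamed
START TIMELINE: 1
"""

def checkpoint_info(label_text: str) -> tuple:
    """Return (checkpoint LSN, start segment file name) from backup_label."""
    lsn = re.search(r"^CHECKPOINT LOCATION: (\S+)$", label_text, re.M).group(1)
    segfile = re.search(r"^START WAL LOCATION: \S+ \(file (\w+)\)$",
                        label_text, re.M).group(1)
    return lsn, segfile

print(checkpoint_info(SAMPLE))  # -> ('0/2000060', '000000010000000000000002')
```

The test would then remove the named segment from pg_wal/ and check the startup log for the expected error.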

Missing redo WAL segment referenced by the checkpoint:
In this test, redo and checkpoint records are forced into different
WAL segments using injection points. A cold backup is then taken, with
an explicit backup_label created in the restored cluster. The WAL
segment containing the redo record is removed, and startup is expected
to fail with the appropriate error message.

Please review and share your feedback. I’m happy to adjust the patches
if there are better ways to handle any of these cases, especially the
054_missing_redo_with_backup_label test.

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft


Attachments

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Michael Paquier
Date:
On Fri, Mar 06, 2026 at 11:02:41AM +0530, Nitin Jadhav wrote:
> Patch 0001 adjusts the error severity during crash recovery when the
> checkpoint record referenced by pg_control cannot be located and no
> backup_label file is present. The error is lowered from PANIC to
> FATAL. This patch also adds a new TAP test that verifies startup fails
> with a clear FATAL error. The test is straightforward: it removes the
> WAL segment containing the checkpoint record and confirms that the
> server reports the expected error.

There could be one possibility for 0001 to be unstable, actually.  One
could imagine that by getting the segment from a live server things
could fail: if the run is very slow then we may finish with an
incorrect record.  It would be possible to use pg_controldata once the
server has been shut down, but we should be on the same segment anyway
because nothing happens on the server, so I have left the test as you
have suggested, and applied this one.  You have missed an update in
meson.build, and I am not sure that there was a need for renaming 050,
either.

> Missing checkpoint WAL segment referenced by backup_label:
> This test uses an online backup to create a backup_label file,
> extracts the checkpoint record information from it, removes the
> corresponding WAL segment, and verifies that the server reports the
> expected error.
>
> Missing redo WAL segment referenced by the checkpoint:
> In this test, redo and checkpoint records are forced into different
> WAL segments using injection points. A cold backup is then taken, with
> an explicit backup_label created in the restored cluster. The WAL
> segment containing the redo record is removed, and startup is expected
> to fail with the appropriate error message.

These two are very close to the existing tests that we have on HEAD
now, could it be better to group them together instead?  Particularly,
054 reuses the injection point trick in what is clearly a copy-paste
taken from 050.  Avoiding this duplication may be nice.
--
Michael

Attachments

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Nitin Jadhav
Date:
> There could be one possibility for 0001 to be unstable, actually.  One
> could imagine that by getting the segment from a live server things
> could fail: if the run is very slow then we may finish with an
> incorrect record.  It would be possible to use pg_controldata once the
> server has been shut down, but we should be on the same segment anyway
> because nothing happens on the server, so I have left the test as you
> have suggested, and applied this one.  You have missed an update in
> meson.build, and I am not sure that there was a need for renaming 050,
> either.

Good catch—thanks. This does seem like it could apply to the earlier
redo‑record‑missing case as well. That said, since there’s no activity
on the server during the test, it should be safe as you mentioned. My
apologies for missing the meson.build update again—thanks for taking
care of it. On the renaming, I was aiming to separate things into four
tests to make the differences explicit, but I’m happy to stick with
two test files.

> These two are very close to the existing tests that we have on HEAD
> now, could it be better to group them together instead?  Particularly,
> 054 reuses the injection point trick in what is clearly a copy-paste
> taken from 050.  Avoiding this duplication may be nice.

That's a good point. I agree these new cases are very close to what we
already have on HEAD. I can rework this to avoid duplication by
grouping them with the existing tests.

Best Regards,
Nitin Jadhav
Azure Database for PostgreSQL
Microsoft

Re: Change checkpoint‑record‑missing PANIC to FATAL

From:
Michael Paquier
Date:
On Fri, Mar 20, 2026 at 07:56:46PM +0530, Nitin Jadhav wrote:
> That's a good point. I agree these new cases are very close to what we
> already have on HEAD. I can rework this to avoid duplication by
> grouping them with the existing tests.

Thanks.  I am not sure if I will be able to follow up the work of this
thread for this commit fest, so I'll probably consider more
improvements in this area once v20 opens for business around the end
of June.
--
Michael

Attachments