Discussion: WAL replay failure after file truncation(?)
We've seen two recent reports:

http://archives.postgresql.org/pgsql-admin/2005-04/msg00008.php
http://archives.postgresql.org/pgsql-general/2005-05/msg01143.php

of postmaster restart failing because the WAL contains a reference to a
page that no longer exists. I can think of a couple of possible
explanations:

1. filesystem corruption, ie the page should exist in the file but the
kernel has forgotten about it;

2. we truncated the file subsequent to the WAL record that causes the
panic.

However, neither of these theories is entirely satisfying, because the
WAL replay logic has always acted like this; why haven't we seen
similar reports at any time since 7.1? And why are both of these
reports connected to btrees, when file truncation probably happens far
more often on regular tables?

But, setting those nagging doubts aside, theory #2 seems like a
definite bug that we ought to do something about.

The only really clean answer I can see is for file truncation to force
a checkpoint just before issuing the ftruncate call. That way, no WAL
records referencing the to-be-deleted pages would need to be replayed
in a subsequent crash. However, checkpoints are expensive enough to
make this solution very unattractive from a performance point of view.
And I fear it's not a 100% solution anyway: what about the PITR
scenario, where you need to replay a WAL log that was made concurrently
with a filesystem backup being taken? The backup might well include the
truncated version of the file, but you can't avoid replaying the
beginning portion of the WAL log.

Plan B is for WAL replay to always be willing to extend the file to
whatever record number is mentioned in the log, even though this may
require inventing the contents of empty pages; we trust that their
contents won't matter because they'll be truncated again later in the
replay sequence. This seems pretty messy though, especially for
indexes. The major objection to it is that it gives up error detection
in real filesystem-corruption cases: we'll just silently build an
invalid index and then try to run with it. (Still, that might be
better than refusing to start; at least you can REINDEX afterwards.)

Any thoughts?

regards, tom lane
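For concreteness, a minimal standalone sketch of the extension step
that Plan B would need, assuming an 8K page size. BLCKSZ, the function
name, and the demo file are all invented for illustration; this is not
the actual replay code:

    /*
     * Hypothetical sketch of "Plan B": if a WAL record references a
     * block past the current end of the relation file, extend the file
     * with zeroed pages up to and including that block.  The invented
     * pages are assumed not to matter because a later truncate record
     * in the replay sequence should remove them again.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>

    #define BLCKSZ 8192         /* illustrative page size */

    static int
    extend_to_block(int fd, long blkno)
    {
        struct stat st;
        char        zeros[BLCKSZ];
        long        curblocks;

        if (fstat(fd, &st) < 0)
            return -1;
        curblocks = st.st_size / BLCKSZ;
        if (lseek(fd, (off_t) curblocks * BLCKSZ, SEEK_SET) < 0)
            return -1;
        memset(zeros, 0, sizeof(zeros));
        while (curblocks <= blkno)
        {
            if (write(fd, zeros, BLCKSZ) != BLCKSZ)
                return -1;
            curblocks++;
        }
        return 0;
    }

    int
    main(void)
    {
        int fd = open("demo_relation", O_RDWR | O_CREAT, 0600);

        if (fd < 0 || extend_to_block(fd, 10) < 0)
        {
            perror("extend_to_block");
            return 1;
        }
        close(fd);
        return 0;
    }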
Tom Lane wrote:
> Plan B is for WAL replay to always be willing to extend the file to
> whatever record number is mentioned in the log, even though this
> may require inventing the contents of empty pages; we trust that their
> contents won't matter because they'll be truncated again later in the
> replay sequence. This seems pretty messy though, especially for
> indexes. The major objection to it is that it gives up error detection
> in real filesystem-corruption cases: we'll just silently build an
> invalid index and then try to run with it. (Still, that might be better
> than refusing to start; at least you can REINDEX afterwards.)

Should we add a GUC to allow recovery in such cases, but not mention it
in postgresql.conf? This way we could give people a recovery solution,
track the cases where it happens, and not accidentally trigger the
recovery behavior.

--
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073
On Wed, 25 May 2005 11:02:11 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>Plan B is for WAL replay to always be willing to extend the file to
>whatever record number is mentioned in the log, even though this
>may require inventing the contents of empty pages; we trust that their
>contents won't matter because they'll be truncated again later in the
>replay sequence.

Another idea: WAL replay does not apply changes to nonexistent blocks,
but it keeps a list (hash table, file, whatever) of those blocks. When
a truncate WAL record is found, all entries for blocks affected by the
truncation are removed from the list. Is it sufficient to remember just
the relation and the block number, or do we need the contents as well?

If the list is non-empty at the end of WAL replay, this is evidence of
a serious problem (file system corruption or Postgres bug).

Servus
Manfred
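A minimal standalone sketch of that bookkeeping, using a linked list
for brevity where a real implementation would presumably use a hash
table; all names and the scenario in main() are invented:

    /*
     * Sketch of the proposed bookkeeping: remember blocks that a WAL
     * record referenced but that did not exist, drop entries when a
     * truncate record covers them, and complain if any survive to the
     * end of replay.  Not PostgreSQL code.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct InvalidBlock
    {
        unsigned int relfilenode;   /* which relation file */
        long         blkno;         /* which block was missing */
        struct InvalidBlock *next;
    } InvalidBlock;

    static InvalidBlock *invalid_list = NULL;

    /* replay hit a record for a block that doesn't exist */
    static void
    remember_invalid_block(unsigned int rel, long blkno)
    {
        InvalidBlock *e = malloc(sizeof(InvalidBlock));

        if (e == NULL)
            abort();
        e->relfilenode = rel;
        e->blkno = blkno;
        e->next = invalid_list;
        invalid_list = e;
    }

    /* replay hit a truncate record: forget entries at or beyond the
     * new end of the relation */
    static void
    forget_truncated_blocks(unsigned int rel, long nblocks)
    {
        InvalidBlock **p = &invalid_list;

        while (*p)
        {
            if ((*p)->relfilenode == rel && (*p)->blkno >= nblocks)
            {
                InvalidBlock *dead = *p;

                *p = dead->next;
                free(dead);
            }
            else
                p = &(*p)->next;
        }
    }

    /* at end of replay: a non-empty list indicates real corruption */
    static int
    check_invalid_block_list(void)
    {
        InvalidBlock *e;

        for (e = invalid_list; e; e = e->next)
            fprintf(stderr, "rel %u block %ld never materialized\n",
                    e->relfilenode, e->blkno);
        return invalid_list ? -1 : 0;
    }

    int
    main(void)
    {
        /* a record touches block 42 of a too-short relation ... */
        remember_invalid_block(16384, 42);
        /* ... but a later truncate record cuts it back to 40 blocks */
        forget_truncated_blocks(16384, 40);
        return check_invalid_block_list() ? 1 : 0;
    }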
Manfred Koizar <mkoi-pg@aon.at> writes:
> Another idea: WAL replay does not apply changes to nonexistent blocks,
> but it keeps a list (hash table, file, whatever) of those blocks.
> When a truncate WAL record is found, all entries for blocks affected
> by the truncation are removed from the list. Is it sufficient to
> remember just the relation and the block number or do we need the
> contents as well?

We don't *have* the contents ... that's exactly why it's panicking ...

> If the list is non-empty at the end of WAL replay, this is evidence of
> a serious problem (file system corruption or Postgres bug).

That seems like a good idea --- it covers the problem, and what's more,
it won't complain until after it finishes replay. Which means that if
you do get the PANIC, you can get out of it with pg_resetxlog and not
need to worry that you are throwing away whatever good data is
available from the WAL log. (This assumes that we go ahead and
checkpoint out the working buffers before we make the check for an
empty list.)

regards, tom lane
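The end-of-replay ordering being assumed here might look like the
following sketch, with stub helpers standing in for the buffer flush
and for the list check from the earlier sketch; the function names are
placeholders, not actual PostgreSQL routines:

    #include <stdio.h>
    #include <stdlib.h>

    /* stub: write all buffers recovered during replay out to disk */
    static void
    checkpoint_recovered_buffers(void)
    {
    }

    /* stub: returns 0 if the invalid-block list is empty */
    static int
    check_invalid_block_list(void)
    {
        return 0;
    }

    static void
    finish_replay(void)
    {
        /*
         * Flush recovered pages first, so that a PANIC below leaves
         * the data files complete and pg_resetxlog becomes a safe way
         * out, without throwing away good data from the WAL log.
         */
        checkpoint_recovered_buffers();

        if (check_invalid_block_list() != 0)
        {
            fprintf(stderr, "PANIC: WAL referenced blocks that were "
                    "never created or truncated away\n");
            abort();
        }
    }

    int
    main(void)
    {
        finish_replay();
        return 0;
    }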
> Plan B is for WAL replay to always be willing to extend the file to
> whatever record number is mentioned in the log, even though this
> may require inventing the contents of empty pages; we trust that their
> contents won't matter because they'll be truncated again later in the
> replay sequence. This seems pretty messy though, especially for
> indexes. The major objection to it is that it gives up error detection
> in real filesystem-corruption cases: we'll just silently build an
> invalid index and then try to run with it. (Still, that might be better
> than refusing to start; at least you can REINDEX afterwards.)

You could at least log some sort of warning during the PITR process.
Anyone running PITR without paying attention to their logs is in
trouble anyway...

Chris
Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
>> Plan B is for WAL replay to always be willing to extend the file to
>> whatever record number is mentioned in the log, even though this
>> may require inventing the contents of empty pages; we trust that their
>> contents won't matter because they'll be truncated again later in the
>> replay sequence. This seems pretty messy though, especially for
>> indexes. The major objection to it is that it gives up error detection
>> in real filesystem-corruption cases: we'll just silently build an
>> invalid index and then try to run with it. (Still, that might be better
>> than refusing to start; at least you can REINDEX afterwards.)

> You could at least log some sort of warning during the PITR process.
> Anyone running PITR without paying attention to their logs is in
> trouble anyway...

I'm more worried about the garden-variety restart-after-power-failure
scenario. As long as the postmaster starts up, it's unlikely people
will inspect the postmaster log too closely. I think we have a choice
of PANICking and refusing to start, or assuming that no one will notice
that we did something dubious.

regards, tom lane
On Wed, 25 May 2005 18:19:19 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> but it keeps a list (hash table, file, whatever) of those blocks.
>> [...] Is it sufficient to remember just the relation and the block
>> number or do we need the contents as well?
>
> We don't *have* the contents ... that's exactly why it's panicking ...

I meant the contents of the WAL record, not the original block
contents. Anyway, I think it's not needed.

Servus
Manfred
Manfred Koizar <mkoi-pg@aon.at> writes:
>>> [...] Is it sufficient to remember just the relation and the block
>>> number or do we need the contents as well?

> I meant the contents of the WAL record, not the original block
> contents. Anyway, I think it's not needed.

Oh, I see. Yes, it might be worth hanging onto for debugging purposes.
If we did get a report of such a failure, I'm sure we'd wish to know
what sort of WAL record triggered it. One trusts there won't be so many
that storing 'em all is a problem ...

regards, tom lane
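Carrying that debugging information might amount to no more than a
couple of extra fields in the entry struct from the earlier sketch; a
hypothetical extension (field names invented):

    typedef struct InvalidBlock
    {
        unsigned int  relfilenode;  /* which relation file */
        long          blkno;        /* which block was missing */
        unsigned char rmid;         /* resource manager of the record */
        unsigned char info;         /* record type bits */
        /* could even stash a copy of the whole record for inspection */
        struct InvalidBlock *next;
    } InvalidBlock;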
Tom Lane wrote:
> Manfred Koizar <mkoi-pg@aon.at> writes:
>>>> [...] Is it sufficient to remember just the relation and the block
>>>> number or do we need the contents as well?
>
>> I meant the contents of the WAL record, not the original block
>> contents. Anyway, I think it's not needed.
>
> Oh, I see. Yes, it might be worth hanging onto for debugging purposes.
> If we did get a report of such a failure, I'm sure we'd wish to know
> what sort of WAL record triggered it. One trusts there won't be so many
> that storing 'em all is a problem ...
>
> regards, tom lane

I guess I am having the same problem here: I am just dealing with a
truncated table after a hard kill. The symptoms are: the storage file
of the table is missing while the system tables can still see the
table.

Looking at TRUNCATE (this is the only command which could potentially
have caused this problem in my case) it seems as if the system tables
are actually changed properly before the file on disk is truncated.

My question is: what happens if the system is killed inside
rebuild_relation or inside swap_relfilenodes, which is called by
rebuild_relation?

many thanks and best regards,

Hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at
Hans-Jürgen Schönig <postgres@cybertec.at> writes:
> My question is: what happens if the system is killed inside
> rebuild_relation or inside swap_relfilenodes, which is called by
> rebuild_relation?

Nothing at all, because the system catalog updates aren't committed
yet, and we haven't done anything to the relation's old physical file.

If I were you I'd be looking into whether your disk hardware honors
write ordering properly. This sounds like something allowed the
directory change to reach disk before the transaction commit WAL record
did; which is impossible if fsync is doing what it's supposed to.

regards, tom lane
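The required ordering can be illustrated with a small standalone sketch
(file names invented, error handling abbreviated): the commit WAL
record must be fsync'd to stable storage before the old physical file
is unlinked, and a disk that silently reorders the two steps can
produce exactly the symptoms described:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char commit_rec[] = "COMMIT xid 12345";
        int         wal = open("demo_wal_segment",
                               O_WRONLY | O_CREAT | O_APPEND, 0600);

        if (wal < 0)
            return 1;

        /* 1. write the commit record and force it to stable storage */
        if (write(wal, commit_rec, sizeof(commit_rec)) < 0 ||
            fsync(wal) < 0)
            return 1;
        close(wal);

        /*
         * 2. only now is it safe to remove the table's old physical
         * file; if the hardware lets the unlink reach disk first and
         * the machine crashes in between, recovery finds committed
         * catalogs pointing at a file that no longer exists.  (The
         * unlink result is ignored in this demo.)
         */
        unlink("demo_old_relfilenode");
        return 0;
    }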
Tom Lane wrote:
> Hans-Jürgen Schönig <postgres@cybertec.at> writes:
>> My question is: what happens if the system is killed inside
>> rebuild_relation or inside swap_relfilenodes, which is called by
>> rebuild_relation?
>
> Nothing at all, because the system catalog updates aren't committed
> yet, and we haven't done anything to the relation's old physical file.

This is actually what I expected. I have gone through the code and it
looks correct. TRUNCATE is the only command in this application which
can potentially cause the problem (it is very unlikely that INSERT
removes a file).

> If I were you I'd be looking into whether your disk hardware honors
> write ordering properly. This sounds like something allowed the
> directory change to reach disk before the transaction commit WAL record
> did; which is impossible if fsync is doing what it's supposed to.

We are on a Sun Solaris (x86) box here. I am not sure what Sun might
have broken to make this error happen; obviously it happens only once
per 1,000,000 tries ... I am just trying to figure out whether the bug
could potentially be inside PostgreSQL. I would have been surprised if
somebody had overlooked a problem like that.

many thanks and best regards,

Hans

--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at
On Wed, 2005-05-25 at 21:24 +0200, Manfred Koizar wrote:
> WAL replay does not apply changes to nonexistent blocks,
> but it keeps a list (hash table, file, whatever) of those blocks.
> When a truncate WAL record is found, all entries for blocks affected
> by the truncation are removed from the list. Is it sufficient to
> remember just the relation and the block number or do we need the
> contents as well?
>
> If the list is non-empty at the end of WAL replay, this is evidence of
> a serious problem (file system corruption or Postgres bug).

Seems like a very neat solution. It has no side effects and seems
fairly performant. Judging by the number of PANICs reported, the data
structure would be mostly empty anyhow.

Best Regards, Simon Riggs