Re: Why copy_relation_data only use wal when WAL archiving is enabled
| From | Heikki Linnakangas |
|---|---|
| Subject | Re: Why copy_relation_data only use wal when WAL archiving is enabled |
| Date | |
| Msg-id | 4715EDC7.1090408@enterprisedb.com |
| In reply to | Re: Why copy_relation_data only use wal when WAL archiving is enabled (Simon Riggs <simon@2ndquadrant.com>) |
| Responses | Re: Why copy_relation_data only use wal when WAL archiving is enabled |
| | Re: Why copy_relation_data only use wal when WAL archiving is enabled |
| | Re: Why copy_relation_data only use wal when WAL archiving is enabled |
| List | pgsql-hackers |
Simon Riggs wrote:
> On Wed, 2007-10-17 at 17:18 +0800, Jacky Leng wrote:
>> Second, suppose that no checkpoint has occurred during the upper
>> series--although not quite possible;
>
> That part is irrelevant. It's forced out to disk and doesn't need
> recovery, with or without the checkpoint.
>
> There's no hole that I can see.

No, Jacky is right. The same problem exists at least with CLUSTER, and I think there are other commands that rely on an immediate fsync as well.

Attached is a shell script that demonstrates the problem on CVS HEAD with CLUSTER. It creates two tables, T1 and T2, both with one row. Then T1 is dropped, and T2 is CLUSTERed, so that the new T2 relation file happens to get the same relfilenode that T1 had. Then we crash the server, forcing a WAL replay. After that, T2 is empty. Oops.

Unfortunately I don't see any easy way to fix it. One approach would be to avoid reusing relfilenodes until the next checkpoint, but I don't see any nice place to keep track of OIDs that have been dropped since the last checkpoint.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
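The attachment itself is not preserved here. As a rough sketch only, the steps described in the message might look something like the following, assuming a throwaway cluster with `pg_ctl` and `psql` on the PATH, `PGDATA` pointing at it, and `psql` connecting with default settings. The table names, the `t2_i` index, and the exact CLUSTER/crash sequence are illustrative assumptions; nothing in this sketch arranges the relfilenode collision the message describes, so it is not guaranteed to reproduce the data loss.

```sh
#!/bin/sh
# Rough sketch of the steps described in the message, not the original
# attachment.  Assumes PGDATA points at a scratch cluster and that psql
# connects to it with default settings.  The relfilenode collision that
# triggers the bug is NOT guaranteed by this naive sequence.

psql -c "CREATE TABLE t1 (i int)"
psql -c "INSERT INTO t1 VALUES (1)"
psql -c "CREATE TABLE t2 (i int)"
psql -c "CREATE INDEX t2_i ON t2 (i)"
psql -c "INSERT INTO t2 VALUES (1)"
psql -c "CHECKPOINT"

# Drop t1 to free its relfilenode, then rewrite t2 with CLUSTER.  The bug
# shows up if the rewritten t2 heap happens to land on t1's old relfilenode.
psql -c "DROP TABLE t1"
psql -c "CLUSTER t2 USING t2_i"

# Crash the server without a clean shutdown, forcing WAL replay on restart.
pg_ctl stop -D "$PGDATA" -m immediate
pg_ctl start -D "$PGDATA" -w

# If the bug is hit, t2 comes back empty after recovery.
psql -c "SELECT * FROM t2"
```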