Re: Backup and Recovery
From | ncm@zembu.com (Nathan Myers)
---|---
Subject | Re: Backup and Recovery
Date |
Msg-id | 20010621160317.B1466@store.zembu.com
In reply to | Re: Backup and Recovery (Matthew Kirkwood <matthew@hairy.beasts.org>)
Responses | Re: Backup and Recovery
List | pgsql-hackers
On Thu, Jun 21, 2001 at 11:01:29AM +0100, Matthew Kirkwood wrote:
> On Wed, 20 Jun 2001, Naomi Walker wrote:
>
> > >You are aware that you can still lose up to (by default) 16Mb
> > >worth of transactions in this scheme, I presume?
> >
> > I'm just starting with Postgresql, but, I thought with fsync on this
> > was not the case. Is that not true or what else did I miss?
>
> I suppose that it rather depends on how you expected to
> move the logs over. My approach was to archive the redo
> when PG is done with them and only then to roll them
> forward.
>
> If a catastrophe occurs, then I wouldn't be able to do
> anything with a half-full log.
>
> Our Oracle setups use redo logs of only 1Mb for this
> reason, and it doesn't seem to hurt too much (though
> Oracle's datafile formats seem a fair bit denser than
> Postgres's).

The above makes no sense to me. A hot recovery that discards some random number of committed transactions is a poor sort of recovery. Ms. Walker might be able to adapt one of the several replication tools available for PG to do replayable logging, instead.

It seems to me that for any replication regime (symmetric or not, synchronous or not, global or not), and also any hot-backup/recovery approach, an update-log mechanism that produces a high-level description of changes is essential. Using triggers to produce such a log seems to me to be too slow and too dependent on finicky administrative procedures.

IIUC, the regular WAL records are optimized for a different purpose: speeding up normal operation. Also IIUC, the WAL cannot be applied to a database reconstructed from a dump. If augmented to enable such reconstruction, the WAL might be too bulky to serve well in that role; it currently only needs to keep enough data to construct the current database from a recent checkpoint, so compactness has not been crucial. But there's much to be said for having just a single synchronous log mechanism. A high-level log mixed into the WAL, to be extracted asynchronously to a much more compact replay log, might be the ideal compromise.

The same built-in high-level logging mechanism could make all the various kinds of disaster prevention, disaster recovery, and load sharing much easier to implement, because they all need much the same thing.

Nathan Myers
ncm@zembu.com
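[Editor's note: as a rough illustration of the trigger-based change logging discussed above, here is a minimal sketch in present-day PL/pgSQL. The change_log table, the log_change() function, and the accounts table are hypothetical names invented for the example, not part of any PostgreSQL release.]

    -- Hypothetical table that accumulates a high-level description of changes.
    CREATE TABLE change_log (
        logged_at   timestamptz NOT NULL DEFAULT now(),
        table_name  text        NOT NULL,
        operation   text        NOT NULL,   -- 'INSERT', 'UPDATE', or 'DELETE'
        row_data    text                    -- textual image of the affected row
    );

    -- Generic row-level trigger function: records which table changed,
    -- what kind of change it was, and the new (or deleted) row contents.
    CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO change_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
            RETURN OLD;
        ELSE
            INSERT INTO change_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    -- Each replicated table needs its own trigger attached by hand.
    CREATE TRIGGER accounts_log
        AFTER INSERT OR UPDATE OR DELETE ON accounts
        FOR EACH ROW EXECUTE PROCEDURE log_change();

[This also illustrates the objection in the message: every table must be wired up individually, each logged row costs an extra synchronous insert, and shipping and replaying change_log rows is left to external administrative scripts.]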