Re: Big 7.1 open items
From | Tom Lane
---|---
Subject | Re: Big 7.1 open items
Date |
Msg-id | 7782.961658265@sss.pgh.pa.us
In reply to | Re: Big 7.1 open items ("Philip J. Warner" <pjw@rhyme.com.au>)
Responses | Re: Big 7.1 open items
List | pgsql-hackers
"Philip J. Warner" <pjw@rhyme.com.au> writes:
>> ... the thing that bothered me was this. Suppose you are trying to
>> recover a corrupted database manually, and the only information you have
>> about which table is which is a somewhat out-of-date listing of OIDs
>> versus table names.

> This worries me a little; in the Dec/RDB world it is a very long time since
> database backups were done by copying the files. There is a database
> backup/restore utility which runs while the database is on-line and makes
> sure a valid snapshot is taken. Backing up storage areas (table spaces)
> can be done separately by the same utility, and again, it records enough
> information to ensure integrity. Maybe the thing to do is write a pg_backup
> utility, which in a first pass could, presumably, be synonymous with pg_dump?

pg_dump already does the consistent-snapshot trick (it just has to run
inside a single transaction).

> Am I missing something here? Is there a problem with backing up using
> 'pg_dump | gzip'?

None, as long as your ambition extends no further than restoring your
data to where it was at your last pg_dump. I was thinking about the
all-too-common-in-the-real-world scenario where you're hoping to recover
some data more recent than your last backup from the fractured shards of
your database...

			regards, tom lane
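[Editorial note: the `pg_dump | gzip` pipeline discussed above can be sketched roughly as below. The database name `mydb` and the file path are placeholders, not taken from the thread.]

```shell
# Take a compressed logical backup. pg_dump performs the whole dump
# within a single transaction, so the output is a consistent snapshot
# even while the database remains on-line.
pg_dump mydb | gzip > mydb.backup.gz

# Restore by replaying the SQL script into a (typically freshly
# created) database. As the mail notes, this only recovers data up to
# the moment of the dump; later changes are lost.
gunzip -c mydb.backup.gz | psql mydb
```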