Re: dealing with file size when archiving databases
| From | Tino Wildenhain |
|---|---|
| Subject | Re: dealing with file size when archiving databases |
| Date | |
| Msg-id | 1119334002.1183.125.camel@Andrea.peacock.de |
| In reply to | dealing with file size when archiving databases ("Andrew L. Gould" <algould@datawok.com>) |
| List | pgsql-general |
On Monday, 20.06.2005 at 21:28 -0500, Andrew L. Gould wrote:
> I've been backing up my databases by piping pg_dump into gzip and
> burning the resulting files to a DVD-R. Unfortunately, FreeBSD has
> problems dealing with very large files (>1GB?) on DVD media. One of my
> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB. The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.
>
> My proposed solution is to modify my python script to:
>
> 1. use pg_dump to dump each database's tables individually, including
>    both the database and table name in the file name;
> 3. use 'pg_dumpall -g' to dump the global information; and
> 4. burn the backup directories, files and a recovery script to DVD-R.
>
> The script will pipe pg_dump into gzip to compress the files.

I'd use pg_dump -Fc instead. It is compressed, and you get some more
restore options for free (selective restore, for example).

> My questions are:
>
> 1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump? Will I
>    be missing anything?
> 2. Does anyone foresee any problems with the solution above?

Yes, the files might still be too big to fit on one DVD at a time.
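For reference, a minimal sketch of what such a backup script might look like, assuming a Python helper that shells out to pg_dump -Fc per database and pg_dumpall -g for the globals. The database names and output directory are placeholders, not taken from the original post:

    #!/usr/bin/env python
    # Sketch only: dump each database in pg_dump's custom format (-Fc,
    # already compressed) and dump globals (roles, tablespaces) with
    # pg_dumpall -g. Names and paths below are hypothetical examples.
    import datetime
    import os
    import subprocess

    OUTDIR = os.path.join("/backups", datetime.date.today().isoformat())
    DATABASES = ["mydb1", "mydb2"]  # placeholder list of databases

    if not os.path.isdir(OUTDIR):
        os.makedirs(OUTDIR)

    # One custom-format dump per database; pg_restore can later do
    # selective restores (e.g. a single table) from these files.
    for db in DATABASES:
        outfile = os.path.join(OUTDIR, db + ".dump")
        subprocess.check_call(["pg_dump", "-Fc", "-f", outfile, db])

    # Globals are not included in per-database dumps.
    with open(os.path.join(OUTDIR, "globals.sql"), "w") as f:
        subprocess.check_call(["pg_dumpall", "-g"], stdout=f)

From such custom-format dumps, `pg_restore -t <table>` can restore individual tables, which is the kind of selective restore mentioned above.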