Re: Pg_dumpall
From | Andrew Gould |
---|---|
Subject | Re: Pg_dumpall |
Date | |
Msg-id | 20030611211839.45075.qmail@web13407.mail.yahoo.com |
In reply to | Re: Pg_dumpall (<btober@seaworthysys.com>) |
List | pgsql-general |
--- btober@seaworthysys.com wrote:
> > I have cron execute a Python script as the database
> > administrator to vacuum and backup all databases.
> > Rather than dump all databases at once, however, the
> > script performs a 'psql -l' to get a current list of
> > databases. Each database is dumped and piped into
> > gzip for compression into its own backup file.
> >
> > I should also mention that the script renames all
> > previous backup files, all ending in *.gz, to
> > *.gz.old, so that they survive the current pg_dump.
> > Of course, you could change the script to put the
> > date in the file name so as to keep unlimited backup
> > versions.
>
> FWIW, another good way to handle the last paragraph
> would be to use logrotate. It would handle renaming
> files as *.1, *.2, ..., and you could specify the
> number of backups you wanted it to retain, so you
> wouldn't have to go in periodically and delete ancient
> backups to keep your drive from filling up.

Thanks, I think I'll modify the script to manage a declared number of backups, as described above. Logrotate sounds like FreeBSD's newsyslog.conf. The reason I don't use it is that I would have to configure each database's backup file, whereas the Python script adds new databases and backup files to the process automatically. This is one of those "if I get hit by a bus" features. As my databases do not have IS support, my boss insists on contingency planning.

Best regards,

Andrew Gould
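[Editor's note: the approach described in the quoted message can be sketched roughly as below. This is not the poster's actual script; the backup directory, the use of a catalog query instead of parsing `psql -l` output, and the `main()` driver are assumptions for illustration. It assumes `psql` and `pg_dump` are on PATH and that the script runs as a role that can connect to every database.]

```python
#!/usr/bin/env python3
"""Sketch: dump each database separately, keeping one prior
generation of each backup as *.gz.old (hypothetical reconstruction,
not the original script)."""
import os
import subprocess

BACKUP_DIR = "/var/backups/pgsql"  # hypothetical location


def list_databases():
    """Return current database names. A catalog query with -At
    (unaligned, tuples-only) is easier to parse than `psql -l`."""
    out = subprocess.run(
        ["psql", "-At", "-c",
         "SELECT datname FROM pg_database WHERE NOT datistemplate;"],
        capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if line]


def rotate(path):
    """Rename an existing *.gz backup to *.gz.old so it survives
    the next pg_dump, as described in the post."""
    if os.path.exists(path):
        os.replace(path, path + ".old")


def backup(db):
    """Dump one database and pipe it through gzip into its own file."""
    path = os.path.join(BACKUP_DIR, db + ".gz")
    rotate(path)
    with open(path, "wb") as f:
        dump = subprocess.Popen(["pg_dump", db], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout,
                       stdout=f, check=True)
        dump.stdout.close()
        if dump.wait() != 0:
            raise RuntimeError("pg_dump failed for " + db)


def main():
    # Call from cron; new databases are picked up automatically.
    for db in list_databases():
        backup(db)
```

Because the database list is rebuilt on every run, newly created databases join the rotation without any per-database configuration, which is the property the poster contrasts with logrotate.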