On 02/08/17, Steve Atkins (steve@blighty.com) wrote:
> > On Aug 2, 2017, at 9:02 AM, Edmundo Robles <edmundo@sw-argos.com> wrote:
> >
> > I mean, to verify the integrity of the backup I do:
> >
> > gunzip -c backup_yesterday.gz | pg_restore -d my_database && echo
> > "backup_yesterday is OK"
> >
> > but my_database's uncompressed size is too big, more than 15G, and
> > sometimes I have no space to restore it, so I always have to
> > declutter my disk first.
...
> If the gunzip completes successfully then the backups weren't
> corrupted and the disk is readable. They're very likely to be "good"
> unless you have a systematic problem with your backup script.
>
> You could then run that data through pg_restore, redirecting the
> output to /dev/null, to check that the compressed file actually came
> from pg_dump. (gunzip -c backup_yesterday.gz | pg_restore >/dev/null)
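Combining those two checks in one pipeline costs no disk space; a
sketch, assuming a gzip-compressed custom-format dump:

  gzip -t backup_yesterday.gz && \
  gunzip -c backup_yesterday.gz | pg_restore >/dev/null && \
  echo "backup_yesterday is OK"
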
A couple of extra steps you can add, short of a full restore (which is
best), are to do a file hash check as part of the verification, and to
insert a token into the database just before dumping, then verify that
the token made it into the dump. We do something like this:
rory:~/db$ gpg -d dump_filename.sqlc.gpg | \
           pg_restore -Fc --data-only --schema audit | \
           grep -A 1 "COPY audit"
COPY audit (tdate) FROM stdin;
2017-04-25
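
On the dump side we do roughly this (a sketch; the audit.audit table,
the gpg recipient, and the file names stand in for our real ones):

  # insert today's date as the token just before dumping
  psql -d my_database -c \
    "INSERT INTO audit.audit (tdate) VALUES (current_date)"
  # custom-format dump, encrypted, plus a hash for the file check
  pg_dump -Fc my_database | gpg -e -r backups > dump_filename.sqlc.gpg
  sha256sum dump_filename.sqlc.gpg > dump_filename.sqlc.gpg.sha256

Verification then starts with

  sha256sum -c dump_filename.sqlc.gpg.sha256

before running the pg_restore pipeline above.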
Cheers
Rory