Upgrading Postgres large databases with blobs

From: CAJ CAJ
Subject: Upgrading Postgres large databases with blobs
Date:
Msg-id: 467669b30703101140l6c572699ocba6e830f1fc56c8@mail.gmail.com
List: pgsql-general
Hello,

For some reason, my first attempt to send this email to the list didn't get through.

We have several independent database servers with ~50GB+ databases running postgres 8.0.x. We are planning to upgrade these databases to postgres 8.2.x over the weekend.

We plan to use the following steps to upgrade each server:

1. Dump the 8.0.x database cluster using 8.2.x pg_dumpall
% ./pg_dumpall > pgdumpall_backup.sql

2. Dump the 8.0.x database, including large objects, in compressed custom format using 8.2.x pg_dump
% ./pg_dump -Fc -b -Z9 dbname > pgdump_lobs_backup
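
One way to gain some confidence in the dump before restoring (this bears on problem #1 below) is to list the archive's table of contents with pg_restore -l and check that the expected tables, indexes, and a BLOBS entry all appear. A sketch, assuming the dump file name from step 2:

% ./pg_restore -l pgdump_lobs_backup > pgdump_lobs_backup.toc
% grep -c "BLOBS" pgdump_lobs_backup.toc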


Restoring database
1. Initialize the 8.2.x database
% initdb -D /data/pgdata

2. Restore template1 database from cluster dump
% ./psql -d template1 < pgdumpall_backup.sql

3. Delete database dbname, otherwise the restore will fail with an error about the existing dbname
% dropdb dbname

4. Create fresh dbname
% createdb -O dbowner dbname

5. Restore database with lobs
% ./pg_restore -v -Fc -d dbname -e -U dbowner < pgdump_lobs_backup
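
A rough post-restore check (my own suggestion, not guaranteed to catch everything) is to compare object counts between the old and new clusters, and to re-gather planner statistics on the new one. pg_largeobject stores one row per chunk, so count distinct loid to get the number of large objects; run the same query on both servers and compare:

% ./psql -d dbname -c "SELECT count(DISTINCT loid) FROM pg_largeobject;"
% ./vacuumdb -z dbname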

Some of the problems we have are:
1. We are not sure if all of the data will be available after a dump/restore with the above process
2. The dump and restore process is too slow to complete over the weekend (dumping runs at approx 1GB/hr on a dual 2GHz PowerPC G5 with 1GB RAM and RAID 1 disks)
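
If the intermediate dump file is part of the bottleneck, one option (an untested sketch, assuming the old and new postmasters can run at the same time, e.g. on different ports selected with -p) is to pipe the 8.2.x pg_dump straight into pg_restore on the new cluster, skipping the disk round-trip. Dropping -Z9 may also help, since maximum compression is CPU-bound and can dominate dump time:

% ./pg_dump -Fc -b -Z0 -p 5432 dbname | ./pg_restore -d dbname -p 5433 -U dbowner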

What is the fastest way to upgrade postgres for large databases that have binary objects?

Thanks for all your help.
