Re: Pg_dump
| From | knut.suebert@web.de |
|---|---|
| Subject | Re: Pg_dump |
| Date | |
| Msg-id | 20020311213725.GA16947@web.de |
| In reply to | Pg_dump ("Hunter, Ray" <rhunter@enterasys.com>) |
| List | pgsql-sql |
Hunter, Ray wrote:

> Question:
>
> How fast does pg_dump handle this scenario? pg_dump -h host1 dbname | psql
> -h host2 dbname

Depends on the speed of the network connection, I'd guess. Compression seems to be available via the option "-Fc", but I have no experience with this. For security reasons, tunneling over ssh could be interesting.

> What is the difference between doing the above vs. doing rsync over a
> secure connection?

"rsync -e ssh -z" the dumped files? Then you get compressed security (with some speed penalty for encryption, depending on the ratio of CPU power to network bandwidth). You have to run remote commands on the source host beforehand to dump, and afterwards on the target host to restore. If the target already has an older dump of the source, that could speed things up a *very* great deal. I don't know how good rsync is at finding very small differences in very large data, but there are options to tune its behavior.

If you think about "rsync /var/lib/postgres/data" of running postmasters, that should produce total garbage on the target. It is only an option if both postmasters are stopped beforehand and all databases are to be synced.

To me, it seems more efficient to log in remotely on host1 or host2 before doing the "rsync" or "pg_dump | psql". In that case, the data has to be transmitted one way, host1->host2, instead of two ways, host1->host0->host2 (but that penalty could depend on the network's configuration, and maybe the programs handle it better automatically).

Just my ideas about the differences,

Knut Sübert
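The alternatives discussed above can be sketched as command lines. This is a minimal sketch, assuming passwordless ssh access and that "host1", "host2", and "dbname" are placeholders; note that a custom-format dump (-Fc) must be restored with pg_restore, not piped into psql, so the pipe variants below use the plain text format:

```shell
# Direct pipe: plain-format dump streamed straight into the target cluster.
pg_dump -h host1 dbname | psql -h host2 dbname

# Same idea, but run from host1 and tunneled over ssh with compression (-C),
# so the data crosses the network only once and encrypted:
pg_dump dbname | ssh -C host2 'psql dbname'

# rsync variant: dump to a file first, then transfer compressed over ssh.
# If an older dump already exists on host2, rsync sends only the deltas.
pg_dump dbname > dump.sql              # run on host1
rsync -e ssh -z dump.sql host2:        # compressed, encrypted transfer
ssh host2 'psql dbname < dump.sql'     # restore on host2
```

The direct pipe avoids any intermediate file; the rsync variant pays for the extra dump file but can win when a previous dump on the target lets rsync skip unchanged data.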