Re: Practical maximums (was Re: PostgreSQL theoretical
From: Jeff Davis
Subject: Re: Practical maximums (was Re: PostgreSQL theoretical
Date:
Msg-id: 1154970001.12968.17.camel@dogma.v10.wvs
In reply to: Re: Practical maximums (was Re: PostgreSQL theoretical (Ron Johnson <ron.l.johnson@cox.net>)
Responses: Re: Practical maximums (was Re: PostgreSQL theoretical
List: pgsql-general
On Mon, 2006-07-31 at 09:53 -0500, Ron Johnson wrote:
> > The evasive answer is that you probably don't run regular full pg_dump
> > on such databases.
>
> Hmmm.

You might want to use PITR for incremental backup or maintain a standby
system using Slony-I ( www.slony.info ).

> >> Are there any plans of making a multi-threaded, or even
> >> multi-process pg_dump?
> >
> > What do you hope to accomplish by that? pg_dump is not CPU bound.
>
> Write to multiple tape drives at the same time, thereby reducing the
> total wall time of the backup process.

pg_dump just produces output. You could pretty easily stripe that output
across multiple devices just by using some scripts. Just make sure to
write a script that can reconstruct the data again when you need to
restore. You don't need multi-threaded pg_dump, you just need to use a
script that produces multiple output streams. Multi-threaded design is
only useful for CPU-bound applications.

Doing full backups of that much data is always a challenge, and I don't
think PostgreSQL has limitations that another database doesn't.

Regards,
	Jeff Davis
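
A rough sketch of the kind of striping script described above, assuming a
plain-format pg_dump stream on stdin; the chunk size, device paths, and
command lines in the comments are placeholders for the example, not anything
pg_dump itself provides:

    #!/usr/bin/env python3
    # Illustrative sketch: stripe pg_dump output across several devices
    # and reassemble it later.  Paths and chunk size are assumptions.
    #
    #   stripe:  pg_dump mydb | python3 stripe_dump.py stripe /mnt/tape0/d /mnt/tape1/d
    #   restore: python3 stripe_dump.py restore /mnt/tape0/d /mnt/tape1/d | psql mydb
    import sys

    CHUNK = 64 * 1024 * 1024  # 64 MB per chunk; tune to your drives


    def stripe(paths):
        """Round-robin fixed-size chunks from stdin across the given paths."""
        outs = [open(p, "wb") for p in paths]
        i = 0
        while True:
            chunk = sys.stdin.buffer.read(CHUNK)
            if not chunk:
                break
            outs[i % len(outs)].write(chunk)
            i += 1
        for f in outs:
            f.close()


    def restore(paths):
        """Read chunks back in the same round-robin order and write to stdout."""
        ins = [open(p, "rb") for p in paths]
        i = 0
        while True:
            chunk = ins[i % len(ins)].read(CHUNK)
            if not chunk:
                # Chunks were written in order, so the first empty read
                # means the whole stream has been emitted.
                break
            sys.stdout.buffer.write(chunk)
            i += 1
        for f in ins:
            f.close()


    if __name__ == "__main__":
        mode, paths = sys.argv[1], sys.argv[2:]
        (stripe if mode == "stripe" else restore)(paths)

The same idea works with whatever scripting tools you prefer; the point is
simply that the striping and reconstruction happen outside pg_dump, which
only has to produce a single output stream.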