Re: performance database for backup/restore
From        | ktm@rice.edu
Subject     | Re: performance database for backup/restore
Date        |
Msg-id      | 20130521154613.GE12507@aart.rice.edu
In reply to | Re: performance database for backup/restore  (Evgeny Shishkin <itparanoia@gmail.com>)
List        | pgsql-performance
On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:
> On May 21, 2013, at 5:18 PM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
>
> > Hi people, I have a 400 GB database running on a server with 128 GB of RAM
> > and 32 cores, with storage on a SAN over Fibre Channel. The problem is that
> > a backup with pg_dumpall takes about 5 hours, and the subsequent restore
> > takes about 17 hours. Is that a normal time for this process on such a
> > machine, or can I do something to optimize the backup/restore process?
>
> I'd recommend you dump with
>
> pg_dump --format=c
>
> It will compress the output, and later you can restore it in parallel with
>
> pg_restore -j 32 (for example)
>
> Right now you cannot dump in parallel; wait for the 9.3 release, or maybe
> someone will back-port it to the 9.2 pg_dump.
>
> Also, during the restore you can speed things up a little more by disabling
> fsync and synchronous_commit.

If you have the space and I/O capacity, avoiding the compress option will be
much faster. The current compression scheme, which uses zlib-style compression,
is very CPU intensive and limits your dump rate. On one of our systems, a dump
without compression takes 20m versus 2h20m with compression. The parallel
restore makes a big difference as well.

Regards,
Ken
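For completeness, here is a minimal sketch combining the advice above. The
database name "mydb", the dump path, and the job count are placeholders;
adjust them to your environment:

    # Dump in custom format with compression disabled (-Z0). Per Ken's
    # numbers, skipping zlib compression removes the CPU bottleneck,
    # provided you have the disk space and I/O capacity for a larger dump.
    pg_dump --format=c -Z0 --file=/backup/mydb.dump mydb

    # Restore with parallel jobs, e.g. one per core:
    pg_restore -j 32 --dbname=mydb /backup/mydb.dump

    # In postgresql.conf, for the duration of the restore ONLY (these
    # settings are unsafe for normal operation); revert and reload after:
    fsync = off
    synchronous_commit = off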