Re: performance database for backup/restore
From:        Jeff Janes
Subject:     Re: performance database for backup/restore
Msg-id:      CAMkU=1zo9vtmm6X5XQxwDXPppzZnfxrc-eW3FHxS0WpcUzPX3g@mail.gmail.com
In reply to: performance database for backup/restore (Jeison Bedoya <jeisonb@audifarma.com.co>)
Responses:   Re: performance database for backup/restore
List:        pgsql-performance
2013/5/21 Jeison Bedoya <jeisonb@audifarma.com.co>
Hi people, I have a 400 GB database running on a server with 128 GB of RAM, 32 cores, and SAN storage over Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the backup/restore process?
How many database objects do you have? A few large objects will dump and restore faster than a huge number of smallish objects.
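If you want a rough count, something like this should work (the database name mydb below is just a placeholder):

    # Count relations (tables, indexes, sequences, ...) in one database
    psql -d mydb -At -c "SELECT count(*) FROM pg_class;"

    # Rough per-database counts across the cluster that pg_dumpall would hit
    psql -d postgres -At -c "SELECT datname FROM pg_database WHERE datallowconn;" | \
      while read db; do
        printf '%s: ' "$db"
        psql -d "$db" -At -c "SELECT count(*) FROM pg_class;"
      done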
Where is your bottleneck? "top" should show you whether it is CPU or IO.
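For example, while the dump is running you could watch it with something like this (iostat comes from the sysstat package; just a sketch):

    top              # is a single postgres/pg_dump process pinned near 100% CPU?
    iostat -xm 5     # is %util on the SAN volume sitting near 100%?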
I can pg_dump about 6GB/minute to /dev/null using all defaults with a small number of large objects.
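You can measure raw dump throughput on your own database with something like this (mydb is a placeholder, and pv is optional):

    # Time a dump thrown away to /dev/null, taking the storage target out of the picture
    time pg_dump mydb > /dev/null

    # Or, with pv installed, watch the rate as it runs
    pg_dump mydb | pv > /dev/null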
Cheers,
Jeff