Re: PostgreSQL 8.4 performance tuning questions
From | Kevin Grittner
Subject | Re: PostgreSQL 8.4 performance tuning questions
Date | |
Msg-id | 4A719CB6020000250002910A@gw.wicourts.gov
In response to | Re: PostgreSQL 8.4 performance tuning questions (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: PostgreSQL 8.4 performance tuning questions
Re: PostgreSQL 8.4 performance tuning questions
List | pgsql-performance
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> Since the dump to custom format ran longer than the full pg_dump
>> piped directly to psql would have taken, the overall time to use
>> this technique is clearly longer for our databases on our hardware.
>
> Hmmm ... AFAIR there isn't a good reason for dump to custom format
> to take longer than plain text dump, except for applying
> compression. Maybe -Z0 would be worth testing? Or is the problem
> that you have to write the data to a disk file rather than just
> piping it?

I did some checking with the DBA who normally copies these around for
development and test environments. He confirmed that when the source
and target are on the same machine, a pg_dump piped to psql takes
about two hours. If he pipes across the network, it runs more like
three hours. My pg_dump to custom format ran for six hours. The
single-transaction restore from that dump file took two hours, with
both on the same machine.

I can confirm with benchmarks, but this guy generally knows what he's
talking about (and we do create a lot of development and test
databases this way). Either the compression is tripling the dump
time, or there is something inefficient about how pg_dump writes to
the disk.

All of this is on a RAID 5 array with 5 drives using xfs with
noatime,nobarrier and a 256MB BBU controller.

-Kevin
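For reference, the variants discussed above can be sketched as the following commands. This is a minimal sketch, not what was actually run; the database names "sourcedb" and "targetdb" are placeholders, and the file name is illustrative.

```shell
# Plain-text dump piped straight into psql -- no intermediate file.
# This is the ~2-hour same-machine path described above.
pg_dump sourcedb | psql targetdb

# Custom-format dump to a disk file. Compression is applied by
# default for -Fc, which is the suspected cause of the longer run.
pg_dump -Fc sourcedb > sourcedb.dump

# Tom's suggestion: custom format with compression disabled (-Z0),
# to separate the cost of compression from the cost of writing to disk.
pg_dump -Fc -Z0 sourcedb > sourcedb.dump

# Single-transaction restore from the custom-format dump file.
pg_restore --single-transaction -d targetdb sourcedb.dump
```

Timing each variant with the same source database (e.g. under `time`) would show whether -Z0 closes the gap between the custom-format dump and the plain piped dump.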