Re: Slow pg_dump
From:        Tom Lane
Subject:     Re: Slow pg_dump
Date:
Msg-id:      25879.1208221128@sss.pgh.pa.us
In reply to: Re: Slow pg_dump ("Phillip Smith" <phillip.smith@weatherbeeta.com.au>)
List:        pgsql-admin
"Phillip Smith" <phillip.smith@weatherbeeta.com.au> writes: >> Here's my interpretation of those results: the TOAST tables for >> our image files are compressed by Postgres. During the backup, >> pg_dump uncompresses them, and if compression is turned on, >> recompresses the backup. Please correct me if I'm wrong there. No, the TOAST tables aren't compressed, they're pretty much going to be the raw image data (plus a bit of overhead). What I think is happening is that COPY OUT is encoding the bytea data fairly inefficiently (one byte could go to \\nnn, five bytes) and the compression on the pg_dump side isn't doing very well at buying that back. I experimented a bit and noticed that pg_dump -Fc is a great deal smarter about storing large objects than big bytea fields --- it seems to be pretty nearly one-to-one with the original data size when storing a compressed file that was put into a large object. I dunno if it's practical for you to switch from bytea to large objects, but in the near term I think that's your only option if the dump file size is a showstopper problem for you. regards, tom lane