Re: Troubles dumping a very large table.
From | Merlin Moncure
---|---
Subject | Re: Troubles dumping a very large table.
Date |
Msg-id | b42b73150812261150k6b17b819i704b6b7720243eb8@mail.gmail.com
In reply to | Re: Troubles dumping a very large table. (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Troubles dumping a very large table.; Re: Troubles dumping a very large table.
List | pgsql-performance
On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Ted Allen <tallen@blackducksoftware.com> writes:
>> 600mb measured by get_octet_length on data. If there is a better way
>> to measure the row/cell size, please let me know because we thought it
>> was the >1Gb problem too. We thought we were being conservative by
>> getting rid of the larger rows but I guess we need to get rid of even more.
>
> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
> So you need to get the max row length down to around 300mb. I'm curious
> how you got the data in to start with --- were the values assembled on
> the server side?

Wouldn't binary-style COPY be more forgiving in this regard? (If so, the
OP might have better luck running COPY BINARY.) This also goes for libpq
traffic: large (>1mb) bytea values definitely want to be passed using the
binary switch in the protocol.

merlin
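For illustration, a minimal sketch of the "binary switch in the protocol"
idea: fetching a large bytea value with libpq's binary result format, which
moves the raw octets across the wire instead of the roughly 3X-larger text
escaping Tom describes. The table name "bigdata" and column "payload" are
hypothetical stand-ins for the OP's schema. (Note that pg_dump itself has no
binary COPY mode, so this applies to hand-rolled transfers, e.g.
COPY bigdata TO '/tmp/bigdata.bin' WITH BINARY.)

/*
 * Sketch: fetch one large bytea value using binary wire format.
 * Build with: cc fetch_bytea.c -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /*
     * The final argument (resultFormat = 1) asks the server for binary
     * output, so the bytea value arrives at its true size rather than
     * in the expanded text/escape encoding.
     */
    PGresult *res = PQexecParams(conn,
                                 "SELECT payload FROM bigdata WHERE id = 1",
                                 0, NULL, NULL, NULL, NULL,
                                 1 /* binary results */);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }

    /*
     * With binary results, PQgetvalue points at the raw bytes directly
     * and PQgetlength gives the exact octet count; no unescaping pass
     * (and no transient 3X copy) is needed on the client side.
     */
    int len = PQgetlength(res, 0, 0);
    const char *data = PQgetvalue(res, 0, 0);
    fprintf(stderr, "got %d bytes of bytea\n", len);
    fwrite(data, 1, len, stdout);

    PQclear(res);
    PQfinish(conn);
    return 0;
}

The same resultFormat flag is what the OP would flip when pulling these
rows out programmatically instead of via pg_dump's text COPY.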