Re: Troubles dumping a very large table.
| From | Ted Allen |
|---|---|
| Subject | Re: Troubles dumping a very large table. |
| Date | |
| Msg-id | FD2D13C2E4A95C4499BF8AA8BAB85AEC20DD41D698@BDSEXCH2K7CLS.blackducksoftware.com |
| In reply to | Re: Troubles dumping a very large table. (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Troubles dumping a very large table. |
| List | pgsql-performance |
600MB, as measured by get_octet_length on the data column. If there is a better way to measure the row/cell size, please let me know, because we thought it was the >1GB problem too. We thought we were being conservative by getting rid of the larger rows, but I guess we need to get rid of even more.
Thanks,
Ted
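[Editor's note: a minimal sketch of that kind of measurement, assuming the table and column names from the COPY command quoted below. `octet_length()` reports the raw, decompressed byte count of a bytea value; `pg_column_size()` would instead report the (possibly TOAST-compressed) on-disk size.]

```sql
-- Largest bytea payloads, measured as raw bytes (not the escaped text form
-- that COPY actually emits):
SELECT id, octet_length(data) AS data_bytes
FROM   public.large_table
ORDER  BY data_bytes DESC
LIMIT  10;
```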
________________________________________
From: Tom Lane [tgl@sss.pgh.pa.us]
Sent: Wednesday, December 24, 2008 12:49 PM
To: Ted Allen
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Troubles dumping a very large table.
Ted Allen <tallen@blackducksoftware.com> writes:
> during the upgrade. The trouble is, when I dump the largest table,
> which is 1.1 Tb with indexes, I keep getting the following error at the
> same point in the dump.
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: invalid string enlargement
> request size 1
> pg_dump: The command was: COPY public.large_table (id, data) TO stdout;
> As you can see, the table is two columns, one column is an integer, and
> the other is bytea. Each cell in the data column can be as large as
> 600mb (we had bigger rows as well but we thought they were the source of
> the trouble and moved them elsewhere to be dealt with separately.)
600MB measured how?  I have a feeling the problem is that the value
exceeds 1GB when converted to text form...
regards, tom lane
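[Editor's note: the limit Tom is alluding to is the server's 1GB string-buffer cap, which is what raises the "invalid string enlargement request" error. On the 8.x servers of that era, COPY ... TO stdout renders bytea in escape format, where a single byte can expand to as many as five output characters (an octal escape with its backslash doubled by COPY), so a 600MB value can easily blow past 1GB of text. A hedged sketch of flagging at-risk rows under that worst-case expansion, using the same assumed table and column names as above:]

```sql
-- Rows whose escape-format text rendering could exceed the 1 GB buffer limit.
-- Worst case assumed here: every byte escapes to 5 characters ("\\ooo") in
-- COPY text output, so anything over ~200 MB of raw bytes is suspect.
SELECT id,
       octet_length(data)     AS raw_bytes,
       octet_length(data) * 5 AS worst_case_text_bytes
FROM   public.large_table
WHERE  octet_length(data) * 5 > 1073741824  -- 1 GiB
ORDER  BY raw_bytes DESC;
```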