BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
From | PG Bug reporting form
---|---
Subject | BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
Date | 
Msg-id | 18775-e6af405ca08393cc@postgresql.org
Replies | Re: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
List | pgsql-bugs
The following bug has been logged on the website:

Bug reference: 18775
Logged by: Ilya
Email address: knuazev@gmail.com
PostgreSQL version: 16.0
Operating system: Windows 10

Description:

Hello, I need your help. The essence of the problem is that I am trying to fetch a bytea field from a table row. The data in the field is about 700 MB, and in response I receive an out-of-memory error.

More details:

1) I execute the query COPY (SELECT %s FROM %s.%s WHERE %s=%s) TO STDOUT BINARY.

2) I call the PQgetCopyData function (see the reproduction sketch at the end of this report). This function works with the pg_conn.inBuffer buffer. I did not go into detail on how memory is allocated for it, but in my case about 300 MB is allocated, and the data from the database accumulates in this buffer successfully. When the buffer is almost full, pqGetCopyData3(PGconn *conn, char **buffer, int async) tries to allocate a buffer of almost the same size as pg_conn.inBuffer in order to hand it to the user: *buffer = (char *) malloc(msgLength + 1); (fe-protocol3.c, line 1767). In my case this malloc consistently fails: it returns NULL and the library reports an "out of memory" error.

P.S. It seems to me that the only way to make PQgetCopyData work correctly without changing the library interface is to somehow limit pg_conn.inBuffer to a small size. Ideally, it would be good to add a PQgetCopyData2 function that accepts a caller-supplied buffer, fills it with data, and returns the number of bytes filled (a sketch of that idea follows below as well).

I also tried to work around this problem using the deprecated PQgetline function, but it does not work with binary data. And if I instead issue COPY (SELECT %s FROM %s.%s WHERE %s=%s) TO STDOUT (text format), the query fails when executed, perhaps because the library tries to load everything at once into pg_conn.inBuffer, which is too small for that.

And yes, I know that for storing large data there is the large object facility. But this method should also work for files up to 1 GB.
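A minimal sketch of the sequence described in steps 1) and 2) above. The schema name, table name, column names, and connection string are hypothetical stand-ins for the %s placeholders in the report; only the libpq calls themselves are real. With a single row carrying a ~700 MB bytea, the PQgetCopyData call is where libpq attempts the msgLength + 1 allocation and fails:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection parameters are placeholders. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Hypothetical stand-in for the report's parameterized query. */
    PGresult *res = PQexec(conn,
        "COPY (SELECT payload FROM myschema.blobs WHERE id = 1) "
        "TO STDOUT BINARY");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    char *buf;
    int   len;

    /* Synchronous mode (async = 0): each call returns one whole data
     * message.  A single ~700 MB bytea row arrives as one message, so
     * inside pqGetCopyData3 libpq must malloc a second, row-sized
     * buffer on top of conn->inBuffer -- the allocation that returns
     * NULL and produces the "out of memory" error in this report. */
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        fwrite(buf, 1, (size_t) len, stdout);
        PQfreemem(buf);
    }
    if (len == -2)
        fprintf(stderr, "PQgetCopyData failed: %s", PQerrorMessage(conn));

    /* Drain the command result that follows the COPY data stream. */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);

    PQfinish(conn);
    return 0;
}
```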
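For clarity, here is a hypothetical signature for the PQgetCopyData2 function proposed in the P.S. This function does not exist in libpq; the name, parameters, and semantics below are only one reading of the proposal:

```c
/* HYPOTHETICAL -- not part of libpq.  The caller supplies the buffer,
 * so the library never needs to allocate a second row-sized copy of
 * conn->inBuffer.  Would return the number of bytes written into
 * buffer (possibly a fragment of a row), 0 if no data is ready yet in
 * async mode, -1 at end of COPY, or -2 on error. */
int PQgetCopyData2(PGconn *conn, char *buffer, int bufsize, int async);
```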