pg_dump: largeobject behavior issues (possible bug)
From: Joshua D. Drake
Subject: pg_dump: largeobject behavior issues (possible bug)
Date:
Msg-id: 5539483B.3040401@commandprompt.com
Replies: Re: pg_dump: largeobject behavior issues (possible bug)
List: pgsql-hackers
Hello,

I have been working on a problem with Andrew Gierth (sp?) in regards to
pg_dump. Here is the basic breakdown:

FreeBSD 10.1
PostgreSQL 9.3.6
~64GB memory
500GB database
228GB of large objects (106M objects)

The database dumps fine as long as we don't dump large objects. However,
if we try to dump the large objects, FreeBSD will kill pg_dump as it
consumes all free memory and swap. With Andrew's help we were able to
determine the following:

There is a memory cost of about 160 bytes per large object. Based on the
number of large objects we have, that works out to about 16GB of memory.

Also, while pg_dump is reading in the large object list, there is a point
where it simultaneously holds a PGresult containing the entire contents of
pg_largeobject_metadata and a malloc'd array into which it is copying that
data. That could easily climb above 40GB, causing FreeBSD to kill the
process.

tl;dr: The memory issue comes down to the fact that in the prep stage,
pg_dump creates a TOC entry for every individual large object. It seems
that pg_dump should be much more efficient about dumping these objects.

Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.
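P.S. To put numbers on it: 106,000,000 objects x ~160 bytes of bookkeeping
apiece comes out to the ~16GB cited above, before the first byte of large
object data is written out. The sketch below reproduces the allocation
pattern described above so you can watch it on your own cluster. It is
illustrative only: it is not pg_dump's actual getBlobs() code, and the
LoEntry struct is a made-up stand-in for pg_dump's per-blob bookkeeping,
which is what actually costs the ~160 bytes per object.

/*
 * lo_metadata_spike.c -- illustrative only, not pg_dump code.
 * Build with: cc lo_metadata_spike.c -o lo_metadata_spike -lpq
 * Connection parameters come from the usual PG* environment variables.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* Made-up stand-in for pg_dump's per-blob bookkeeping. */
typedef struct
{
    unsigned int oid;
    char        *owner;
    char        *acl;
} LoEntry;

int
main(void)
{
    PGconn     *conn = PQconnectdb("");
    PGresult   *res;
    LoEntry    *entries;
    int         ntups, i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /*
     * Step one of the pattern: the entire contents of
     * pg_largeobject_metadata are materialized in a single PGresult.
     */
    res = PQexec(conn,
                 "SELECT oid, lomowner::text, lomacl::text "
                 "FROM pg_largeobject_metadata");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        return 1;
    }
    ntups = PQntuples(res);

    /*
     * Step two: while that PGresult is still live, a parallel array is
     * allocated and the rows copied into it, so for a while both copies
     * of the metadata coexist in memory.
     */
    entries = malloc(sizeof(LoEntry) * (size_t) ntups);
    for (i = 0; i < ntups; i++)
    {
        entries[i].oid = (unsigned int) strtoul(PQgetvalue(res, i, 0), NULL, 10);
        entries[i].owner = strdup(PQgetvalue(res, i, 1));
        entries[i].acl = PQgetisnull(res, i, 2) ? NULL
            : strdup(PQgetvalue(res, i, 2));
    }

    printf("%d large objects; at ~160 bytes each that is ~%.1f GB of\n"
           "bookkeeping before any data is dumped\n",
           ntups, ntups * 160.0 / 1e9);

    PQclear(res);
    PQfinish(conn);
    return 0;
}

Running it against a database with many large objects while watching RES
in top(1) should show both copies peaking together, which matches the
40GB+ spike described above.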