Re: Hash join in SELECT target list expression keeps consuming memory
From | Jaime Soler
Subject | Re: Hash join in SELECT target list expression keeps consuming memory
Date |
Msg-id | CAKVUGgSdEY9aTD2nBpS+m49Ay_=wJj-4iVe3As6LnnL+LcetWw@mail.gmail.com
In reply to | Re: Hash join in SELECT target list expression keeps consuming memory (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
Right now we are purging old LO objects because our production system ran out of memory:
Mem: 41154296k total, 40797560k used, 356736k free, 15748k buffers
Swap: 16777208k total, 1333260k used, 15443948k free, 35304844k cached
SELECT count(*) FROM pg_largeobject;
count
----------
52614842
(1 row)
SELECT pg_size_pretty(pg_table_size('pg_largeobject'));
pg_size_pretty
----------------
15 GB
(1 row)
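The purge mentioned above can be sketched in SQL. This is only a sketch: the referencing table `documents(content_oid)` is a hypothetical stand-in for whatever application tables hold LO OIDs, and in practice the contrib utility vacuumlo automates exactly this kind of scan.

```sql
-- Unlink large objects whose OID is not referenced by the (assumed)
-- documents.content_oid column. LIMIT keeps each pass small so a
-- single transaction does not itself consume excessive memory;
-- re-run until no rows are returned.
SELECT lo_unlink(m.oid)
FROM pg_largeobject_metadata m
WHERE NOT EXISTS (
    SELECT 1 FROM documents d WHERE d.content_oid = m.oid
)
LIMIT 1000;
```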
Regards
2018-03-21 16:51 GMT+01:00 Tom Lane <tgl@sss.pgh.pa.us>:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> On 03/21/2018 02:18 PM, Jaime Soler wrote:
>> We still get out of memory error during pg_dump execution
>> pg_dump: reading large objects
>> out of memory
> Hmmmm ... that likely happens because of this for loop copying a lot of
> data:
> https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L3258
The long and the short of it is that too many large objects *will*
choke pg_dump; this has been obvious since we decided to let it treat
large objects as heavyweight objects. See eg
https://www.postgresql.org/message-id/29613.1476969807@sss.pgh.pa.us
I don't think there's any simple fix available. We discussed some
possible solutions in
https://www.postgresql.org/message-id/flat/5539483B.3040401%40commandprompt.com
but none of them looked easy. The best short-term answer is "run
pg_dump in a less memory-constrained system".
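As a stopgap consistent with the advice above, one hedged option (assuming pg_dump from PostgreSQL 10 or later, and a database named mydb) is to skip large objects in the main dump and handle them separately:

```
# Dump schema and table data only; --no-blobs (pg_dump >= 10) skips
# large objects entirely, avoiding the per-LO metadata blowup.
pg_dump --no-blobs -Fc -f mydb.dump mydb
```

The large objects would then have to be exported by other means, e.g. lo_export per object in application-driven batches.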
regards, tom lane