Re: BUG #12910: Memory leak with logical decoding
From | Peter Slavov
Subject | Re: BUG #12910: Memory leak with logical decoding
Date |
Msg-id | 5537C2FE.7070907@gmail.com
In reply to | Re: BUG #12910: Memory leak with logical decoding (Andres Freund <andres@anarazel.de>)
List | pgsql-bugs
Hi,

Sorry for the late answer - I got mixed up with the wrong Postgres version and wasted time testing code that was not patched. After that I tested the patched version, and basically I see no difference when I use a simple SQL statement as before: psql puts everything into RAM/swap before dumping it out (the size is again ~15-16 GB, no change there). I tried with COPY - a much better memory footprint, of course. I guess I will have to forget about using the SQL interface. I will try pg_recvlogical, or some other way to connect my Python script to the slot using the streaming protocol.

Thanks,
Peter

On 9.04.2015 at 20:34, Andres Freund wrote:
> Hi,
>
> On 2015-04-09 18:11:27 +0300, Peter Slavov wrote:
>> I prepared a test case that can reproduce the problem.
> Yup. I can reproduce it... I did not (yet) have the time to run the test
> to completion, but I believe the attached patch should fix the problem
> (and also improve performance a bit...).
>
> Using the SQL interface for such large transactions isn't going to be
> fun, as all of the data, due to the nature of the set-returning function
> implementation in postgres, will additionally be written into a
> tuplestore. The streaming interface doesn't have that behaviour.
>
> Additionally it's probably not a good idea to stream such a large
> result set via SELECT using psql - IIRC it'll try to store all that data
> in memory :). Try something like
> \copy (select * from pg_logical_slot_peek_changes('testing', null, 1)) TO /tmp/f
> or such.
>
> Greetings,
>
> Andres Freund
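For readers following the thread: the streaming approach Peter mentions can be driven from the command line with pg_recvlogical, which consumes changes over the replication protocol instead of buffering a full result set the way a SELECT through psql does. The sketch below is a minimal, hedged example, not from the thread itself; the slot name `testing` is taken from Andres' \copy example above, and the database name `mydb` and output path are placeholders.

```shell
# Stream changes from the existing logical slot "testing" in database "mydb"
# and append them to a file; data arrives incrementally rather than being
# accumulated in client memory first.
pg_recvlogical -d mydb --slot testing --start -f /tmp/changes.txt

# If the slot does not exist yet, it can be created first (the test_decoding
# output plugin ships with PostgreSQL as a demo plugin):
#   pg_recvlogical -d mydb --slot testing --create-slot --plugin test_decoding
```

Note that, unlike pg_logical_slot_peek_changes(), consuming a slot this way advances it: once pg_recvlogical confirms receipt, the consumed changes are gone from the slot.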