Re: is there a way to firmly cap postgres worker memory consumption?

From: Steve Kehlet
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date:
Msg-id: CA+bfosEHQRtpySEc203XUq0bEz3jbgb=c0dRSZo6AOTzOasFSA@mail.gmail.com
In reply to: Re: is there a way to firmly cap postgres worker memory consumption?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
On Tue, Apr 8, 2014 at 6:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Well, here's the problem:
>>         ExprContext: 812638208 total in 108 blocks; 183520 free (171
>> chunks); 812454688 used
>
> So something involved in expression evaluation is eating memory.
> Looking at the query itself, I'd have to bet on this:
>
>>            ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
>
> My guess is that this aggregation is being done across a lot more rows
> than you were expecting, and the resultant array/string therefore eats
> lots of memory.  You might try replacing that with COUNT(*), or even
> better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
> evidence about what the query is asking to compute.

The devs have moved on and want to stick with their new query, so I'll just chalk this up to a bad query and let it go. But I'm glad to have learned a few new tricks, thanks.
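
To make the diagnostic substitution concrete, here is a minimal sketch of what Tom suggests. The table name some_table, its alias mm, and the grouping column other_key are hypothetical stand-ins, since the full query isn't shown in the thread; only the MM.ID expression and the COUNT(*)/SUM(LENGTH(...)) aggregates come from the messages above.

    -- Aggregation from the problem query, placed in a hypothetical context:
    -- some_table, mm, and other_key stand in for the real FROM/GROUP BY.
    SELECT other_key,
           ARRAY_TO_STRING(ARRAY_AGG(mm.id::character varying), ',') AS id_list
    FROM   some_table mm
    GROUP  BY other_key;

    -- Diagnostic variant: keep the grouping, but only measure how many rows
    -- each group aggregates and how much text the array/string would need,
    -- instead of actually building it.
    SELECT other_key,
           COUNT(*)                               AS rows_per_group,
           SUM(LENGTH(mm.id::character varying))  AS total_id_chars
    FROM   some_table mm
    GROUP  BY other_key;

If rows_per_group or total_id_chars comes back enormous, that is the evidence Tom is asking for: the array/string built per group is what fills the ExprContext.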
 
> Meanwhile, it seems like ulimit -v would provide the safety valve
> you asked for originally.

Thank you Amador and Tom for the ulimit solution, that's exactly what I needed.
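
For reference, a rough sketch of the ulimit approach. The 4 GB value and the data directory path are placeholders, and where the limit belongs depends on how the postmaster is started on a given system (init script, pg_ctl wrapper, etc.).

    # Set a per-process virtual address-space cap in the shell that starts the
    # postmaster; forked backends inherit it. The ulimit -v value is in kilobytes.
    ulimit -v 4194304                      # ~4 GB cap (placeholder value)
    pg_ctl start -D /var/lib/pgsql/data    # placeholder data directory

    # A backend that grows past the cap should fail its allocation and abort
    # the current query with an "out of memory" error, instead of dragging the
    # whole host into swap.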
