Re: is there a way to firmly cap postgres worker memory consumption?
| From | Tom Lane |
|---|---|
| Subject | Re: is there a way to firmly cap postgres worker memory consumption? |
| Date | |
| Msg-id | 23268.1396983974@sss.pgh.pa.us |
| In response to | is there a way to firmly cap postgres worker memory consumption? (Steve Kehlet <steve.kehlet@gmail.com>) |
| List | pgsql-general |
Steve Kehlet <steve.kehlet@gmail.com> writes:
> But for now, the devs are asking me for some way to put a cap on a postgres
> query's total memory consumption. I'm familiar with the available settings
> on the "Resource Consumption" docs (and you can see my settings in my gist
> above, including work_mem turned way down to 1MB for testing), but it seems
> like there are things like Materialize that remain uncappable, since
> they're not constrained by the shared_buffers and work_mem limits.

Materialize nodes should honor the work_mem limit. That's per node of course, but you only have half a dozen of them in this query, so I doubt that's where the problem is.

You mentioned array_agg ... how are you using that exactly, and how large a result are you expecting it to create? If you had an array_agg being used in a HashAggregate plan node, I'd figure for sure that was the source of your problem, but the explain output shows it's a GroupAggregate so there should only be one array_agg active at a time.

Anyway, what I would try doing is starting the postmaster with a "ulimit -d" value corresponding to the max per-process data segment size you want. Something in the range of 50-100MB would probably be reasonable if your queries aren't too complex.

			regards, tom lane
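
A minimal sketch of the "ulimit -d" approach suggested above, assuming a cap of roughly 100MB and a hypothetical data directory path; adjust both for your environment. ulimit -d takes kilobytes, and the limit is inherited by the backend processes the postmaster forks.

```sh
# Sketch only: cap the per-process data segment before starting the postmaster.
# ulimit -d is in kilobytes; backends inherit the limit, so allocations beyond
# the cap fail and the offending query errors out instead of growing unbounded.
ulimit -d 102400                      # ~100MB, per the 50-100MB suggestion above
pg_ctl -D /var/lib/pgsql/data start   # data directory path is an assumption
```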