Re: Out of memory on SELECT in 8.3.5
From | Scott Marlowe
---|---
Subject | Re: Out of memory on SELECT in 8.3.5
Date |
Msg-id | dcc563d10902090128u1909c686h989af13692df9e74@mail.gmail.com
In reply to | Re: Out of memory on SELECT in 8.3.5 (John R Pierce <pierce@hogranch.com>)
List | pgsql-general
On Mon, Feb 9, 2009 at 2:17 AM, John R Pierce <pierce@hogranch.com> wrote:
> Matt Magoffin wrote:
>>
>> We have 100+ postgres processes running, so for an individual process,
>> could the 1024 file limit be doing anything to this query? Or would I see
>> an explicit error message regarding this condition?
>
> with 100 concurrent postgres connections, if they all did something
> requiring large amounts of work_mem, you could allocate 100 * 125MB (I
> believe that's what you said it was set to?) which is like 12GB :-O
>
> in fact a single query that's doing multiple sorts of large datasets for a
> messy join (or other similar activity) can involve several instances of
> work_mem. multiply that by 100 queries, and ouch.
>
> have you considered using a connection pool to reduce the postgres process
> count?

No matter what, I am pretty conservative with work_mem for these reasons.
Plus, I tested most of our queries, and raising work_mem above 16MB had no
real positive effect on most of them. If I have a single reporting query
that can use more than that, I set work_mem higher and run that query by
itself (from things like cron jobs) rather than just leaving work_mem
really high. High work_mem is a bit of a foot gun.
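A minimal sketch of the per-query approach described above: raise work_mem only for the session running the heavy report, so the server-wide default stays conservative. The memory value and the query itself are hypothetical, not from the original thread.

```sql
-- Raise work_mem for this session only; the cluster-wide default is untouched.
SET work_mem = '256MB';  -- hypothetical value for one large reporting query

-- Run the heavy report here (hypothetical query that sorts/aggregates a lot).
SELECT region, count(*) FROM sales GROUP BY region ORDER BY region;

-- Return to the server default for anything else on this connection.
RESET work_mem;
```

Because SET without LOCAL is session-scoped, a cron job that opens its own connection, runs the report, and disconnects never affects the work_mem seen by the other 100+ backends.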