Re: [INTERFACES] Back end memory consumption (was Front end memory consumption in SELECT)
From:        Tom Lane
Subject:     Re: [INTERFACES] Back end memory consumption (was Front end memory consumption in SELECT)
Date:
Msg-id:      29989.944190601@sss.pgh.pa.us
In reply to: Re: [INTERFACES] Back end memory consumption (was Front end memory consumption in SELECT) (Douglas Thomson <dougt@mugc.cc.monash.edu.au>)
List:        pgsql-interfaces
Douglas Thomson <dougt@mugc.cc.monash.edu.au> writes:
> If I hadn't checked that I have indexes on
> all the attributes (and attribute combinations) that make up the keys
> needed for the join, I would think the back end was trying to read in
> and sort a large part of the data just for the query!

Well, of course it *is* ... the trick is to make sure that anything really large ends up on disk (in a temp file) and not in memory.

Offhand the plan looks pretty reasonable for a 9-way join. I see that you have two sorts and five hashes going on here. Each of those will feel entitled to use as much memory as whatever your -S setting is, which IIRC is 512K by default. But 3.5meg altogether isn't enough to explain your memory usage ... unless you are using a larger-than-normal -S switch?

Also, while the sorts are pretty certain to honor your -S limit (plus or minus some slop), the hashes are much more probabilistic affairs. If a table is much larger than the planner guesses, or if the distribution of tuple values is very uneven, a hash join might use a lot more memory than -S. It's hard to tell whether this might be happening without a lot more info than EXPLAIN provides, however. EXPLAIN shows that your tables being hashed are all pretty small, the largest being 't24 users' at 6955 rows. If that's accurate then I doubt hash overrun is the explanation...

Anyone have any other ideas?

			regards, tom lane
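For reference, the budget described above comes to seven operations (two sorts plus five hashes) times 512K, i.e. roughly 3.5 meg. A minimal sketch of how that per-operation limit was adjusted in backends of this era, assuming the -S backend switch is forwarded through the postmaster (the data directory path here is hypothetical):

    # -S is the per-sort/per-hash memory budget in kilobytes (512 by default);
    # the postmaster's -o flag passes it along to every backend it spawns.
    postmaster -o '-S 512' -D /usr/local/pgsql/data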
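To judge whether a hash overrun is plausible, one can compare the planner's row estimate against the actual table size and refresh the optimizer statistics. A sketch using psql, where 'users' is the 6955-row table from the plan and 'mydb' is a hypothetical database name:

    # Actual row count, to compare against the estimate EXPLAIN printed:
    psql -c 'SELECT count(*) FROM users;' mydb
    # Refresh the statistics the planner bases its guesses on, then re-run
    # EXPLAIN on the original 9-way join and compare the estimated rows:
    psql -c 'VACUUM ANALYZE users;' mydb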