Re: Hash aggregates blowing out memory
From        | Mike Harding
Subject     | Re: Hash aggregates blowing out memory
Date        |
Msg-id      | 1109369054.86993.17.camel@bsd.mvh
In reply to | Re: Hash aggregates blowing out memory (Tom Lane <tgl@sss.pgh.pa.us>)
Responses   | Re: Hash aggregates blowing out memory
List        | pgsql-general
Any way to adjust n_distinct to be more accurate?

I don't think a 'disk spill' would be that bad, if you could re-sort the
hash in place. If nothing else, if it could -fail- when it reaches the
lower stratosphere and re-start, that would be faster than getting no
result at all... sort of an auto-disable of the hashagg.

On Fri, 2005-02-25 at 16:55 -0500, Tom Lane wrote:
> Mike Harding <mvh@ix.netcom.com> writes:
> > I've been having problems where a HashAggregate is used because of a bad
> > estimate of the distinct number of elements involved.
>
> If you're desperate, there's always enable_hashagg. Or reduce sort_mem
> enough so that even the misestimate looks like it will exceed sort_mem.
>
> In the long run it would be nice if HashAgg could spill to disk. We
> were expecting to see a contribution of code along that line last year
> (from the CMU/Berkeley database class) but it never showed up. The
> performance implications might be a bit grim anyway :-(
>
> regards, tom lane

--
Mike Harding <mvh@ix.netcom.com>
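A minimal sketch of the knobs discussed above, assuming a hypothetical
table "orders" aggregated by "customer_id" (both names invented for
illustration). Raising the per-column statistics target gives ANALYZE a
larger sample, which can improve the n_distinct estimate; the SET
commands are the session-level workarounds Tom mentions:

    -- Hypothetical names: give ANALYZE a bigger sample for this column
    -- so its n_distinct estimate has more data behind it.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;

    -- Workarounds for the current session: disable hash aggregation
    -- outright, or shrink sort_mem so even the underestimated hash
    -- table looks too big to the planner and it falls back to a sort.
    SET enable_hashagg = off;
    SET sort_mem = 1024;   -- in kB; renamed work_mem as of 8.0

    SELECT customer_id, count(*)
      FROM orders
     GROUP BY customer_id;

Either setting can be flipped back with RESET once the problem query has
run, so the rest of the session keeps the normal planner behavior.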