Re: [HACKERS] to-do item for explain analyze of hash aggregates?
| From | Andres Freund |
|---|---|
| Subject | Re: [HACKERS] to-do item for explain analyze of hash aggregates? |
| Date | |
| Msg-id | 20170424210754.dwjgbf2w5ba2ejk6@alap3.anarazel.de |
| In reply to | Re: [HACKERS] to-do item for explain analyze of hash aggregates? (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
| List | pgsql-hackers |
On 2017-04-24 21:13:16 +0200, Tomas Vondra wrote:

> On 04/24/2017 08:52 PM, Andres Freund wrote:
> > On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
> > > The explain analyze of the hash step of a hash join reports something like this:
> > >
> > > -> Hash (cost=458287.68..458287.68 rows=24995368 width=37) (actual rows=24995353 loops=1)
> > >       Buckets: 33554432 Batches: 1 Memory Usage: 2019630kB
> > >
> > > Should the HashAggregate node also report on Buckets and Memory Usage? I would have found that useful several times. Is there some reason this is not wanted, or not possible?
> >
> > I've wanted that too. It's not impossible at all.
>
> Why wouldn't that be possible? We probably can't use exactly the same approach as Hash, because hash joins use a custom hash table while hashagg uses dynahash IIRC. But why couldn't we measure the amount of memory by looking at the memory context, for example?

It doesn't use dynahash anymore (it is a simplehash.h-style table these days), but that should actually make it simpler, not harder.

- Andres
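To make the proposal concrete, here is a minimal standalone C sketch of the kind of instrumentation being discussed: a simplehash.h-style open-addressing table (power-of-two buckets, linear probing) that tracks its bucket count and allocated bytes so an EXPLAIN-ANALYZE-style summary line can be printed after execution. This is an illustration only; all struct and function names here are invented for the sketch and are not PostgreSQL's.

```c
/*
 * Sketch of hash-aggregate instrumentation: track nbuckets and bytes
 * allocated, then report them the way EXPLAIN ANALYZE reports the Hash
 * node's "Buckets: ... Memory Usage: ..." line.  Standalone toy code,
 * not PostgreSQL source.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct HashAggStats
{
    uint64_t nbuckets;       /* number of buckets in the table */
    uint64_t mem_allocated;  /* bytes allocated for the table */
} HashAggStats;

typedef struct Entry
{
    uint32_t key;
    int64_t  count;          /* the "aggregate state": a COUNT(*) */
    int      used;
} Entry;

typedef struct HashTable
{
    Entry       *buckets;
    uint64_t     nbuckets;   /* always a power of two */
    HashAggStats stats;
} HashTable;

/* cheap integer hash (Murmur-style finalizer) */
static uint32_t
hash_u32(uint32_t x)
{
    x ^= x >> 16;
    x *= 0x45d9f3bu;
    x ^= x >> 16;
    return x;
}

static HashTable *
hash_create(uint64_t nbuckets)
{
    HashTable *ht = malloc(sizeof(HashTable));

    ht->nbuckets = nbuckets;
    ht->buckets = calloc(nbuckets, sizeof(Entry));
    ht->stats.nbuckets = nbuckets;
    ht->stats.mem_allocated = sizeof(HashTable) + nbuckets * sizeof(Entry);
    return ht;
}

/* find key by linear probing, inserting an empty entry if absent */
static Entry *
hash_upsert(HashTable *ht, uint32_t key)
{
    uint64_t i = hash_u32(key) & (ht->nbuckets - 1);

    while (ht->buckets[i].used && ht->buckets[i].key != key)
        i = (i + 1) & (ht->nbuckets - 1);
    if (!ht->buckets[i].used)
    {
        ht->buckets[i].used = 1;
        ht->buckets[i].key = key;
    }
    return &ht->buckets[i];
}

/* the kind of line EXPLAIN ANALYZE could emit for a HashAggregate */
static void
explain_hashagg(const HashAggStats *stats)
{
    printf("  Buckets: %llu  Memory Usage: %llukB\n",
           (unsigned long long) stats->nbuckets,
           (unsigned long long) (stats->mem_allocated + 1023) / 1024);
}

int
main(void)
{
    /* fixed-size table; the demo stores far fewer than 1024 entries */
    HashTable *ht = hash_create(1024);

    /* simulate SELECT key, count(*) FROM ... GROUP BY key */
    for (int i = 0; i < 500; i++)
        hash_upsert(ht, (uint32_t) (i % 37))->count++;

    explain_hashagg(&ht->stats);
    free(ht->buckets);
    free(ht);
    return 0;
}
```

In PostgreSQL itself the aggregate hash table lives in a memory context, so, as Tomas suggests upthread, the reported figure could equally come from the memory context's own accounting rather than from hand-counted allocations as in this toy.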