Re: Spilling hashed SetOps and aggregates to disk
From | Andres Freund
---|---
Subject | Re: Spilling hashed SetOps and aggregates to disk
Date | 
Msg-id | 20180605125732.jdbvz54z6q36aud7@alap3.anarazel.de
In reply to | Re: Spilling hashed SetOps and aggregates to disk (David Rowley <david.rowley@2ndquadrant.com>)
Responses | Re: Spilling hashed SetOps and aggregates to disk; Re: Spilling hashed SetOps and aggregates to disk
List | pgsql-hackers
On 2018-06-06 00:53:42 +1200, David Rowley wrote:
> On 6 June 2018 at 00:45, Andres Freund <andres@anarazel.de> wrote:
> > On 2018-06-05 09:35:13 +0200, Tomas Vondra wrote:
> >> I wonder if an aggregate might use a custom context
> >> internally (I don't recall anything like that). The accounting capability
> >> seems potentially useful for other places, and those might not use AllocSet
> >> (or at least not directly).
> >
> > Yea, that seems like a big issue.
>
> Unfortunately, at least one of the built-in ones do. See initArrayResultArr.

I think it's OK to only handle this gracefully if serialization is supported.

But I think my proposal to continue using a hashtable for the already known groups, and sorting for additional groups, would largely address that, right? We couldn't deal with individual groups becoming too large, but we could easily deal with the number of groups becoming too large.

Greetings,

Andres Freund
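[Editor's note: the following is a minimal, self-contained sketch of the hybrid strategy described above, not PostgreSQL executor code. It assumes a fixed group budget (MAX_GROUPS), models the in-memory group store with a plain array instead of a real hash table, and uses an in-memory array (spill[]) in place of a spill tape or tuplesort; all names are hypothetical. Tuples for groups already held in memory keep advancing their transition state in place, while tuples for new groups are spilled and aggregated later in sorted order.]

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_GROUPS 4            /* pretend memory budget: at most 4 in-memory groups */

typedef struct { int key; long count; } Group;

static Group groups[MAX_GROUPS];
static int   ngroups = 0;

static int   spill[1024];        /* stand-in for a spill tape / tuplesort */
static int   nspilled = 0;

/* Linear lookup stands in for a hash-table probe. */
static Group *lookup_group(int key)
{
    for (int i = 0; i < ngroups; i++)
        if (groups[i].key == key)
            return &groups[i];
    return NULL;
}

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *) a > *(const int *) b) -
           (*(const int *) a < *(const int *) b);
}

int main(void)
{
    int input[] = {7, 3, 7, 9, 1, 5, 3, 8, 5, 7, 2, 9, 8, 2, 1};
    int ninput = sizeof(input) / sizeof(input[0]);

    /* Phase 1: hash aggregation with a hard cap on the number of groups. */
    for (int i = 0; i < ninput; i++)
    {
        Group *g = lookup_group(input[i]);

        if (g)
            g->count++;                      /* known group: advance in place */
        else if (ngroups < MAX_GROUPS)
        {
            groups[ngroups].key = input[i];  /* still room: create new group */
            groups[ngroups].count = 1;
            ngroups++;
        }
        else
            spill[nspilled++] = input[i];    /* over budget: spill the tuple */
    }

    for (int i = 0; i < ngroups; i++)
        printf("hashed  key=%d count=%ld\n", groups[i].key, groups[i].count);

    /* Phase 2: sort the spilled tuples and aggregate them group by group. */
    qsort(spill, nspilled, sizeof(int), cmp_int);
    for (int i = 0; i < nspilled; )
    {
        int  key = spill[i];
        long count = 0;

        while (i < nspilled && spill[i] == key) { count++; i++; }
        printf("sorted  key=%d count=%ld\n", key, count);
    }
    return 0;
}
```

Note how duplicates of groups that were admitted before the budget was hit never reach the spill path, which is why this handles "too many groups" gracefully but not "one group's state grows too large".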