Re: Spilling hashed SetOps and aggregates to disk
From | David Rowley
---|---
Subject | Re: Spilling hashed SetOps and aggregates to disk
Date |
Msg-id | CAKJS1f-bnfCjMewwGf4nu1wAfFPv4bSch0qk7XfHDjFcbvmDLQ@mail.gmail.com
In reply to | Re: Spilling hashed SetOps and aggregates to disk (Andres Freund <andres@anarazel.de>)
Responses | Re: Spilling hashed SetOps and aggregates to disk
List | pgsql-hackers
On 5 June 2018 at 06:52, Andres Freund <andres@anarazel.de> wrote:
> That part has gotten a bit easier since, because we have serialize /
> deserialize operations for aggregates these days.

True. Although not all built-in aggregates have those defined.

> I wonder whether, at least for aggregates, the better fix wouldn't be to
> switch to feeding the tuples into tuplesort upon memory exhaustion and
> doing a sort based aggregate. We have most of the infrastructure to do
> that due to grouping sets. It's just the pre-existing in-memory tuples
> that'd be problematic, in that the current transition values would need
> to serialized as well. But with a stable sort that'd not be
> particularly problematic, and that could easily be achieved.

Isn't there still a problem determining when the memory exhaustion
actually happens, though? As far as I know, we still have little knowledge
of how much memory each aggregate state occupies. Jeff tried to solve this
in [1], but from what I remember, there was too much concern about the
overhead of the additional accounting code.

[1] https://www.postgresql.org/message-id/flat/CAKJS1f8yvvvj-sVDv_bcxkzcZKq0ZOTVhX0dHfnYDct2Mycq5Q%40mail.gmail.com#CAKJS1f8yvvvj-sVDv_bcxkzcZKq0ZOTVhX0dHfnYDct2Mycq5Q@mail.gmail.com

--
David Rowley                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
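As a minimal illustration of the point about built-in aggregates lacking serialization support (not part of the original message, and assuming a PostgreSQL 9.6-or-later catalog), the query below lists built-in aggregates whose transition state is the opaque internal type but which register no serialization function; the exact rows returned depend on the server version:

```sql
-- Hedged sketch: find built-in aggregates with an 'internal' transition
-- state that define no serialization/deserialization support functions.
-- Results vary by PostgreSQL version; this only illustrates the point above.
SELECT a.aggfnoid::oid::regprocedure AS aggregate,
       a.aggtranstype::regtype       AS transition_type
FROM   pg_aggregate AS a
WHERE  a.aggtranstype = 'internal'::regtype
  AND  a.aggserialfn = 0            -- no serialfunc registered
ORDER  BY 1;
```

Any aggregate returned by such a query would need new support functions before its transition state could be written out as part of a spill strategy based on serialization.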