Re: Back-patch change in hashed DISTINCT estimation?
From | Pavan Deolasee
---|---
Subject | Re: Back-patch change in hashed DISTINCT estimation?
Date |
Msg-id | CABOikdP_c6j7NpQWaoxJJ4Zkq0uevxO0SZhpzn8CX=y+MAERJw@mail.gmail.com
In reply to | Back-patch change in hashed DISTINCT estimation? (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Wed, Aug 21, 2013 at 2:54 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> What I'm wondering is whether to back-patch this or leave well enough
> alone. The risk of back-patching is that it might destabilize plan
> choices that people like. (In Tomas' original example, the underestimate
> of the table size leads it to choose a plan that is in fact better.)
> The risk of not back-patching is that the error could lead to
> out-of-memory failures because the hash aggregation uses more memory
> than the planner expected.
FWIW, I recently investigated an out-of-memory issue in hash aggregation. That case was caused by a large temp table that was never manually analysed, which led to a bad plan choice. Out-of-memory errors are very confusing to users, and I have seen them needlessly tinkering with their memory settings to work around the issue. So +1 for fixing the bug in the back branches, even though I understand there could be some casualties on the border.
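
To make that failure mode concrete, here is a minimal sketch of the scenario (the table and column names are hypothetical, and the exact plan choice depends on the data and on settings such as work_mem). Temp tables are never analysed by autovacuum, so until a manual ANALYZE the planner falls back on default ndistinct estimates and can pick a HashAggregate whose hash table far exceeds what it budgeted for:

-- Large temp table; autovacuum never analyses temp tables, so the
-- planner has no column statistics until ANALYZE is run by hand.
CREATE TEMP TABLE big_tmp AS
SELECT (random() * 1000000)::int AS id,
       md5(random()::text)      AS payload
FROM generate_series(1, 5000000);

-- With no statistics the number of distinct ids is guessed at a small
-- default (200), so a HashAggregate looks cheap; at run time the hash
-- table holds roughly a million entries and can blow well past work_mem.
EXPLAIN SELECT DISTINCT id FROM big_tmp;

-- A manual ANALYZE fixes the ndistinct estimate, letting the planner
-- size the hash table realistically or switch to a sort-based plan.
ANALYZE big_tmp;
EXPLAIN SELECT DISTINCT id FROM big_tmp;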
Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee