Re: DBT-3 with SF=20 got failed
From | Tomas Vondra
---|---
Subject | Re: DBT-3 with SF=20 got failed
Date |
Msg-id | 55D53853.9050106@2ndquadrant.com
In reply to | Re: DBT-3 with SF=20 got failed (Kohei KaiGai &lt;kaigai@kaigai.gr.jp&gt;)
Responses | Re: DBT-3 with SF=20 got failed
List | pgsql-hackers
Hello KaiGai-san,

On 08/19/2015 03:19 PM, Kohei KaiGai wrote:
> Unless we have no fail-safe mechanism when planner estimated too
> large number of tuples than actual needs, a strange estimation will
> consume massive amount of RAMs. It's a bad side effect.
> My previous patch didn't pay attention to the scenario, so needs to
> revise the patch.

I agree we need to put a few more safeguards there (e.g. make sure we don't overflow INT when counting the buckets, which may happen with the amounts of work_mem we'll see in the wild soon). But I think we should not do any extensive changes to how we size the hashtable - that's not something we should do in a bugfix, I think.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services