Re: tweaking NTUP_PER_BUCKET
| From | Tom Lane |
|---|---|
| Subject | Re: tweaking NTUP_PER_BUCKET |
| Date | |
| Msg-id | 521.1405794240@sss.pgh.pa.us |
| In reply to | Re: tweaking NTUP_PER_BUCKET (Tomas Vondra <tv@fuzzy.cz>) |
| Responses | Re: tweaking NTUP_PER_BUCKET |
| List | pgsql-hackers |
Tomas Vondra <tv@fuzzy.cz> writes:
> I've reviewed the two test cases mentioned here, and sadly there's
> nothing that can be 'fixed' by this patch. The problem here lies in the
> planning stage, which decides to hash the large table - we can't fix
> that in the executor.

We've heard a couple of reports before of the planner deciding to hash a larger table rather than a smaller one. The only reason I can think of for that is if the smaller table has many more duplicates, so that the planner thinks the executor might end up traversing long hash chains. The planner's estimates could easily be off in this area, of course; estimate_hash_bucketsize() is the likely culprit if it's wrong.

Which test case are you seeing this in, exactly?

        regards, tom lane
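The behavior described above can be reproduced and inspected with EXPLAIN. This is a hypothetical session (the table names `big` and `small` and the row counts are illustrative, not from the thread): `small` has heavily duplicated join keys, which can make the planner expect long hash chains on it and choose to hash the larger relation instead.

```sql
-- Hypothetical setup: a large table with unique join keys, and a small
-- table whose join key takes only 10 distinct values (heavy duplication).
CREATE TABLE big   AS SELECT i        AS id FROM generate_series(1, 1000000) i;
CREATE TABLE small AS SELECT (i % 10) AS id FROM generate_series(1, 10000) i;
ANALYZE big;
ANALYZE small;

-- In the plan, the relation under the Hash node (the inner side of the
-- Hash Join) is the one being hashed.  If estimate_hash_bucketsize()
-- predicts long chains for small.id, the planner may put `big` there.
EXPLAIN SELECT * FROM big JOIN small USING (id);

-- The duplicate estimate driving that decision is visible in the stats:
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename IN ('big', 'small') AND attname = 'id';
```

Checking which child sits under the Hash node, and whether `n_distinct` for the join column is realistic, is a quick way to tell whether the problem is the executor or (as in this case) the planner's bucket-size estimate.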