Re: Really bad blowups with hash outer join and nulls
From | Andrew Gierth
---|---
Subject | Re: Really bad blowups with hash outer join and nulls
Date |
Msg-id | 87a90elk7f.fsf@news-spur.riddles.org.uk
In reply to | Re: Really bad blowups with hash outer join and nulls (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses | Re: Really bad blowups with hash outer join and nulls
List | pgsql-hackers
>>>>> "Tomas" == Tomas Vondra <tomas.vondra@2ndquadrant.com> writes: Tomas> Improving the estimates is always good, but it's not going toTomas> fix the case of non-NULL values (it shouldn'tbe all thatTomas> difficult to create such examples with a value whose hash startsTomas> with a bunch of zeroes). Right now, I can't get it to plan such an example, because (a) if there are no stats to work from then the planner makes fairly pessimistic assumptions about hash bucket filling, and (b) if there _are_ stats to work from, then a frequently-occurring non-null value shows up as an MCV and the planner takes that into account to calculate bucketsize. The problem could only be demonstrated for NULLs because the planner was ignoring NULL for the purposes of estimating bucketsize, which is correct for all join types except RIGHT and FULL (which, iirc, are more recent additions to the hashjoin repertoire). If you want to try testing it, you may find this useful: select i, hashint8(i) from unnest(array[1474049294, -1779024306, -1329041947]) u(i); i | hashint8 -------------+---------- 1474049294 | 0-1779024306 | 0-1329041947 | 0 (3 rows) (those are the only three int4 values that hash to exactly 0) It's probably possible to construct pathological cases by finding a lot of different values with zeros in the high bits of the hash, but that's something that wouldn't be likely to happen by chance. Tomas> I think this might be solved by relaxing the check a bit. Yeah, that looks potentially useful. -- Andrew (irc:RhodiumToad)