Re: bad estimation together with large work_mem generates terrible slow hash joins
| From | Robert Haas |
|---|---|
| Subject | Re: bad estimation together with large work_mem generates terrible slow hash joins |
| Date | |
| Msg-id | CA+TgmoZq23h1SzSg82OKYJb+7zZ9FDAxvoApDhT7NK5M2hVx5g@mail.gmail.com |
| In response to | Re: bad estimation together with large work_mem generates terrible slow hash joins (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: bad estimation together with large work_mem generates terrible slow hash joins |
| List | pgsql-hackers |
On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> With the exception of ExecChooseHashTableSize() and a lot of stylistic
>> issues along the lines of what I've already complained about, this
>> patch seems pretty good to me.  It does three things:
>> ...
>> (3) It allows the number of batches to increase on the fly while the
>> hash join is in process.  This case arises when we initially estimate
>> that we only need a small hash table, and then it turns out that there
>> are more tuples than we expect.  Without this code, the hash table's
>> load factor gets too high and things start to suck.
>
> Pardon me for not having read the patch yet, but what part of (3)
> wasn't there already?

EINSUFFICIENTCAFFEINE.  It allows the number of BUCKETS to increase,
not the number of batches.  As you say, the number of batches could
already increase.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
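The mechanism under discussion, growing the number of buckets on the fly so the load factor stays low when the planner's row estimate was too small, can be sketched in miniature. This is a hedged illustration, not the patch's actual code: the names (`MiniHash`, `NTUP_PER_BUCKET`, `minihash_grow`) are invented for this example, and a real hash join tracks far more state. The core idea is just: when tuples-per-bucket would exceed the target, double the power-of-two bucket array and re-link every entry.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature of bucket growth in a chained hash table.
 * When the load factor (tuples per bucket) would exceed the target,
 * the bucket array doubles and existing entries are re-linked. */

#define NTUP_PER_BUCKET 1       /* target load factor for this sketch */

typedef struct Entry {
    unsigned key;
    struct Entry *next;
} Entry;

typedef struct {
    Entry **buckets;
    size_t  nbuckets;           /* always a power of two */
    size_t  ntuples;
} MiniHash;

static void minihash_init(MiniHash *h, size_t nbuckets)
{
    h->buckets = calloc(nbuckets, sizeof(Entry *));
    h->nbuckets = nbuckets;
    h->ntuples = 0;
}

/* Double the bucket array and re-link every entry; the tuples
 * themselves are not copied, only their chain pointers change. */
static void minihash_grow(MiniHash *h)
{
    size_t newn = h->nbuckets * 2;
    Entry **newb = calloc(newn, sizeof(Entry *));

    for (size_t i = 0; i < h->nbuckets; i++)
    {
        Entry *e = h->buckets[i];
        while (e)
        {
            Entry *next = e->next;
            size_t  b = e->key & (newn - 1);    /* rehash by masking */
            e->next = newb[b];
            newb[b] = e;
            e = next;
        }
    }
    free(h->buckets);
    h->buckets = newb;
    h->nbuckets = newn;
}

static void minihash_insert(MiniHash *h, unsigned key)
{
    if (h->ntuples + 1 > h->nbuckets * NTUP_PER_BUCKET)
        minihash_grow(h);       /* keep load factor at or below target */

    size_t b = key & (h->nbuckets - 1);
    Entry *e = malloc(sizeof(Entry));
    e->key = key;
    e->next = h->buckets[b];
    h->buckets[b] = e;
    h->ntuples++;
}
```

The point of the thread's distinction: growing *buckets* is cheap (only chain pointers move, as above), whereas growing *batches* means spilling tuples to disk, and PostgreSQL could already do the latter before this patch.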