Re: bad estimation together with large work_mem generates terrible slow hash joins
From | Robert Haas |
---|---|
Subject | Re: bad estimation together with large work_mem generates terrible slow hash joins |
Date | |
Msg-id | CA+TgmoYsXfrFeFoyz7SCqA7gi6nF6+qH8OGMvZM7_yovouWQrw@mail.gmail.com |
In reply to | Re: bad estimation together with large work_mem generates terrible slow hash joins (Heikki Linnakangas <hlinnakangas@vmware.com>) |
Replies | Re: bad estimation together with large work_mem generates terrible slow hash joins; Re: bad estimation together with large work_mem generates terrible slow hash joins |
List | pgsql-hackers |
On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
> The dense-alloc-v5.patch looks good to me. I have committed that with minor
> cleanup (more comments below). I have not looked at the second patch.

Gah. I was in the middle of doing this. Sigh.

>> * the chunks size is 32kB (instead of 16kB), and we're using 1/4
>> threshold for 'oversized' items
>>
>> We need the threshold to be >=8kB, to trigger the special case
>> within AllocSet. The 1/4 rule is consistent with ALLOC_CHUNK_FRACTION.
>
> Should we care about the fact that if there are only a few tuples, we will
> nevertheless waste 32kB of memory for the chunk? I guess not, but I thought
> I'd mention it. The smallest allowed value for work_mem is 64kB.

I think we should change the threshold here to 1/8th. The worst case
memory wastage as-is is ~32k/5 > 6k.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
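[Editor's note] For readers following the thread: the threshold under discussion decides whether a tuple is packed into the current shared 32kB chunk or given its own dedicated allocation. Below is a minimal sketch of that decision in C; it is not the committed code, the names (CHUNK_SIZE, CHUNK_THRESHOLD, Chunk, dense_alloc_sketch) are made up for illustration, and alignment and out-of-memory handling are omitted.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical constants mirroring the values discussed in the thread:
 * 32kB shared chunks, with tuples larger than 1/4 of a chunk (8kB)
 * treated as "oversized".  Changing the divisor from 4 to 8 is what is
 * being proposed above. */
#define CHUNK_SIZE      (32 * 1024)
#define CHUNK_THRESHOLD (CHUNK_SIZE / 4)

typedef struct Chunk
{
    struct Chunk *next;     /* chunks form a simple linked list */
    size_t        used;     /* bytes already handed out from data[] */
    size_t        maxlen;   /* capacity of data[] */
    char          data[];   /* tuples are copied into this area */
} Chunk;

/*
 * Dense-allocation sketch: pack small tuples into shared chunks, give
 * oversized tuples a dedicated chunk of exactly their size.
 */
static void *
dense_alloc_sketch(Chunk **chunks, size_t size)
{
    Chunk *chunk;

    if (size > CHUNK_THRESHOLD)
    {
        /* Oversized tuple: dedicated chunk, nothing else shares it. */
        chunk = malloc(offsetof(Chunk, data) + size);
        chunk->used = size;
        chunk->maxlen = size;
        if (*chunks)
        {
            /* Link behind the head so the current shared chunk stays usable. */
            chunk->next = (*chunks)->next;
            (*chunks)->next = chunk;
        }
        else
        {
            chunk->next = NULL;
            *chunks = chunk;
        }
        return chunk->data;
    }

    /* Small tuple: start a new shared chunk if the current one can't fit it. */
    if (*chunks == NULL || (*chunks)->maxlen - (*chunks)->used < size)
    {
        chunk = malloc(offsetof(Chunk, data) + CHUNK_SIZE);
        chunk->next = *chunks;
        chunk->used = 0;
        chunk->maxlen = CHUNK_SIZE;
        *chunks = chunk;
    }

    chunk = *chunks;
    chunk->used += size;
    return chunk->data + (chunk->used - size);
}
```

The trade-off being debated is the divisor in CHUNK_THRESHOLD: a smaller threshold (1/8 instead of 1/4) caps the unusable tail of a shared chunk at a smaller size, at the price of sending more tuples through the oversized path. The "~32k/5 > 6k" figure above presumably refers to the worst packing case under the 1/4 rule: tuples just over 32kB/5 (about 6.4kB) are still under the 8kB threshold, only four of them fit per chunk, and just under 6.4kB is left unusable.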