Re: bad estimation together with large work_mem generates terrible slow hash joins
| From | Robert Haas |
|---|---|
| Subject | Re: bad estimation together with large work_mem generates terrible slow hash joins |
| Date | |
| Msg-id | CA+TgmobVELQMdcPQ7Zvh5Q5kUwmTBAN7L2YqkK6BN5ir=Soogg@mail.gmail.com |
| In reply to | Re: bad estimation together with large work_mem generates terrible slow hash joins (Tomas Vondra <tv@fuzzy.cz>) |
| List | pgsql-hackers |
On Wed, Sep 10, 2014 at 3:02 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> On 10.9.2014 20:31, Robert Haas wrote:
>> On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas
>> <hlinnakangas@vmware.com> wrote:
>>> The dense-alloc-v5.patch looks good to me. I have committed that with minor
>>> cleanup (more comments below). I have not looked at the second patch.
>>
>> Gah. I was in the middle of doing this. Sigh.
>>
>>>> * the chunk size is 32kB (instead of 16kB), and we're using a 1/4
>>>> threshold for 'oversized' items
>>>>
>>>> We need the threshold to be >=8kB, to trigger the special case
>>>> within AllocSet. The 1/4 rule is consistent with ALLOC_CHUNK_FRACTION.
>>>
>>> Should we care about the fact that if there are only a few tuples, we will
>>> nevertheless waste 32kB of memory for the chunk? I guess not, but I thought
>>> I'd mention it. The smallest allowed value for work_mem is 64kB.
>>
>> I think we should change the threshold here to 1/8th. The worst-case
>> memory wastage as-is is ~32kB/5 > 6kB.
>
> So you'd lower the threshold to 4kB? That may lower the wastage in the
> chunks, but palloc will actually allocate 8kB anyway, wasting up to an
> additional 4kB. So I don't see how lowering the threshold to 1/8th
> improves the situation ...

Ah, OK. Well, never mind then. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
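To make the arithmetic in the exchange above concrete, here is a minimal standalone sketch (not the actual nodeHash.c/aset.c code) that models the trade-off being discussed: tuples at or below the "oversized" threshold are packed into the shared 32kB dense-allocation chunk, while larger tuples are palloc'd individually, and AllocSet rounds such small requests up to the next power of two (up to its 8kB chunk limit), so a ~5kB tuple palloc'd on its own still consumes 8kB. The `pow2_round` and `space_used` helpers and the example tuple size are assumptions chosen purely for illustration.

```c
/*
 * Illustrative sketch only: models why lowering the "oversized tuple"
 * threshold from 1/4 (8kB) to 1/8 (4kB) of a 32kB dense chunk doesn't
 * obviously reduce waste, because AllocSet rounds small palloc requests
 * up to the next power of two.
 */
#include <stdio.h>
#include <stddef.h>

#define DENSE_CHUNK_SIZE     (32 * 1024)  /* dense-alloc chunk size from the patch */
#define ALLOCSET_CHUNK_LIMIT (8 * 1024)   /* requests <= this are rounded to a power of two */

/* Round a request up to the next power of two, as AllocSet does for small chunks. */
static size_t
pow2_round(size_t request)
{
    size_t size = 8;                      /* minimal chunk size, for illustration */

    while (size < request)
        size <<= 1;
    return size;
}

/* Memory actually consumed by one tuple under a given oversize threshold. */
static size_t
space_used(size_t tuple_size, size_t oversize_threshold)
{
    if (tuple_size <= oversize_threshold)
        return tuple_size;                /* packed densely into the shared 32kB chunk */
    if (tuple_size <= ALLOCSET_CHUNK_LIMIT)
        return pow2_round(tuple_size);    /* separate palloc, rounded up to a power of two */
    return tuple_size;                    /* > 8kB: AllocSet gives it a dedicated block */
}

int
main(void)
{
    size_t tuple = 5 * 1024;              /* a ~5kB tuple, between the 1/8 and 1/4 thresholds */

    printf("1/4 threshold (8kB): %zu bytes consumed\n",
           space_used(tuple, DENSE_CHUNK_SIZE / 4));
    printf("1/8 threshold (4kB): %zu bytes consumed\n",
           space_used(tuple, DENSE_CHUNK_SIZE / 8));

    /*
     * With the 1/8 threshold the tuple is palloc'd alone and rounded up to
     * 8kB, wasting ~3kB -- roughly the waste the lower threshold was meant
     * to avoid, which is Tomas's point above.
     */
    return 0;
}
```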