Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching
From: Robert Haas
Subject: Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching
Date:
Msg-id: CA+TgmoZT8LBsj6Stytpa5ePgWiTXS0DKEQYx2qzZBUN4dbN6kQ@mail.gmail.com
In response to: Re: PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching (Tomas Vondra <tv@fuzzy.cz>)
List: pgsql-hackers
On Fri, Dec 12, 2014 at 4:54 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
>>>> Well, this is sort of one of the problems with work_mem. When we
>>>> switch to a tape sort, or a tape-based materialize, we're probably far
>>>> from out of memory. But trying to set work_mem to the amount of
>>>> memory we have can easily result in a memory overrun if a load spike
>>>> causes lots of people to do it all at the same time. So we have to
>>>> set work_mem conservatively, but then the costing doesn't really come
>>>> out right. We could add some more costing parameters to try to model
>>>> this, but it's not obvious how to get it right.
>>>
>>> Ummm, I don't think that's what I proposed. What I had in mind was a
>>> flag "the batches are likely to stay in page cache". Because when it is
>>> likely, batching is probably faster (compared to increased load factor).
>>
>> How will you know whether to set the flag?
>
> I don't know. I just wanted to make it clear that I'm not suggesting
> messing with work_mem (increasing it or whatever). Or maybe I got your
> comments about memory overrun etc. wrong - now that I read it again,
> maybe it's meant just as an example of how difficult a problem it is?

More or less, yeah.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
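For illustration, a back-of-envelope sketch of the overrun described above (all numbers are hypothetical, not from the thread): each backend may allocate up to work_mem for every sort or hash node in its plan, so a concurrent load spike multiplies the per-node setting well past physical RAM.

    # Hypothetical numbers, for illustration only: why work_mem has to be
    # set conservatively. Each backend may allocate up to work_mem for
    # every sort/hash node it runs, so concurrency multiplies the setting.
    ram_gb = 64            # physical memory on the server (assumed)
    work_mem_mb = 1024     # work_mem sized as if one query owned the box
    backends = 100         # connections active during a load spike
    nodes = 2              # sort/hash nodes per query allocating work_mem

    worst_case_gb = backends * nodes * work_mem_mb / 1024.0
    print("worst case: %d GB against %d GB of RAM" % (worst_case_gb, ram_gb))
    # -> worst case: 200 GB against 64 GB of RAM. Hence work_mem is set
    #    conservatively, and the costing then assumes less memory than
    #    the machine actually has.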