Re: [HACKERS] Poor memory context performance in large hash joins
From | Tom Lane |
---|---|
Subject | Re: [HACKERS] Poor memory context performance in large hash joins |
Date | |
Msg-id | 8652.1487962785@sss.pgh.pa.us |
In reply to | Re: [HACKERS] Poor memory context performance in large hash joins (Jeff Janes <jeff.janes@gmail.com>) |
Responses | Re: [HACKERS] Poor memory context performance in large hash joins |
List | pgsql-hackers |
Jeff Janes <jeff.janes@gmail.com> writes:
> On Thu, Feb 23, 2017 at 2:28 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Maybe it's time to convert that to a doubly-linked list.

> I don't think that would help. You would need some heuristic to guess
> whether the chunk you are looking for is near the front, or near the end.

Uh, what? In a doubly-linked list, you can remove an element in O(1) time,
you don't need any searching. It basically becomes

    item->prev->next = item->next;
    item->next->prev = item->prev;

modulo possible special cases for the head and tail elements.

>> Although if the hash code is producing a whole lot of requests that are
>> only a bit bigger than the separate-block threshold, I'd say It's Doing
>> It Wrong. It should learn to aggregate them into larger requests.

> Right now it is using compiled-in 32KB chunks. Should it use something
> like max(32kb,work_mem/128) instead?

I'd say it should double the size of the request each time. That's what we
do in most places.

regards, tom lane
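[Editor's note: for readers following along, here is a minimal sketch in C of the O(1) unlink Tom describes. The `Block` struct and `unlink_block()` below are hypothetical illustrations, not the actual memory-context code in aset.c; PostgreSQL's own doubly-linked-list helpers live in src/include/lib/ilist.h.]

```c
#include <stddef.h>

/* Hypothetical doubly-linked block header; not PostgreSQL's actual struct. */
typedef struct Block
{
    struct Block *prev;
    struct Block *next;
    /* ... block payload would follow ... */
} Block;

/*
 * Unlink "item" from the list whose first element is *head.
 * Runs in O(1): no search is needed, because the element carries
 * pointers to both of its neighbors.
 */
static void
unlink_block(Block **head, Block *item)
{
    if (item->prev)
        item->prev->next = item->next;
    else
        *head = item->next;         /* item was the head of the list */

    if (item->next)
        item->next->prev = item->prev;
    /* else: item was the tail; nothing to fix up */

    item->prev = item->next = NULL;
}
```

[Editor's note: and a toy illustration of the request-doubling policy suggested at the end of the message. The constants and growth logic here are invented for demonstration and are not what the hash-join code actually does; the point is only that geometric growth turns O(N) fixed-size requests into O(log N) growing ones.]

```c
#include <stdio.h>

#define INITIAL_CHUNK_SIZE  ((size_t) 32 * 1024)           /* the compiled-in 32 kB start */
#define TARGET_TOTAL        ((size_t) 1024 * 1024 * 1024)  /* pretend we need 1 GB of chunks */

int
main(void)
{
    size_t chunk_size = INITIAL_CHUNK_SIZE;
    size_t total = 0;
    int    nchunks = 0;

    /* Keep "allocating" chunks until the target space is covered. */
    while (total < TARGET_TOTAL)
    {
        total += chunk_size;
        nchunks++;
        chunk_size *= 2;        /* double the request each time */
    }

    /* With doubling, covering 1 GB takes ~16 chunks instead of ~32768. */
    printf("chunks needed: %d\n", nchunks);
    return 0;
}
```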