Re: rethinking dense_alloc (HashJoin) as a memory context

From:        Tom Lane
Subject:     Re: rethinking dense_alloc (HashJoin) as a memory context
Msg-id:      979.1468442398@sss.pgh.pa.us
In reply to: Re: rethinking dense_alloc (HashJoin) as a memory context (Robert Haas <robertmhaas@gmail.com>)
Responses:   Re: rethinking dense_alloc (HashJoin) as a memory context
             Re: rethinking dense_alloc (HashJoin) as a memory context
List:        pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Jul 13, 2016 at 1:10 PM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
> What's not clear to me is to what extent slowing down pfree is an
> acceptable price for improving the behavior in other ways.  I wonder
> how many of the pfree calls in our current codebase are useless or
> even counterproductive, or could be made so.

I think there's a lot, but I'm afraid most of them are in contexts (pun
intended) where aset.c already works pretty well, ie it's a short-lived
context anyway.  The areas where we're having pain are where there are
fairly long-lived contexts with lots of pfree traffic; certainly that
seems to be the case in reorderbuffer.c.  Because they're long-lived,
you can't just write off the pfrees as ignorable.

I wonder whether we could compromise by reducing the minimum "standard
chunk header" to be just a pointer to the owning context, with the other
fields becoming specific to particular mcxt implementations.  That would
be enough to allow contexts to decide that pfree was a no-op, say, or
that they wouldn't support GetMemoryChunkSpace(), without having to
decree that misuse can lead to crashes.  But that's still more than zero
overhead per-chunk.

			regards, tom lane