Re: Parallel tuplesort (for parallel B-Tree index creation)
From: Heikki Linnakangas
Subject: Re: Parallel tuplesort (for parallel B-Tree index creation)
Msg-id: f907b291-116f-161b-cdde-35c2912bf6f1@iki.fi
In reply to: Re: Parallel tuplesort (for parallel B-Tree index creation) (Peter Geoghegan <pg@heroku.com>)
Responses: Re: Parallel tuplesort (for parallel B-Tree index creation)
List: pgsql-hackers
On 09/07/2016 09:01 AM, Peter Geoghegan wrote:
> On Tue, Sep 6, 2016 at 10:57 PM, Peter Geoghegan <pg@heroku.com> wrote:
>> There isn't much point in that, because those buffers are never
>> physically allocated in the first place when there are thousands. They
>> are, however, entered into the tuplesort.c accounting as if they were,
>> denying tuplesort.c the full benefit of available workMem. It doesn't
>> matter if you USEMEM() or FREEMEM() after we first spill to disk, but
>> before we begin the merge. (We already refund the
>> unused-but-logically-allocated memory from unused tapes at the
>> beginning of the merge (within beginmerge()), so we can't do any better
>> than we already are from that point on -- that makes the batch
>> memtuples growth thing slightly more effective.)
>
> The big picture here is that you can't only USEMEM() for tapes as the
> need arises for new tapes as new runs are created. You'll just run a
> massive availMem deficit, that you have no way of paying back, because
> you can't "liquidate assets to pay off your creditors" (e.g., release
> a bit of the memtuples memory). The fact is that memtuples growth
> doesn't work that way. The memtuples array never shrinks.

Hmm. But memtuples is empty, just after we have built the initial runs.
Why couldn't we shrink it, i.e. free and reallocate it at a smaller size?

- Heikki
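As a rough illustration of that idea, here is a minimal sketch of what shrinking the array might look like, assuming it runs inside tuplesort.c once the initial runs are on tape and the array is empty. It uses the existing memory-accounting helpers (USEMEM/FREEMEM, GetMemoryChunkSpace); shrink_memtuples() and newmemtupsize are hypothetical names, not existing code:

	/*
	 * Sketch only: free the now-empty memtuples array after the initial
	 * runs have been written out, and reallocate it at a smaller size
	 * (e.g. sized for the merge heap), refunding the difference to the
	 * availMem accounting so the merge can spend it on tape buffers.
	 */
	static void
	shrink_memtuples(Tuplesortstate *state, int newmemtupsize)
	{
		/* give the old array's space back to the accounting */
		FREEMEM(state, GetMemoryChunkSpace(state->memtuples));
		pfree(state->memtuples);

		/* allocate a smaller array and charge only that much */
		state->memtupsize = newmemtupsize;
		state->memtuples = (SortTuple *)
			palloc(newmemtupsize * sizeof(SortTuple));
		USEMEM(state, GetMemoryChunkSpace(state->memtuples));
	}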