Re: Sort performance cliff with small work_mem
From | Peter Geoghegan
Subject | Re: Sort performance cliff with small work_mem
Date | |
Msg-id | CAH2-WzmtAuVzTkWEv-_W4+E063S-q-iErW_tR02qnDpo4qfwKw@mail.gmail.com
In response to | Re: Sort performance cliff with small work_mem (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Wed, May 2, 2018 at 11:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> -1 from me. What about the case where only some tuples are massive?
>
> Well, what about it?  If there are just a few wide tuples, then the peak
> memory consumption will depend on how many of those happen to be in memory
> at the same time ... but we have zero control over that in the merge
> phase, so why sweat about it here?  I think Heikki's got a good idea about
> setting a lower bound on the number of tuples we'll hold in memory during
> run creation.

We don't have control over it, but I'm not excited about specifically
going out of our way to always use more memory in dumptuples() because
it's no worse than the worst case for merging.

I am supportive of the idea of making sure that the amount of memory
left over for tuples is reasonably in line with memtupsize at the
point that the sort starts, though.

--
Peter Geoghegan