Re: Make MemoryContextMemAllocated() more precise
From | Jeff Davis
Subject | Re: Make MemoryContextMemAllocated() more precise
Date |
Msg-id | 412a3fbf306f84d8d78c4009e11791867e62b87c.camel@j-davis.com
In reply to | Make MemoryContextMemAllocated() more precise (Jeff Davis <pgsql@j-davis.com>)
Responses | Re: Make MemoryContextMemAllocated() more precise
List | pgsql-hackers
On Mon, 2020-03-16 at 11:45 -0700, Jeff Davis wrote:
> AllocSet allocates memory for itself in blocks, which double in size
> up to maxBlockSize. So, the current block (the last one malloc'd) may
> represent half of the total memory allocated for the context itself.

Narrower approach that doesn't touch memory context internals: if the
blocks double in size up to maxBlockSize, why not just create the
memory context with a smaller maxBlockSize?

I had originally dismissed this as a hack that could slow down some
workloads when work_mem is large. But we can simply make it
proportional to work_mem, which makes a lot of sense for an operator
like HashAgg that controls its memory usage. It can allocate in blocks
large enough that we don't call malloc() too often when work_mem is
large, but small enough that we don't overrun work_mem when work_mem is
small.

I have attached a patch to do this only for HashAgg, using a new entry
point in execUtils.c called CreateWorkExprContext(). It sets
maxBlockSize to 1/16th of work_mem (rounded down to a power of two),
with a minimum of initBlockSize.

This could be a good general solution for other operators as well, but
that requires a bit more investigation, so I'll leave that for v14. The
attached patch is narrow and solves the problem for HashAgg nicely
without interfering with anything else, so I plan to commit it soon for
v13.

Regards,
	Jeff Davis
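[Editor's note: the sizing rule described in the message — maxBlockSize set to 1/16th of work_mem, rounded down to a power of two, with a floor of initBlockSize — can be sketched roughly as below. The function and constant names are illustrative, not the patch's actual code; the real entry point is CreateWorkExprContext() in execUtils.c, and PostgreSQL's real constants live in memutils.h.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative floor; PostgreSQL's ALLOCSET_DEFAULT_INITSIZE is 8 kB. */
#define INIT_BLOCK_SIZE ((size_t) (8 * 1024))

/* Round n down to the nearest power of two (returns 1 for n <= 1). */
static size_t
pow2_floor(size_t n)
{
	size_t		p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

/*
 * Sketch of the sizing rule: maxBlockSize is 1/16th of work_mem,
 * rounded down to a power of two, but never below initBlockSize.
 */
static size_t
work_mem_max_block_size(size_t work_mem_bytes)
{
	size_t		max_block = pow2_floor(work_mem_bytes / 16);

	if (max_block < INIT_BLOCK_SIZE)
		max_block = INIT_BLOCK_SIZE;
	return max_block;
}
```

With the default work_mem of 4 MB this yields 256 kB blocks, while a small work_mem clamps to the 8 kB floor — blocks large enough to keep malloc() traffic low when work_mem is big, yet small enough that the final block cannot overshoot a small work_mem by much.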
Attachments