Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From:        Jeff Janes
Subject:     Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date:
Msg-id:      CAMkU=1y8ZBMMapk5i1BgsMHQZsaxDCO=UEKWnu6J=XEjQ-gpAw@mail.gmail.com
In reply to: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize (Stephen Frost <sfrost@snowman.net>)
Responses:   Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
List:        pgsql-hackers
On Sat, Jun 22, 2013 at 12:46 AM, Stephen Frost <sfrost@snowman.net> wrote:
> Noah,
>
> * Noah Misch (noah@leadboat.com) wrote:
> > This patch introduces MemoryContextAllocHuge() and repalloc_huge() that check
> > a higher MaxAllocHugeSize limit of SIZE_MAX/2.
>
> Nice! I've complained about this limit a few different times and just
> never got around to addressing it.
>
> > This was made easier by tuplesort growth algorithm improvements in commit
> > 8ae35e91807508872cabd3b0e8db35fc78e194ac. The problem has come up before
> > (TODO item "Allow sorts to use more available memory"), and Tom floated the
> > idea[1] behind the approach I've used. The next limit faced by sorts is
> > INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> > 150 GiB when sorting int4.
>
> That's frustratingly small. :(
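(For readers without the patch handy, a minimal sketch of how the new entry
points sit alongside the existing cap. The function names and the SIZE_MAX/2
limit are from the patch as quoted above; the header excerpt is abridged and
the 4 GB request is purely illustrative:)

    /* from src/include/utils/memutils.h (abridged) */
    #define MaxAllocSize      ((Size) 0x3fffffff)  /* 1 GB - 1: the old ceiling */
    #define MaxAllocHugeSize  (SIZE_MAX / 2)       /* ceiling for the huge variants */

    extern void *MemoryContextAllocHuge(MemoryContext context, Size size);
    extern void *repalloc_huge(void *pointer, Size size);

    /* a caller that knowingly wants more than 1 GB must opt in explicitly */
    static void *
    grab_big_array(void)
    {
        Size   nbytes = ((Size) 4) << 30;           /* 4 GB: over MaxAllocSize */
        void  *space = MemoryContextAllocHuge(CurrentMemoryContext, nbytes);

        space = repalloc_huge(space, nbytes * 2);   /* growing past 1 GB is fine too */
        return space;
    }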
I've added a ToDo item to remove that limit from sorts as well.
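(Rough arithmetic behind the 150 GiB figure, using assumed per-tuple costs
rather than measured ones: on 64-bit, each in-memory tuple needs a SortTuple
slot of about 24 bytes plus, for a single int4 column, roughly 48 bytes for
the palloc'd tuple itself, so INT_MAX tuples * ~72 bytes ≈ 144 GiB, i.e.
about 150 GiB.)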
I was going to add another item to make nodeHash.c use the new huge allocator, but after looking at it just now, it is not clear to me that it even has such a limitation: nbatch is limited by MaxAllocSize, but nbuckets doesn't seem to be.
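(For concreteness, the kind of MaxAllocSize clamp in question would read
roughly as below; this is a sketch with an illustrative variable name, not
the actual nodeHash.c source:)

    /* keep the bucket-pointer array under the 1 GB palloc ceiling */
    Size    max_pointers = MaxAllocSize / sizeof(HashJoinTuple);

    if ((Size) nbuckets > max_pointers)
        nbuckets = (int) max_pointers;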
Cheers,
Jeff