Re: HASH_CHUNK_SIZE vs malloc rounding
| From | Tom Lane |
|---|---|
| Subject | Re: HASH_CHUNK_SIZE vs malloc rounding |
| Date | |
| Msg-id | 6214.1480354038@sss.pgh.pa.us |
| In reply to | HASH_CHUNK_SIZE vs malloc rounding (Thomas Munro <thomas.munro@enterprisedb.com>) |
| Responses | Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding; Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding |
| List | pgsql-hackers |
Thomas Munro <thomas.munro@enterprisedb.com> writes:
> I bet other allocators also do badly with "32KB plus a smidgen".  To
> minimise overhead we'd probably need to try to arrange for exactly
> 32KB (or some other power of 2 or at least factor of common page/chunk
> size?) to arrive into malloc, which means accounting for both
> nodeHash.c's header and aset.c's headers in nodeHash.c, which seems a
> bit horrible.  It may not be worth doing anything about.

Yeah, the other problem is that without a lot more knowledge of the
specific allocator, we shouldn't really assume that it's good or bad
with an exact-power-of-2 request --- it might well have its own
overhead.

It is an issue though, and not only in nodeHash.c.  I'm pretty sure
that StringInfo also makes exact-power-of-2 requests for no essential
reason, and there are probably many other places.

We could imagine providing an mmgr API function along the lines of
"adjust this request size to the nearest thing that can be allocated
efficiently".  That would avoid the need for callers to know about
aset.c overhead explicitly.  I'm not sure how it could deal with
platform-specific malloc vagaries though :-(

			regards, tom lane
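[Editor's illustration] The helper Tom describes is only sketched in words above; the following is a minimal, hypothetical sketch of what such a "round my request to something that allocates efficiently" function might look like. The function name `efficient_request_size` and the header-size constants are assumptions for illustration, not PostgreSQL's actual API or aset.c's real figures, and the sketch deliberately ignores the platform-specific malloc behaviour Tom flags as unsolved.

```c
/*
 * Hypothetical sketch of the proposed mmgr helper: given the payload size
 * a caller wants, return an adjusted payload size such that, once the
 * memory-context headers are added, the gross request that reaches malloc
 * is an exact power of 2 rather than "a power of 2 plus a smidgen".
 *
 * The header sizes below are placeholders; the real bookkeeping overhead
 * is defined in aset.c and may differ by platform and build.
 */
#include <stddef.h>
#include <stdio.h>

#define ASET_CHUNK_HDR	16		/* per-chunk header, assumed */
#define ASET_BLOCK_HDR	32		/* per-malloc'd-block header, assumed */

/* Round x up to the next power of two. */
static size_t
next_pow2(size_t x)
{
	size_t		p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/*
 * Smallest "efficient" payload size that is at least 'wanted' bytes:
 * round the gross size (payload + headers) up to a power of 2, then hand
 * back whatever payload fits in that gross size.  A caller like
 * nodeHash.c would size its chunks to the returned value instead of
 * hard-coding 32KB, so no space is lost to malloc rounding.
 */
static size_t
efficient_request_size(size_t wanted)
{
	size_t		gross = next_pow2(wanted + ASET_CHUNK_HDR + ASET_BLOCK_HDR);

	return gross - ASET_CHUNK_HDR - ASET_BLOCK_HDR;
}

int
main(void)
{
	/* A HASH_CHUNK_SIZE-style request: 32KB of payload. */
	size_t		naive = 32768;

	/*
	 * Naively asking for 32768 bytes means malloc sees 32KB plus the
	 * headers; the adjusted request keeps the gross size at an exact
	 * power of 2 (65536 with the assumed header sizes above).
	 */
	printf("want %zu bytes -> ask for %zu bytes\n",
		   naive, efficient_request_size(naive));
	return 0;
}
```

Note that this only hides the aset.c header accounting from callers; whether an exact-power-of-2 request is actually cheap for a given libc malloc remains allocator-specific, which is precisely the open question in the message above.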