Re: [BUGS] json(b)_array_elements use causes very large memory usage when also referencing entire json document
From:        Tom Lane
Subject:     Re: [BUGS] json(b)_array_elements use causes very large memory usage when also referencing entire json document
Date:
Msg-id:      3403.1507337060@sss.pgh.pa.us
In reply to: Re: [BUGS] json(b)_array_elements use causes very large memory usage when also referencing entire json document  (Andres Freund <andres@anarazel.de>)
Responses:   Re: [BUGS] json(b)_array_elements use causes very large memory usage when also referencing entire json document
List:        pgsql-bugs
Andres Freund <andres@anarazel.de> writes:
> Hm. We've a bunch of places where we free detoasted data, just out of
> efficiency concerns. And since the jsonb functions are quite likely to
> detoast a lot, it doesn't seem unreasonable to do so for the most likely
> offenders. I mean if you've a bit more complex expression involving a
> few fields accessed, freeing in the accesses will reduce maximum memory
> usage by quite a bit. I'm not suggesting to work towards leak free, just
> towards reducing the lifetime of a few potentially large allocations.

Dunno, for the common case of not-so-large values, this would just be
a net loss.  pfree'ing a value we can afford to ignore till the next
per-tuple context reset is not a win.

Maybe we need some kind of PG_FREE_IF_LARGE_COPY macro?  Where it
would kick in for values over 8K or thereabouts?  (You might argue
that any value that got detoasted at all would be large enough to be
worth worrying about; but I think that falls down because we folded
short-header unshorting into the detoast mechanism.)

			regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
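For concreteness, here is a minimal sketch of what such a macro might
look like. Everything in it is speculative: PG_FREE_IF_LARGE_COPY does
not exist in PostgreSQL, the 8192-byte cutoff is just the "8K or
thereabouts" from the message, and some_json_fn is a made-up example
function. The shape is modeled on the real PG_FREE_IF_COPY macro in
src/include/fmgr.h.

	/*
	 * Hypothetical sketch -- PG_FREE_IF_LARGE_COPY is not a real
	 * PostgreSQL API.  It mirrors PG_FREE_IF_COPY (fmgr.h), but only
	 * bothers to pfree when the detoasted copy is big enough that
	 * letting it live until the next per-tuple memory context reset
	 * would actually hurt.
	 */
	#include "postgres.h"
	#include "fmgr.h"

	#define PG_FREE_IF_LARGE_COPY(ptr, n) \
		do { \
			if ((Pointer) (ptr) != PG_GETARG_POINTER(n) && \
				VARSIZE_ANY(ptr) > 8192)	/* "8K or thereabouts" */ \
				pfree(ptr); \
		} while (0)

	/* Example use in a varlena-taking function (some_json_fn is made up): */
	PG_FUNCTION_INFO_V1(some_json_fn);

	Datum
	some_json_fn(PG_FUNCTION_ARGS)
	{
		text	   *val = PG_GETARG_TEXT_PP(0);	/* may detoast a copy */

		/* ... inspect or process val here ... */

		/* Free the detoasted copy now, but only if it is large. */
		PG_FREE_IF_LARGE_COPY(val, 0);

		PG_RETURN_BOOL(true);
	}

The size check also addresses the short-header caveat above: unshorting
a short-header value produces a copy even when the value is tiny, and
such copies are cheap enough to leave for the context reset.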