Re: Backend memory dump analysis
From: Andres Freund
Subject: Re: Backend memory dump analysis
Date:
Msg-id: 20180323173759.dmui4gblxbjvaf3y@alap3.anarazel.de
In reply to: Backend memory dump analysis (Vladimir Sitnikov <sitnikov.vladimir@gmail.com>)
Responses: Re: Backend memory dump analysis
List: pgsql-hackers
Hi,

On 2018-03-23 16:18:52 +0000, Vladimir Sitnikov wrote:
> Hi,
>
> I am investigating an out-of-memory case for PostgreSQL 9.6.5, and it
> looks like MemoryContextStatsDetail + gdb are the only friends there.
>
> MemoryContextStatsDetail does print some info; however, it is rarely
> possible to associate the used memory with business cases.
> For instance:
> CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
> CachedPlanQuery: 130048 total in 7 blocks; 29952 free (2 chunks); 100096 used
>
> It does look like 182KiB have been spent on some SQL; however, there is
> no clear way to tell which SQL is to blame.
>
> Another case:
> PL/pgSQL function context: 57344 total in 3 blocks; 17200 free (2 chunks); 40144 used
> It is not clear what is inside, which "cached plans" are referenced
> by that PL/pgSQL context (if any), etc.
>
> It would be great if there were a way to dump the memory in a
> machine-readable format (e.g. Java's HPROF).
>
> Eclipse Memory Analyzer (https://www.eclipse.org/mat/) can visualize Java
> memory dumps quite well, and I think the HPROF format is trivial to
> generate (the generation is easy; the hard part is parsing the memory
> contents). That is, we could get an analysis UI for free if PostgreSQL
> produced the dump.
>
> Is this something welcome or unwelcome?
> Is it something worth including in core?

The overhead required for it (in cycles, in higher memory usage due to
additional bookkeeping, in maintenance) makes me highly doubtful it's
worth going there. While I definitely can see the upside, it doesn't seem
to justify the cost.

Greetings,

Andres Freund
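[Editor's note: the per-context lines quoted above (as printed to the backend's stderr when you attach gdb and call `MemoryContextStats(TopMemoryContext)`) follow a fixed textual pattern, so even without an in-core machine-readable dump they can be scraped into structured data for ad-hoc analysis. A minimal sketch; the regex, function name, and field names are illustrative, not part of PostgreSQL:]

```python
import re

# Matches context-stats lines of the form quoted in the thread, e.g.:
#   CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
CONTEXT_LINE = re.compile(
    r"^\s*(?P<name>.+?): (?P<total>\d+) total in (?P<blocks>\d+) blocks?; "
    r"(?P<free>\d+) free \((?P<chunks>\d+) chunks?\); (?P<used>\d+) used"
)

def parse_context_stats(text):
    """Parse MemoryContextStatsDetail-style output into a list of dicts."""
    rows = []
    for line in text.splitlines():
        m = CONTEXT_LINE.match(line)
        if m:
            d = m.groupdict()
            # keep the context name as a string, convert the counters to ints
            rows.append({k: (v if k == "name" else int(v)) for k, v in d.items()})
    return rows

sample = """\
CachedPlanSource: 146224 total in 8 blocks; 59768 free (3 chunks); 86456 used
  CachedPlanQuery: 130048 total in 7 blocks; 29952 free (2 chunks); 100096 used
"""
stats = parse_context_stats(sample)
print(sum(r["total"] for r in stats))  # 276272
```

[This only aggregates the counters; it still cannot tie a context back to the SQL that filled it, which is exactly the gap the thread is about.]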