[HACKERS] Dynamic instrumentation of lwlock wait times (lwlock flamegraphs)
From: Andres Freund
Subject: [HACKERS] Dynamic instrumentation of lwlock wait times (lwlock flamegraphs)
Msg-id: 20170622210845.d2hsbqv6rxu2tiye@alap3.anarazel.de
Responses:
  Re: [HACKERS] Dynamic instrumentation of lwlock wait times (lwlock flamegraphs)
  Re: [HACKERS] Dynamic instrumentation of lwlock wait times (lwlock flamegraphs)
List: pgsql-hackers
Hi,

At PGCon some people were talking about the difficulty of instrumenting the time actually spent waiting for lwlocks, and related measurements. I mentioned that Linux these days provides infrastructure to measure such things in unmodified binaries.

Attached is a prototype of a script that measures the time spent inside PGSemaphoreLock() and aggregates it inside the kernel, grouped by pid and stack trace. That allows one to generate nice flame graphs showing which parts of the code wait how long for lwlocks. The attached script clearly needs improvement, but I thought I'd show some of the results it can get.

To run it you need the Python library of the 'bcc' project [1] and a sufficiently new kernel. Some distributions, e.g. newer Debian versions, package it as python-bpfcc or similar. The output of the script can be turned into a flame graph with flamegraph.pl from https://github.com/brendangregg/FlameGraph .

Here are a few examples from pgbench runs. The number is the number of clients; sync/async indicates synchronous_commit on/off. All the numbers here were generated with the libpq & pgbench batch patches applied and in use, but that's just because that was the state of my working tree.

http://anarazel.de/t/2017-06-22/pgsemwait_8_sync.svg
http://anarazel.de/t/2017-06-22/pgsemwait_8_async.svg
http://anarazel.de/t/2017-06-22/pgsemwait_64_sync.svg
http://anarazel.de/t/2017-06-22/pgsemwait_64_async.svg

As a bonus, though not that representative, here are the waits for a read-only pgbench run after the above, with autovacuum previously disabled:

http://anarazel.de/t/2017-06-22/pgsemwait_64_select.svg

It's interesting to see how the backends themselves never end up having to flush WAL, or at least not in a manner triggering contention.

I plan to write a few more of these, because they're hugely useful for understanding actual locking behaviour. Among them:

- time between Acquire/Release of lwlocks, grouped by lwlock
- time between Acquire/Release of lwlocks, grouped by stack
- call stacks of acquirer and waker of lwlocks, grouped by caller stack and waiter stack
- ...

I think it might be interesting to collect a few of these somewhere central once halfway mature, maybe in src/tools or such.

Greetings,

Andres Freund

[1] https://github.com/iovisor/bcc
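Since the script itself lives in the attachment, here's a minimal sketch of the technique described above, assuming bcc's Python bindings: attach a uprobe/uretprobe pair to PGSemaphoreLock(), sum elapsed nanoseconds in a BPF hash keyed by pid and user stack id, and dump folded stacks on exit. The postgres binary path and the script name (pgsemwait.py) are placeholders; this is an illustration, not the attached prototype.

#!/usr/bin/env python
# pgsemwait.py (sketch): time spent inside PGSemaphoreLock(), aggregated
# in-kernel by pid and user stack. Needs bcc's Python bindings
# (python-bpfcc), a recent kernel, and root.
from __future__ import print_function
from bcc import BPF
from time import sleep
import sys

# Assumption: pass the path of the postgres binary you actually run.
binary = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/postgresql/bin/postgres"

bpf_text = """
#include <uapi/linux/ptrace.h>

struct key_t {
    u32 pid;
    int stack_id;
};

BPF_HASH(start, u32, u64);           // entry timestamp, per thread
BPF_HASH(waitns, struct key_t, u64); // summed wait time in nanoseconds
BPF_STACK_TRACE(stacks, 16384);

int semwait_entry(struct pt_regs *ctx)
{
    u32 tid = bpf_get_current_pid_tgid();
    u64 ts = bpf_ktime_get_ns();
    start.update(&tid, &ts);
    return 0;
}

int semwait_return(struct pt_regs *ctx)
{
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = start.lookup(&tid);
    if (!tsp)
        return 0;
    struct key_t key = {};
    key.pid = bpf_get_current_pid_tgid() >> 32;
    key.stack_id = stacks.get_stackid(ctx, BPF_F_USER_STACK);
    u64 delta = bpf_ktime_get_ns() - *tsp;
    waitns.increment(key, delta);    // aggregate inside the kernel
    start.delete(&tid);
    return 0;
}
"""

b = BPF(text=bpf_text)
b.attach_uprobe(name=binary, sym="PGSemaphoreLock", fn_name="semwait_entry")
b.attach_uretprobe(name=binary, sym="PGSemaphoreLock", fn_name="semwait_return")

print("tracing PGSemaphoreLock(), ctrl-c to dump", file=sys.stderr)
try:
    sleep(999999)
except KeyboardInterrupt:
    pass

# Emit folded stacks ("frame;frame;frame <ns>"), the input format
# flamegraph.pl expects; the pid serves as the root frame.
stack_traces = b["stacks"]
for key, ns in b["waitns"].items():
    if key.stack_id < 0:             # stack collection failed
        continue
    frames = [b.sym(addr, key.pid).decode("utf-8", "replace")
              for addr in stack_traces.walk(key.stack_id)]
    print("%d;%s %d" % (key.pid, ";".join(reversed(frames)), ns.value))

Run it as root against a live cluster and pipe the output through flamegraph.pl, e.g. sudo ./pgsemwait.py /path/to/postgres > waits.folded, then flamegraph.pl waits.folded > waits.svg.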
Attachments