Re: profiling postgresql queries?
From | Michael Fuhr
Subject | Re: profiling postgresql queries?
Date |
Msg-id | 20050412144359.GA88387@winnie.fuhr.org
In reply to | profiling postgresql queries? (hubert lubaczewski <hubert.lubaczewski@eo.pl>)
Responses | Re: profiling postgresql queries?
List | pgsql-performance
On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:

> the problem is that both the inserts and updates operate on
> heavily-triggered tables.
> and it made me wonder - is there a way to tell how much time of the backend
> was spent on triggers, index updates and so on?
> like:
> total query time: 1 second
> trigger a: 0.50 second
> trigger b: 0.25 second
> index update: 0.1 second

EXPLAIN ANALYZE in 8.1devel (CVS HEAD) prints a few statistics for triggers:

EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;
                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Index Scan using foo_x_idx on foo  (cost=0.00..14.44 rows=10 width=22) (actual time=0.184..0.551 rows=7 loops=1)
   Index Cond: (x = 20)
 Trigger row_trig1: time=1.625 calls=7
 Trigger row_trig2: time=1.346 calls=7
 Trigger stmt_trig1: time=1.436 calls=1
 Total runtime: 9.659 ms
(6 rows)

8.1devel changes frequently (sometimes requiring initdb) and isn't suitable
for production, but if the trigger statistics would be helpful then you
could set up a test server and load a copy of your database into it.
Just beware that because it's bleeding edge, it might destroy your data
and it might behave differently than released versions.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
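[Editor's note: a minimal sketch of how one might reproduce output like the above on an 8.1devel test server. The table definition, the trigger function row_trig1_fn, and the sample data are made up for illustration and are not from the original message; only the shape of the EXPLAIN ANALYZE trigger lines comes from the example above.]

-- May need "CREATE LANGUAGE plpgsql;" first if the language isn't installed.
CREATE TABLE foo (id serial PRIMARY KEY, x integer);
CREATE INDEX foo_x_idx ON foo (x);

-- Hypothetical trigger function; stand-in for whatever per-row work
-- the real (heavily-triggered) tables do.
CREATE FUNCTION row_trig1_fn() RETURNS trigger AS $$
BEGIN
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER row_trig1 BEFORE UPDATE ON foo
    FOR EACH ROW EXECUTE PROCEDURE row_trig1_fn();

-- Seven matching rows, so the trigger fires seven times.
INSERT INTO foo (x) SELECT 20 FROM generate_series(1, 7);

-- On 8.1devel the plan output then includes one
-- "Trigger <name>: time=... calls=..." line per fired trigger.
EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;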