Re: Add min and max execute statement time in pg_stat_statement
From | Gavin Flower
---|---
Subject | Re: Add min and max execute statement time in pg_stat_statement
Date |
Msg-id | 5265CE37.4060102@archidevsys.co.nz
In reply to | Re: Add min and max execute statement time in pg_stat_statement (Ants Aasma <ants@cybertec.at>)
Responses | Re: Add min and max execute statement time in pg_stat_statement
List | pgsql-hackers
On 22/10/13 13:26, Ants Aasma wrote:
> On Tue, Oct 22, 2013 at 1:09 AM, Alvaro Herrera
> <alvherre@2ndquadrant.com> wrote:
>> Gavin Flower wrote:
>>
>>> One way it could be done, but even this would consume far too much
>>> storage and processing power (hence totally impractical), would be
>>> to 'simply' store a counter for each value found and increment it
>>> for each occurrence...
>>
>> A histogram? Sounds like a huge lot of code complexity to me. Not
>> sure the gain is enough.
>
> I have a proof of concept patch somewhere that does exactly this. I
> used logarithmic bin widths. With 8 log10 bins you can tell the
> fraction of queries running at each order of magnitude from less than
> 1ms to more than 1000s. Or with 31 bins you can cover factor of 2
> increments from 100us to over 27h. And the code is almost trivial,
> just take a log of the duration and calculate the bin number from that
> and increment the value in the corresponding bin.
>
> Regards,
> Ants Aasma

I suppose this has to be decided at compile time to keep the code both simple and efficient - if so, I like the binary approach.

Curious, why start at 100us? I suppose this might be of interest if everything of note is in RAM and/or stuff is on SSDs.

Cheers,
Gavin