Re: Doc update for pg_stat_statements normalization

From: Julien Rouhaud
Subject: Re: Doc update for pg_stat_statements normalization
Date:
Msg-id: 20230225080623.aatldhafgjg4ssmv@jrouhaud
In reply to: Re: Doc update for pg_stat_statements normalization  (Michael Paquier <michael@paquier.xyz>)
Responses: Re: Doc update for pg_stat_statements normalization  ("Imseih (AWS), Sami" <simseih@amazon.com>)
List: pgsql-hackers
On Sat, Feb 25, 2023 at 02:58:36PM +0900, Michael Paquier wrote:
> On Fri, Feb 24, 2023 at 08:54:00PM +0000, Imseih (AWS), Sami wrote:
> > I think the only thing to do here is to call this out in the docs,
> > with a suggestion to increase pg_stat_statements.max to reduce the
> > likelihood. I have also attached the suggested doc enhancement.
> 
> Improving the docs about that sounds like a good idea.  This would be
> less surprising for users if we documented some details about it.
> 
> > Any thoughts?
> 
> The risk of deallocation of an entry between the post-analyze hook and
> the planner/utility hook represented with two calls of pgss_store()
> means that this will never be able to work correctly as long as we
> don't know the shape of the normalized query in the second code path
> (planner, utility execution) updating the entries with the call
> information, etc.  And we only want to pay the cost of normalization
> once, after the post-analyze where we do the query jumbling.

Note also that this is somewhat intended behavior (evicting queries that never
recorded any planning or execution stats), per the STICKY_DECREASE_FACTOR and
related logic.
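For illustration, here is a minimal Python sketch of that eviction bias. The
decay constants mirror USAGE_DECREASE_FACTOR (0.99) and STICKY_DECREASE_FACTOR
(0.50) from pg_stat_statements.c, but everything else is simplified and is not
the actual implementation:

```python
USAGE_DECREASE_FACTOR = 0.99   # normal entries age slowly
STICKY_DECREASE_FACTOR = 0.50  # sticky entries (no exec stats yet) age fast

def decay(entries):
    """One deallocation pass: age every entry's usage counter."""
    for e in entries:
        e["usage"] *= STICKY_DECREASE_FACTOR if e["sticky"] else USAGE_DECREASE_FACTOR

# One normal entry and one sticky entry, starting with equal usage.
entries = [
    {"query": "SELECT ...", "sticky": False, "usage": 10.0},
    {"query": "UPDATE ...", "sticky": True,  "usage": 10.0},
]
for _ in range(5):
    decay(entries)

# The sticky entry's usage collapses much faster, so it is the first
# candidate for eviction when the hash table fills up.
print([round(e["usage"], 2) for e in entries])  # -> [9.51, 0.31]
```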

> Could things be done in a more stable way?  For example, imagine that
> we have an extra Query field called void *private_data that extensions
> can use to store custom data associated to a query ID, then we could
> do something like that:
> - In the post-analyze hook, check if an entry with the query ID
> calculated exists.
> -- If the entry exists, grab a copy of the existing query string,
> which may be normalized or not, and save it into Query->private_data.
> -- If the entry does not exist, normalize the query, store it in
> Query->private_data but do not yet create an entry in the hash table.
> - In the planner/utility hook, fetch the normalized query from
> private_data, then use it if an entry needs to be created in the hash
> table.  The entry may have been deallocated since the post-analyze
> hook, in which case it is re-created with the normalized copy saved in
> the first phase.
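A rough sketch of that proposed flow (Python, for illustration only; Query,
private_data, normalize(), and the hook names are stand-ins, not actual
backend APIs):

```python
class Query:
    def __init__(self, text, query_id):
        self.text = text
        self.query_id = query_id
        self.private_data = None  # the extension-owned slot proposed above

pgss_hash = {}  # query_id -> {"query": text, "calls": n} (simplified entry)

def normalize(text):
    # Crude stand-in for query normalization: replace literals with $n.
    out, n = [], 0
    for tok in text.split():
        if tok.rstrip(";").isdigit():
            n += 1
            tok = f"${n}"
        out.append(tok)
    return " ".join(out)

def post_analyze_hook(query):
    entry = pgss_hash.get(query.query_id)
    if entry is not None:
        # Entry exists: grab a copy of its (possibly normalized) text.
        query.private_data = entry["query"]
    else:
        # No entry yet: normalize once now, but do not create the entry.
        query.private_data = normalize(query.text)

def planner_or_utility_hook(query):
    # The entry may have been deallocated since post-analyze; if so,
    # re-create it from the normalized copy saved in the first phase.
    entry = pgss_hash.setdefault(query.query_id,
                                 {"query": query.private_data, "calls": 0})
    entry["calls"] += 1

q = Query("SELECT * FROM t WHERE id = 42", query_id=101)
post_analyze_hook(q)
pgss_hash.pop(q.query_id, None)   # simulate eviction between the hooks
planner_or_utility_hook(q)
print(pgss_hash[101]["query"])    # normalized text survives the eviction
```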

I think the idea of a "private_data"-like thing has been discussed before and
rejected IIRC, as it could be quite expensive and would also need to
accommodate multiple extensions and so on.

Overall, I think that if the pgss eviction rate is high enough to be
problematic for performance analysis, the performance overhead will be so bad
that simply removing pg_stat_statements will give you a ~2x performance
increase.  I don't see much point in trying to make such a performance-killer
scenario more usable.


