Discussion: thousands of CachedPlan entry per backend


thousands of CachedPlan entry per backend

From:
"James Pang (chaolpan)"
Date:

PG v14.8-1, client using the PostgreSQL JDBC driver. We found 40MB of process memory per backend; the number reported by the operating system and the "Grand total:" of the memory context dump both matched. But in the details we found that almost all entries belong to "CacheMemoryContext". That context's own line is CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used, but there are thousands of lines of its children, and the sum of their blocks is much more than the "8737352 total in 42 blocks".

    CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used

        CachedPlan: 4096 total in 3 blocks; 888 free (0 chunks); 3208 used: xxxxxxx

        CachedPlanSource: 2048 total in 2 blocks; 440 free (0 chunks); 1608 used: xxxxxxx

             unnamed prepared statement: 8192 total in 1 blocks; 464 free (0 chunks); 7728 used

       CachedPlan: 66560 total in 7 blocks; 15336 free (0 chunks); 51224 used: xxxxxxx

       CachedPlan: 8192 total in 4 blocks; 2456 free (0 chunks); 5736 used: xxxxxxx

      CachedPlan: 33792 total in 6 blocks; 14344 free (1 chunks); 19448 used: xxxxxxx

      …

      SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used

       CachedPlanSource: 4096 total in 3 blocks; 1152 free (0 chunks); 2944 used: xxxxxxx

         CachedPlanQuery: 4096 total in 3 blocks; 848 free (0 chunks); 3248 used

    SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used

    CachedPlanSource: 4096 total in 3 blocks; 1472 free (0 chunks); 2624 used: xxxxxxx

        CachedPlanQuery: 4096 total in 3 blocks; 1464 free (0 chunks); 2632 used

    SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used

    index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx

    index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx

    index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: xxxxxxx

    index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: xxxxxxx

    index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx

    index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: xxxxxxx

    index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: xxxxxxx

    index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: xxxxxxx

    index info: 2048 total in 2 blocks; 656 free (2 chunks); 1392 used: xxxxxxx

    index info: 3072 total in 2 blocks; 1160 free (2 chunks); 1912 used: xxxxxxx

    index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx

    index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx

    index info: 2048 total in 2 blocks; 904 free (1 chunks); 1144 used: xxxxxxx

  WAL record construction: 49768 total in 2 blocks; 6360 free (0 chunks); 43408 used

  PrivateRefCount: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used

  MdSmgr: 32768 total in 3 blocks; 10104 free (7 chunks); 22664 used

  LOCALLOCK hash: 65536 total in 4 blocks; 18704 free (13 chunks); 46832 used

  Timezones: 104120 total in 2 blocks; 2616 free (0 chunks); 101504 used

  ErrorContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used

Grand total: 34558032 bytes in 8798 blocks; 9206536 free (2484 chunks); 25351496 used

 

Our application uses the PostgreSQL JDBC driver with default parameters (max prepared statements 256). There are many triggers and functions in this database, and a few functions run SQL through the pg_background extension. We have thousands of connections and are very concerned about why there are thousands of cached SQL entries; that will consume huge amounts of memory. Is there any way to limit the number of cached plan entries to save memory? Or does it look like abnormal behavior, or a bug, to see so many cached plan lines?

Please see the attached details; sensitive information in the SQL has been masked. This backend produced a huge number of lines, so we used MemoryContextStatsDetail(TopMemoryContext) to dump them all.

Attachments

Re: thousands of CachedPlan entry per backend

From:
Laurenz Albe
Date:
On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:
> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per
> backend, from Operating system and memorycontext dump “Grand total:”, both mached.
> But from details, we found almost of entry belong to  “CacheMemoryContext”,
> from this line  CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,
> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks
>
> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),
> there are many triggers, functions in this database, and a few functions run sql by an extension
> pg_background.  We have thousands of connections and have big concern why have thousands of entrys
> of cached SQL ?  that will consume huge memory ,  anyway to limit the cached plan entry to save memory
> consumption?  Or it looks like an abnormal behavior or bug to see so many cached plan lines.

If you have thousands of connections, that's your problem.  You need effective connection pooling.
Then 40MB per backend won't be a problem at all.  Having thousands of connections will cause
other, worse, problems for you.

See for example
https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/

If you want to use functions, but don't want to benefit from plan caching, you can set
the configuration parameter "plan_cache_mode" to "force_custom_plan".
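A minimal sketch of how that setting could be applied; the role name is a placeholder, not something from this thread:

```sql
-- Disable generic-plan caching for one application role only
-- ("app_user" is hypothetical); takes effect on new sessions.
ALTER ROLE app_user SET plan_cache_mode = 'force_custom_plan';

-- Or try it in the current session first, to measure the effect:
SET plan_cache_mode = 'force_custom_plan';
```

Note that this trades planning time for memory: statements are re-planned on every execution, though the cached plan sources themselves still exist.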

Yours,
Laurenz Albe



Re: thousands of CachedPlan entry per backend

From:
Pavel Stehule
Date:
Hi

On Thu, Jun 1, 2023 at 8:53 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:
> PG V14.8-1 , client using Postgresql JDBC driver we found 40MB process memory per
> backend, from Operating system and memorycontext dump “Grand total:”, both mached.
> But from details, we found almost of entry belong to  “CacheMemoryContext”,
> from this line  CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,
> but there are thousands of lines of it’s child, the sum of blocks much more than “8737352” total in 42 blocks
>
> Our application use Postgresql JDBC driver with default parameters(maxprepared statement 256),
> there are many triggers, functions in this database, and a few functions run sql by an extension
> pg_background.  We have thousands of connections and have big concern why have thousands of entrys
> of cached SQL ?  that will consume huge memory ,  anyway to limit the cached plan entry to save memory
> consumption?  Or it looks like an abnormal behavior or bug to see so many cached plan lines.

If you have thousands of connections, that's your problem.  You need effective connection pooling.
Then 40MB per backend won't be a problem at all.  Having thousands of connections will cause
other, worse, problems for you.

See for example
https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/

If you want to use functions, but don't want to benefit from plan caching, you can set
the configuration parameter "plan_cache_mode" to "force_custom_plan".

The problem of too much cached metadata can also be caused by sessions that live too long.

In this case it is good to throw away the session (reconnect) after 1 hour, or maybe less.

Regards

Pavel
 

Yours,
Laurenz Albe


RE: thousands of CachedPlan entry per backend

From:
"James Pang (chaolpan)"
Date:

    Yes, there is too much cached metadata, and we are thinking of a workaround that disconnects the sessions in a timely manner.

In addition, based on the dumped memory contexts, I have some questions:

   1) We found thousands of cached plans, but the JDBC driver only allows at most 256 cached prepared statements, so how does the backend come to cache so many SQL plans? If we have one function, will calling it make the backend cache the plan of every SQL statement inside that function too? And do table triggers show similar caching behavior?

  2) From this line we see 42 blocks and 215 chunks in total: CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used.

      But the sum over its child entries shows far more blocks and chunks than "CacheMemoryContext" itself. Is that expected?

 

 

Thanks,

James

From: Pavel Stehule <pavel.stehule@gmail.com>
Sent: Thursday, June 1, 2023 3:19 PM
To: Laurenz Albe <laurenz.albe@cybertec.at>
Cc: James Pang (chaolpan) <chaolpan@cisco.com>; pgsql-performance@lists.postgresql.org
Subject: Re: thousands of CachedPlan entry per backend

 


Re: thousands of CachedPlan entry per backend

From:
Laurenz Albe
Date:
On Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:
> we found thousands of cached plan , since JDBC driver only allow max 256 cached
> prepared statements, how backend cache so many sql plans. If we have one function,
> when application call that function will make backend to cache  every SQL statement
> plan in that function too?   and for table triggers, have similar caching behavior ?

Yes, as long as the functions are written in PL/pgSQL.
It only affects static SQL, that is, nothing that is run with EXECUTE.
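A sketch of that distinction inside one PL/pgSQL function; the table and function names are made up for illustration:

```sql
CREATE FUNCTION get_order_count(p_id int) RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
  n bigint;
BEGIN
  -- Static SQL: prepared once via SPI and cached in this backend
  -- for the rest of the session (a "SPI Plan"/"CachedPlanSource" entry).
  SELECT count(*) INTO n FROM orders WHERE customer_id = p_id;

  -- Dynamic SQL: planned on every call; nothing stays in the cache.
  EXECUTE 'SELECT count(*) FROM orders WHERE customer_id = $1'
    INTO n USING p_id;

  RETURN n;
END;
$$;
```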

Yours,
Laurenz Albe



RE: thousands of CachedPlan entry per backend

From:
"James Pang (chaolpan)"
Date:
These "SPI Plan" lines are the plan entries of PL/pgSQL functions prepared via SPI_prepare, right? Is it possible to set a GUC for the maximum number of cached plans per backend?
 

    SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used
    CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx
    CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx
      CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used

Thanks,

James

-----Original Message-----
From: Laurenz Albe <laurenz.albe@cybertec.at> 
Sent: Thursday, June 1, 2023 8:48 PM
To: James Pang (chaolpan) <chaolpan@cisco.com>; Pavel Stehule <pavel.stehule@gmail.com>
Cc: pgsql-performance@lists.postgresql.org
Subject: Re: thousands of CachedPlan entry per backend

On Thu, 2023-06-01 at 08:50 +0000, James Pang (chaolpan) wrote:
> we found thousands of cached plan , since JDBC driver only allow max 
> 256 cached prepared statements, how backend cache so many sql plans. 
> If we have one function, when application call that function will make 
> backend to cache  every SQL statement plan in that function too?   and for table triggers, have similar caching behavior?
 

Yes, as long as the functions are written in PL/pgSQL.
It only affects static SQL, that is, nothing that is run with EXECUTE.

Yours,
Laurenz Albe

Re: thousands of CachedPlan entry per backend

From:
Pavel Stehule
Date:


On Fri, Jun 2, 2023 at 3:45 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:
   these lines about "SPI Plan" are these PL/PGSQL functions related SPI_prepare plan entry, right?   Possible to set a GUC to max(cached plan) per backend ?

There is no limit on the size of the system caches. You can use pgbouncer, which recycles server connections after 1 hour by default (and this limit can be reduced).
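A minimal pgbouncer fragment for that; the value is illustrative:

```ini
; pgbouncer.ini fragment: close each server connection after at most
; 30 minutes, so a backend's cached plans and catalog metadata
; cannot accumulate for hours.
[pgbouncer]
server_lifetime = 1800   ; seconds; pgbouncer's default is 3600
```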

Regards

Pavel


 


RE: thousands of CachedPlan entry per backend

From:
"James Pang (chaolpan)"
Date:

These "SPI Plan" lines are the plan entries of PL/pgSQL functions prepared through SPI_prepare, right?

 

SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used

    CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxxxxxx

    CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: xxxxxxx

      CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used

 

From: Pavel Stehule <pavel.stehule@gmail.com>
Sent: Friday, June 2, 2023 12:57 PM
To: James Pang (chaolpan) <chaolpan@cisco.com>
Cc: Laurenz Albe <laurenz.albe@cybertec.at>; pgsql-performance@lists.postgresql.org
Subject: Re: thousands of CachedPlan entry per backend

 

 

 


Re: thousands of CachedPlan entry per backend

From:
Jeff Janes
Date:
On Thu, Jun 1, 2023 at 4:51 AM James Pang (chaolpan) <chaolpan@cisco.com> wrote:

  2) from  this line, we saw total 42 blocks ,215 chunks      CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,

      But from sum of it’s child level entrys,  total sum(child lines) block ,trunks show much more than “CacheMemoryContext,  is expected to see that?


Yes, that is expected.  The parent context reports only its own direct memory usage and blocks.  It does not include the sum of memory usage of its child contexts.
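If you want to check the totals from SQL, v14's pg_backend_memory_contexts view exposes the same numbers as the dump; a sketch that aggregates the current backend's contexts by name, which does include all the child contexts the parent line leaves out:

```sql
-- Per-context-name totals for the current backend (PostgreSQL 14+).
SELECT name,
       count(*)         AS contexts,
       sum(total_bytes) AS total_bytes,
       sum(used_bytes)  AS used_bytes
FROM pg_backend_memory_contexts
GROUP BY name
ORDER BY total_bytes DESC;
```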

Cheers,

Jeff