RE: Postgresql JDBC process consumes more memory with partition tables update delete
From | James Pang (chaolpan)
---|---
Subject | RE: Postgresql JDBC process consumes more memory with partition tables update delete
Date |
Msg-id | PH0PR11MB5191BD99AEA433E5731318C6D6409@PH0PR11MB5191.namprd11.prod.outlook.com
In response to | Re: Postgresql JDBC process consumes more memory with partition tables update delete (Vladimir Sitnikov <sitnikov.vladimir@gmail.com>)
Responses | RE: Postgresql JDBC process consumes more memory with partition tables update delete
List | pgsql-jdbc
Hi,

When I convert the partitioned table to non-partitioned and copy all data into non-partitioned tables, then restart the load test, one backend server only consumes 25 MB. With partitioned tables it is 160-170 MB per backend server on PG v13 and 130-138 MB per backend server on PG v14. So it is the partitioned tables that change the memory consumption.

The dumped stats are backend (session) level cached plans, right? The test servers use a shared connection pool to run the same insert/update/delete transaction over multiple connections (we simulate 300 connections), so each session sees similar cached SQL plans. Part of the tables have a BEFORE UPDATE trigger, so an UPDATE fires the trigger, which calls a pl/pgsql function.

When I use psql to prepare the same SQL and run it in a loop, I see stable memory usage, so maybe my psql test is not the same as the Java test code. I will check the test code details and try to dump more context details.

Thanks,

James

-----Original Message-----
From: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>
Sent: Thursday, September 8, 2022 5:56 PM
To: James Pang (chaolpan) <chaolpan@cisco.com>
Cc: pgsql-jdbc@lists.postgresql.org
Subject: Re: Postgresql JDBC process consumes more memory with partition tables update delete

> interesting thing is we only see this issue by JDBC driver client

First of all, it turns out that a single UPDATE statement consumes 4M.

Then, it looks like you have **multiple** UPDATE statements in the server-side cache. It does sound strange that a single backend contains multiple entries for the same SQL text.

1) Would you please double-check that the SQL text is the same. Do you use bind variables?
2) Would you please double-check that you close statements after use (e.g. try-with-resources).

CachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks); 3715144 used: UPDATE WBXMEETINGINS

Frankly speaking, I am not sure the JDBC driver is in a position to predict that a single-line statement would consume that much server-side memory. It would be nice if backend devs could optimize the memory consumption of the cached plan. If optimization is not possible, then it would be nice if the backend could provide clients with the memory consumption of the cached plan. In other words, it would be nice if there was a status message or something that says "ok, by the way, the prepared statement S_01 consumes 2M".

James, the captured dump includes only the first 100 entries. Would you please try capturing more details via the following command?

MemoryContextStatsDetail(TopMemoryContext, 1000, true)

(see https://github.com/postgres/postgres/blob/adb466150b44d1eaf43a2d22f58ff4c545a0ed3f/src/backend/utils/mmgr/mcxt.c#L574-L591 )

Vladimir
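For reference, a minimal JDBC sketch of points (1) and (2) above, assuming a hypothetical connection URL, credentials, and column names: the SQL text stays identical across executions because the values are passed as bind variables, and try-with-resources closes the statement (and its server-side counterpart) after use.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpdateExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; replace with the real host/database/credentials.
        String url = "jdbc:postgresql://localhost:5432/testdb";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret")) {
            // One SQL text with bind variables, so repeated executions reuse the
            // same server-side prepared statement instead of creating new cached plans.
            // Column names are placeholders for illustration only.
            String sql = "UPDATE wbxmeetingins SET status = ? WHERE meeting_id = ?";
            // try-with-resources guarantees the statement is closed even if execution throws,
            // which also releases the corresponding server-side resources.
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "DONE");
                ps.setLong(2, 12345L);
                int updated = ps.executeUpdate();
                System.out.println("rows updated: " + updated);
            }
        }
    }
}
```

With pgjdbc, a statement re-executed with the same SQL text is promoted to a named server-side prepared statement after prepareThreshold executions (5 by default), so each distinct SQL text should end up as a single cached plan per backend rather than many.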