Re: Debugging a backend stuck consuming CPU
From | ktm@rice.edu
---|---
Subject | Re: Debugging a backend stuck consuming CPU
Date |
Msg-id | 20160520151833.GF32767@aart.rice.edu
In reply to | Re: Debugging a backend stuck consuming CPU (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-general
On Thu, May 19, 2016 at 05:52:26PM -0400, Tom Lane wrote:
> "ktm@rice.edu" <ktm@rice.edu> writes:
> > The stack trace just appeared to be what I would expect while a 'DISCARD ALL'
> > command was being run:
>
> > #0 0x000000000073bc7c in MemoryContextSetParent ()
> > #1 0x000000000073bde3 in MemoryContextDelete ()
> > #2 0x000000000054e3a9 in DropAllPreparedStatements ()
> > #3 0x00000000005365f3 in DiscardCommand ()
>
> Hmm, what it seems from these traces is that you've got a whole heck of
> a lot of prepared statements.
>
> > The backend does have a very large memory footprint (12GB).
>
> Um.
>
> The most likely explanation is that you are hitting O(N^2) behavior as
> a consequence of MemoryContextSetParent being O(N) in the number of
> sibling contexts of the context to be deleted. We fixed that for 9.6
> (commit 25c539233044c235e97fd7c9dc600fb5f08fe065) but there's no easy
> solution in older branches, short of not using so many prepared
> statements. I'm a bit surprised that you could have gotten up to 12GB
> worth of prepared statements in an application that sends DISCARD ALL
> periodically.
>
> regards, tom lane

Hi,

The DISCARD ALL is only sent by pgbouncer at the end of processing. The
actual process builds up a cache to be used later whose size is
proportional to the number of items. The initial run is large, but the
regular runs are much smaller and clean up quickly. I was more concerned
with incorrect behavior leading to DB corruption.

Thank you for your suggestions and assistance.

Regards,
Ken
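For readers on pre-9.6 branches who hit the same symptom, the following is a
minimal standalone sketch of the quadratic behavior Tom describes; it is not
PostgreSQL source, and the struct and function names are simplified stand-ins.
When siblings are kept in a singly linked list, unlinking one child context
means walking past its earlier siblings, so tearing down N prepared-statement
contexts one at a time costs on the order of N^2 pointer hops; the 9.6 fix
(commit 25c539233044c235e97fd7c9dc600fb5f08fe065) makes each unlink O(1).

    /*
     * Standalone sketch (not PostgreSQL source) of O(N^2) teardown when
     * sibling contexts live in a singly linked list: unlinking each child
     * requires scanning the list of its remaining siblings.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Context {
        struct Context *firstchild;   /* head of this context's child list */
        struct Context *nextchild;    /* next sibling under the same parent */
    } Context;

    /* Unlink 'child' from 'parent': O(number of siblings ahead of it). */
    static void unlink_child(Context *parent, Context *child)
    {
        if (parent->firstchild == child) {
            parent->firstchild = child->nextchild;
            return;
        }
        for (Context *prev = parent->firstchild; prev; prev = prev->nextchild) {
            if (prev->nextchild == child) {
                prev->nextchild = child->nextchild;
                return;
            }
        }
    }

    int main(void)
    {
        enum { N = 20000 };           /* think: number of prepared statements */
        Context parent = { NULL, NULL };
        Context *kids = calloc(N, sizeof(Context));

        /* Build N children; each push onto the list head is O(1). */
        for (int i = 0; i < N; i++) {
            kids[i].nextchild = parent.firstchild;
            parent.firstchild = &kids[i];
        }

        /*
         * Drop the children one by one, as dropping each prepared statement's
         * context does: every unlink scans the remaining siblings, giving
         * roughly N^2/2 pointer hops in total.
         */
        long hops = 0;
        for (int i = 0; i < N; i++) {
            for (Context *p = parent.firstchild; p && p != &kids[i]; p = p->nextchild)
                hops++;
            unlink_child(&parent, &kids[i]);
        }
        printf("children: %d, sibling hops during teardown: %ld\n", N, hops);

        free(kids);
        return 0;
    }

Built with any C99 compiler, the reported hop count grows roughly as N^2/2,
which is why a backend holding an unusually large number of prepared
statements can sit burning CPU inside DISCARD ALL on older branches.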