Re: hung backends stuck in spinlock heavy endless loop
From        | Andres Freund
Subject     | Re: hung backends stuck in spinlock heavy endless loop
Date        |
Msg-id      | 20150113234209.GD5245@awork2.anarazel.de
In reply to | Re: hung backends stuck in spinlock heavy endless loop  (Merlin Moncure <mmoncure@gmail.com>)
Responses   | Re: hung backends stuck in spinlock heavy endless loop
List        | pgsql-hackers
On 2015-01-13 17:39:09 -0600, Merlin Moncure wrote:
> On Tue, Jan 13, 2015 at 5:21 PM, Andres Freund <andres@2ndquadrant.com> wrote:
> > On 2015-01-13 15:17:15 -0800, Peter Geoghegan wrote:
> >> I'm inclined to think that this is a livelock, and so the problem
> >> isn't evident from the structure of the B-Tree, but it can't hurt to
> >> check.
> >
> > My guess is rather that it's contention on the freelist lock via
> > StrategyGetBuffer's. I've seen profiles like this due to exactly that
> > before - and it fits to parallel loading quite well.
>
> I think I've got it to pop again.  s_lock is only showing 35%
> (increasing very slowly if at all) but performance is mostly halted.
> Frame pointer is compiled out.  perf report attached.
>
>  35.82%  postgres  [.] s_lock
>  23.71%  postgres  [.] tas
>  14.01%  postgres  [.] tas
>   6.82%  postgres  [.] spin_delay
>   5.93%  postgres  [.] LWLockRelease
>   4.36%  postgres  [.] LWLockAcquireCommon

Interesting. This profile looks quite different? What kind of hardware
is this on?

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
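[Editor's note: for readers unfamiliar with the symbols dominating the profile above, s_lock, tas, and spin_delay are PostgreSQL's spinlock primitives; when many backends contend for one spinlock, most of them repeatedly fail the test-and-set and burn CPU in the retry/delay loop, which is exactly what such a profile looks like. The following is a minimal, hypothetical C11 sketch of a generic test-and-set spinlock with a delay loop; all my_* names are invented for illustration, and this is NOT PostgreSQL's actual implementation.]

/*
 * Illustrative sketch only: a generic test-and-set spinlock.
 * Names (my_*) are hypothetical; not PostgreSQL source code.
 */
#include <sched.h>
#include <stdatomic.h>

typedef atomic_flag my_spinlock_t;

/* One acquisition attempt; returns nonzero if the lock was already held. */
static inline int
my_tas(volatile my_spinlock_t *lock)
{
    return atomic_flag_test_and_set(lock) ? 1 : 0;
}

/* Back off briefly between attempts instead of hammering the cache line. */
static inline void
my_spin_delay(void)
{
    sched_yield();          /* a real implementation would pause/relax first */
}

/* Spin until the lock is acquired; under heavy contention most time
 * is spent here, which is what shows up as s_lock/tas in a profile. */
static void
my_s_lock(volatile my_spinlock_t *lock)
{
    while (my_tas(lock))
        my_spin_delay();
}

static inline void
my_s_unlock(volatile my_spinlock_t *lock)
{
    atomic_flag_clear(lock);
}

int
main(void)
{
    my_spinlock_t lock = ATOMIC_FLAG_INIT;

    my_s_lock(&lock);
    /* ... critical section ... */
    my_s_unlock(&lock);
    return 0;
}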