[7.0.2] spinlock problems reported earlier ...
| From | The Hermit Hacker |
|---|---|
| Subject | [7.0.2] spinlock problems reported earlier ... |
| Date | |
| Msg-id | Pine.BSF.4.21.0008271246480.1646-100000@thelab.hub.org |
| Replies | Too many open files (was Re: spinlock problems reported earlier) |
| List | pgsql-hackers |
Earlier this week, I reported getting core dumps with the following bt:

    (gdb) where
    #0  0x18271d90 in kill () from /usr/lib/libc.so.4
    #1  0x182b2e09 in abort () from /usr/lib/libc.so.4
    #2  0x80ee847 in s_lock_stuck (lock=0x20048065 "\001", file=0x816723c "spin.c", line=127) at s_lock.c:51
    #3  0x80ee8c3 in s_lock (lock=0x20048065 "\001", file=0x816723c "spin.c", line=127) at s_lock.c:80
    #4  0x80f1580 in SpinAcquire (lockid=7) at spin.c:127
    #5  0x80f3903 in LockRelease (lockmethod=1, locktag=0xbfbfe968, lockmode=1) at lock.c:1044

I've been monitoring 'open files' on that machine, and after raising the limit to 8192 I saw it hit "Open Files Peak: 8179" this morning, and once more I have a dead database ...

Tom, you stated "That sure looks like you'd better tweak your kernel settings ... but offhand I don't see how it could lead to "stuck spinlock" errors." So I'm wondering whether there is a bug here, in that the backend should be handling running out of FDs more gracefully?

I just raised mine to 32k so that it *hopefully* never happens again; I'll be surprised if I ever hit *that* many open files ...

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
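As an aside, a minimal sketch of how this kind of open-file monitoring can be done on a FreeBSD box, assuming the `kern.openfiles` and `kern.maxfiles` sysctls are available (these names are not taken from the original report):

```c
/* Minimal sketch (not from the original post): query FreeBSD's
 * kern.openfiles and kern.maxfiles sysctls to see how close the
 * system is to running out of file descriptors.  Assumes a FreeBSD
 * host where both sysctls exist. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int
main(void)
{
    int     openfiles, maxfiles;
    size_t  len = sizeof(int);

    if (sysctlbyname("kern.openfiles", &openfiles, &len, NULL, 0) != 0)
    {
        perror("sysctlbyname(kern.openfiles)");
        return 1;
    }
    len = sizeof(int);
    if (sysctlbyname("kern.maxfiles", &maxfiles, &len, NULL, 0) != 0)
    {
        perror("sysctlbyname(kern.maxfiles)");
        return 1;
    }

    printf("open files: %d of %d\n", openfiles, maxfiles);
    return 0;
}
```

The system-wide limit itself can then typically be raised at run time with `sysctl -w kern.maxfiles=32768`, along the lines of the 8192 to 32k change described above.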