Re: Re: Too many open files (was Re: spinlock problems reported earlier)
From: Tom Lane
Subject: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
Date:
Msg-id: 16545.967483978@sss.pgh.pa.us
In reply to: Re: Re: Too many open files (was Re: spinlock problems reported earlier) (Brook Milligan <brook@biology.nmsu.edu>)
Responses: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
List: pgsql-hackers
Brook Milligan <brook@biology.nmsu.edu> writes:
> In any case, if this really follows the POSIX standard, perhaps
> PostgreSQL code should assume these semantics and work around other
> cases that don't follow the standard (instead of work around the POSIX
> cases).

HP asserts that *they* follow the POSIX standard, and in this case I'm
more inclined to believe them than the *BSD camp.  A per-process limit
on open files has existed in most Unices I've heard of; I had never
heard of a per-userid limit until yesterday.  (And I'm not yet convinced
that that's actually what *BSD implements; are we sure it's not just a
typo in the man page?)

64 or so for _SC_OPEN_MAX is not really what I'm worried about anyway.
IIRC, we've heard reports that some platforms return values in the
thousands, ie, essentially telling each process it can have the whole
kernel FD table, and it's that behavior that I'm speculating is causing
Marc's problem.

Marc, could you check what is returned by sysconf(_SC_OPEN_MAX) on your
box?  And/or check to see how many files each backend is actually
holding open?

			regards, tom lane
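[Editor's note: the following is not part of the original message. It is a
minimal C sketch of the check Tom asks Marc to run; the file name and output
wording are illustrative only.]

    /* check_open_max.c - report this platform's advertised per-process
     * open-file limit via sysconf(_SC_OPEN_MAX).
     * Build with:  cc -o check_open_max check_open_max.c
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <errno.h>

    int
    main(void)
    {
        long    open_max;

        errno = 0;
        open_max = sysconf(_SC_OPEN_MAX);   /* POSIX max open files per process */

        if (open_max < 0)
        {
            if (errno == 0)
                printf("sysconf(_SC_OPEN_MAX): no definite limit reported\n");
            else
                perror("sysconf(_SC_OPEN_MAX) failed");
        }
        else
            printf("sysconf(_SC_OPEN_MAX) = %ld\n", open_max);

        return 0;
    }

Running this on the box in question would show whether the kernel is
advertising a limit in the thousands, the behavior Tom speculates is behind
Marc's problem.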