Re: Re: Too many open files (was Re: spinlock problems reported earlier)
From: Tom Lane
Subject: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
Date:
Msg-id: 16863.967489472@sss.pgh.pa.us
In reply to: Re: Re: Too many open files (was Re: spinlock problems reported earlier) (The Hermit Hacker <scrappy@hub.org>)
Replies: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
List: pgsql-hackers
The Hermit Hacker <scrappy@hub.org> writes:
>> An explicit parameter to the postmaster, setting the installation-wide
>> open file count (with default maybe about 50 * MaxBackends) is starting
>> to look like a good answer to me.  Comments?

> Okay, if I understand correctly, this would just result in more I/O as far
> as having to close off "unused files" once that 50 limit is reached?

Right, the cost is extra close() and open() kernel calls to release FDs
temporarily.

> Would it be installation-wide, or per-process?  Ie. if I have 100 as
> maxbackends, and set it to 1000, could one backend suck up all 1000, or
> would each max out at 10?

The only straightforward implementation is to take the parameter, divide
by MaxBackends, and allow each backend to have no more than that many
files open.  Any sort of dynamic allocation would require inter-backend
communication, which is probably more trouble than it's worth to avoid
a few kernel calls.

> (note. I'm running with 192 backends right now, and have actually
> pushed it to run 188 simultaneously *grin*)

... Lessee, 8192 FDs / 192 backends = 42 per backend.  No wonder you
were running out.

			regards, tom lane