Re: [HACKERS] file descriptors leak?
From: Tom Lane
Subject: Re: [HACKERS] file descriptors leak?
Date:
Msg-id: 11404.941555895@sss.pgh.pa.us
In reply to: Re: [HACKERS] file descriptors leak? ("Gene Sokolov" <hook@aktrad.ru>)
Responses: Re: [HACKERS] file descriptors leak?
List: pgsql-hackers
"Gene Sokolov" <hook@aktrad.ru> writes: > We disconnected all clients and the number of descriptors dropped from 800 > to about 200, which is reasonable. We currently have 3 connections and ~300 > used descriptors. The "lsof -u postgres" is attached. Hmm, I see a postmaster with 8 open files and one backend with 34. Doesn't look out of the ordinary to me. > It seems ok except for a large number of open /dev/null. I see /dev/null at the stdin/stdout/stderr positions, which I suppose means that you started the postmaster with -S instead of directing its output to a logfile. It is true that on a system that'll let individual processes have as many open file descriptors as they want, Postgres can soak up a lot. Over time I'd expect each backend to acquire an FD for practically every file in the database directory (including system tables and indexes). So in a large installation you could be looking at thousands of open files. But the situation you're describing doesn't seem like it should reach those kinds of numbers. The number of open files per backend can be constrained by fd.c, but AFAIK there isn't any way to set a manually-specified upper limit; it's all automatic. Perhaps there should be a configuration option to add a limit. regards, tom lane