Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
From | Tom Lane
---|---
Subject | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
Date | |
Msg-id | 22579.1502139906@sss.pgh.pa.us
In response to | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors (Andres Freund <andres@anarazel.de>)
Responses | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
List | pgsql-hackers
Andres Freund <andres@anarazel.de> writes:
> On 2017-08-07 16:52:42 -0400, Tom Lane wrote:
>> No, I don't think so.  If you're depending on the NUM_RESERVED_FDS
>> headroom for anything meaningful, *you're doing it wrong*.  You should be
>> getting an FD via fd.c, so that there is an opportunity to free up an FD
>> (by closing a VFD) if you're up against system limits.  Relying on
>> NUM_RESERVED_FDS headroom can only protect against EMFILE not ENFILE.

> How would this work for libpq based stuff like postgres fdw? Or some
> random PL doing something with files? There's very little headroom here.

Probably the best we can hope for there is to have fd.c provide a
function "close an FD please", which postgres_fdw could call if libpq
fails because of ENFILE/EMFILE, and then retry.  (Though I'm unsure how
reliably postgres_fdw can detect that failure reason right now --- I
don't know that we preserve errno on the way out of PQconnect.)

			regards, tom lane