Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
| From | Andres Freund |
|---|---|
| Subject | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors |
| Date | |
| Msg-id | 20170807211234.o3g6pzebj3khsgce@alap3.anarazel.de |
| In reply to | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: [HACKERS] max_files_per_processes vs others uses of file descriptors |
| List | pgsql-hackers |
On 2017-08-07 17:05:06 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2017-08-07 16:52:42 -0400, Tom Lane wrote:
> >> No, I don't think so. If you're depending on the NUM_RESERVED_FDS
> >> headroom for anything meaningful, *you're doing it wrong*. You should be
> >> getting an FD via fd.c, so that there is an opportunity to free up an FD
> >> (by closing a VFD) if you're up against system limits. Relying on
> >> NUM_RESERVED_FDS headroom can only protect against EMFILE not ENFILE.
>
> > How would this work for libpq based stuff like postgres fdw? Or some
> > random PL doing something with files? There's very little headroom here.
>
> Probably the best we can hope for there is to have fd.c provide a function
> "close an FD please", which postgres_fdw could call if libpq fails because
> of ENFILE/EMFILE, and then retry.

Unless that takes up a slot in fd.c while in use, that'll still leave us
open to failures to open files in some critical parts, unless I miss
something. And then we'd have to teach similar things to PLs etc.

I agree that having some more slop isn't a proper solution, but only
having ~30 fds as slop on the most common systems seems mightily small.

> (Though I'm unsure how reliably postgres_fdw can detect that failure
> reason right now --- I don't know that we preserve errno on the way
> out of PQconnect.)

Yea, probably not really...

Regards,

Andres
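[Editor's note: for concreteness, here is a minimal sketch of the retry loop Tom proposes, assuming a hypothetical fd.c export, here called `TryFreeOneVFD()`, that closes one cached VFD to hand a kernel FD back to the caller. No such export existed at the time of this thread (fd.c's similar `ReleaseLruFile()` is static), so the name and signature are assumptions. The sketch also shows the detection problem Andres raises: errno may not survive `PQconnectdb()`, so the EMFILE/ENFILE test is best-effort only.]

```c
/*
 * Sketch: retry a libpq connection after asking fd.c to free a file
 * descriptor.  TryFreeOneVFD() is hypothetical, standing in for the
 * "close an FD please" function discussed above.
 */
#include <errno.h>
#include <stdbool.h>
#include "libpq-fe.h"

extern bool TryFreeOneVFD(void);	/* assumed new fd.c export */

static PGconn *
connect_with_fd_retry(const char *conninfo)
{
	int			retries = 3;

	for (;;)
	{
		PGconn	   *conn;

		errno = 0;
		conn = PQconnectdb(conninfo);

		if (conn != NULL && PQstatus(conn) == CONNECTION_OK)
			return conn;

		/*
		 * If connection setup apparently failed for lack of FDs, close a
		 * cached VFD and retry.  As noted above, errno may be clobbered
		 * somewhere inside PQconnectdb(), so this test is unreliable.
		 */
		if ((errno == EMFILE || errno == ENFILE) &&
			retries-- > 0 && TryFreeOneVFD())
		{
			PQfinish(conn);
			continue;
		}

		return conn;			/* caller reports PQerrorMessage(conn) */
	}
}
```

[Much later releases did add `AcquireExternalFD()`/`ReleaseExternalFD()` to fd.c for accounting of FDs consumed outside fd.c; a retry helper of the kind sketched here remains hypothetical.]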