Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
From: Tom Lane
Subject: Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
Msg-id: 1923.1502141413@sss.pgh.pa.us
In reply to: Re: [HACKERS] max_files_per_processes vs others uses of file descriptors (Andres Freund <andres@anarazel.de>)
Responses: Re: [HACKERS] max_files_per_processes vs others uses of file descriptors
List: pgsql-hackers
Andres Freund <andres@anarazel.de> writes:
> On 2017-08-07 17:05:06 -0400, Tom Lane wrote:
>> Probably the best we can hope for there is to have fd.c provide a function
>> "close an FD please", which postgres_fdw could call if libpq fails because
>> of ENFILE/EMFILE, and then retry.

> Unless that takes up a slot in fd.c while in use, that'll still leave us
> open to failures to open files in some critical parts, unless I miss
> something.

Well, there's always a race condition there, in that someone else can eat
the kernel FD as soon as you free it.  That's why we do this in a retry
loop.

> And then we'd have to teach similar things to PLs etc.  I agree that
> having some more slop isn't a proper solution, but only having ~30 fds
> as slop on the most common systems seems mightily small.

Meh.  The lack of field complaints about this doesn't indicate to me that
we have a huge problem, and in any case, just increasing NUM_RESERVED_FDS
would do nothing for the system-wide limits.

			regards, tom lane