Re: killing pg_dump leaves backend process
From        | Tom Lane
Subject     | Re: killing pg_dump leaves backend process
Date        |
Msg-id      | 28643.1376347300@sss.pgh.pa.us
In reply to | Re: killing pg_dump leaves backend process (Greg Stark <stark@mit.edu>)
Responses   | Re: killing pg_dump leaves backend process
List        | pgsql-hackers
Greg Stark <stark@mit.edu> writes:
> So I poked around a bit. It looks like Linux does send a SIGIO when a
> tcp connection is closed (with POLL_HUP if it's closed and POLL_IN if
> it's half-closed). So it should be possible to arrange to get a signal
> which CHECK_FOR_INTERRUPTS could handle the normal way.

> However this would mean getting a signal every time there's data
> available from the client. I don't know how inefficient that would be
> or how convenient it would be to turn it off and on all the time so we
> aren't constantly receiving useless signals.

That sounds like a mess --- race conditions all over the place, even
aside from efficiency worries.

> I'm not sure how portable this behaviour is either. There may well be
> platforms where having the socket closed doesn't generate a SIGIO.

AFAICS, the POSIX spec doesn't define SIGIO at all, so this worry is
probably very real.

What I *do* see standardized in POSIX is SIGURG (out-of-band data is
available). If that's delivered upon socket close, which unfortunately
POSIX doesn't say, then it'd avoid the race condition issue. We don't
use out-of-band data in the protocol and could easily say that we'll
never do so in future.

Of course the elephant in the room is Windows --- does it support any
of this stuff?

			regards, tom lane
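A minimal sketch of the mechanism under discussion, assuming a Linux-
or BSD-style fcntl(): F_SETOWN names the process that should receive
socket signals, and O_ASYNC turns on SIGIO delivery for the descriptor.
The same F_SETOWN ownership is what would direct SIGURG at the process
when out-of-band data arrives. arm_sigio and client_sock are
placeholder names, not anything in the backend sources.

#define _GNU_SOURCE             /* for O_ASYNC on glibc */
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t client_maybe_gone = 0;

/*
 * Handler only sets a flag; real work would be deferred to
 * CHECK_FOR_INTERRUPTS, as with the backend's other signal handlers.
 */
static void
sigio_handler(int signo)
{
    (void) signo;
    client_maybe_gone = 1;
}

/* Placeholder helper: request SIGIO for activity on client_sock. */
static int
arm_sigio(int client_sock)
{
    struct sigaction sa;
    int         flags;

    sa.sa_handler = sigio_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    if (sigaction(SIGIO, &sa, NULL) < 0)
        return -1;

    /* Deliver socket signals (SIGIO, and SIGURG for OOB data) to us. */
    if (fcntl(client_sock, F_SETOWN, getpid()) < 0)
        return -1;

    /*
     * Enable asynchronous notification; this is the part that is
     * Linux/BSD-specific rather than POSIX.
     */
    flags = fcntl(client_sock, F_GETFL);
    if (flags < 0 || fcntl(client_sock, F_SETFL, flags | O_ASYNC) < 0)
        return -1;

    return 0;
}

The efficiency worry above is visible here: once O_ASYNC is set, every
readable event on the socket raises SIGIO, so the flag would be hit
constantly while the client is sending data unless the notification is
toggled off around normal reads.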