Re: patch for parallel pg_dump
| From | Robert Haas |
|---|---|
| Subject | Re: patch for parallel pg_dump |
| Date | |
| Msg-id | CA+TgmoaBbtaiQLmjgDqy=9aJJOFyA6Ugt2BY-B5ds2BuZ_pr_A@mail.gmail.com |
| In response to | Re: patch for parallel pg_dump (Joachim Wieland <joe@mcknight.de>) |
| List | pgsql-hackers |
On Wed, Mar 28, 2012 at 9:54 PM, Joachim Wieland <joe@mcknight.de> wrote:
> On Wed, Mar 28, 2012 at 1:46 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I'm wondering if we really need this much complexity around shutting
>> down workers. I'm not sure I understand why we need both a "hard" and
>> a "soft" method of shutting them down. At least on non-Windows
>> systems, it seems like it would be entirely sufficient to just send a
>> SIGTERM when you want them to die. They don't even need to catch it;
>> they can just die.
>
> At least on my Linux test system, even if all pg_dump processes are
> gone, the server happily continues sending data. When I strace an
> individual backend process, I see a lot of Broken pipe writes, but
> that doesn't stop it from just writing out the whole table to a closed
> file descriptor. This is a 9.0-latest server.

Wow, yuck. At least now I understand why you're doing it like that.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
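[Editor's note: for readers following the thread, here is a minimal standalone C sketch, not taken from pg_dump or the backend sources, of the behaviour Joachim describes: a writer that ignores SIGPIPE and does not act on write() errors keeps issuing writes that fail with EPIPE ("Broken pipe") long after the reader has gone away. The pipe, buffer, and loop count are invented for the illustration.]

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[8192];

    /* Like the backend, ignore SIGPIPE so a dead reader doesn't kill us. */
    signal(SIGPIPE, SIG_IGN);
    memset(buf, 'x', sizeof(buf));

    if (pipe(fds) != 0)
        return 1;
    close(fds[0]);          /* the reader "dies": nobody will ever read */

    for (int i = 0; i < 5; i++)
    {
        /* Every write now fails with EPIPE, but since the return value is
         * only printed, not acted on, the loop keeps writing regardless. */
        ssize_t n = write(fds[1], buf, sizeof(buf));
        printf("write %d -> %zd (%s)\n", i, n,
               n < 0 ? strerror(errno) : "ok");
    }
    return 0;
}
```

[A writer that checked the result of write() and stopped on EPIPE would notice the closed descriptor immediately; the point of the illustration is that nothing forces it to.]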