Re: [HACKERS] parallel.c oblivion of worker-startup failures
From | Thomas Munro |
---|---|
Subject | Re: [HACKERS] parallel.c oblivion of worker-startup failures |
Date | |
Msg-id | CAEepm=3PV9auk9jGwJxz3bSFyxwdP7bCkRisc87jysJy6PTw8Q@mail.gmail.com |
In reply to | Re: [HACKERS] parallel.c oblivion of worker-startup failures (Amit Kapila <amit.kapila16@gmail.com>) |
Responses | Re: [HACKERS] parallel.c oblivion of worker-startup failures |
List | pgsql-hackers |
On Wed, Jan 24, 2018 at 5:43 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Hmm.  Yeah.  I can't seem to reach a stuck case and was probably just
>> confused and managed to confuse Robert too.  If you make
>> fork_process() fail randomly (see attached), I see that there are a
>> couple of easily reachable failure modes (example session at bottom of
>> message):
>>
>
> In short, we are good with committed code. Right?

Yep.  Sorry for the noise.

> Yes, this is what I am trying to explain on parallel create index
> thread.  I think there we need to either use
> WaitForParallelWorkersToFinish or WaitForParallelWorkersToAttach (a
> new API as proposed in that thread) if we don't want to use barriers.
> I see a minor disadvantage in using WaitForParallelWorkersToFinish
> which I will say on that thread.

Ah, I see.  So if you wait for them to attach you can detect unexpected
dead workers (via shm_mq_receive), at the cost of having the leader
wasting time waiting around for forked processes to say hello when it
could instead be sorting tuples.

--
Thomas Munro
http://www.enterprisedb.com
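For readers following the thread, here is a rough leader-side sketch of the
attach-then-work pattern discussed above, using the ParallelContext API in
src/backend/access/transam/parallel.c.  It is only an illustration, not the
committed code: the library and entry-point names are placeholders, the
DSM/toc setup is omitted, and exact function signatures differ a little
between PostgreSQL versions (WaitForParallelWorkersToAttach is the API
proposed on the parallel CREATE INDEX thread).

/* Sketch only: leader launches workers, then waits for them to attach. */
#include "postgres.h"

#include "access/parallel.h"
#include "access/xact.h"

static void
leader_do_parallel_work(void)
{
    ParallelContext *pcxt;

    EnterParallelMode();

    /* "mylib" / "my_worker_main" are hypothetical placeholders. */
    pcxt = CreateParallelContext("mylib", "my_worker_main", 2);
    InitializeParallelDSM(pcxt);
    LaunchParallelWorkers(pcxt);

    /*
     * Wait until every launched worker has either attached to its error
     * queue or already died, so that a fork_process() failure surfaces as
     * an ERROR here rather than leaving the leader blocked forever on a
     * shm_mq that no worker will ever write to.  The trade-off mentioned
     * above: the leader sits here instead of getting on with its own work.
     */
    WaitForParallelWorkersToAttach(pcxt);

    /* ... the leader's share of the work (e.g. sorting tuples) ... */

    /* The alternative: wait for clean worker exit before reading results. */
    WaitForParallelWorkersToFinish(pcxt);

    DestroyParallelContext(pcxt);
    ExitParallelMode();
}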