Re: crashes due to setting max_parallel_workers=0
From: Robert Haas
Subject: Re: crashes due to setting max_parallel_workers=0
Date:
Msg-id: CA+TgmoboSyHn4=Poubj5yGw5-JDARy=vV1XmkkVLM5dFgUpYOA@mail.gmail.com
In reply to: crashes due to setting max_parallel_workers=0 (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Mon, Mar 27, 2017 at 12:36 PM, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:
> Hmm I agree that it's good idea, and I will work on that as separate patch.

Maybe you want to start with what David already posted?

>> Possibly we
>> should fix the crash bug first, though, and then do that afterwards.
>> What bugs me a little about Rushabh's fix is that it looks like magic.
>> You have to know that we're looping over two things and freeing them
>> up, but there's one more of one thing than the other thing. I think
>> that at least needs some comments or something.
>>
> So in my second version of patch I change gather_merge_clear_slots() to
> just clear the slot for the worker and some other clean up. Also throwing
> NULL from gather_merge_getnext() when all the queues and heap are
> exhausted - which earlier gather_merge_clear_slots() was returning clear
> slot. This way we make sure that we don't run over freeing the slot for
> the leader and gather_merge_getnext() don't need to depend on that
> clear slot.

Ah, I missed that. That does seem cleaner. Anybody see a problem with that approach?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
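[Editor's note: the approach Rushabh describes - clearing only the per-worker slots, and having gather_merge_getnext() return NULL once all queues and the heap are exhausted rather than returning a cleared slot - can be illustrated with the following standalone sketch. All names and types here (GatherMergeSim, queue_done, heap_size) are hypothetical stand-ins for the real executor state in nodeGatherMerge.c, not the actual patch.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, heavily simplified Gather Merge state: nreaders worker
 * queues plus the leader; the real code tracks TupleTableSlots and a
 * binary heap of tuples. */
typedef struct GatherMergeSim
{
    int   nreaders;     /* number of worker queues */
    bool *queue_done;   /* per-worker: slot cleared / queue exhausted */
    int   heap_size;    /* tuples remaining in the merge heap */
} GatherMergeSim;

/*
 * Clear only the per-worker slots.  The leader's slot is deliberately
 * left alone, so there is no "one more of one thing than the other
 * thing" loop to explain.
 */
static void
gather_merge_clear_slots(GatherMergeSim *gm)
{
    for (int i = 0; i < gm->nreaders; i++)
        gm->queue_done[i] = true;   /* stand-in for ExecClearTuple() */
}

/*
 * Return NULL once the heap (and hence every queue) is exhausted,
 * instead of returning a cleared slot for the caller to inspect.
 */
static const char *
gather_merge_getnext(GatherMergeSim *gm)
{
    if (gm->heap_size == 0)
    {
        gather_merge_clear_slots(gm);
        return NULL;        /* end of scan: no cleared slot returned */
    }
    gm->heap_size--;
    return "tuple";         /* placeholder for a real TupleTableSlot */
}
```

The point of the sketch is the control flow: callers stop on NULL, so nothing downstream has to know that a returned-but-empty slot means "done".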