Re: crashes due to setting max_parallel_workers=0
| From | Robert Haas |
|---|---|
| Subject | Re: crashes due to setting max_parallel_workers=0 |
| Date | |
| Msg-id | CA+TgmoZXZZtxrS-x5UWuB0ghA3or-dT=mVqcvnf+HOoq5jHjCQ@mail.gmail.com |
| In response to | crashes due to setting max_parallel_workers=0 (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
| Responses | Re: crashes due to setting max_parallel_workers=0; Re: crashes due to setting max_parallel_workers=0 |
| List | pgsql-hackers |
On Mon, Mar 27, 2017 at 9:54 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Mar 27, 2017 at 1:29 AM, Rushabh Lathia
>> <rushabh.lathia@gmail.com> wrote:
>>> But it seems a bit futile to produce the parallel plan in the first place,
>>> because with max_parallel_workers=0 we can't possibly get any parallel
>>> workers ever. I wonder why compute_parallel_worker() only looks at
>>> max_parallel_workers_per_gather, i.e. why shouldn't it do:
>>> parallel_workers = Min(parallel_workers, max_parallel_workers);
>>> Perhaps this was discussed and is actually intentional, though.
>
>> It was intentional. See the last paragraph of
>> https://www.postgresql.org/message-id/CA%2BTgmoaMSn6a1780VutfsarCu0LCr%3DCO2yi4vLUo-JQbn4YuRA@mail.gmail.com
>
> Since this has now come up twice, I suggest adding a comment there
> that explains why we're intentionally ignoring max_parallel_workers.

Hey, imagine if the comments explained the logic behind the code!

Good idea. How about the attached?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
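[Editor's note: the following is a minimal, self-contained C sketch, not the actual PostgreSQL source, illustrating the clamping being discussed. The function name and GUC-style variables mirror the thread (compute_parallel_worker, max_parallel_workers_per_gather, max_parallel_workers); the simplified signature and body are assumptions for illustration only.]

```c
#include <stdio.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

/* Stand-ins for the two GUCs mentioned in the thread. */
static int max_parallel_workers_per_gather = 2; /* per-Gather limit */
static int max_parallel_workers = 0;            /* cluster-wide worker pool */

/*
 * Simplified illustration: the planner clamps the requested worker count
 * only by max_parallel_workers_per_gather.  The commented-out line shows
 * the additional clamp Rushabh asked about, which is intentionally not
 * applied (see the linked message), so a parallel plan can still be
 * produced even when max_parallel_workers is 0.
 */
static int
compute_parallel_worker_sketch(int parallel_workers)
{
	parallel_workers = Min(parallel_workers, max_parallel_workers_per_gather);

	/* parallel_workers = Min(parallel_workers, max_parallel_workers); */

	return parallel_workers;
}

int
main(void)
{
	/* With the extra clamp enabled, this would print 0 instead of 2. */
	printf("planned workers: %d\n", compute_parallel_worker_sketch(4));
	return 0;
}
```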
Attachments