Re: crashes due to setting max_parallel_workers=0
From:        Robert Haas
Subject:     Re: crashes due to setting max_parallel_workers=0
Date:
Msg-id:      CA+TgmoaophiK9by9DSxx-ssB_gHbr3m-n6FqGKcX6=cvzMQ6Zw@mail.gmail.com
In reply to: crashes due to setting max_parallel_workers=0 (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List:        pgsql-hackers
On Mon, Mar 27, 2017 at 12:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Mon, Mar 27, 2017 at 9:54 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Since this has now come up twice, I suggest adding a comment there
>>> that explains why we're intentionally ignoring max_parallel_workers.
>
>> Good idea.  How about the attached?
>
> WFM ... but seems like there should be some flavor of this statement
> in the user-facing docs too (ie, "max_parallel_workers_per_gather >
> max_parallel_workers is a bad idea unless you're trying to test what
> happens when a plan can't get all the workers it planned for").  The
> existing text makes some vague allusions suggesting that the two
> GUCs might be interrelated, but I think it could be improved.

Do you have a more specific idea?  I mean, this seems like a degenerate
case of what the documentation for max_parallel_workers_per_gather says
already.  Even if max_parallel_workers_per_gather <=
Min(max_worker_processes, max_parallel_workers), it's quite possible
that you'll regularly be generating plans that can't obtain the
budgeted number of workers.  The only thing that is really special
about the case where max_parallel_workers_per_gather >
Min(max_worker_processes, max_parallel_workers) is that this can happen
even on an otherwise-idle system.  I'm not quite sure how to emphasize
that without seeming to state the obvious.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
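[Editor's note: the degenerate configuration being discussed can be reproduced in a psql session roughly as follows. This is an illustrative sketch, not part of the original message; the table name is hypothetical, and the exact worker counts reported by EXPLAIN depend on the plan chosen.]

```sql
-- Per-gather limit above the global worker budget: the planner can still
-- budget workers for a Gather node, but execution cannot obtain them,
-- even on an otherwise-idle system.
SET max_parallel_workers = 0;            -- global budget: no parallel workers
SET max_parallel_workers_per_gather = 2; -- planner may still plan 2 per Gather

-- EXPLAIN ANALYZE makes the mismatch visible: the Gather node reports
-- "Workers Planned" greater than "Workers Launched".
-- (some_large_table is a placeholder for any table big enough to make
-- a parallel plan attractive.)
EXPLAIN (ANALYZE) SELECT count(*) FROM some_large_table;
```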