Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)
| From | Robert Haas |
|---|---|
| Subject | Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation) |
| Date | |
| Msg-id | CA+TgmoYjtFyd+hbGdYK1kszPorv8LExgwgRMP=rxCLJ2qXg6dw@mail.gmail.com |
| In response to | Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation) (Peter Geoghegan <pg@bowt.ie>) |
| Responses | Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation) |
| List | pgsql-hackers |
On Thu, Jan 11, 2018 at 3:25 PM, Peter Geoghegan <pg@bowt.ie> wrote:
> On Thu, Jan 11, 2018 at 12:06 PM, Peter Geoghegan <pg@bowt.ie> wrote:
>> It might make sense to have the "minimum memory per participant" value
>> come from a GUC, rather than be hard coded (it's currently hard-coded
>> to 32MB).
>
>> What do you think of that idea?
>
> A third option here is to specifically recognize that
> compute_parallel_worker() returned a value based on the table storage
> param max_workers, and for that reason alone no "insufficient memory
> per participant" decrementing/vetoing should take place. That is, when
> the max_workers param is set, perhaps it should be completely
> impossible for CREATE INDEX to ignore it for any reason other than an
> inability to launch parallel workers (though that could be due to the
> max_parallel_workers GUC's setting).
>
> You could argue that we should do this anyway, I suppose.

Yes, I think this sounds like a good idea.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
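For readers following the thread, here is a rough sketch of the worker-count decision being discussed. It is not taken from the patch: the function name, the from_storage_param flag, and treating the leader as an extra participant are illustrative assumptions, and the 32MB-per-participant floor is the hard-coded value mentioned above.

```c
/*
 * Hedged sketch only: models the decision discussed in this thread,
 * not actual PostgreSQL source code.
 */
#include <stdbool.h>

#define MIN_MEM_PER_PARTICIPANT_KB (32 * 1024)  /* hard-coded 32MB floor from the thread */

static int
choose_index_build_workers(int planned_workers,        /* e.g. from compute_parallel_worker() */
                           bool from_storage_param,    /* count came from the table's storage param? */
                           int maintenance_work_mem_kb)
{
    int nworkers = planned_workers;

    /*
     * The "third option": if the count came from the table storage
     * parameter, never apply the per-participant memory veto; only an
     * inability to actually launch workers (e.g. max_parallel_workers)
     * should lower it.
     */
    if (from_storage_param)
        return nworkers;

    /*
     * Otherwise, scale back until each participant (workers plus leader)
     * gets at least the minimum share of maintenance_work_mem.
     */
    while (nworkers > 0 &&
           maintenance_work_mem_kb / (nworkers + 1) < MIN_MEM_PER_PARTICIPANT_KB)
        nworkers--;

    return nworkers;
}
```

Making the floor a GUC, the first option Peter raised, would amount to replacing MIN_MEM_PER_PARTICIPANT_KB above with a settable value.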