Re: Parallel threads in query
From | Tomas Vondra
---|---
Subject | Re: Parallel threads in query
Date |
Msg-id | 971e7d3b-3f3f-5be2-63a2-e02a8b2e4689@2ndquadrant.com
In reply to | Re: Parallel threads in query (Andres Freund <andres@anarazel.de>)
Responses | Re: Parallel threads in query; Re: Parallel threads in query
List | pgsql-hackers
On 11/01/2018 06:15 PM, Andres Freund wrote:
> On 2018-11-01 10:10:33 -0700, Paul Ramsey wrote:
>> On Wed, Oct 31, 2018 at 2:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Darafei "Komяpa" Praliaskouski <me@komzpa.net> writes:
>>>> Question is, what's the best policy to allocate cores so we can play
>>>> nice with the rest of postgres?
>>>
>>> There is not, because we do not use or support multiple threads inside
>>> a Postgres backend, and have no intention of doing so any time soon.
>>
>> As a practical matter though, if we're multi-threading a heavy PostGIS
>> function, presumably simply grabbing *every* core is not a recommended
>> or friendly practice. My finger-in-the-wind guess would be that the
>> value of max_parallel_workers_per_gather would be the most reasonable
>> value to use to limit the number of cores a parallel PostGIS function
>> should use. Does that make sense?
>
> I'm not sure that's a good approximation. Postgres' infrastructure
> prevents every query from using max_parallel_workers_per_gather
> processes due to the global max_worker_processes limit. I think you
> probably would want something very very roughly approximating a global
> limit - otherwise you'll either need to set the per-process limit way
> too low, or overwhelm machines with context switches.

Yeah. Without a global limit it would be fairly trivial to create far too
many threads - say, when a query gets parallelized and each worker then
creates a bunch of private threads. Run a few such queries concurrently,
and it gets bad pretty fast.

In theory, simulating such a global limit should be possible using a bit
of shared memory for the current total, a per-process counter, and
probably some simple abort handling (say, just like contrib/openssl does
using ResourceOwner).

A better solution might be to start a bgworker managing a connection pool
and forward the requests to it using IPC (and enforce the thread count
limit there).

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
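[Editor's note: to make the shared-memory idea above concrete, here is a
minimal, untested sketch of what such a global limit could look like in a
C extension. All names here (`ThreadBudget`, `thread_budget_acquire`, the
`max_total_threads` GUC) are hypothetical, not from the thread; a real
extension would also need to request the shared memory and install the
startup function via the usual hooks in `_PG_init`.]

```c
#include "postgres.h"

#include "port/atomics.h"
#include "storage/shmem.h"
#include "utils/resowner.h"

/* Cluster-wide counter of extension threads currently in use. */
typedef struct ThreadBudget
{
	pg_atomic_uint32 used;
} ThreadBudget;

static ThreadBudget *budget = NULL;
static int	max_total_threads = 8;	/* hypothetical GUC */
static int	my_threads = 0;			/* threads this backend has reserved */

/* Run from shmem_startup_hook; attaches to (or creates) the counter. */
static void
thread_budget_shmem_startup(void)
{
	bool		found;

	budget = ShmemInitStruct("thread_budget",
							 sizeof(ThreadBudget), &found);
	if (!found)
		pg_atomic_init_u32(&budget->used, 0);
}

/* Try to reserve up to "want" threads; return how many were granted. */
static int
thread_budget_acquire(int want)
{
	uint32		old = pg_atomic_fetch_add_u32(&budget->used, want);
	int			granted = want;

	if (old + want > (uint32) max_total_threads)
	{
		/* Over budget: keep what fits (possibly zero), return the rest. */
		granted = Max(0, max_total_threads - (int) old);
		pg_atomic_fetch_sub_u32(&budget->used, want - granted);
	}
	my_threads += granted;
	return granted;
}

/* Give this backend's reservation back to the global pool. */
static void
thread_budget_release(void)
{
	if (my_threads > 0)
	{
		pg_atomic_fetch_sub_u32(&budget->used, my_threads);
		my_threads = 0;
	}
}

/*
 * Abort handling: release the reservation when the top-level resource
 * owner is torn down, whether by commit or by error, so an aborted query
 * cannot leak budget.  Registered once per backend with
 * RegisterResourceReleaseCallback(thread_budget_cleanup, NULL).
 */
static void
thread_budget_cleanup(ResourceReleasePhase phase, bool isCommit,
					  bool isTopLevel, void *arg)
{
	if (phase == RESOURCE_RELEASE_AFTER_LOCKS && isTopLevel)
		thread_budget_release();
}
```

The resource-owner callback is what makes the error path safe: even if the
parallel function throws, the release still runs, so the counter cannot
drift upward over time. The bgworker-pool alternative mentioned above
avoids the counter entirely by centralizing thread creation in a single
process, at the cost of an IPC round-trip per request.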