Re: Choosing parallel_degree
From | James Sewell
---|---
Subject | Re: Choosing parallel_degree
Date |
Msg-id | CANkGpBt1RWv8NR4WsxSM6AiHbYnSSpvP7-PUj0r9xaAjW1cV4w@mail.gmail.com
In response to | Re: Choosing parallel_degree (David Rowley <david.rowley@2ndquadrant.com>)
Responses | Re: Choosing parallel_degree
List | pgsql-hackers
OK cool, thanks.
Can we remove the minimum size limit when the per-table degree setting is applied?
This would help for tables with 2–1000 pages combined with a high-CPU-cost aggregate.
Cheers,
James Sewell,
PostgreSQL Team Lead / Solutions Architect
______________________________________

On Sun, Mar 20, 2016 at 11:23 PM, David Rowley <david.rowley@2ndquadrant.com> wrote:
On 18 March 2016 at 10:13, James Sewell <james.sewell@lisasoft.com> wrote:
> This does bring up an interesting point I don't quite understand though. If I run parallel agg on a table with 4 rows with 2 workers will it run on two workers (2 rows each) or will the first one grab all 4 rows?
It works on a per-page basis: each worker grabs the next page to be
scanned from a page counter that sits in shared memory. The worker
simply increments the page number, releases the lock on the counter,
and scans that page.
See heap_parallelscan_nextpage().
So the answer to your question is probably no. At least not unless
the page only contained 2 rows.
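The scheme above can be sketched as follows. This is an illustrative model, not PostgreSQL's actual C implementation: a lock-protected counter stands in for the shared-memory page counter that heap_parallelscan_nextpage() advances, and the class and function names here are invented for the example.

```python
import threading

class SharedPageCounter:
    """Hypothetical stand-in for the shared-memory page counter."""
    def __init__(self, nblocks):
        self.nblocks = nblocks        # total pages in the relation
        self.next_page = 0            # next page to hand out
        self.lock = threading.Lock()  # models the lock on the counter

    def claim(self):
        """Return the next unscanned page number, or None when done."""
        with self.lock:
            if self.next_page >= self.nblocks:
                return None
            page = self.next_page
            self.next_page += 1       # increment, then release the lock
            return page

def worker(counter, scanned, wid):
    # Each worker loops: claim a page, "scan" it, repeat until exhausted.
    while (page := counter.claim()) is not None:
        scanned.append((wid, page))

counter = SharedPageCounter(nblocks=4)
scanned = []
threads = [threading.Thread(target=worker, args=(counter, scanned, w))
           for w in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every page is scanned exactly once, regardless of which worker got it.
print(sorted(page for _, page in scanned))
```

Because pages are handed out one at a time rather than pre-partitioned, a fast worker can claim most of the pages; with a 4-row table that fits on a single page, one worker would scan all 4 rows, which is the point of David's answer.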
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
The contents of this email are confidential and may be subject to legal or professional privilege and copyright. No representation is made that this email is free of viruses or other defects. If you have received this communication in error, you may not copy or distribute any part of it or otherwise disclose its contents to anyone. Please advise the sender of your incorrect receipt of this correspondence.