Re: parallel.c is not marked as test covered
From: Amit Kapila
Subject: Re: parallel.c is not marked as test covered
Date:
Msg-id: CAA4eK1+Y6UmOw7h1KSwyitYbtirisE=_4ySvdTQtgnt7P-s7Eg@mail.gmail.com
In reply to: Re: parallel.c is not marked as test covered (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Tue, Jun 21, 2016 at 1:24 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Mon, Jun 20, 2016 at 1:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Robert Haas <robertmhaas@gmail.com> writes:
> >> On Sun, Jun 19, 2016 at 10:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >>> Personally, I'm +1 for such tinkering if it makes the feature either more
> >>> controllable or more understandable. After reading the comments at the
> >>> head of nodeGather.c, though, I don't think that single_copy is either
> >>> understandable or useful, and merely renaming it won't help. Apparently,
> >>> it runs code in the worker, except when it doesn't, and even when it does,
> >>> it's absolutely guaranteed to be a performance loss because the leader is
> >>> doing nothing. What in the world is the point?
> >
> >> The single_copy flag allows a Gather node to have a child plan which
> >> is not intrinsically parallel.
> >
> > OK, but I'm very dubious of your claim that this has any use except for
> > testing purposes. It certainly has no such use today. Therefore, the
> > behavior of falling back to running in the leader seems like an
> > anti-feature to me. If we want to test running in a worker, then we
> > want to test that, not maybe test it.
> >
> > I propose that the behavior we actually want here is to execute in
> > a worker, full stop. If we can't get one, wait till we can. If we
> > can't get one because no slots have been allocated at all, fail.
> > That would make the behavior deterministic enough for the regression
> > tests to rely on it.
>
+1.
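
For concreteness, a rough sketch of the wait-or-fail rule Tom describes for a
single-copy Gather (illustrative only, not code from any patch; the helpers
try_launch_one_worker() and wait_for_worker_slot() are placeholder names, not
existing PostgreSQL functions):

    /*
     * Illustrative sketch only.  try_launch_one_worker() and
     * wait_for_worker_slot() are placeholders, not real APIs.
     */
    Gather *gather = (Gather *) node->ps.plan;

    if (gather->single_copy)
    {
        /* No worker slots configured at all: waiting could never succeed. */
        if (max_worker_processes == 0)
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_RESOURCES),
                     errmsg("cannot run single-copy Gather without background worker slots")));

        /* Otherwise insist on a worker: wait rather than run in the leader. */
        while (!try_launch_one_worker(node))
            wait_for_worker_slot(node);
    }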
> I agree that for force_parallel_mode testing, that behavior would be useful.
>
> I am also pretty confident we're going to want the behavior where the
> leader runs the plan if, and only if, no workers can be obtained for
> other purposes. However, I agree that's not essential today.
>
> > And while I don't know what this mode should be called, I'm pretty sure
> > that neither "single_copy" nor "pipeline" are useful descriptions.
>
> Maybe we should make this an enum rather than a boolean, since there
> seem to be more than two useful behaviors.
>
How about calling it control_parallel/control_parallelism or override_parallel/override_parallelism?
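
Purely as an illustration of the enum idea (the names below are placeholders,
not a proposal), the boolean could become something like:

    typedef enum GatherExecMode
    {
        GATHER_PARALLEL,          /* many copies of the plan; leader may participate */
        GATHER_ONE_COPY_FALLBACK, /* one copy; run in the leader if no worker comes up */
        GATHER_ONE_COPY_WORKER    /* one copy; wait for a worker, error if none can exist */
    } GatherExecMode;

Whatever the spelling we settle on, an enum would let force_parallel_mode
testing pick the strict worker-only behaviour while leaving room for the
leader-fallback behaviour Robert mentions.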