Re: Draft LIMIT pushdown to Append and MergeAppend patch

From: David Rowley
Subject: Re: Draft LIMIT pushdown to Append and MergeAppend patch
Date:
Msg-id: CAApHDvpUxuTsvOy0yh4Ye-V+YyAV4TNTK88pcgW=q6+mNtSPQw@mail.gmail.com
In response to: Re: Draft LIMIT pushdown to Append and MergeAppend patch  (Michał Kłeczek <michal@kleczek.org>)
Responses: Re: Draft LIMIT pushdown to Append and MergeAppend patch  (Andy Fan <zhihui.fan1213@gmail.com>)
           Re: Draft LIMIT pushdown to Append and MergeAppend patch  (Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>)
List: pgsql-hackers
On Sun, 8 Oct 2023 at 18:32, Michał Kłeczek <michal@kleczek.org> wrote:
> On 8 Oct 2023, at 03:33, Andy Fan <zhihui.fan1213@gmail.com> wrote:
>> For the patches for performance improvement,  it is better to provide
>> an example to show how much benefit we can get.  As for this case,
>> I'm doubtful it can work as an improvement.

> Could you elaborate a little why you think it won’t work as an improvement?
> Is it because in practice LIMIT _is_ pushed down already during execution?
> From what I understand postgres_fdw does indeed fetch on demand.
> OTOH pushing down LIMIT is considered an improvement (as witnessed in the postgres_fdw code itself after d50d172e51)

In my opinion, analysing where we can push LIMIT nodes deeper into
the plan is an interesting area for research and work.

The Append / MergeAppend case is certainly one place where pushing
LIMIT nodes down could be beneficial.  Of course, if there were some
filter or join or aggregation/distinct, etc. that occurred after the
Append/MergeAppend, then you'd not be able to do this, as the pushed
limit might be met before we've got enough rows at the top level of
the plan, and that could result in fewer rows being output than were
requested in the query (aka wrong results).  Andy was working around
this area recently (see [1] and the corresponding thread).  He had to
add a bunch of code that checked for operations that might mean we'd
need to read more than tuple_fraction rows from the Append node.  If
we had a nice way to know, when building base rel paths, whether there
were upper-level operations that affect whether LIMIT pushing can be
done, then that might make such a patch cleaner.  Andy, in one of his
proposed patches [2], added a field to PlannerInfo to mark this.  That
field wasn't well named or documented, so anything you did would have
to be an improvement on that.
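
To make the wrong-results hazard concrete, here's a hedged sketch of a
query shape where a blindly pushed LIMIT would misbehave.  It reuses
the rp partitioned table from the mock-up further down; the exact
expression is only illustrative:

-- Each partition covers 10 distinct values of a / 100000, but the
-- first 10 rows of a partition (taken in "a" order) cover only one of
-- them.  Pushing "limit 10" into each Append child would leave the
-- DISTINCT above the Append with far fewer than the 10 rows the query
-- asked for, i.e. wrong results.
select distinct a / 100000 from rp order by 1 limit 10;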

Looking at your patch, I see you've solved this by delaying the
pushing down until the upper planner and just checking if the lower
planner (join search) produced an Append or MergeAppend path.  I've
not really formed an opinion yet on which method is best, but I feel
that creating the correct paths up-front is likely more flexible and
more true to the path costing code.

It might also be worth taking a step back and seeing if there are any
other cases where we could push LIMITs down, to see whether something
more generic could be built that's beneficial in more cases.  I can't
think of any off the top of my head, but I've not thought very hard
about it.

FWIW, it's trivial to mock up the possible benefits of pushing LIMIT
nodes down with something like the following:

create table rp (a int) partition by range (a);
create table rp1 partition of rp for values from (0) to (1000000);
create table rp2 partition of rp for values from (1000000) to (2000000);
insert into rp select x from generate_series(0, 1999999)x;

-- limit not pushed:
explain analyze select * from rp order by a limit 10;
Execution Time: 148.041 ms

-- limit pushed (mocked):
explain analyze (select * from rp1 order by a limit 10) union all
(select * from rp2 order by a limit 10) limit 10;
Execution Time: 49.263 ms

That's about 3x faster for this case.

However, it may also be worth reading over [3] and the ultimate reason
I changed my mind about that being a good idea.  Pushing LIMITs below
an Append seems quite incomplete while we don't yet push sorts below
Appends, which is what that patch did.  I just wasn't comfortable
proceeding with [3] as nodeSort.c holds onto the tuplesort until
executor shutdown.  That's done for rescan reasons, but it does mean
that if you pushed Sort below Append, we could have a very large
number of sort nodes holding onto work_mem all at once.  I find that a
bit scary, especially given the excessive partitioning cases I've seen
and worked on recently.  I did consider whether we could adjust
nodeSort.c to do tuplesort_end() after the final row.  We'd only be
able to do that if we could somehow be certain there were going to be
no rescans, and I don't have a plan for how that would be detected.
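
For what it's worth, the mocked-up UNION ALL query above already has
the shape that concerns me: one Sort per branch.  You can check the
plan shape without timing anything (output not shown here, and it will
vary with settings):

explain (costs off)
(select * from rp1 order by a limit 10) union all
(select * from rp2 order by a limit 10) limit 10;

With two partitions that's harmless, but the equivalent pushed-down
plan over a table with hundreds of partitions would have hundreds of
Sort nodes, each of which nodeSort.c keeps around until executor
shutdown.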

Anyway, I don't think anything above is all that useful for pushing
you forward with this.  I just didn't want you running off thinking we
don't want to see improvements in this area.  I certainly do.

David

[1] https://postgr.es/m/CAApHDvoMGyc+1eb8g5rEMUUMeGWhe2c_f8yvJjUO1kUHZj0h7w@mail.gmail.com
[2] https://postgr.es/m/CAKU4AWqaTNPrYcb_cMEDDYWZVGfFxuUjr75F5LBZwnUQ0JvVPw@mail.gmail.com
[3] https://postgr.es/m/CAApHDvojKdBR3MR59JXmaCYbyHB6Q_5qPRU+dy93En8wm+XiDA@mail.gmail.com


