Re: BUG #16183: PREPARED STATEMENT slowed down by jit
From    | Christian Quest
Subject | Re: BUG #16183: PREPARED STATEMENT slowed down by jit
Date    |
Msg-id  | 01c9805b-b0a9-f4f0-9078-d6737f407200@cquest.org
In response to | Re: BUG #16183: PREPARED STATEMENT slowed down by jit (Jeff Janes <jeff.janes@gmail.com>)
List    | pgsql-bugs
> On Thu, Jan 2, 2020 at 5:03 PM Christian Quest <cquest@cquest.org> wrote:
>> osm=# explain analyze execute mark_ways_by_node(1836953770);
>>                                                               QUERY PLAN
>> --------------------------------------------------------------------------------------------------------------------------------------
>>  Bitmap Heap Scan on planet_osm_ways  (cost=2468.37..305182.32 rows=301467 width=8) (actual time=0.039..0.042 rows=2 loops=1)
>>    Recheck Cond: (nodes && '{1836953770}'::bigint[])
>
> I think your estimation here is falling victim to a deficiency in how stats are computed on array types when all values in the array (across all rows) are rare. See the discussion of this at https://www.postgresql.org/message-id/flat/CAMkU%3D1x2W1gpEP3AQsrSA30uxQk1Sau5VDOLL4LkhWLwrOY8Lw%40mail.gmail.com
>
> (My quick and dirty patch posted there still compiles and works, if you would like to test that it fixes the problem for you.)
>
> Because the number of rows is vastly overestimated, so is the cost, which then causes JIT to kick in counter-productively, due to the deranged cost exceeding jit_above_cost.
>
> Cheers,
> Jeff
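As a side note on the mechanism Jeff describes: JIT fires whenever the planner's *estimated* total cost crosses jit_above_cost, so an inflated estimate alone is enough to trigger compilation on a query that actually runs in microseconds. A sketch of inspecting and adjusting the relevant settings (default values are the standard PostgreSQL ones; the raised threshold is an illustrative choice, not taken from the thread):

```sql
-- JIT engages when the plan's estimated total cost exceeds jit_above_cost.
SHOW jit;                      -- on by default since PostgreSQL 12
SHOW jit_above_cost;           -- default 100000
SHOW jit_optimize_above_cost;  -- default 500000
SHOW jit_inline_above_cost;    -- default 500000

-- Here the estimated cost (~305182) exceeds 100000, so JIT compiles the
-- query even though it returns 2 rows in well under a millisecond.
-- Raising the threshold (or disabling JIT) for the session avoids that:
SET jit_above_cost = 1000000;  -- or simply: SET jit = off;
```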
This wrong cost may have other side effects, like launching parallel workers.

Another person hit the same problem, but my simple fix of disabling jit did not work for him. My tests were done on a smaller database (an OpenStreetMap extract covering only France); his was on a full planet dataset. The estimated rows were 10x higher.

We found a workaround (disabling jit and parallel workers for the session), but a more general fix for this wrong row estimation should be considered for other cases ;)
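The session-level workaround described above presumably amounts to settings along these lines (the GUC names are standard PostgreSQL; the exact statements used in the session are an assumption):

```sql
-- Sketch of the workaround: disable JIT and parallel workers for the
-- session, so the inflated cost estimate no longer triggers either.
SET jit = off;
SET max_parallel_workers_per_gather = 0;  -- no parallel workers

-- Alternatively, the settings could be made persistent per database
-- (database name taken from the osm=# prompt above):
-- ALTER DATABASE osm SET jit = off;
-- ALTER DATABASE osm SET max_parallel_workers_per_gather = 0;
```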
Thanks for your time on this issue.
Christian