Thread: Strange execution plan


Strange execution plan

From
Joel Frid
Date:
Hi,

On my production environment (PostgreSQL 13.3), one of my queries runs very slowly, taking about 2 minutes.
I noticed that it does not use the execution plan I anticipated it would.

The query is:

SELECT t.*
FROM test t
WHERE t."existe" IS true
  AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
ORDER BY t."id" DESC
LIMIT 100 OFFSET 0

I know PostgreSQL does not perform well with pagination and offsets, but all my queries must end with "LIMIT 100 OFFSET 0", "LIMIT 100 OFFSET 1", ...

If I display the actual execution plan, I get this:

Limit  (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)
  Buffers: shared hit=5311835 read=585741 dirtied=32
  ->  Index Scan Backward using test_pk on test  (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)
        Filter: ((existe IS TRUE) AND (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb))
        Rows Removed by Filter: 1215681
        Buffers: shared hit=5311835 read=585741 dirtied=32
Planning:
  Buffers: shared hit=1
Planning Time: 0.283 ms
Execution Time: 121297.878 ms

The query runs very slowly for any LIMIT from 1 to 1147.
If I change the LIMIT value to 1148, the query runs quite fast (0.190 ms) with a nice execution plan:

SELECT t.*
FROM test t
WHERE t."existe" IS true
  AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
ORDER BY t."id" DESC
LIMIT 1148 OFFSET 0

Limit  (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)
  Buffers: shared hit=17
  ->  Sort  (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)
        Sort Key: id DESC
        Sort Method: quicksort  Memory: 27kB
        Buffers: shared hit=17
        ->  Bitmap Heap Scan on test  (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)
              Recheck Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
              Filter: (existe IS TRUE)
              Heap Blocks: exact=1
              Buffers: shared hit=17
              ->  Bitmap Index Scan on test_json_data_idx  (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)
                    Index Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
                    Buffers: shared hit=16
Planning:
  Buffers: shared hit=1
Planning Time: 0.296 ms
Execution Time: 0.190 ms

Would you have any suggestions as to why Postgres chooses such a bad query plan?


Server:
----------------------------------------------------------------------
CPU Model             : AMD EPYC 7281 16-Core Processor
CPU Cores             : 4
CPU Frequency         : 2096.060 MHz
CPU Cache             : 512 KB
Total Disk            : 888.1 GB (473.0 GB Used)
Total Mem             : 11973 MB (4922 MB Used)
Total Swap            : 0 MB (0 MB Used)
OS                    : Debian GNU/Linux 10
Arch                  : x86_64 (64 Bit)
Kernel                : 5.10.28
Virtualization        : Dedicated
----------------------------------------------------------------------
I/O Speed(1st run)    : 132 MB/s
I/O Speed(2nd run)    : 204 MB/s
I/O Speed(3rd run)    : 197 MB/s
Average I/O speed     : 177.7 MB/s


postgresql.conf:
max_connections = 100
shared_buffers = 3840MB
huge_pages = on
work_mem = 9830kB
maintenance_work_mem = 960MB
effective_io_concurrency = 200
max_worker_processes = 3
max_parallel_maintenance_workers = 2
max_parallel_workers_per_gather = 2
max_parallel_workers = 3
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
effective_cache_size = 11520MB
default_statistics_target = 100
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all


Table test: I have just over 1.2 million records in this table.
CREATE TABLE test (
"source" varchar NOT NULL,
existe bool NULL,
json_data jsonb NULL,
row_updated timestamp NOT NULL DEFAULT clock_timestamp(),
row_inserted timestamp NOT NULL DEFAULT clock_timestamp(),
id uuid NOT NULL,
CONSTRAINT test_pk PRIMARY KEY (id)
);
CREATE INDEX test_existe_idx ON test USING btree (existe);
CREATE INDEX test_id_idx ON test USING btree (id);
CREATE INDEX test_json_data_idx ON test USING gin (json_data jsonb_path_ops);
CREATE INDEX test_row_inserted_idx ON test USING btree (row_inserted);
CREATE INDEX test_row_updated_idx ON test USING btree (row_updated);
CREATE INDEX test_source_idx ON test USING btree (source);


select * from pg_stat_all_tables where relname = 'test':

relid|schemaname|relname|seq_scan|seq_tup_read|idx_scan|idx_tup_fetch|n_tup_ins|n_tup_upd|n_tup_del
16692|dev       |test   |1816    |724038305   |31413   |36863713     |1215682  |23127    |0       

n_tup_hot_upd|n_live_tup|n_dead_tup|n_mod_since_analyze|n_ins_since_vacuum|last_vacuum       
0            |1219730   |30        |121                |91                |2021-07-07 14:43:13

last_autovacuum    |last_analyze       |last_autoanalyze   |vacuum_count|autovacuum_count|analyze_count|autoanalyze_count
2021-06-10 09:29:54|2021-07-07 14:43:52|2021-06-10 09:31:32|2           |1               |1            |1

Joël Frid
Directeur Général
+33 6 14 46 37 68


Re: Strange execution plan

From
Manuel Weitzman
Date:

> On 08-07-2021, at 04:26, Joel Frid <joel.frid@car-expresso.com> wrote:
>
> Hi,
>
> On my production environment (PostgreSQL 13.3), one of my queries runs very slowly, taking about 2 minutes.
> I noticed that it does not use the execution plan I anticipated it would.
>
> The query is :
>
> SELECT t.*
> FROM test t
> WHERE t."existe" IS true
>   AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
> ORDER BY t."id" DESC
> LIMIT 100 OFFSET 0
>
> I know PostgreSQL does not perform well with pagination and offsets, but all my queries must end with
> "LIMIT 100 OFFSET 0", "LIMIT 100 OFFSET 1", ...

I don't think any database performs well with LIMIT+OFFSET.
Using OFFSET requires doing a linear scan that discards all rows up
to the row in position OFFSET; the scan then continues for LIMIT
rows. In general, the greater the value of OFFSET, the slower the
query will perform.
I'd recommend using cursor-based pagination in general (I know it
may not be possible for you, I just wanted to mention it as it could
be useful).

> If I display actual Execution Plan, I get this :
>
> Limit  (cost=0.43..1164.55 rows=100 width=632) (actual time=7884.056..121297.756 rows=1 loops=1)
>   Buffers: shared hit=5311835 read=585741 dirtied=32
>   ->  Index Scan Backward using test_pk on test  (cost=0.43..141104.29 rows=12121 width=632) (actual time=7884.053..121297.734 rows=1 loops=1)
>         Filter: ((existe IS TRUE) AND (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb))
>         Rows Removed by Filter: 1215681
>         Buffers: shared hit=5311835 read=585741 dirtied=32
> Planning:
>   Buffers: shared hit=1
> Planning Time: 0.283 ms
> Execution Time: 121297.878 ms
>
> The query runs very slowly for any LIMIT from 1 to 1147.
> If I change the LIMIT value to 1148, the query runs quite fast (0.190 ms) with a nice execution plan:
>
> SELECT t.*
> FROM test t
> WHERE t."existe" IS true
>   AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
> ORDER BY t."id" DESC
> LIMIT 1148 OFFSET 0
>
> Limit  (cost=13220.53..13223.40 rows=1148 width=632) (actual time=0.138..0.140 rows=1 loops=1)
>   Buffers: shared hit=17
>   ->  Sort  (cost=13220.53..13250.84 rows=12121 width=632) (actual time=0.137..0.138 rows=1 loops=1)
>         Sort Key: id DESC
>         Sort Method: quicksort  Memory: 27kB
>         Buffers: shared hit=17
>         ->  Bitmap Heap Scan on test  (cost=119.73..12543.88 rows=12121 width=632) (actual time=0.125..0.127 rows=1 loops=1)
>               Recheck Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
>               Filter: (existe IS TRUE)
>               Heap Blocks: exact=1
>               Buffers: shared hit=17
>               ->  Bitmap Index Scan on test_json_data_idx  (cost=0.00..116.70 rows=12187 width=0) (actual time=0.112..0.113 rows=1 loops=1)
>                     Index Cond: (json_data @> '{"book": {"title": "In Search of Lost Time"}}'::jsonb)
>                     Buffers: shared hit=16
> Planning:
>   Buffers: shared hit=1
> Planning Time: 0.296 ms
> Execution Time: 0.190 ms
>
> Would you have any suggestions as to why Postgres chooses such a bad query plan?

I can guess a bit about this.

One quirk of the statistics system is that it doesn't collect
detailed enough statistics to properly estimate the number of rows
that will match the "@>" operator, as that is quite hard to do. In
general it over-estimates how many rows will match.
As you can see, both plans estimate that 12121 rows will match the
"@>" clause, even though only 1 actually matches.
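
To see the over-estimation in isolation, you can explain just the
containment predicate (same table, nothing new assumed):

    EXPLAIN ANALYZE
    SELECT 1
    FROM test
    WHERE json_data @> '{"book":{"title":"In Search of Lost Time"}}';

The rows= figure in the estimate should be around 12121, versus 1
actual row, matching what both plans above report.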

This means the planner estimates that the cost of using
test_json_data_idx and then running a top-k heapsort (to get the
latest 100 rows ordered by id) is far greater than the cost of
iterating backward over the test_pk index, where the first 100
matching rows would be exactly the ones you need, already sorted.
With 12121 estimated matches out of roughly 1.2 million rows, the
planner expects a match about once every 100 index entries, i.e. it
expects to satisfy LIMIT 100 after reading only about 10000 entries.
In practice, iterating over test_pk has to read almost the entire
table without finding rows that match the "@>" operator, because the
actual number of matching rows is nowhere near that estimate.

Once you raise the LIMIT high enough, the planner's cost estimates
for the two plans converge; past that threshold it switches plans.


So, some suggestions to improve the execution of that query:

Option A: Use a common table expression to "force" the usage of
test_json_data_idx

    WITH json_matching_rows AS (
        SELECT t.*
        FROM test t
        WHERE t.json_data @> '{"book":{"title":"In Search of Lost Time"}}'
    )
    SELECT t.*
    FROM json_matching_rows t
    WHERE  t."existe" IS true
    ORDER BY t."id" DESC
    LIMIT 100 OFFSET 0;


Option B: Use the extension pg_hint_plan to hint the usage of
test_json_data_idx
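
For example (a sketch, assuming pg_hint_plan is installed and loaded;
hints go in a comment at the head of the query):

    /*+ BitmapScan(t test_json_data_idx) */
    SELECT t.*
    FROM test t
    WHERE t."existe" IS true
      AND t."json_data" @> '{"book":{"title":"In Search of Lost Time"}}'
    ORDER BY t."id" DESC
    LIMIT 100 OFFSET 0;

The BitmapScan hint asks the planner to use a bitmap scan on t
through test_json_data_idx regardless of its cost estimates.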


Option C: Create a functional index for the book title and tweak the
query to use it.
This can also be a composite index (so the values are already sorted
by id) and partial (including only rows where "existe" is true):

    CREATE INDEX test_json_data_composite_idx
    ON test
    USING BTREE ((json_data->'book'->>'title'), id DESC)
    WHERE (existe);

    SELECT t.*
    FROM test t
    WHERE t."existe" IS true
    and  t.json_data->'book'->>'title' = 'In Search of Lost Time'
    ORDER BY t."id" DESC
    LIMIT 100 OFFSET 0;

Be aware that an UPDATE touching any column used by the index,
including the partial predicate on "existe", cannot be a HOT update.


I hope this reply helps you.


Best regards,
Manuel Weitzman





Re: Strange execution plan

From
Manuel Weitzman
Date:

> On 08-07-2021, at 17:13, Manuel Weitzman <manuelweitzman@gmail.com> wrote:
> 
> I'd recommend using cursor-based pagination in general (I know it
> may not be possible for you, I just wanted to mention it as it could
> be useful).

By the way, I meant cursor pagination as a general concept.
I'm not talking about Postgres cursors.

Best regards,
Manuel Weitzman



Re: Strange execution plan

From
Manuel Weitzman
Date:

> On 08-07-2021, at 17:13, Manuel Weitzman <manuelweitzman@gmail.com> wrote:
> 
> Option A: Use a common table expression to "force" the usage of
> test_json_data_idx
> 
>    WITH json_matching_rows AS (
>        SELECT t.*
>        FROM test t
>        WHERE t.json_data @> '{"book":{"title":"In Search of Lost Time"}}'
>    )
>    SELECT t.*
>    FROM json_matching_rows t
>    WHERE  t."existe" IS true
>    ORDER BY t."id" DESC
>    LIMIT 100 OFFSET 0;
> 

The first line of the query should be
    WITH json_matching_rows AS MATERIALIZED (

I had forgotten that Postgres 12 removed the optimization barrier on
common table expressions. To reintroduce it, the MATERIALIZED clause
is needed.
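
With that fix applied, the whole query from Option A would read:

    WITH json_matching_rows AS MATERIALIZED (
        SELECT t.*
        FROM test t
        WHERE t.json_data @> '{"book":{"title":"In Search of Lost Time"}}'
    )
    SELECT t.*
    FROM json_matching_rows t
    WHERE  t."existe" IS true
    ORDER BY t."id" DESC
    LIMIT 100 OFFSET 0;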

Apparently I need to work on reviewing my emails properly before
sending them.


Best regards,
Manuel Weitzman