Re: Advice needed: query performance deteriorates by 2000% within 1 minute
| From | Tom Lane |
|---|---|
| Subject | Re: Advice needed: query performance deteriorates by 2000% within 1 minute |
| Date | |
| Msg-id | 1351442.1645719051@sss.pgh.pa.us |
| In reply to | Advice needed: query performance deteriorates by 2000% within 1 minute (Peter Adlersburg <peter.adlersburg@gmail.com>) |
| Responses | Re: Advice needed: query performance deteriorates by 2000% within 1 minute |
| List | pgsql-performance |
Peter Adlersburg <peter.adlersburg@gmail.com> writes:
> Limit  (cost=0.00..804.97 rows=10 width=22) (actual time=23970.845..25588.432 rows=1 loops=1)
>   ->  Seq Scan on "order"  (cost=0.00..3863.86 rows=48 width=22) (actual time=23970.843..25588.429 rows=1 loops=1)
>         Filter: (jsonb_to_tsvector('english'::regconfig, content, '["all"]'::jsonb) @@ '''1.20709841'''::tsquery)
>         Rows Removed by Filter: 9652
> Planning Time: 0.430 ms
> Execution Time: 25588.448 ms

I think the expense here comes from re-executing jsonb_to_tsvector a lot of times. By default that's estimated as 100 times more expensive than a simple function (such as addition), but these results make it seem like that's an understatement. You might try something like

    alter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;

to further discourage the planner from picking this plan shape.

Possibly the cost estimate for ts_match_tq (the function underlying this variant of @@) needs to be bumped up as well.

(Bear in mind that pg_dump will not propagate such hacks on system-defined objects, so you'll need a note to reapply any such changes after dump/reload or pg_upgrade.)

			regards, tom lane
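[Editor's note: a minimal sketch of how the suggestion above could be tried out. The COST value of 1000 is simply the figure Tom mentions, not a tuned number, and the pg_proc queries are only there to show the planner's cost estimate before and after the change.]

```sql
-- Inspect the planner's current per-call cost estimates for the functions involved
-- (procost is expressed in units of cpu_operator_cost; simple built-ins default to 1).
SELECT proname, procost
FROM pg_proc
WHERE proname IN ('jsonb_to_tsvector', 'ts_match_tq');

-- Tell the planner that jsonb_to_tsvector is far more expensive than its default
-- estimate, discouraging plans that re-execute it once per scanned row.
ALTER FUNCTION jsonb_to_tsvector(regconfig, jsonb, jsonb) COST 1000;

-- Confirm the new estimate took effect.
SELECT proname, procost
FROM pg_proc
WHERE proname = 'jsonb_to_tsvector';
```

As noted above, pg_dump will not emit ALTER FUNCTION statements for system-defined objects, so a change like this has to be reapplied by hand (or from a saved script) after any dump/reload or pg_upgrade.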