Re: ORDER BY ... LIMIT.. performance
From | Josh Berkus
---|---
Subject | Re: ORDER BY ... LIMIT.. performance
Date |
Msg-id | web-2024310@davinci.ethosmedia.com
In response to | Re: ORDER BY ... LIMIT.. performance ("john cartmell" <john.cartmell@mediaburst.co.uk>)
List | pgsql-performance
John,

> But when it is 20:
> EXPLAIN ANALYZE SELECT * FROM tblcompany WHERE
> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;
> NOTICE: QUERY PLAN:
>
> Limit (cost=0.00..4864.92 rows=20 width=223) (actual
> time=559.58..21895.02 rows=20 loops=1)
>   -> Index Scan using idx_tblcompany_companyname on tblcompany
> (cost=0.00..1542006.83 rows=6339 width=223) (actual
> time=559.57..21894.97 rows=20 loops=1)
> Total runtime: 21895.13 msec

That's extremely odd. From the look of it, Postgres is taking an extra 18 seconds just to find that 20th row. Does this table experience very frequent deletions and updates, or perhaps mass record replacement from a file?

Try running VACUUM FULL ANALYZE, and possibly even REINDEX on idx_tblcompany_companyname. Massive numbers of dead tuples could account for this performance irregularity.

-Josh
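
A minimal sketch of the maintenance suggested above, using the table and index names taken from the quoted plan (tblcompany, idx_tblcompany_companyname); note that both VACUUM FULL and REINDEX take exclusive locks, so they are best run during a quiet period:

    -- Reclaim space from dead tuples and refresh planner statistics for the table.
    VACUUM FULL ANALYZE tblcompany;

    -- Rebuild the index in case it is bloated with entries pointing at dead rows.
    REINDEX INDEX idx_tblcompany_companyname;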