Re: selects with large offset really slow
| From | Richard Huxton |
|---|---|
| Subject | Re: selects with large offset really slow |
| Date | |
| Msg-id | 200302071202.47771.dev@archonet.com |
| In reply to | selects with large offset really slow (John Smith <john_smith_45678@yahoo.com>) |
| Responses | Re: selects with large offset really slow; Re: selects with large offset really slow |
| List | pgsql-general |
On Friday 07 Feb 2003 5:01 am, John Smith wrote:
> There are 90K-100K records in each of two tables. This simple join is
> really slow and the larger the offset, the longer it takes. Anything I can
> do to speed it up (a lot)? I've double-checked and there are indexes on
> everything used for joins and ordering.
>
> QUERY PLAN
> --------------------------------------------------------------------------------------------------------------
>  Limit  (cost=19546.62..19546.87 rows=100 width=62) (actual time=20557.00..20558.00 rows=100 loops=1)
>    ->  Sort  (cost=19321.62..19571.32 rows=99881 width=62) (actual time=19775.00..20410.00 rows=90101 loops=1)
>          Sort Key: l.url
>          ->  Hash Join  (cost=2471.00..7662.54 rows=99881 width=62) (actual time=3013.00..12002.00 rows=100000 loops=1)
>                Hash Cond: ("outer".id = "inner".link_id)

It's the join and sort that's getting you. PG has to fetch and sort all the
rows so it can discard 90,000 of them. I can't think of a good way for it to
optimise this, though you might want to check your sort_mem is set high
enough.

> explain analyze select l.id, l.url
> from links l
> inner join stats s
>     on l.id = s.link_id
>     and s.referrer_id = 1
> order by l.url
> limit 100
> offset 90000;

There are three options you might want to look at:

1. Use a temporary table, then select from that for each page.
2. Use a cursor, and just fetch 100 records at a time from it.
3. Cheat and fetch where l.url>=X, remembering X as the highest url from the
   last set of results. This of course means pages of results will overlap.

--
Richard Huxton
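A minimal sketch of the sort_mem check mentioned above, assuming a
PostgreSQL 7.x-era server where the setting still carries that name
(it became work_mem later). The value 8192 is only an illustrative
figure, not a tuning recommendation:

-- Inspect the current per-sort working memory (in kilobytes).
SHOW sort_mem;

-- Raise it for this session; 8192 KB (8 MB) is an arbitrary example value.
SET sort_mem = 8192;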
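A minimal sketch of option 1, using the links/stats schema from the quoted
query; the page_results table and its index name are invented for this
example:

-- Materialise the join once per session...
CREATE TEMPORARY TABLE page_results AS
    SELECT l.id, l.url
    FROM links l
    INNER JOIN stats s
        ON l.id = s.link_id
        AND s.referrer_id = 1;

-- ...and index the sort key so paging doesn't re-sort the whole set.
CREATE INDEX page_results_url_idx ON page_results (url);

-- Each page now skips the expensive join entirely:
SELECT id, url
FROM page_results
ORDER BY url
LIMIT 100 OFFSET 90000;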
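A minimal sketch of option 2; page_cur is an invented cursor name, and the
whole exchange has to stay inside one transaction since the cursor is not
held open across commits:

BEGIN;

DECLARE page_cur CURSOR FOR
    SELECT l.id, l.url
    FROM links l
    INNER JOIN stats s
        ON l.id = s.link_id
        AND s.referrer_id = 1
    ORDER BY l.url;

-- Skip the first 90000 rows without returning them to the client...
MOVE FORWARD 90000 IN page_cur;

-- ...then pull one page at a time.
FETCH FORWARD 100 FROM page_cur;

CLOSE page_cur;
COMMIT;

The sort still happens, but only once when the cursor is first read, rather
than once per page.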
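A minimal sketch of option 3; '<last-url-seen>' stands in for the highest
url returned by the previous page, which the application has to remember:

-- Keyset paging: start from the last url seen instead of counting rows.
SELECT l.id, l.url
FROM links l
INNER JOIN stats s
    ON l.id = s.link_id
    AND s.referrer_id = 1
WHERE l.url >= '<last-url-seen>'
ORDER BY l.url
LIMIT 100;

With an index on links(url) the planner can walk the index in url order and
stop after 100 rows instead of sorting the whole result; the >= comparison
is what makes consecutive pages overlap at the boundary row, as noted above.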