Re: again on index usage
From:        Daniel Kalchev
Subject:     Re: again on index usage
Date:
Msg-id:      200201090745.JAA15141@dcave.digsys.bg
In reply to: Re: again on index usage  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses:   Re: again on index usage
List:        pgsql-hackers
>>>Tom Lane said:
> Daniel Kalchev <daniel@digsys.bg> writes:
> > Same result (sorry, should have included this originally):
> >
> > Aggregate  (cost=47721.72..47721.72 rows=1 width=8)
> >   ->  Seq Scan on iplog_gate200112  (cost=0.00..47579.54 rows=56873 width=8)
> >
> >> If you say "set enable_seqscan to off", does that change the plan?
> >
> > Aggregate  (cost=100359.71..100359.71 rows=1 width=8)
> >   ->  Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112
> >       (cost=0.00..100217.52 rows=56873 width=8)
>
> So, what we've got here is a difference of opinion: the planner thinks
> that the seqscan will be faster.  How many rows are actually selected
> by this WHERE clause?  How long does each plan actually take?
>
> 			regards, tom lane

3-5 minutes with the sequential scan; 10-15 seconds with the index scan.

The query returns 4062 rows, out of ca 1,700,000 rows in the table. With only the datetime constraint (the one the index covers), the number of matching rows is 51764. In any case, a sequential scan over millions of rows cannot be faster than an index scan here.

The average number of records per index key is around 25000 - perhaps the planner prefers the sequential scan because the estimated number of tuples for this query is higher than that. I guess the planner would do better if those estimates were scaled against the total number of rows in the table.

Daniel
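[For anyone wanting to repeat the comparison Tom suggests, a minimal sketch follows. Only the table name, the index name and the SET enable_seqscan step are taken from the thread; the aggregate expression, the column names (bytes, ipdate) and the date range are assumptions standing in for the actual query, which is not shown.]

    -- Planner's default choice (a sequential scan in this case):
    EXPLAIN
    SELECT sum(bytes)                      -- aggregate column is a guess
      FROM iplog_gate200112
     WHERE ipdate >= '2001-12-01'          -- datetime constraint is a guess
       AND ipdate <  '2002-01-01';

    -- Disable sequential scans for this session and look at the plan
    -- (and the actual run time) again:
    SET enable_seqscan TO off;
    EXPLAIN
    SELECT sum(bytes)
      FROM iplog_gate200112
     WHERE ipdate >= '2001-12-01'
       AND ipdate <  '2002-01-01';

    -- Restore the default afterwards:
    SET enable_seqscan TO on;

Timing both runs is the interesting part: the planner's choice rests on its estimate of how many rows the WHERE clause selects (56873 estimated here, versus 4062 actually returned), so a large estimation error can push it toward the wrong plan.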