Re: Joining 2 tables with 300 million rows
From | Manfred Koizar
---|---
Subject | Re: Joining 2 tables with 300 million rows
Date |
Msg-id | 0osrp1hs1s4vokl16l04u62n1pbsn8j77s@4ax.com
In reply to | Joining 2 tables with 300 million rows (Amit V Shah <ashah@tagaudit.com>)
List | pgsql-performance
On Thu, 8 Dec 2005 11:59:24 -0500, Amit V Shah <ashah@tagaudit.com> wrote:

> CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY
> (runresult_id_runresult, catalogtable_id_catalogtable, value)

> '  ->  Index Scan using runresult_has_catalogtable_id_runresult
> on runresult_has_catalogtable runresult_has_catalogtable_1
> (cost=0.00..76.65 rows=41 width=8) (actual time=0.015..0.017 rows=1
> loops=30)'
> '        Index Cond:
> (runresult_has_catalogtable_1.runresult_id_runresult =
> "outer".runresult_id_runresult)'
> '        Filter: ((catalogtable_id_catalogtable = 54) AND (value >= 1))'

If I were the planner, I'd use the primary key index.

You seem to have a redundant(?) index on
runresult_has_catalogtable(runresult_id_runresult). Dropping it might
help, or it might make things much worse. But at this stage this is
pure speculation.

Give us more information first. Show us the complete definition
(including *all* indices) of all tables occurring in your query. What
Postgres version is this? And please post EXPLAIN ANALYSE output of a
*slow* query.

Servus
 Manfred
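[Editor's note: the check Manfred suggests — confirming that the single-column index merely duplicates the leading column of the primary key before dropping it — could be sketched roughly like this. The table and index names are taken from the quoted plan; this is a hedged illustration, not part of the original thread, and any drop should be tested on a copy first since it is only reversible by re-creating the index.]

```sql
-- List every index on the table so the definitions can be compared.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'runresult_has_catalogtable';

-- If runresult_has_catalogtable_id_runresult covers only
-- (runresult_id_runresult), it is a prefix of the primary key
-- (runresult_id_runresult, catalogtable_id_catalogtable, value)
-- and may be redundant.  As Manfred warns, dropping it might help
-- or might make things much worse -- measure before and after.
DROP INDEX runresult_has_catalogtable_id_runresult;
```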