Re: Rewriting DISTINCT and losing performance
From | Richard Huxton
---|---
Subject | Re: Rewriting DISTINCT and losing performance
Date |
Msg-id | 4651D813.5050100@archonet.com
In reply to | Re: Rewriting DISTINCT and losing performance ("Chuck D." <pgsql-performance@nullmx.com>)
Responses | Re: Rewriting DISTINCT and losing performance
List | pgsql-performance
Chuck D. wrote:
> On Monday 21 May 2007 03:14, Josh Berkus wrote:
>> Chuck,
>>
>> Can we see the plan?
>>
>> --Josh
>
> Sorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN
> ANALYZE.
>
> # explain
> SELECT country_id, country_name
> FROM geo.country
> WHERE country_id IN
> (select country_id FROM geo.city)
> ;
>                              QUERY PLAN
> --------------------------------------------------------------------
>  Nested Loop IN Join  (cost=0.00..1252.60 rows=155 width=15)
>    Join Filter: (country.country_id = city.country_id)
>    ->  Seq Scan on country  (cost=0.00..6.44 rows=244 width=15)
>    ->  Seq Scan on city  (cost=0.00..43409.12 rows=2122712 width=2)

The only thing I can think of is that the CLUSTERing on city.country_id
makes the system think it'll be cheaper to seq-scan the whole table.

I take it you have got 2 million rows in "city"?

--
  Richard Huxton
  Archonet Ltd
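A minimal sketch (not part of the original thread), assuming only the geo.country and geo.city tables visible in the quoted plan: an EXISTS rewrite gives the planner an equivalent semi-join to price, and a session-local enable_seqscan toggle can show whether the sequential scan on "city" is a costing choice or the only plan available.

  -- Equivalent semi-join written with EXISTS; compare its plan to the IN form.
  EXPLAIN ANALYZE
  SELECT c.country_id, c.country_name
  FROM geo.country c
  WHERE EXISTS (
      SELECT 1
      FROM geo.city ci
      WHERE ci.country_id = c.country_id
  );

  -- Diagnostic only: discourage seq scans for this session to see what
  -- index-based plan the planner would otherwise pick, and at what cost.
  SET enable_seqscan = off;
  EXPLAIN ANALYZE
  SELECT country_id, country_name
  FROM geo.country
  WHERE country_id IN (SELECT country_id FROM geo.city);
  RESET enable_seqscan;

If the forced plan comes out with a much lower estimated cost than the seq-scan plan, that points at a statistics or costing issue rather than a missing index; enable_seqscan is a diagnostic knob, not something to leave off in production.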