Re: Help with a seq scan on multi-million row table
From:
Subject:     Re: Help with a seq scan on multi-million row table
Date:
Msg-id:      20060510203817.62210.qmail@web50301.mail.yahoo.com
In reply to: Re: Help with a seq scan on multi-million row table (Bruno Wolff III <bruno@wolff.to>)
List:        pgsql-sql
Hi,

Thanks Bruno. That was indeed a redundant DISTINCT. It did reduce the amount of work, but as you said it doesn't get rid of the sequential scan, which is the real problem with this query.

Otis

----- Original Message ----
From: Bruno Wolff III <bruno@wolff.to>
To: ogjunk-pgjedan@yahoo.com
Cc: pgsql-sql@postgresql.org
Sent: Wednesday, May 10, 2006 3:23:29 PM
Subject: Re: [SQL] Help with a seq scan on multi-million row table

On Wed, May 10, 2006 at 13:13:59 -0500, ogjunk-pgjedan@yahoo.com wrote:
> Hello,
>
> I have a little 2-table JOIN, GROUP BY, ORDER BY query that does a sequential scan on a multi-million row table. I _thought_ I had all the appropriate indices, but apparently I do not. I was wondering if anyone can spot a way I can speed up this query.
> The query currently takes... *gulp*: 381119.201 ms :(
>
> There are only 2 tables in the game: user_url and user_url_tag. The latter has FKs pointing to the former. The sequential scan happens on the latter - user_url_tag:
>
> EXPLAIN ANALYZE select DISTINCT userurltag0_.tag as x0_0_, COUNT(*) as x1_0_ from user_url_tag userurltag0_, user_url userurl1_ WHERE (((userurl1_.user_id=1 )) AND ((userurltag0_.user_url_id=userurl1_.id ))) GROUP BY userurltag0_.tag ORDER BY count(*) DESC;

While this isn't a big issue, it looks like DISTINCT is redundant in your query and seems to be adding some extra work.
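For reference, a minimal sketch of the simplified query and one thing worth trying against the sequential scan: the query below drops the redundant DISTINCT, and the index statement assumes user_url_tag has no index covering its FK column yet (table and column names are taken from the query above; the index name is made up):

    -- Same aggregation without the redundant DISTINCT
    SELECT userurltag0_.tag AS x0_0_, COUNT(*) AS x1_0_
    FROM user_url_tag userurltag0_
    JOIN user_url userurl1_ ON userurltag0_.user_url_id = userurl1_.id
    WHERE userurl1_.user_id = 1
    GROUP BY userurltag0_.tag
    ORDER BY COUNT(*) DESC;

    -- Possible index on the FK column of the table being seq-scanned
    -- (hypothetical name; only useful if no equivalent index exists)
    CREATE INDEX user_url_tag_user_url_id_idx ON user_url_tag (user_url_id);
    ANALYZE user_url_tag;

Whether the planner actually uses such an index depends on how many user_url_tag rows match the given user's URLs; if most of the table qualifies, a sequential scan can still be the cheaper plan, so re-running EXPLAIN ANALYZE after creating the index is the way to check.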