Re: small table, huge table, and a join = slow and tough query. cake inside!
From | Alban Hertroys |
---|---|
Subject | Re: small table, huge table, and a join = slow and tough query. cake inside! |
Date | |
Msg-id | B7109EF1-BA6C-465F-B817-3484B6E627C0@solfertje.student.utwente.nl |
In response to | small table, huge table, and a join = slow and tough query. cake inside! (edfialk <edfialk@gmail.com>) |
Responses | Re: small table, huge table, and a join = slow and tough query. cake inside! |
List | pgsql-general |
On May 28, 2008, at 9:27 PM, edfialk wrote:

> example rows from huge table (almost 900,000 rows):
> fips, pollutant, value

That's not really huge in comparison with other pgsql databases. There are terabyte installations out there ;)

> SELECT small.fips, small.geom, small.name, SUM(huge.value) from small
> JOIN huge on huge.fips = small.fips WHERE (SUM(huge.value)) > 500 AND
> huge.pollutant='co';
>
> wonder if that makes sense. Obviously, can't have an aggregate in
> where clause, so I've tried a couple WHERE (SELECT) kind of things,
> nothing working out too well.
>
> So first, if anyone has any idea on the best way I can do a WHERE
> (sum(huge.value) > 500)
> or...
> any ideas on how I could speed up the query, I would be so extremely
> grateful.

That's what GROUP BY and HAVING are for:

SELECT fips, small.geom, small.name, SUM(huge.value)
FROM small
JOIN huge USING (fips)
WHERE huge.pollutant = 'co'
GROUP BY fips, small.geom, small.name
HAVING SUM(huge.value) > 500;

Guessing from your performance problem, you may not have an index on huge.fips? And did you vacuum/analyse those tables anytime recently?

Alban Hertroys

--
If you can't see the forest for the trees, cut the trees and you'll see there is no forest.
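A minimal sketch of the GROUP BY / HAVING technique from the reply, run against an in-memory SQLite database with made-up sample rows (the table names and columns follow the thread; the FIPS codes, geometries, and values are hypothetical):

```python
import sqlite3

# Build tiny stand-ins for the "small" and "huge" tables from the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE small (fips TEXT, geom TEXT, name TEXT);
    CREATE TABLE huge  (fips TEXT, pollutant TEXT, value REAL);

    INSERT INTO small VALUES ('01001', 'POINT(0 0)', 'Autauga');
    INSERT INTO small VALUES ('01003', 'POINT(1 1)', 'Baldwin');

    INSERT INTO huge VALUES ('01001', 'co',  400);
    INSERT INTO huge VALUES ('01001', 'co',  200);
    INSERT INTO huge VALUES ('01003', 'co',  100);
    INSERT INTO huge VALUES ('01003', 'so2', 900);
""")

# The aggregate condition goes in HAVING (applied after grouping),
# while the plain row filter on pollutant stays in WHERE.
rows = conn.execute("""
    SELECT fips, small.geom, small.name, SUM(huge.value) AS total
    FROM small JOIN huge USING (fips)
    WHERE huge.pollutant = 'co'
    GROUP BY fips, small.geom, small.name
    HAVING SUM(huge.value) > 500;
""").fetchall()

print(rows)  # only Autauga (co total 600) clears the 500 threshold
```

Note that Baldwin's so2 row is excluded by WHERE before aggregation, so its co total (100) fails the HAVING filter; this is exactly the WHERE-vs-HAVING split the original poster was missing.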