Re: Slow query with a lot of data
From | Moritz Onken
---|---
Subject | Re: Slow query with a lot of data
Date |
Msg-id | 6F57A01F-F83F-477F-B86B-617D0F843FB1@houseofdesign.de
In reply to | Re: Slow query with a lot of data (Moritz Onken <onken@houseofdesign.de>)
Responses | Re: Slow query with a lot of data, Re: Slow query with a lot of data
List | pgsql-performance
On 19.08.2008, at 17:23, Moritz Onken wrote:
>
> On 19.08.2008, at 16:49, Scott Carey wrote:
>
>> What is your work_mem set to? The default?
>>
>> Try increasing it significantly if you have the RAM and seeing if
>> that affects the explain plan. You may even want to set it to a
>> number larger than the RAM you have just to see what happens. In
>> all honesty, it may be faster to overflow to OS swap space than
>> sort too many rows, but ONLY if it changes the plan to a
>> significantly more efficient one.
>>
>> Simply type
>>   SET work_mem = '500MB';
>> before running your explain. Set it to even more RAM if you have
>> the space for this experiment.
>>
>> In my experience the performance of aggregates on large tables is
>> significantly affected by work_mem, and the optimizer will choose
>> poorly without enough of it. It will rule out plans that may be
>> fast enough when overflowing to disk in preference to colossal
>> sized sorts (which likely also overflow to disk but take hours or
>> days).
>
> Thanks for that advice, but the explain output is not different :-(
>
> moritz

Hi,

I started the query with work_mem set to 3000MB. The explain output
didn't change, but it now runs much faster (about 10 times). Swap
isn't used. How can you explain that?

moritz
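The experiment Scott describes can be sketched as a short session; this is a minimal example, and the table and column names are hypothetical, not from the thread:

```sql
-- Raise the per-sort/per-hash memory budget for this session only;
-- this does not modify postgresql.conf or affect other sessions.
SET work_mem = '500MB';

-- Inspect the plan (and, with ANALYZE, real run times) for the slow query.
EXPLAIN ANALYZE
SELECT user_id, count(*)
FROM results          -- hypothetical table standing in for the real workload
GROUP BY user_id;

-- Return to the server default when the experiment is done.
RESET work_mem;
```

This may also bear on the closing question: a plain EXPLAIN prints only the estimated plan, so the same plan can run far faster once its sort or hash fits in memory instead of spilling to disk. EXPLAIN ANALYZE reports which happened (e.g. a "Sort Method" of quicksort vs. external merge), which a plain EXPLAIN does not.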