Re: Slow query with a lot of data
From: Moritz Onken
Subject: Re: Slow query with a lot of data
Date:
Msg-id: 979A6EAC-24DC-4D92-A4B6-F8702221C5EC@houseofdesign.de
In reply to: Re: Slow query with a lot of data ("Scott Carey" <scott@richrelevance.com>)
Responses: Re: Slow query with a lot of data
List: pgsql-performance
On 19.08.2008, at 16:49, Scott Carey wrote:

> What is your work_mem set to? The default?
>
> Try increasing it significantly if you have the RAM and see if
> that affects the explain plan. You may even want to set it to a
> number larger than the RAM you have just to see what happens. In
> all honesty, it may be faster to overflow to OS swap space than sort
> too many rows, but ONLY if it changes the plan to a significantly
> more efficient one.
>
> Simply type
> SET work_mem = '500MB';
> before running your explain. Set it to even more RAM if you have
> the space for this experiment.
>
> In my experience the performance of aggregates on large tables is
> significantly affected by work_mem, and the optimizer will choose
> poorly without enough of it. It will rule out plans that may be
> fast enough when overflowing to disk in preference to colossal-sized
> sorts (which likely also overflow to disk but take hours or days).

Thanks for that advice, but the explain is not different :-(

moritz
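For readers following along, Scott's experiment can be run in a single psql session; this is only a sketch, and the table and column names below are hypothetical stand-ins for the actual slow aggregate query:

```sql
-- Sketch of the experiment described above. The query is a
-- hypothetical placeholder; substitute the real slow aggregate.
SET work_mem = '500MB';          -- session-local only; does not persist

EXPLAIN ANALYZE
SELECT customer_id, count(*)
FROM   orders
GROUP  BY customer_id;

RESET work_mem;                  -- restore the configured default
```

Because `SET` without `LOCAL` applies only to the current session, this is safe to try without touching postgresql.conf; compare the plan (hash aggregate vs. sort + group aggregate, and any "external merge" disk spills shown by EXPLAIN ANALYZE) before and after raising work_mem.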