Re: Slow query with a lot of data
From | Moritz Onken
---|---
Subject | Re: Slow query with a lot of data
Date |
Msg-id | 74E8B5A3-F0E7-4B66-947B-AE7715A55B90@houseofdesign.de
In reply to | Re: Slow query with a lot of data ("Scott Carey" <scott@richrelevance.com>)
Responses | Re: Slow query with a lot of data
List | pgsql-performance
On 21.08.2008 at 16:39, Scott Carey wrote:

> It looks to me like the work_mem did have an effect.
>
> Your earlier queries had a sort followed by group aggregate at the top, and now it's a hash aggregate. So the query plan DID change. That is likely where the first 10x performance gain came from.

But it didn't change when I added the sub-select. Thank you guys very much. The speed is now OK and I hope I can finish this work soon.

But there is another problem. If I run this query without the limitation to the user id, postgres consumes about 150 GB of disk space and dies with

ERROR: could not write block 25305351 of temporary file: No space left on device

After that, the available disk space is back to normal. Is this normal? The resulting table (setup1) is not bigger than 1.5 GB.

moritz
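The plan change Scott describes can be observed by toggling work_mem before EXPLAIN; a minimal sketch (the table and column names here are hypothetical, since the thread's actual query is not quoted in full):

```sql
-- With a small work_mem, the planner tends to choose Sort + GroupAggregate;
-- with a larger work_mem, it can switch to HashAggregate, which is often
-- where a large speedup like the ~10x mentioned above comes from.
SET work_mem = '1MB';
EXPLAIN SELECT user_id, count(*) FROM result GROUP BY user_id;

SET work_mem = '256MB';
EXPLAIN SELECT user_id, count(*) FROM result GROUP BY user_id;

-- Temp-file spills (the source of the "No space left on device" error)
-- can be made visible in the server log for inspection:
SET log_temp_files = 0;  -- log every temporary file together with its size
```

Sorts, hashes, and materialized intermediate results that exceed work_mem spill to temporary files under the data directory, which can be far larger than the final result table; the files are deleted when the query ends, which matches the disk space returning to normal afterwards.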