Re: Out of Memory and Configuration Problems (Big Computer)
From | Bill Moran
---|---
Subject | Re: Out of Memory and Configuration Problems (Big Computer)
Date |
Msg-id | 20100528125449.4190f7f0.wmoran@potentialtech.com
In reply to | Out of Memory and Configuration Problems (Big Computer) (Tom Wilcox <hungrytom@googlemail.com>)
Responses | Re: Out of Memory and Configuration Problems (Big Computer)
List | pgsql-general
In response to Tom Wilcox <hungrytom@googlemail.com>:

> In addition, I have discovered that the update query that runs on each row
> of a 27-million-row table and fails with an out of memory error completes,
> in a much shorter period of time, when limited to 1 million rows:
>
> EXPLAIN ANALYZE
> UPDATE nlpg.match_data SET org = normalise(org) WHERE match_data_id < 1000000;
>
> "Index Scan using match_data_pkey1 on match_data  (cost=0.00..3285969.97
> rows=1147685 width=206) (actual time=0.280..18018.601 rows=999999 loops=1)"
> "  Index Cond: (match_data_id < 1000000)"
> "Total runtime: 182732.207 ms"
>
> Whereas this fails with out of memory:
>
> UPDATE nlpg.match_data SET org = normalise(org) WHERE match_data_id;

You're not liable to get shit for answers if you omit the mailing list
from the conversation, especially since I know almost nothing about tuning
PostgreSQL installed on Windows.

Are there multiple queries having this problem? The original query didn't
have normalise() in it, and I would be highly suspicious that a custom
function may have a memory leak or other memory-intensive side effects.
What is the code for that function? For example, does:

UPDATE nlpg.match_data SET org = org WHERE match_data_id;

finish in a reasonable amount of time, or does it exhibit the same out of
memory problem?

It'd be nice to see a \d on that table ... does it have any triggers or
cascading foreign keys?

And stop top-posting, please.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
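A note on the numbers quoted above: the 1-million-row slice completes in
about three minutes, while the unrestricted update exhausts memory. Until
the suspected problem in normalise() is found, one workaround implied by
that observation is to walk the table in bounded match_data_id ranges, so
no single statement has to hold per-row state for all 27 million rows. A
minimal sketch, assuming match_data_id is a dense integer key starting
near zero (the 1M slice width is illustrative):

-- Run each slice as its own statement; under psql autocommit, each one
-- is its own transaction, bounding what any single UPDATE can accumulate.
UPDATE nlpg.match_data SET org = normalise(org)
 WHERE match_data_id >= 0 AND match_data_id < 1000000;
UPDATE nlpg.match_data SET org = normalise(org)
 WHERE match_data_id >= 1000000 AND match_data_id < 2000000;
-- ...continue in 1M-row steps up to max(match_data_id).

Each slice can use the same index scan on match_data_pkey1 shown in the
quoted plan, so per-slice runtime should stay close to the three minutes
measured for the first million rows.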
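The \d requested above is a psql meta-command, and the trigger question
can also be answered directly from the system catalogs. Both of the
following assume a psql session connected to the database that contains
nlpg.match_data:

\d nlpg.match_data

-- Triggers attached to the table (foreign-key constraint triggers
-- show up here as well):
SELECT tgname
  FROM pg_trigger
 WHERE tgrelid = 'nlpg.match_data'::regclass;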