Re: Odd out of memory problem.
From: Andrew Dunstan
Subject: Re: Odd out of memory problem.
Date:
Msg-id: 4F7098AB.4020107@dunslane.net
In reply to: Re: Odd out of memory problem. (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Odd out of memory problem.
List: pgsql-hackers
On 03/26/2012 12:11 PM, Tom Lane wrote:
> Andrew Dunstan <andrew@dunslane.net> writes:
>> On 03/26/2012 11:18 AM, Tom Lane wrote:
>>> Could we see EXPLAIN output for this query?
>> Currently it shows:
>>  Limit  (cost=19443025.87..19443025.89 rows=10 width=8)
>>    ->  Sort  (cost=19443025.87..19446451.29 rows=1370168 width=8)
>>          Sort Key: (max(pageno))
>>          ->  GroupAggregate  (cost=18537785.99..19413417.03 rows=1370168 width=8)
>>                ->  Sort  (cost=18537785.99..18823953.97 rows=114467192 width=8)
>>                      Sort Key: loid
>>                      ->  Seq Scan on ldata  (cost=0.00..1651163.92 rows=114467192 width=8)
>> The table might have been analysed since I ran the query, though.
>
> That plan should not create a tuple hash table, so I think it's almost
> certain that the plan changed.  It might be interesting to remove the
> pg_statistic rows for the table and then see what plan you get.

Yeah, that gets us:

 Limit  (cost=2223492.78..2223492.81 rows=10 width=8)
   ->  Sort  (cost=2223492.78..2223493.28 rows=200 width=8)
         Sort Key: (max(pageno))
         ->  HashAggregate  (cost=2223485.96..2223488.46 rows=200 width=8)
               ->  Seq Scan on ldata  (cost=0.00..1651154.64 rows=114466264 width=8)

>> To answer Hans' question, we have seen the problem in other contexts.  We
>> first noticed this problem in a failure to restore large objects when
>> running pg_restore.
>
> [ scratches head... ]  I don't understand how or why pg_restore would be
> executing such a query.

It's not. I was explaining that we have seen memory failures in *other*
contexts, not just this query. The restore fails after many hours on a
call to lo_write().

cheers

andrew
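The query text itself never appears in the thread, but a query shape consistent with the plans above — assuming only the ldata table and the loid and pageno columns visible in the plan output — would be roughly:

    SELECT loid, max(pageno)
    FROM ldata
    GROUP BY loid
    ORDER BY max(pageno)
    LIMIT 10;

One way to carry out Tom's suggestion of removing the pg_statistic rows for the table, as a sketch to be run by a superuser, is:

    -- Drop the collected statistics for ldata so the planner falls back
    -- to its default selectivity estimates; ANALYZE ldata rebuilds them.
    DELETE FROM pg_statistic WHERE starelid = 'ldata'::regclass;

That matches the second plan, which switches to a HashAggregate with the default estimate of 200 groups once the per-column statistics are gone.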