Re: vacuumlo issue
| From | Tom Lane |
|---|---|
| Subject | Re: vacuumlo issue |
| Date | |
| Msg-id | 25231.1332255187@sss.pgh.pa.us |
| In reply to | vacuumlo issue (MUHAMMAD ASIF <anaeem.it@hotmail.com>) |
| Responses | Re: vacuumlo issue |
| List | pgsql-hackers |
MUHAMMAD ASIF <anaeem.it@hotmail.com> writes:
> We have noticed the following issue with vacuumlo on databases that have
> millions of records in pg_largeobject, i.e.:
> WARNING: out of shared memory
> Failed to remove lo 155987: ERROR: out of shared memory
> HINT: You might need to increase max_locks_per_transaction.
> Why do we need to increase max_locks_per_transaction/shared memory for
> a clean up operation?

This seems to be a consequence of the 9.0-era decision to fold large objects into the standard dependency-deletion algorithm and hence take out locks on them individually. I'm not entirely convinced that that was a good idea.

However, so far as vacuumlo is concerned, the only reason this is a problem is that vacuumlo goes out of its way to do all the large-object deletions in a single transaction. What's the point of that? It'd be useful to batch them, probably, rather than commit each deletion individually. But the objects being deleted are by assumption unreferenced, so I see no correctness argument why they should need to go away all at once.

regards, tom lane
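The batching Tom suggests can be sketched as follows. This is a hypothetical illustration, not vacuumlo's actual C code: since each `lo_unlink` takes one lock that is only released at commit, committing after every `batch_size` deletions keeps the lock count bounded well below `max_locks_per_transaction`. The `execute` and `commit` callbacks stand in for real libpq/driver calls.

```python
BATCH_SIZE = 1000  # hypothetical value; keep it well under max_locks_per_transaction


def delete_in_batches(orphan_oids, execute, commit, batch_size=BATCH_SIZE):
    """Unlink orphaned large objects, committing every batch_size deletions.

    Each commit releases the locks accumulated so far, so the transaction
    never holds more than batch_size large-object locks at once.
    """
    pending = 0
    for oid in orphan_oids:
        execute(f"SELECT lo_unlink({oid})")
        pending += 1
        if pending >= batch_size:
            commit()  # release the locks taken in this batch
            pending = 0
    if pending:
        commit()  # flush the final partial batch


# Demo with stub callbacks that just record the call sequence:
calls = []
delete_in_batches(range(5), calls.append, lambda: calls.append("COMMIT"),
                  batch_size=2)
```

Since the objects are by assumption unreferenced, a crash between batches leaves some orphans behind at worst; the next vacuumlo run picks them up, so batching loses nothing in correctness.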