Re: vacuumlo issue
From | Robert Haas
---|---
Subject | Re: vacuumlo issue
Date |
Msg-id | CA+TgmoZo+KE9esmDqXuJVXgCEKa6mfoBfvdn9JFqNqBitmGq-g@mail.gmail.com
In reply to | Re: vacuumlo issue (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: vacuumlo issue
List | pgsql-hackers
On Tue, Mar 20, 2012 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Josh Kupershmidt <schmiddy@gmail.com> writes:
>> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> I'm not entirely convinced that that was a good idea. However, so far
>>> as vacuumlo is concerned, the only reason this is a problem is that
>>> vacuumlo goes out of its way to do all the large-object deletions in a
>>> single transaction. What's the point of that? It'd be useful to batch
>>> them, probably, rather than commit each deletion individually. But the
>>> objects being deleted are by assumption unreferenced, so I see no
>>> correctness argument why they should need to go away all at once.
>
>> I think you are asking for this option:
>>   -l LIMIT     stop after removing LIMIT large objects
>> which was added in b69f2e36402aaa.
>
> Uh, no, actually that flag seems utterly brain-dead. Who'd want to
> abandon the run after removing some arbitrary subset of the
> known-unreferenced large objects? You'd just have to do all the search
> work over again. What I'm thinking about is doing a COMMIT after every
> N large objects.
>
> I see that patch has not made it to any released versions yet.
> Is it too late to rethink the design? I propose (a) redefining it
> as committing after every N objects, and (b) having a limit of 1000
> or so objects by default.

I'll dispute the characterization of "utterly brain-dead"; it's better
than what we had before, which was nothing. However, I think your
proposal might be better still.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
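For readers following the thread, here is a minimal sketch of the commit-after-every-N-objects pattern Tom is proposing. It is not the actual vacuumlo patch: the orphans array, its length, and the transaction_limit parameter (1000 by default under the proposal) are assumed to come from vacuumlo's existing catalog scan and option handling.

    /*
     * Sketch only: delete already-identified orphaned large objects in
     * batches, issuing a COMMIT after every transaction_limit deletions
     * rather than holding one huge transaction (or aborting the run, as
     * the current -l LIMIT behavior does).
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    delete_in_batches(PGconn *conn, const Oid *orphans, int norphans,
                      int transaction_limit)   /* e.g. 1000 */
    {
        int         i;
        int         in_xact = 0;    /* deletions in the current transaction */
        PGresult   *res;

        for (i = 0; i < norphans; i++)
        {
            if (in_xact == 0)
            {
                res = PQexec(conn, "BEGIN");
                if (PQresultStatus(res) != PGRES_COMMAND_OK)
                {
                    fprintf(stderr, "BEGIN failed: %s", PQerrorMessage(conn));
                    PQclear(res);
                    return -1;
                }
                PQclear(res);
            }

            /* lo_unlink() must run inside a transaction block. */
            if (lo_unlink(conn, orphans[i]) < 0)
            {
                fprintf(stderr, "failed to remove large object %u: %s",
                        orphans[i], PQerrorMessage(conn));
                return -1;
            }

            /* Commit once we have deleted transaction_limit objects. */
            if (++in_xact >= transaction_limit)
            {
                res = PQexec(conn, "COMMIT");
                PQclear(res);
                in_xact = 0;
            }
        }

        /* Commit any leftover deletions from the final partial batch. */
        if (in_xact > 0)
        {
            res = PQexec(conn, "COMMIT");
            PQclear(res);
        }
        return 0;
    }

Compared with the released -l behavior, this keeps the per-transaction lock footprint bounded without discarding the rest of the already-computed list of unreferenced objects.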