Re: vacuumlo issue
| From | Tom Lane |
|---|---|
| Subject | Re: vacuumlo issue |
| Date | |
| Msg-id | 26230.1332258653@sss.pgh.pa.us |
| In reply to | Re: vacuumlo issue (Josh Kupershmidt <schmiddy@gmail.com>) |
| Responses | Re: vacuumlo issue, Re: vacuumlo issue |
| List | pgsql-hackers |
Josh Kupershmidt <schmiddy@gmail.com> writes:
> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm not entirely convinced that that was a good idea.  However, so far
>> as vacuumlo is concerned, the only reason this is a problem is that
>> vacuumlo goes out of its way to do all the large-object deletions in a
>> single transaction.  What's the point of that?  It'd be useful to batch
>> them, probably, rather than commit each deletion individually.  But the
>> objects being deleted are by assumption unreferenced, so I see no
>> correctness argument why they should need to go away all at once.

> I think you are asking for this option:
>   -l LIMIT        stop after removing LIMIT large objects
> which was added in b69f2e36402aaa.

Uh, no, actually that flag seems utterly brain-dead.  Who'd want to
abandon the run after removing some arbitrary subset of the
known-unreferenced large objects?  You'd just have to do all the search
work over again.  What I'm thinking about is doing a COMMIT after every
N large objects.

I see that patch has not made it to any released versions yet.  Is it
too late to rethink the design?  I propose (a) redefining it as
committing after every N objects, and (b) having a limit of 1000 or so
objects by default.

			regards, tom lane
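[Editor's note: for readers unfamiliar with the proposal, here is a minimal libpq sketch of the batched-commit idea Tom describes. This is not vacuumlo's actual code; the function name, the `lo_oids` array, and the `transaction_limit` parameter are illustrative assumptions for the example.]

```c
/*
 * Sketch of committing after every N large-object deletions, rather
 * than unlinking all known-unreferenced objects in one transaction.
 * Hypothetical helper, not from vacuumlo itself; error checking on
 * BEGIN/COMMIT is elided for brevity.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int
delete_los_in_batches(PGconn *conn, const Oid *lo_oids, int num_oids,
                      int transaction_limit)
{
    PGresult   *res;
    int         in_batch = 0;
    int         deleted = 0;
    int         i;

    for (i = 0; i < num_oids; i++)
    {
        /* open a fresh transaction at the start of each batch */
        if (in_batch == 0)
        {
            res = PQexec(conn, "BEGIN");
            PQclear(res);
        }

        /* lo_unlink returns 1 on success, -1 on failure */
        if (lo_unlink(conn, lo_oids[i]) < 0)
        {
            fprintf(stderr, "failed to remove lo %u: %s",
                    lo_oids[i], PQerrorMessage(conn));

            /*
             * Abandon only the current batch; objects deleted in
             * previously committed batches stay gone, which is fine
             * because each one was unreferenced on its own.
             */
            res = PQexec(conn, "ROLLBACK");
            PQclear(res);
            return deleted;
        }

        deleted++;
        in_batch++;

        /* commit every transaction_limit objects */
        if (in_batch >= transaction_limit)
        {
            res = PQexec(conn, "COMMIT");
            PQclear(res);
            in_batch = 0;
        }
    }

    /* commit any partial final batch */
    if (in_batch > 0)
    {
        res = PQexec(conn, "COMMIT");
        PQclear(res);
    }

    return deleted;
}
```

The point of the per-batch commit is that a failure partway through, or per-transaction resource limits, can at worst cost one batch of work; since every object is unreferenced independently, nothing requires them to disappear atomically.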