Thread: vacuumlo issue


vacuumlo issue

From:
MUHAMMAD ASIF
Date:
Hi,

We have noticed the following issue when running vacuumlo against a database that has millions of records in pg_largeobject:

   WARNING:  out of shared memory
   Failed to remove lo 155987: ERROR:  out of shared memory
   HINT:  You might need to increase max_locks_per_transaction.

Why do we need to increase max_locks_per_transaction / shared memory for a cleanup operation? If there is a huge number of records, how can we tackle this situation with limited memory? It is reproducible on postgresql-9.1.2. The steps are as follows (PFA vacuumlo-test_data.sql, which generates dummy data):

Steps: 
1. ./bin/initdb -D data-vacuumlo_test1
2. ./bin/pg_ctl -D data-vacuumlo_test1 -l logfile_data-vacuumlo_test1 start
3. ./bin/createdb vacuumlo_test
4. bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
5. bin/vacuumlo vacuumlo_test

~/work/pg/postgresql-9.1.2/inst$ bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
CREATE FUNCTION
CREATE FUNCTION
 create_manylargeobjects
-------------------------
 
(1 row)

 count
-------
 13001
(1 row)

~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test

WARNING:  out of shared memory
Failed to remove lo 36726: ERROR:  out of shared memory

HINT:  You might need to increase max_locks_per_transaction.
Failed to remove lo 36727: ERROR:  current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36728: ERROR:  current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36729: ERROR:  current transaction is aborted, commands ignored until end of transaction block
....
....


Best Regards,
Muhammad Asif Naeem


Attachments
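
The attached vacuumlo-test_data.sql is not reproduced here. Purely as an illustration of how such orphans come about, a client-side loop along the following lines (a hypothetical libpq sketch, not the attached script; the connection string and object count are made up) creates large objects whose OIDs are never stored in any table column, which is exactly what vacuumlo is later asked to clean up:

/*
 * make_orphan_los.c -- hypothetical sketch, not the attached
 * vacuumlo-test_data.sql. Creates large objects whose OIDs are never
 * recorded anywhere, so vacuumlo will treat them as orphaned.
 */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=vacuumlo_test");
    int     i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Create everything in one transaction; this is just for speed. */
    PQclear(PQexec(conn, "BEGIN"));

    for (i = 0; i < 13000; i++)
    {
        Oid loid = lo_creat(conn, INV_READ | INV_WRITE);

        if (loid == InvalidOid)
            fprintf(stderr, "lo_creat failed: %s", PQerrorMessage(conn));
        /* The OID is deliberately not stored anywhere: an orphan is born. */
    }

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}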

Re: vacuumlo issue

From:
MUHAMMAD ASIF
Date:

I have reformatted the mail, sorry for the inconvenience; the content is the same as in my first message above. Thanks.
Best Regards,
Asif Naeem

Re: vacuumlo issue

From:
Tom Lane
Date:
MUHAMMAD ASIF <anaeem.it@hotmail.com> writes:
> We have noticed the following issue when running vacuumlo against a database that has millions of records in pg_largeobject:
>    WARNING:  out of shared memory
>    Failed to remove lo 155987: ERROR:  out of shared memory
>    HINT:  You might need to increase max_locks_per_transaction.

> Why do we need to increase max_locks_per_transaction / shared memory for
> a cleanup operation?

This seems to be a consequence of the 9.0-era decision to fold large
objects into the standard dependency-deletion algorithm and hence
take out locks on them individually.

I'm not entirely convinced that that was a good idea.  However, so far
as vacuumlo is concerned, the only reason this is a problem is that
vacuumlo goes out of its way to do all the large-object deletions in a
single transaction.  What's the point of that?  It'd be useful to batch
them, probably, rather than commit each deletion individually.  But the
objects being deleted are by assumption unreferenced, so I see no
correctness argument why they should need to go away all at once.
        regards, tom lane
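
As a purely illustrative reading of that suggestion, the client could commit after every N removals instead of holding every lock until the end. The sketch below assumes the unreferenced OIDs have already been collected into an array; the function name and batch-size parameter are invented for the example, and this is not the actual vacuumlo code (result checking is kept minimal):

/*
 * Sketch: unlink known-orphaned large objects, committing after every
 * batch_size deletions so the per-object locks are released before the
 * shared lock table (sized by max_locks_per_transaction) fills up.
 */
#include <stdio.h>
#include <libpq-fe.h>

static void
drop_orphans_batched(PGconn *conn, const Oid *lo_oids, int n_oids, int batch_size)
{
    int i;
    int in_batch = 0;

    for (i = 0; i < n_oids; i++)
    {
        if (in_batch == 0)
            PQclear(PQexec(conn, "BEGIN"));

        if (lo_unlink(conn, lo_oids[i]) < 0)
            fprintf(stderr, "failed to remove lo %u: %s",
                    lo_oids[i], PQerrorMessage(conn));
            /* (a real tool would roll back and start a new batch here) */

        if (++in_batch >= batch_size)
        {
            /* Commit releases the locks taken by this batch of unlinks. */
            PQclear(PQexec(conn, "COMMIT"));
            in_batch = 0;
        }
    }

    if (in_batch > 0)
        PQclear(PQexec(conn, "COMMIT"));
}

Since the objects are unreferenced by assumption, committing partway through loses nothing: a later run simply has less left to do.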


Re: vacuumlo issue

From:
"Albe Laurenz"
Date:
MUHAMMAD ASIF wrote:
> We have noticed the following issue when running vacuumlo against a database that has
> millions of records in pg_largeobject:

[...]

> ~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
>
> WARNING:  out of shared memory
> Failed to remove lo 36726: ERROR:  out of shared memory
>
> HINT:  You might need to increase max_locks_per_transaction.

This is not a question for the hackers list.

vacuumlo handles all deletes in a single transaction, so
max_locks_per_transaction can be exhausted.

Yours,
Laurenz Albe
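
For scale: the shared lock table has room for about max_locks_per_transaction * (max_connections + max_prepared_transactions) locks, so with the 9.1 defaults of 64 and 100 (and no prepared transactions) that is roughly 6,400 entries, well short of the 13,001 large objects in the test case above, let alone millions.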


Re: vacuumlo issue

From:
Josh Kupershmidt
Date:
On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> I'm not entirely convinced that that was a good idea.  However, so far
> as vacuumlo is concerned, the only reason this is a problem is that
> vacuumlo goes out of its way to do all the large-object deletions in a
> single transaction.  What's the point of that?  It'd be useful to batch
> them, probably, rather than commit each deletion individually.  But the
> objects being deleted are by assumption unreferenced, so I see no
> correctness argument why they should need to go away all at once.

I think you are asking for this option:
 -l LIMIT     stop after removing LIMIT large objects

which was added in b69f2e36402aaa.

Josh
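
(With that option as described, a run such as vacuumlo -l 1000 vacuumlo_test would presumably stop once 1000 large objects had been removed; that behavior is what the follow-up below takes issue with.)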


Re: vacuumlo issue

From:
Tom Lane
Date:
Josh Kupershmidt <schmiddy@gmail.com> writes:
> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I'm not entirely convinced that that was a good idea. However, so far
>> as vacuumlo is concerned, the only reason this is a problem is that
>> vacuumlo goes out of its way to do all the large-object deletions in a
>> single transaction. What's the point of that? It'd be useful to batch
>> them, probably, rather than commit each deletion individually.  But the
>> objects being deleted are by assumption unreferenced, so I see no
>> correctness argument why they should need to go away all at once.

> I think you are asking for this option:
>   -l LIMIT     stop after removing LIMIT large objects
> which was added in b69f2e36402aaa.

Uh, no, actually that flag seems utterly brain-dead.  Who'd want to
abandon the run after removing some arbitrary subset of the
known-unreferenced large objects?  You'd just have to do all the search
work over again.  What I'm thinking about is doing a COMMIT after every
N large objects.

I see that patch has not made it to any released versions yet.
Is it too late to rethink the design?  I propose (a) redefining it
as committing after every N objects, and (b) having a limit of 1000
or so objects by default.
        regards, tom lane
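
(Under that proposal the limit would instead mean "commit after every N removals", so for example a limit of 1000 would let a single run clean out every orphaned object while never holding more than roughly 1000 large-object locks at a time.)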


Re: vacuumlo issue

From:
Robert Haas
Date:
On Tue, Mar 20, 2012 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Josh Kupershmidt <schmiddy@gmail.com> writes:
>> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> I'm not entirely convinced that that was a good idea. However, so far
>>> as vacuumlo is concerned, the only reason this is a problem is that
>>> vacuumlo goes out of its way to do all the large-object deletions in a
>>> single transaction. What's the point of that? It'd be useful to batch
>>> them, probably, rather than commit each deletion individually.  But the
>>> objects being deleted are by assumption unreferenced, so I see no
>>> correctness argument why they should need to go away all at once.
>
>> I think you are asking for this option:
>>   -l LIMIT     stop after removing LIMIT large objects
>> which was added in b69f2e36402aaa.
>
> Uh, no, actually that flag seems utterly brain-dead.  Who'd want to
> abandon the run after removing some arbitrary subset of the
> known-unreferenced large objects?  You'd just have to do all the search
> work over again.  What I'm thinking about is doing a COMMIT after every
> N large objects.
>
> I see that patch has not made it to any released versions yet.
> Is it too late to rethink the design?  I propose (a) redefining it
> as committing after every N objects, and (b) having a limit of 1000
> or so objects by default.

I'll dispute the characterization of "utterly brain-dead"; it's better
than what we had before, which was nothing.  However, I think your
proposal might be better still.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: vacuumlo issue

From:
MUHAMMAD ASIF
Date:
> > I think you are asking for this option:
> > -l LIMIT stop after removing LIMIT large objects
> > which was added in b69f2e36402aaa.

Thank you for informing me about the -l option in 9.2. Can I build/use this contrib module with older PostgreSQL versions, i.e. 9.1? Thanks.

> Uh, no, actually that flag seems utterly brain-dead. Who'd want to
> abandon the run after removing some arbitrary subset of the
> known-unreferenced large objects? You'd just have to do all the search
> work over again. What I'm thinking about is doing a COMMIT after every
> N large objects.
>
> I see that patch has not made it to any released versions yet.
> Is it too late to rethink the design? I propose (a) redefining it
> as committing after every N objects, and (b) having a limit of 1000
> or so objects by default.

It will be really nice and helpful if it automatically cleans up all of the orphaned large objects. Thanks.

> regards, tom lane

Re: vacuumlo issue

From:
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Tue, Mar 20, 2012 at 11:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I see that patch has not made it to any released versions yet.
>> Is it too late to rethink the design?  I propose (a) redefining it
>> as committing after every N objects, and (b) having a limit of 1000
>> or so objects by default.

> I'll dispute the characterization of "utterly brain-dead"; it's better
> than what we had before, which was nothing.  However, I think your
> proposal might be better still.

Not hearing any objections, I will go make that happen.
        regards, tom lane