Discussion: Vacuum analyse after a long time without one ...
I have a problem with a database. The last full vacuum analyse was made
a long time ago, so I tried to launch a vacuum analyse and I get this
error:

$ vacuumdb -az
vacuumdb: vacuuming database "postgres"
VACUUM
vacuumdb: vacuuming database "mexi"
vacuumdb: vacuuming of database "mexi" failed: ERROR:  out of memory
DETAIL:  Failed on request of size 134697600.

Is there a way to recover/vacuum this database? Is there an option in
postgresql.conf that I can change?

Thanks a lot,
nm.
Nicolas Michel <nicolas.michel@lemail.be> writes:
> I have a problem with a database. The last full vacuum analyse was made
> a long time ago, so I tried to launch a vacuum analyse and I get this
> error:
>
> $ vacuumdb -az
> vacuumdb: vacuuming database "postgres"
> VACUUM
> vacuumdb: vacuuming database "mexi"
> vacuumdb: vacuuming of database "mexi" failed: ERROR:  out of memory
> DETAIL:  Failed on request of size 134697600.

What is your maintenance_work_mem setting?  It rather looks like it's
more than your system will really allow.

			regards, tom lane
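The request size in the error message is worth decoding before touching
any settings; a quick sanity check in plain shell arithmetic (no server
needed, just the number from the DETAIL line):

```shell
# Failed allocation size taken from the error's DETAIL line, in bytes
req=134697600
# Convert to MiB with integer division
mib=$((req / 1024 / 1024))
echo "vacuum asked for roughly ${mib} MiB in one allocation"   # -> 128 MiB
```

A single ~128MiB allocation failing usually points at the process
address space (or a memory parameter sized beyond it) rather than
physical RAM being exhausted.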
Tom Lane wrote:
> What is your maintenance_work_mem setting?  It rather looks like it's
> more than your system will really allow.

I already tried to set the work_mem setting to the max value I can, but
it changed nothing.
Nicolas Michel <nicolas.michel@lemail.be> writes:
> Tom Lane wrote:
>> What is your maintenance_work_mem setting?  It rather looks like it's
>> more than your system will really allow.

> I already tried to set the work_mem setting to the max value I can but
> it changed nothing.

I did not say work_mem, and increasing the parameter is the wrong
direction too.

			regards, tom lane
I think you've missed Tom's point.  You need to set maintenance_work_mem
based on the physical capacity of your system.  If it (the parameter) is
set too high, your operating system will encounter errors when trying to
satisfy the requests that Postgres is making.

Also, as Tom just pointed out, there is a difference between work_mem
and maintenance_work_mem.

Thank you,

Lewis Kapell
Computer Operations
Seton Home Study School

-------------------------------------------------------
For a free subscription to the Seton Home Study School monthly
electronic newsletter, send an e-mail to newsletter-sub@setonhome.org
-------------------------------------------------------

Nicolas Michel wrote:
> I already tried to set the work_mem setting to the max value I can but
> it changed nothing.
I think Mr. Lane meant that you may have set the maintenance_work_mem
setting too high; try setting it to about 25% of the total RAM on your
system for now.

-----Original Message-----
From: pgsql-admin-owner@postgresql.org
[mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Nicolas Michel
Sent: Friday, September 11, 2009 10:37 AM
To: Tom Lane
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Vacuum analyse after a long time without one ...

I already tried to set the work_mem setting to the max value I can but
it changed nothing.

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
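The 25%-of-RAM rule of thumb suggested above, sketched for the 16GB
machine discussed in this thread (an assumption taken from a later
message; note that on a 32-bit build the per-process address space caps
usable memory well below this figure):

```shell
# Assumption: 16 GB of total RAM, as reported elsewhere in this thread
total_mb=16384
# 25% rule of thumb for a memory budget
quarter_mb=$((total_mb / 4))
echo "25% of RAM = ${quarter_mb} MB"   # -> 4096 MB
```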
Lewis Kapell wrote:
> I think you've missed Tom's point.  You need to set maintenance_work_mem
> based on the physical capacity of your system.  If it (the parameter) is
> set too high, your operating system will encounter errors when trying to
> satisfy the requests that Postgres is making.
>
> Also, as Tom just pointed out, there is a difference between work_mem
> and maintenance_work_mem.

Yes, I made a mistake when replying to you. It is really
maintenance_work_mem that I modified, not work_mem.

I'm on Postgres 8.1 on Debian Etch.
- I have 16GB of RAM on that server (but a 32-bit OS with a bigmem
  kernel), so I set shared_buffers to 350000 (~2.7GB) for a shmmax of
  4000000000 (~3.8GB).
- The first maintenance_work_mem value I used was 64396 (~62MB). It
  didn't work.
- The max value I tried was 2080000 (~2GB). It didn't work either.
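In this era of Postgres, shared_buffers is set in buffer pages of 8kB
(the default block size), so the ~2.7GB figure above can be verified
with shell arithmetic:

```shell
pages=350000      # shared_buffers as configured in the message above
page_size=8192    # default PostgreSQL block size, in bytes
bytes=$((pages * page_size))
echo "shared_buffers = $((bytes / 1024 / 1024)) MiB"   # -> 2734 MiB, i.e. ~2.7GB
```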
Nicolas Michel <nicolas.michel@lemail.be> writes:
> - I have 16GB of RAM on that server (but a 32-bit OS with a bigmem
>   kernel), so I set shared_buffers to 350000 (~2.7GB) for a shmmax of
>   4000000000 (~3.8GB).

On a 32-bit machine that's just insane.  You've got something like 300MB
left over in the process address space (assuming the typical 1GB for
kernel split).  No wonder things are falling over.  Try putting
shared_buffers somewhere around 1GB.  Or switch to 64-bit.

			regards, tom lane
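The ~300MB figure follows from 32-bit address-space arithmetic; a rough
sketch, assuming the typical 1GiB kernel split and working in MiB:

```shell
address_space=4096   # 32-bit process virtual address space, in MiB
kernel_split=1024    # typical 1GiB reserved for the kernel
shared_buffers=2734  # 350000 pages * 8kB, from the earlier message
leftover=$((address_space - kernel_split - shared_buffers))
echo "roughly ${leftover} MiB left for everything else"   # -> 338 MiB
```

A single ~128MiB vacuum allocation, on top of the backend's code, heap,
and stack, easily overruns that remainder, which matches the out-of-memory
error regardless of how much physical RAM the box has.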
Tom Lane wrote:
> On a 32-bit machine that's just insane.  You've got something like 300MB
> left over in the process address space (assuming the typical 1GB for
> kernel split).  No wonder things are falling over.  Try putting
> shared_buffers somewhere around 1GB.  Or switch to 64-bit.

Thank you for your advice. I will follow it.
For a 64-bit machine, does the higher shared_buffers setting really
offer a significant improvement over a lower 32-bit setting coupled
with Linux caching?  Is the Postgres shared-buffer algorithm superior
enough to the Linux caching algorithm to favor a switch to 64-bit?

On Fri, Sep 11, 2009 at 9:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> On a 32-bit machine that's just insane.  You've got something like 300MB
> left over in the process address space (assuming the typical 1GB for
> kernel split).  No wonder things are falling over.  Try putting
> shared_buffers somewhere around 1GB.  Or switch to 64-bit.
>
>                        regards, tom lane
Anj Adu <fotographs@gmail.com> writes:
> For a 64-bit machine, does the higher shared_buffers setting really
> offer a significant improvement over a lower 32-bit setting coupled
> with Linux caching?  Is the Postgres shared-buffer algorithm superior
> enough to the Linux caching algorithm to favor a switch to 64-bit?

There are different schools of thought about that, but in any case the
difference is not likely to be large.  If you want to discuss it,
pgsql-performance would be a better forum.

			regards, tom lane