Re: VACUUM ANALYZE out of memory
From: Michael Akinde
Subject: Re: VACUUM ANALYZE out of memory
Date:
Msg-id: 475E74E3.9040801@met.no
In reply to: Re: VACUUM ANALYZE out of memory (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
Responses: Re: VACUUM ANALYZE out of memory
List: pgsql-hackers
Thanks for the rapid responses.
Stefan Kaltenbrunner wrote:
> This seems to be simply a problem of setting maintenance_work_mem too high (i.e., higher than what your OS can support - maybe a ulimit/process limit is in effect?). Try reducing maintenance_work_mem to, say, 128MB and retry.

I set up the system together with one of our Linux sysops, so I think the settings should be OK. kernel.shmmax is set to 1.2 GB, but I'll get him to recheck whether there are any other limits he has forgotten to increase.
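For the record, that retry can be done per session rather than by editing postgresql.conf. A minimal sketch (the per-session SET and the 'MB' unit syntax assume 8.2 or later; pg_largeobject stands in for the table under discussion):

    -- Check the value currently in effect for this session.
    SHOW maintenance_work_mem;

    -- Lower it for this session only, per Stefan's suggestion, then retry.
    SET maintenance_work_mem = '128MB';
    VACUUM ANALYZE pg_largeobject;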
> If you promise PostgreSQL that it can get 1GB, it will happily try to use it ...
The way the process was running, it seems to have simply kept allocating memory until (presumably) it broke through the slightly less than 1.2 GB of shared memory we had provided for PostgreSQL (at least, the postgres process was still running by the time its resident size had reached 1.1 GB).

Incidentally, in the first of the two errors I posted, the shared memory setting was significantly lower (24 MB, I believe). I'll try with 128 MB before I leave this evening, though (assuming the other tests I'm running complete by then).
Simon Riggs wrote:
> On Tue, 2007-12-11 at 10:59 +0100, Michael Akinde wrote:
>> I am encountering problems when trying to run VACUUM FULL ANALYZE on a particular table in my database; namely that the process crashes out with the following problem:
>
> Probably just as well, since a VACUUM FULL on an 800GB table is going to take a rather long time, so you are saved from discovering just how excessively long it would run for. But it seems like a bug. This happens consistently, I take it?

I suspect so, though it has only happened a couple of times so far (as it does take a while before it hits that 1.1 GB roof). But part of the reason for running the VACUUM FULL was of course to find out how long it would take. Reliability is always a priority for us, so I like to know what (useful) tools we have available and to stress the system as much as possible... :-)
> Can you run ANALYZE and then VACUUM VERBOSE, both on just pg_largeobject, please? It will be useful to know whether they succeed.

I ran just ANALYZE on the entire database yesterday, and that worked without any problems. I am currently running a VACUUM VERBOSE on the database; it isn't done yet, but it is running with a steady (low) resource usage.
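Spelled out, the test Simon asked for would be just the following in psql (a minimal sketch; both commands accept a single table name, and VERBOSE makes vacuum print detail for each table and index as it goes):

    -- Refresh statistics for just the large-object table.
    ANALYZE pg_largeobject;

    -- Vacuum only that table, with progress detail for each step.
    VACUUM VERBOSE pg_largeobject;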
Regards,
Michael A.