Re: Hitting the nfile limit

From: Tom Lane
Subject: Re: Hitting the nfile limit
Date:
Msg-id: 2455.1057341741@sss.pgh.pa.us
In response to: Hitting the nfile limit  (Michael Brusser <michael@synchronicity.com>)
Responses: Re: Hitting the nfile limit
List: pgsql-hackers
Michael Brusser <michael@synchronicity.com> writes:
> Apparently we managed to run out of the open file descriptors on the host
> machine.

This is pretty common if you set a large max_connections value while
not doing anything to raise the kernel nfile limit.  Postgres will
follow what the kernel tells it is a safe number of open files per
process, but far too many kernels lie through their teeth about what
they can support :-(
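
(For illustration: a minimal standalone C sketch, not taken from the
Postgres sources, contrasting the kernel's advertised per-process limit
with how many descriptors a process can actually obtain right now.
/dev/null and the 4096 probe ceiling are arbitrary choices.)

    /*
     * Compare sysconf()'s advertised per-process open-file limit with
     * reality.  When the system-wide file table is nearly exhausted,
     * the probe comes up short of the kernel's promise.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define PROBE_MAX 4096          /* arbitrary ceiling for the probe */

    int main(void)
    {
        long advertised = sysconf(_SC_OPEN_MAX);  /* what the kernel claims */
        int  fds[PROBE_MAX];
        int  n = 0;

        /* Open /dev/null repeatedly until the kernel refuses. */
        while (n < PROBE_MAX)
        {
            int fd = open("/dev/null", O_RDONLY);

            if (fd < 0)
                break;              /* EMFILE or ENFILE: the real ceiling */
            fds[n++] = fd;
        }

        printf("kernel advertises %ld fds per process; opened %d before failing\n",
               advertised, n);

        while (n > 0)
            close(fds[--n]);
        return 0;
    }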

You can reduce max_files_per_process in postgresql.conf to keep Postgres
from believing what the kernel says.  I'd recommend making sure that
max_connections * max_files_per_process is comfortably less than the
kernel nfile setting (don't forget the rest of the system wants to have
some files open too ;-))
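
(As a made-up sizing example: if the kernel nfile table held 32768
entries, a postgresql.conf along these lines would leave headroom for
the rest of the system:)

    max_connections = 100
    max_files_per_process = 200    # worst case 100 * 200 = 20000 open files,
                                   # comfortably under a 32768-entry nfile table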

> I wonder how Postgres handles this situation.
> (Or power outage, or any hard system fault, at this point)

Theoretically we should be able to recover from this without loss of
committed data (assuming you were running with fsync on).  Is your QA
person certain that the record in question had been written by a
successfully-committed transaction?
        regards, tom lane

