Re: csvlog gets crazy when csv file is not writable
From:         Michael Paquier
Subject:      Re: csvlog gets crazy when csv file is not writable
Date:
Msg-id:       20180821022342.GD2897@paquier.xyz
In reply to:  csvlog gets crazy when csv file is not writable (Alexander Kukushkin <cyberdemn@gmail.com>)
Replies:      Re: csvlog gets crazy when csv file is not writable
List:         pgsql-bugs
On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
> If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
> for writing, it gets mad and outputs a few thousand lines to stderr:
>
> 2018-08-20 15:40:46.920 CEST [22069] PANIC: could not open log file

Ah, this log message could be changed to simply "could not open file";
the file name offers enough context...

> And so on. ERRORDATA_STACK_SIZE appears in the output 3963 times.
>
> Sure, it is entirely my fault that the csv file is not writable, but such
> an avalanche of PANIC lines is really scary.

Yeah, this is a recursion in logfile_open -> open_csvlogfile.  With
stderr there is a much better effort, where the server just quits with a
FATAL if the log file cannot be opened in SysLogger_Start.  Could this
be an argument for allowing logfile_open() to use write_stderr?  I am
not sure whether that would run afoul of the usual don't-do-that rule.
And we make sure at an early stage that log_destination is writable,
which would cover scenarios like a kernel switching the log partition
to read-only.

--
Michael
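To make the failure mode concrete, below is a minimal standalone C sketch of the recursion described above. The file path, the simplified report_error() helper, and the local ERRORDATA_STACK_SIZE value are illustrative stand-ins, not PostgreSQL's actual elog.c/syslogger.c code. A single failed open of the CSV file re-enters the reporting path until the nesting cap is hit; a comment marks where a stderr fallback (in the spirit of the write_stderr idea) would break the loop.

    /* Standalone sketch of the csvlog recursion; not PostgreSQL source. */
    #include <stdio.h>
    #include <stdbool.h>

    #define ERRORDATA_STACK_SIZE 5          /* small cap, as in PostgreSQL's elog.c */

    static int  errordata_depth = 0;        /* current nesting level of error reports */
    static bool csvlog_enabled = true;      /* pretend log_destination includes csvlog */

    static FILE *open_csvlogfile(void);
    static void  report_error(const char *msg);

    /* Try to open the CSV log; report (and therefore recurse) on failure. */
    static FILE *
    open_csvlogfile(void)
    {
        /* hypothetical unwritable location, standing in for the csv log file */
        FILE *fh = fopen("/readonly/postgresql.csv", "a");

        if (fh == NULL)
            report_error("could not open log file");
        return fh;
    }

    /* Simplified error reporter that also routes messages to the CSV destination. */
    static void
    report_error(const char *msg)
    {
        if (++errordata_depth >= ERRORDATA_STACK_SIZE)
        {
            /* the repeated line from the original report */
            fprintf(stderr, "PANIC: ERRORDATA_STACK_SIZE exceeded\n");
            errordata_depth--;
            return;
        }

        fprintf(stderr, "PANIC: %s\n", msg);

        if (csvlog_enabled)
        {
            /*
             * Without a guard this re-enters open_csvlogfile(): writing the
             * error to csvlog needs the CSV file, opening it fails, and we
             * report again.  A fix in the spirit of the thread would fall
             * back to stderr here instead of retrying the CSV destination.
             */
            FILE *fh = open_csvlogfile();
            if (fh != NULL)
                fclose(fh);
        }

        errordata_depth--;
    }

    int
    main(void)
    {
        /* One failed open is enough to start the cascade of PANIC lines. */
        FILE *fh = open_csvlogfile();
        if (fh != NULL)
            fclose(fh);
        return 0;
    }

The sketch only shows the shape of the loop and the cap; it does not attempt to reproduce the exact message count from Alexander's report, where the recursion runs through the server's real error-stack machinery.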