Re: BUG #5196: Excessive memory consumption when using csvlog
From: Thomas Poindessous
Subject: Re: BUG #5196: Excessive memory consumption when using csvlog
Date:
Msg-id: 1e0e09af0911182159g5d87b27rcd3197cd49980beb@mail.gmail.com
In reply to: Re: BUG #5196: Excessive memory consumption when using csvlog (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-bugs
Hi,

For csv output, we have a 750 MB logfile. But on another site, we have a
1.6 GB logfile and the logger process was using more than 3 GB of RAM.

Even with our configuration (log collector, silent mode and csv/stderr),
we launched the postgresql daemon like this:

pg_ctl -l ${HOME}/pgsql/logs/postgres.log start

so we have three logfiles:

postgresql.log (always empty)
postgresql-YYYY-MM-DD.csv (big file if set to csvlog)
postgresql-YYYY-MM-DD.log (always empty if set to csvlog)

Thanks.

2009/11/19 Tom Lane <tgl@sss.pgh.pa.us>:
> "Poindessous Thomas" <thomas@poindessous.com> writes:
>> we have a weird bug. When using csvlog instead of stderr, the postgres
>> logger process uses a lot of memory. We even had an OOM error with kernel.
>
> I poked at this a bit and noted that if only one of the two possible
> output files is rotated, logfile_rotate() leaks a copy of the other
> file's name.  At the default settings this would only amount to one
> filename string for every 10MB of output ... how much log output
> does your test scenario generate?
>
>                         regards, tom lane
>