Re: Large pgstat.stat file causes I/O storm
From | Cristian Gafton
---|---
Subject | Re: Large pgstat.stat file causes I/O storm
Date |
Msg-id | Pine.LNX.4.64.0801291557510.19796@alienpad.rpath.com
In reply to | Re: Large pgstat.stat file causes I/O storm (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Large pgstat.stat file causes I/O storm
List | pgsql-hackers
On Tue, 29 Jan 2008, Tom Lane wrote:

> (Pokes around in the code...) I think the problem here is that the only
> active mechanism for flushing dead stats-table entries is
> pgstat_vacuum_tabstat(), which is invoked by a VACUUM command or an
> autovacuum. Once-a-day VACUUM isn't gonna cut it for you under those
> circumstances. What you might do is just issue a VACUUM on some
> otherwise-uninteresting small table, once an hour or however often you
> need to keep the stats file bloat to a reasonable level.

I just ran a vacuumdb -a on the box - the pgstat file is still >90MB in size. If vacuum is supposed to clean up the cruft from pgstat, then I don't know if we're looking at the right cruft - I kind of expected the pgstat file to go down in size and the I/O storm to subside, but that is not happening after vacuum.

I will try to instrument the application to record the oids of the temp tables it creates and investigate from that angle, but in the meantime is there any way to reset the stats collector altogether? Could this be a corrupt stat file that gets read and written right back on every loop without any sort of validation?

Thanks,

Cristian

--
Cristian Gafton
rPath, Inc.
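[Editor's note: Tom's suggestion upthread, scripting an hourly VACUUM of a small dummy table, could be wired up with a crontab entry along these lines. This is only a sketch: `pgstat_kick` is a hypothetical throwaway table and `mydb` an assumed database name; the only requirement is that a VACUUM command runs periodically so pgstat_vacuum_tabstat() gets invoked.]

```
# Hypothetical crontab entry (names are assumptions, not from the thread):
# run VACUUM on a small, otherwise-unused table once an hour so the stats
# collector's dead-entry sweep (pgstat_vacuum_tabstat) fires between the
# daily full VACUUM runs, keeping pgstat.stat bloat in check.
0 * * * *  psql -d mydb -c 'VACUUM pgstat_kick;'
```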