Re: pg_autovacuum next steps
From | Matthew T. O'Connor
Subject | Re: pg_autovacuum next steps
Date |
Msg-id | 1079969862.14960.16.camel@zeudora.zeut.net
In reply to | Re: pg_autovacuum next steps (Karel Zak <zakkr@zf.jcu.cz>)
List | pgsql-hackers
On Mon, 2004-03-22 at 04:23, Karel Zak wrote:
> All. It's important to do it as a backend process, because libpq has
> very, very limited and slow resources for working with backend stuff.

Agreed.

> The base should be the standard backend with a different "main loop"
> that, instead of checking the socket, checks some shared information
> about tables and calls the vacuum stuff directly. In this case you
> can omit the work with connections, the parser, etc.

So am I to understand that I can start up a postmaster subprocess and
then be able to monitor the activity of all the databases? I guess that
makes sense, since I would be talking to the stats collector directly
via a socket. But that doesn't solve the problem of issuing vacuums
against different databases; I would still have to create a new backend
for every database that needs a vacuum or analyze issued. Also, we
don't want to launch multiple simultaneous vacuums, so the commands
should be serialized (I know some people want concurrent vacuums when
databases are located on different disks, but for now I'm keeping
things simple). I have an idea for this that I just mentioned in
another message to the list.

> I thought about it in the last few days, and I found Tom's idea about
> using the FSM tables perfect:

There has been lots of discussion of incorporating FSM data into the
auto_vacuum decision process. I am interested in exploring this, but
since I'm already biting off more than I can easily chew, I am going to
leave the decision-making process the same for now. BTW, I think we
need to use both tools (stats and FSM), since not all tables will be in
the FSM: an insert-only table still needs to be analyzed periodically,
and a lightly updated table will eventually need to be vacuumed.
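
To make the serialization point concrete, here is a minimal libpq
sketch of what I mean: connect to one database, run its VACUUM, and
only then move on to the next, so no two vacuums ever run at the same
time. The conninfo strings are placeholders, not anything from the
real pg_autovacuum code:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Placeholder list of databases that were flagged as needing
     * a vacuum; in reality this would come from the monitoring
     * loop's shared table information. */
    static const char *databases[] = { "dbname=db1", "dbname=db2" };

    int
    main(void)
    {
        int         i;
        int         ndbs = sizeof(databases) / sizeof(databases[0]);

        for (i = 0; i < ndbs; i++)
        {
            PGconn     *conn = PQconnectdb(databases[i]);
            PGresult   *res;

            if (PQstatus(conn) != CONNECTION_OK)
            {
                fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
                PQfinish(conn);
                continue;
            }

            /* Serialized by construction: the next database's VACUUM
             * cannot start until this PQexec() returns. */
            res = PQexec(conn, "VACUUM ANALYZE");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "vacuum failed: %s", PQerrorMessage(conn));

            PQclear(res);
            PQfinish(conn);
        }
        return 0;
    }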
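
And a rough sketch of the kind of decision logic I mean by "use both
tools." The struct fields, the FSM flag, and the threshold numbers are
all invented for illustration, not pg_autovacuum's actual structures;
the point is only that the stats-based thresholds have to stay even
once FSM data is consulted:

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented per-table snapshot for this sketch. */
    typedef struct TableActivity
    {
        long    n_dead_tuples;      /* from the stats collector */
        long    n_tuples_changed;   /* ins+upd+del since last ANALYZE */
        long    reltuples;          /* approximate table size */
        bool    fsm_wants_space;    /* invented flag: FSM reports
                                     * reclaimable space */
    } TableActivity;

    /* FSM input alone is not enough: a lightly updated table may
     * never show up there, so keep the size-scaled dead-tuple
     * threshold as well (numbers invented). */
    static bool
    needs_vacuum(const TableActivity *t)
    {
        long    threshold = 1000 + t->reltuples / 2;

        return t->fsm_wants_space || t->n_dead_tuples > threshold;
    }

    /* ANALYZE is driven purely by stats: even an insert-only table,
     * which never appears in the FSM, drifts out of date. */
    static bool
    needs_analyze(const TableActivity *t)
    {
        long    threshold = 500 + t->reltuples / 4;

        return t->n_tuples_changed > threshold;
    }

    int
    main(void)
    {
        /* Insert-only table: no dead tuples, nothing in the FSM,
         * yet it still trips the analyze threshold. */
        TableActivity t = { 0, 5000, 10000, false };

        printf("vacuum: %d  analyze: %d\n",
               needs_vacuum(&t), needs_analyze(&t));
        return 0;
    }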