Re: Dump / restore for optimization?
From | Bill Montgomery
Subject | Re: Dump / restore for optimization?
Date |
Msg-id | 40E4276E.9090101@lulu.com
In reply to | Re: Dump / restore for optimization? (Shridhar Daithankar <shridhar@frodo.hserus.net>)
List | pgsql-general
Shridhar Daithankar wrote:
> lec wrote:
>
>> I always have to dump & restore to make a database (which is in use
>> for a few months) efficient again. I have scheduled nightly vacuum
>> analyze, but only dump & restore can make the database efficient
>> again. Has anyone experienced this?
>
> You could try several things if you want to avoid dump/restore
>
> - Try vacuum full instead of vacuum analyze

VACUUM FULL can be undesirable due to its locking effects. If you can
afford to be unable to write to your database for the duration of the
vacuum full, go for it.

Otherwise, make sure you have enough FSM (free space map) space. The
postgresql.conf parameters that affect it are max_fsm_relations and
max_fsm_pages. If you have tables that see a lot of UPDATE or DELETE
activity throughout the day and tiny (e.g. default) FSM settings, you
may run out of space in your FSM to track "dead" tuples. When that
happens, your tables will grow and grow, and a regular vacuum will not
fully clean them up. This leads to slowly degrading performance that
only a vacuum full or a dump/restore will fix.

Good Luck,

Bill Montgomery
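[Editor's note: to make the advice above concrete, here is a minimal sketch of the settings and commands involved. The table name "orders" is hypothetical and the FSM numbers are illustrative starting points, not recommendations; size them to your own update/delete volume.]

    # postgresql.conf -- free space map sizing (values are illustrative)
    max_fsm_relations = 1000      # roughly one slot per table and index that gets updated
    max_fsm_pages = 200000        # total pages with reusable free space tracked cluster-wide

    -- Routine maintenance: returns dead-tuple space to the FSM and refreshes
    -- planner statistics, without blocking concurrent writers.
    VACUUM ANALYZE orders;

    -- Aggressive space reclamation: compacts the table but takes an exclusive
    -- lock, so the table cannot be used until it finishes.
    VACUUM FULL orders;

A database-wide VACUUM VERBOSE run typically reports free space map usage in its final output, which is one way to check whether max_fsm_relations and max_fsm_pages are large enough for your workload.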