Re: Vacuum VS Vacuum Analyze
From: Marek Pętlicki
Subject: Re: Vacuum VS Vacuum Analyze
Msg-id: 20010325150951.F1221@marek.almaran.home
In reply to: Re: Vacuum VS Vacuum Analyze (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Vacuum VS Vacuum Analyze
List: pgsql-general
On Friday, 2001-03-23 at 17:42:37, Tom Lane wrote:
> "Matt Friedman" <matt@daart.ca> writes:
> > I am currently running vacuum nightly using cron, and once in a while
> > I run vacuum analyze (as postgres). Any reason why I wouldn't simply
> > run vacuum analyze each night?
>
> If you can spare the cycles, you might as well make every vacuum a
> vacuum analyze.

I have found that vacuum, and especially vacuum analyze, on a heavily used
database sometimes seems to last forever. A quick-and-dirty hack is to run
it in two passes: first I run a plain vacuum, but before that I drop all
the indices. After recreating the indices I run vacuum analyze. The whole
process runs lightning fast (the longest step is recreating the indices).

The only problem is that users must not be allowed to add anything to the
database during this window, because that could leave the unique-key
indices broken. My solution is a temporary shutdown of the services that
use the database (they are helper services for my WWW application), which
simply makes the application refuse to work. The whole process is scheduled
for the middle of the night (about 4:00 AM), so hardly anybody notices ;-)
(it takes approx. 5 minutes). The other option would be to leave the unique
indices in place (but I don't know the speed penalty in that case).

Question is: have I missed something? Is there any danger in this routine
that I fail to notice?

regards

--
Marek Pętlicki <marpet@buy.pl>
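[For illustration, a minimal sketch of what such a nightly job could look
like. The database name "mydb", the table "orders", the index names, and
the "webhelper" init script are all placeholders I have assumed for the
example, not names from the original post:]

    #!/bin/sh
    # Nightly maintenance: stop the helper services so no writes arrive,
    # drop the indices, run a plain VACUUM, recreate the indices,
    # then run VACUUM ANALYZE, and finally restart the services.

    /etc/init.d/webhelper stop

    psql -U postgres mydb <<'EOF'
    DROP INDEX orders_customer_idx;
    DROP INDEX orders_order_no_key;
    VACUUM;
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    CREATE UNIQUE INDEX orders_order_no_key ON orders (order_no);
    VACUUM ANALYZE;
    EOF

    /etc/init.d/webhelper start

[Scheduling it for the quiet hours would then be a single crontab entry,
e.g. "0 4 * * * /usr/local/bin/nightly_vacuum.sh".]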