Re: Shortening time of vacuum analyze
From | Tom Lane
---|---
Subject | Re: Shortening time of vacuum analyze
Date |
Msg-id | 23778.1012407778@sss.pgh.pa.us
In reply to | Shortening time of vacuum analyze (Francisco Reyes <lists@natserv.com>)
List | pgsql-general
Francisco Reyes <lists@natserv.com> writes:
> Until 7.2 release is out I am looking for a way to optimize a vacuum
> analyze.

7.2RC2 is going to mutate into 7.2 *real* soon now, probably next week.
My best advice to you is not to wait any longer.

> Nightly doing delete of about 6 million records and then re-merging.
> Previously I was doing truncate, but this was an issue if a user tried to
> use the system while we were loading. Now we are having a problem while
> the server is running vacuum analyzes.

> Does vacuum alone takes less time?

Yes, but with so many deletes I'm sure that it's the space-compaction
part that's killing you.

The only useful workaround I can think of is to create a new table, fill
it with the data you want, then DROP the old table and ALTER RENAME the
new one into place. However this will not work if there are other tables
with foreign-key references to the big table. You also have a problem if
you can't shut off updates to the old table while this is going on.

7.2's lazy VACUUM ought to be perfect for you, though.

			regards, tom lane
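[Editor's note: a minimal sketch of the swap Tom describes, not taken from the thread. The table big_table, the boolean column keep_row, and the index name are assumptions for illustration; updates must be held off while it runs, and the DROP fails if other tables reference big_table via foreign keys. "ALTER RENAME" corresponds to ALTER TABLE ... RENAME TO.]

    -- Build a fresh copy containing only the rows to keep
    -- (big_table and keep_row are hypothetical names).
    CREATE TABLE big_table_new AS
        SELECT * FROM big_table WHERE keep_row;

    -- Recreate any indexes the application depends on (assumed name/column).
    CREATE INDEX big_table_new_id_idx ON big_table_new (id);

    -- Swap the new table into place.
    DROP TABLE big_table;
    ALTER TABLE big_table_new RENAME TO big_table;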