Re: Eternal vacuuming....

From Thomas Lockhart
Subject Re: Eternal vacuuming....
Date
Msg-id 391ADD26.B7522589@alumni.caltech.edu
In reply to Eternal vacuuming....  (Tim Perdue <tperdue@valinux.com>)
Responses Re: Eternal vacuuming....  (Alfred Perlstein <bright@wintelcom.net>)
Re: Eternal vacuuming....  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
> In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> 1,000,000) then hit vacuum, the vacuum will run literally forever.
> ...before I finally killed the vacuum process, manually removed the
> pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> Will this be fixed?

Patches? ;)

Just thinking here: could we add an option to vacuum so that it would
drop and recreate indices "automatically"? We already have the ability
to chain multiple internal commands together, so that would just
require snarfing the names and properties of indices in the parser
backend and then doing the drops and creates on the fly.
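
For illustration, the manual sequence such an option would automate looks
roughly like this (table and index names here are just placeholders, not
anything from the original report):

    -- hypothetical table "bigtab" with a single index "bigtab_id_idx"
    DROP INDEX bigtab_id_idx;
    VACUUM bigtab;
    CREATE INDEX bigtab_id_idx ON bigtab (id);

The vacuum option would just capture that drop/create pair for each index
on the table and run it around the vacuum itself.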

A real problem with this is that those commands are currently not
rollback-able, so if something quits in the middle (or someone kills
the vacuum process; I've heard of this happening ;) ), then you are left
without indices in a sort of hidden way.

Not sure what the prospects are of making these DDL statements
transactionally secure, though I know we've had some discussions of
this on -hackers.
                      - Thomas

-- 
Thomas Lockhart                lockhart@alumni.caltech.edu
South Pasadena, California

