Re: Big delete on big table... now what?
From        | Fernando Hevia
Subject     | Re: Big delete on big table... now what?
Date        |
Msg-id      | 04cd01c906b7$cfa0bf10$8f01010a@iptel.com.ar
In reply to | Re: Big delete on big table... now what?  (Gregory Stark <stark@enterprisedb.com>)
List        | pgsql-performance
> Gregory Stark <stark@enterprisedb.com> writes:
>
> > "Bill Moran" <wmoran@collaborativefusion.com> writes:
> >
> >> "Fernando Hevia" <fhevia@ip-tel.com.ar> wrote:
> >> Hi list.
> >> I have a table with over 30 million rows. Performance was dropping
> >> steadily so I moved old data not needed online to an historic table.
> >> Now the table has about 14 million rows. I don't need the disk space
> >> returned to the OS but I do need to improve performance. Will a plain
> >> vacuum do or is a vacuum full necessary?
> >> ¿Would a vacuum full improve performance at all?
> >
> > If you can afford the downtime on that table, cluster would be best.
> >
> > If not, do the normal vacuum and analyze. This is unlikely to improve
> > the performance much (although it may shrink the table _some_) but
> > regular vacuum will keep performance from getting any worse.
>
> Note that CLUSTER requires enough space to store the new and the old
> copies of the table simultaneously. That's the main reason for VACUUM
> FULL to still exist.
>
> There is also the option of doing something like (assuming id is
> already an integer -- ie this doesn't actually change the data):
>
> ALTER TABLE x ALTER id TYPE integer USING id;
>
> which will rewrite the whole table. This is effectively the same as
> CLUSTER except it doesn't order the table according to an index. It
> will still require enough space to hold two copies of the table but
> it will be significantly faster.

Yes, I can afford a downtime on Sunday. Actually the clustering option
would help since most of our slow queries use the same index.

Thanks Bill and Gregory for the advice.

Regards,
Fernando.
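[Editor's note: a minimal sketch of the maintenance options discussed in the thread. The table name big_table, the index name big_table_status_idx, and the id column used below are placeholders, not objects from the original post; the CLUSTER ... USING form is the PostgreSQL 8.3+ syntax.]

    -- Plain maintenance: marks dead-row space as reusable and refreshes
    -- planner statistics, but does not shrink the table file much.
    VACUUM ANALYZE big_table;

    -- CLUSTER rewrites the table in index order. It takes an exclusive
    -- lock and needs roughly twice the table's disk space while it runs.
    CLUSTER big_table USING big_table_status_idx;  -- PostgreSQL 8.3+ syntax

    -- The no-op column rewrite Gregory mentions: rewrites and compacts the
    -- table like CLUSTER, but without ordering it by an index. Assumes the
    -- id column is already an integer, so the data itself is unchanged.
    ALTER TABLE big_table ALTER COLUMN id TYPE integer USING id;

    -- Re-analyze after any full rewrite so the planner has fresh statistics.
    ANALYZE big_table;

As the thread notes, VACUUM FULL remains the fallback when there is not enough free disk space to hold two copies of the table at once.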