Re: Massive delete from a live production DB
From | Tomas Vondra |
---|---|
Subject | Re: Massive delete from a live production DB |
Msg-id | 4DCC3E7E.6000208@fuzzy.cz |
In reply to | Massive delete from a live production DB (Phoenix Kiula <phoenix.kiula@gmail.com>) |
List | pgsql-general |
On 12.5.2011 16:23, Phoenix Kiula wrote:
> Hi
>
> Been reading some old threads (pre 9.x version) and it seems that the
> consensus is to avoid doing massive deletes from a table as it'll
> create so much unrecoverable space/gaps that vacuum full would be
> needed. Etc.
>
> Instead, we might as well do a dump/restore. Faster, cleaner.
>
> This is all well and good, but what about a situation where the
> database is in production and cannot be brought down for this
> operation or even a cluster?
>
> Any ideas on what I could do without losing all the live updates? I
> need to get rid of about 11% of 150 million rows, with each row
> being nearly 1 to 5 KB in size...
>
> Thanks! Version is 9.0.4.

One of the usual recipes in such cases is partitioning. If you can divide the data so that a delete is equivalent to dropping a partition, then you don't need to worry about vacuum etc. But partitioning has its own problems - you can't reference the partitioned table using foreign keys, the query plans are often not as efficient as with a non-partitioned table, etc.

regards
Tomas
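For readers of the archive: a minimal sketch of what the drop-a-partition approach could look like on 9.0, which only had inheritance-based partitioning (declarative partitioning arrived much later). All table and column names here are hypothetical, and this assumes the rows to be deleted line up with a natural range key such as a date:

```sql
-- Hypothetical parent table; children inherit its columns.
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at date   NOT NULL,
    payload    text
);

-- One child per month. The CHECK constraints let the planner skip
-- irrelevant partitions when constraint_exclusion is enabled.
CREATE TABLE events_2011_04 (
    CHECK (created_at >= DATE '2011-04-01' AND created_at < DATE '2011-05-01')
) INHERITS (events);

CREATE TABLE events_2011_05 (
    CHECK (created_at >= DATE '2011-05-01' AND created_at < DATE '2011-06-01')
) INHERITS (events);

-- Retiring a month's data is then a near-instant metadata operation
-- that leaves no dead tuples behind, instead of a row-by-row DELETE
-- that bloats the table and forces heavy vacuuming:
DROP TABLE events_2011_04;
```

Note that inserts must be routed to the right child (typically via a trigger on the parent), which is part of the extra complexity Tomas mentions.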