vacuuming very large table problem
From | if |
---|---|
Subject | vacuuming very large table problem |
Date | |
Msg-id | f37dba8b0802150256x4bfe638duc921ef686b623e04@mail.gmail.com |
Responses | Re: vacuuming very large table problem |
List | pgsql-admin |
Hello list!

We use PostgreSQL as a backend to our email gateway and keep all emails in the database. We run PostgreSQL 7.4.8 (yes, I know it's old) with a rather specific table schema (the application was designed that way): every email is split into 2 kB parts and fed into pg_largeobject.

So, long story short, I now have a catch-22 situation: the database uses about 0.7 TB and we are running out of space ;-) I can delete some old data, but I cannot run VACUUM FULL to reclaim disk space (it takes far more than a full weekend), and I also cannot dump/restore, because there is no free space for a second copy of the database (2x the database size).

With these restrictions, I figured I could somehow zero out all the old entries in pg_largeobject, or even physically delete its files and rebuild all the necessary indexes. What is the best way to do this? As I understand it, dd'ing /dev/zero over these files will make Postgres reinitialize the empty blocks, and afterwards I would still need a VACUUM FULL over 0.7 TB, am I right? And if I delete the files and then start the postmaster, there will be lots of complaining, but will the latest data survive? How can I delete, for instance, the first 70% of the data reasonably fast?

P.S. Please CC me, as I am not subscribed yet. Thanks in advance!

regards,
if
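One way to approach the "delete the first 70% reasonably fast" question is to unlink the old large objects in batches and rely on a plain VACUUM instead of VACUUM FULL. The sketch below assumes a hypothetical application table messages with columns msg_lo (the large object OID) and received (a timestamp); the poster's actual schema is not given, so these names are placeholders.

  -- Hypothetical schema for illustration only: a table "messages" with
  -- "msg_lo" (OID of the large object holding the mail) and "received"
  -- (timestamp of the email).

  BEGIN;

  -- lo_unlink() deletes the object's 2 kB chunks from pg_largeobject.
  -- Run this over a bounded slice of old mail rather than everything at
  -- once, to keep the transaction size manageable.
  SELECT lo_unlink(msg_lo)
    FROM messages
   WHERE received < '2007-06-01';   -- example cutoff, adjust as needed

  -- Drop the application rows that referenced the unlinked objects.
  DELETE FROM messages
   WHERE received < '2007-06-01';

  COMMIT;

  -- A plain VACUUM (not FULL) does not take a long exclusive lock. It
  -- will not shrink the files on disk, but it marks the freed pages as
  -- reusable, so the table stops growing as new mail arrives. Vacuuming
  -- the pg_largeobject system catalog requires superuser privileges.
  VACUUM ANALYZE pg_largeobject;

The freed space only goes back to the operating system with VACUUM FULL or a dump/reload, but after a plain VACUUM it is reused in place, which may be enough to stop the growth. Any remaining bloat in pg_largeobject's index would need a REINDEX; check the 7.4 documentation for its locking and privilege requirements on system catalogs.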