Hi, all.
Just an example. I have a table in an ISP billing database which grows
every 2 months to contain up to 35,000,000 records, taking 13 GB of
disk space. At that size a nightly 'vacuum analyze' is inefficient,
because afterwards the table still continues to grow (though not
very fast).
When I try 'vacuum full' it takes too long: I can only wait 5-6 hours
(and that is not enough), because it locks the table and the number of
processes waiting to do their inserts becomes too high.
So it is much faster (40-50 mins) to dump the entire database, then
drop it, recreate it and restore it.
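The cycle I mean is roughly the following (the database name "billing"
and the dump path are placeholders, not my real names); it is wrapped
in a function here only so it is not run by accident:

```shell
# Sketch of the dump/drop/recreate/restore cycle.
# "billing" and /tmp are placeholders; requires a running PostgreSQL server.
rebuild_db() {
  local db="$1" dump="/tmp/$1.dump"
  pg_dump -Fc "$db" > "$dump"   # compressed custom-format dump of the whole DB
  dropdb "$db"                  # drop the bloated database
  createdb "$db"                # recreate it empty
  pg_restore -d "$db" "$dump"   # restore; every table is rewritten compactly
}
# Invoke as: rebuild_db billing
```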
I know that 'vacuum_mem = 65536' is not enough to do 'vacuum full'
fast enough, but I want to ask: if I decide to increase that value,
will 512 MB, for example, be better?
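For reference, vacuum_mem is given in kilobytes in postgresql.conf, so
the change I am considering would look like this:

```
# postgresql.conf -- vacuum_mem is in kilobytes
#vacuum_mem = 65536    # current setting (64 MB)
vacuum_mem = 524288    # 512 MB, the value I am asking about
```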
Are there any other init parameters that can help me?
Or, for such an amount of data, will dump/recreate/restore be the
best way?
WBR, Eugene.