Hi,
The system we are building is intended for use in a number of different applications, so our testing is primarily aimed at stressing the system by running it through its paces and uncovering any weaknesses. I prefer to find as many problems as possible now, rather than in production. ;-)
For the current application set I'm testing, I expect we won't need to do much VACUUMing, as it will be a fairly static dataset used only for querying (once all the data is loaded). I know that we will be running some databases with fairly rapid throughput (100 GB/day), but if plain VACUUM will do (as I expect), then we'll probably just stick to that. I don't have time to test that until next month, though.
I do find it odd, however, that PostgreSQL recommends a VACUUM FULL (in the hint printed as a result of running the plain VACUUM), especially if, as it seems, VACUUM FULL doesn't work for tables beyond a certain size. Assuming we have not set something up completely wrong, this looks like a bug.
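For reference, the two commands we are comparing look like this (the table name is just a placeholder, and the lock behavior described is my understanding of the 8.x releases we are running):

```sql
-- Routine maintenance: marks dead row versions as reusable and
-- refreshes planner statistics; does not take an exclusive lock,
-- so it can run alongside normal queries.
VACUUM ANALYZE my_large_table;

-- Aggressive space reclaim: compacts the table and returns space
-- to the operating system, but holds an ACCESS EXCLUSIVE lock for
-- the duration. This is the step that fails for us on large tables.
VACUUM FULL my_large_table;
```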
If this is the wrong mailing list to be posting this, then please let me know.
Regards,
Michael Akinde
Database Architect, Met.no
Usama Dar wrote:
On Jan 7, 2008 2:40 PM, Michael Akinde <michael.akinde@met.no> wrote:
As suggested, I tested a VACUUM FULL ANALYZE with 128MB shared_buffers
and 512 MB reserved for maintenance_work_mem (on a 32 bit machine with 4
GB RAM).
My apologies if my question seems redundant or something you have already discussed with list members, but why do you need to do a VACUUM FULL? Have you not vacuumed for a while, or do you have some special requirement for very aggressive space reclaim? VACUUM FULL is also known to cause some index bloat at times. Most systems I know of run regular vacuums and have never needed a VACUUM FULL.
--
Usama Munir Dar http://www.linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar