Re: Size for vacuum_mem
From | Francisco Reyes
---|---
Subject | Re: Size for vacuum_mem
Date |
Msg-id | 200212051754.gB5HsTo15202@mx2.drf.com
In reply to | Size for vacuum_mem (Francisco Reyes <lists@natserv.com>)
List | pgsql-general
On 5 Dec 2002, Neil Conway wrote:

> For these, it would probably be faster to TRUNCATE the table and then
> load the new data, then ANALYZE.

I can't, because those tables would not be usable during the load. Right now I do the delete/copy from within a transaction. If the loads are still running when people come in in the morning, they can still do work.

> > while other tables I delete/reload about 1/3 (i.e. in a
> > 7 million record table I delete/copy 1.5 million records).
>
> For these, you can try just using a plain VACUUM and seeing how
> effective that is at reclaiming space.

I am not too concerned with space reclamation. In theory, if I don't do VACUUM FULLs I may have some dead space, but it would get re-used daily. My concern is the performance hit I would take from the table scans.

> If necessary, increase max_fsm_pages.

What is this setting for? To what number could I increase it?

> You might also want to check and see if your indexes are growing in size
> (btree indexes on incrementally increasing values like timestamps can
> grow, even with VACUUM FULL); use REINDEX if that's the case.

Every once in a while I truncate the tables and re-load the whole set, probably about every couple of months.
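The delete/copy-within-a-transaction approach described above can be sketched roughly as follows; the table and file names here are hypothetical, not from the original message:

```sql
-- Hypothetical nightly reload. Readers continue to see the old rows
-- until COMMIT, because the DELETE and COPY run in one transaction.
BEGIN;
DELETE FROM daily_stats;                        -- drop yesterday's rows
COPY daily_stats FROM '/data/daily_stats.txt';  -- bulk-load the new file
COMMIT;

-- Refresh planner statistics after the reload.
ANALYZE daily_stats;
```

Unlike TRUNCATE, the DELETE leaves the old rows visible to concurrent readers until the transaction commits, which is why the table stays usable during the load.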
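For context on the max_fsm_pages question above: it sets how many disk pages the shared free-space map can track, and dead space found by a plain VACUUM can only be reused if the pages containing it fit in that map. A postgresql.conf sketch; the values shown are illustrative, not recommendations:

```
# postgresql.conf -- free-space-map sizing (values are illustrative)
max_fsm_pages = 100000      # disk pages with free space that VACUUM can track
max_fsm_relations = 1000    # number of relations tracked in the map
```

Changing these requires a server restart; a reasonable starting point is to size max_fsm_pages to cover the pages turned over between vacuums.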
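The index-bloat check suggested in the quoted text could look like this; the table and index names are hypothetical:

```sql
-- Compare on-disk size (in 8 KB pages) of a table and one of its indexes.
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('daily_stats', 'daily_stats_ts_idx');

-- If the index has grown out of proportion to the table, rebuild it.
REINDEX INDEX daily_stats_ts_idx;
```

Note that REINDEX takes a lock on the table, so it is best run during a quiet period.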