Re: Full vacuuming of BIG tables takes too long
From | Thierry Missimilly |
---|---|
Subject | Re: Full vacuuming of BIG tables takes too long |
Date | |
Msg-id | 3ECCB856.C3DB2926@BULL.NET |
In response to | Full vacuuming of BIG tables takes too long ("Eugene M. Zheganin" <emz@norma.perm.ru>) |
List | pgsql-admin |
Hi,

I don't have a solution, but:

1) Is your system spending time in I/O wait while vacuum analyze is running? Perhaps you can save time by increasing I/O throughput.

2) In the dump/recreate/restore alternative, do you recreate the foreign keys? This step takes a long time (depending on your database schema). I have tried this scenario: dump data / drop foreign keys / truncate tables / restore / recreate the foreign keys. The recreate-FK step takes twice as long as the four preceding steps combined.

Thierry

"Eugene M. Zheganin" wrote:
> Hi, all.
>
> Just an example. I have a table in an ISP billing database which, every 2
> months, grows to contain up to 35,000,000 records, taking 13 GB of disk
> space. At that size, a nightly 'vacuum analyze' is inefficient, because
> afterwards the table continues to grow (though not very fast).
>
> 'vacuum full' takes too long: I can wait only 5-6 hours (and that is not
> enough), because it locks the table and the number of processes awaiting
> their inserts becomes too high. It is much faster (40-50 minutes) to dump
> the entire database, then drop it, recreate it, and restore it.
>
> I know that 'vacuum_mem = 65536' is not enough to make 'vacuum full' fast;
> if I decide to increase that number, would 512 MB, for example, be better?
>
> Are there any other init parameters that can help me?
>
> Or, for this amount of data, is dump/recreate/restore the best way?
>
> WBR, Eugene.
>
> ---------------------------(end of broadcast)---------------------------
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faqs/FAQ.html
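The dump / drop-FK / truncate / restore / recreate-FK scenario described above can be sketched with standard pg_dump and psql commands. This is only an illustrative outline: the database name `billing`, table `traffic_log`, constraint `traffic_log_user_fk`, and the referenced `users` table are hypothetical names, not from this thread.

```shell
# Sketch of the dump / drop-FK / truncate / restore / recreate-FK scenario.
# All object names here are hypothetical examples.

# 1. Dump the table's data only (the schema stays in place).
pg_dump --data-only --table=traffic_log billing > traffic_log.dump

# 2. Drop the foreign key so the restore is not slowed by per-row checks.
psql billing -c "ALTER TABLE traffic_log DROP CONSTRAINT traffic_log_user_fk;"

# 3. Truncate the table; this reclaims all its space immediately.
psql billing -c "TRUNCATE TABLE traffic_log;"

# 4. Restore the data.
psql billing -f traffic_log.dump

# 5. Recreate the foreign key. As noted above, this single step can take
#    about twice as long as the previous four combined, since it must
#    validate every restored row against the referenced table.
psql billing -c "ALTER TABLE traffic_log ADD CONSTRAINT traffic_log_user_fk \
  FOREIGN KEY (user_id) REFERENCES users (id);"
```

Running the restore without the foreign key in place is what makes this approach fast: each row is checked once in bulk at constraint-creation time rather than individually during the load.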