Re: random observations while testing with a 1,8B row table
From | Tom Lane |
---|---|
Subject | Re: random observations while testing with a 1,8B row table |
Date | |
Msg-id | 8043.1142020450@sss.pgh.pa.us |
In reply to | Re: random observations while testing with a 1,8B row table (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>) |
Responses |
Re: random observations while testing with a 1,8B row table
Re: random observations while testing with a 1,8B row table |
List | pgsql-hackers |
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
>>> 3. vacuuming this table - it turned out that VACUUM FULL is completely
>>> unusable on a table of this size (which I actually expected before), not
>>> only due to the locking involved but rather due to a gigantic memory
>>> requirement and unbelievable slowness.

> sure, that was mostly meant as an experiment, if I had to do this on a
> production database I would most likely use CLUSTER to get the desired
> effect (which in my case was purely getting back the diskspace wasted by
> dead tuples)

Yeah, the VACUUM FULL algorithm is really designed for situations where
just a fraction of the rows have to be moved to re-compact the table.
It might be interesting to teach it to abandon that plan and go over to a
CLUSTER-like table rewrite once the percentage of dead space is seen to
reach some suitable level. CLUSTER has its own disadvantages though
(2X peak disk space usage, doesn't work on core catalogs, etc).

			regards, tom lane
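For reference, the CLUSTER-based space reclamation Stefan describes can be sketched as follows, in current PostgreSQL syntax; the table and index names here are hypothetical, not taken from the thread:

```sql
-- CLUSTER rewrites the whole table in index order, dropping dead tuples
-- and returning the wasted space to the operating system.  The trade-offs
-- Tom mentions apply: it takes an ACCESS EXCLUSIVE lock for the duration,
-- and old and new copies coexist on disk, so peak usage is roughly 2X.
CLUSTER bigtable USING bigtable_pkey;

-- Refresh planner statistics after the rewrite.
ANALYZE bigtable;
```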