Re: random observations while testing with a 1,8B row table
From | Stefan Kaltenbrunner
---|---
Subject | Re: random observations while testing with a 1,8B row table
Date |
Msg-id | 4412906D.7060200@kaltenbrunner.cc
In reply to | Re: random observations while testing with a 1,8B row table (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
Tom Lane wrote:
> Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
>>>> 3. vacuuming this table - it turned out that VACUUM FULL is
>>>> completely unusable on a table of this size (which I actually
>>>> expected beforehand), not only due to the locking involved but
>>>> rather due to a gigantic memory requirement and unbelievable
>>>> slowness.
>
>> sure, that was mostly meant as an experiment; if I had to do this on
>> a production database I would most likely use CLUSTER to get the
>> desired effect (which in my case was purely getting back the disk
>> space wasted by dead tuples)
>
> Yeah, the VACUUM FULL algorithm is really designed for situations where
> just a fraction of the rows have to be moved to re-compact the table.
> It might be interesting to teach it to abandon that plan and go to a
> CLUSTER-like table rewrite once the percentage of dead space is seen to
> reach some suitable level. CLUSTER has its own disadvantages though
> (2X peak disk space usage, doesn't work on core catalogs, etc).

hmm, very interesting idea - I like it. But from what I have seen, people quite often use VACUUM FULL to get their disk usage down _because_ they are running low on space (and because it's not that well known that CLUSTER can be much faster) - maybe we should add a note/hint about this to the maintenance/vacuum docs at least?

Stefan
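To make the trade-off concrete, here is a minimal sketch of the CLUSTER approach discussed above. The table bigtable and its index bigtable_pkey are hypothetical names chosen for illustration; the CLUSTER syntax shown is the older index-first form (newer releases also accept CLUSTER bigtable USING bigtable_pkey):

    -- Rewrite bigtable in bigtable_pkey order. CLUSTER writes a fresh
    -- copy of the live tuples and then drops the old heap, so the dead
    -- space is returned to the operating system. It needs roughly 2X
    -- the live data in free disk space and holds an exclusive lock on
    -- the table for the duration.
    CLUSTER bigtable_pkey ON bigtable;

    -- By contrast, a plain (non-FULL) VACUUM only marks dead space as
    -- reusable within the table and does not shrink the files, while
    -- VACUUM FULL compacts in place by moving tuples one by one, which
    -- is what proved so slow on a table of this size.
    VACUUM ANALYZE bigtable;

Writing a fresh copy and dropping the old heap is exactly why CLUSTER is so much faster than VACUUM FULL when most of the table is dead space, and also exactly why it is unusable when the disk is already nearly full - the trade-off Tom points out above.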