Re: Freezing tuples on pages dirtied by vacuum
From | Jim Nasby
---|---
Subject | Re: Freezing tuples on pages dirtied by vacuum
Date |
Msg-id | 49A6BB64-F055-4694-B58A-2C780975E270@pervasive.com
In reply to | Re: Freezing tuples on pages dirtied by vacuum (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Jul 21, 2006, at 9:03 AM, Tom Lane wrote:

>> One possibility is that early freeze is at 1B transactions and we push
>> forced-freeze back to 1.5B transactions (the current forced-freeze at 1B
>> transactions seems rather aggressive anyway, now that the server will
>> refuse to issue new commands rather than lose data due to wraparound).
>
> No, the freeze-at-1B rule is the maximum safe delay. Read the docs.
> But we could do early freeze at 0.5B and forced freeze at 1B and
> probably still get the effect you want.
>
> However, I remain unconvinced that this is a good idea. You'll be
> adding very real cycles to regular vacuum processing (to re-scan tuples
> already examined) in hopes of obtaining a later savings that is really
> pretty hypothetical. Where is your evidence that writes caused solely
> by tuple freezing are a performance issue?

I didn't think vacuum would be a CPU-bound process, but is there any way
to gather that evidence right now? What about adding some verbiage to
VACUUM VERBOSE that reports how many pages were dirtied to freeze
tuples? That seems like useful information to have, and it would help
establish whether this is worth worrying about.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
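[Editor's note: the thresholds discussed above (early freeze vs. forced anti-wraparound freeze) can be observed on a current PostgreSQL release with the sketch below. It is illustrative only and assumes features that postdate this 2006 thread (relfrozenxid in pg_class, the autovacuum_freeze_max_age setting, and the parenthesized VACUUM option syntax); some_table is a placeholder name.]

```sql
-- Minimal sketch, assuming a current PostgreSQL release.
-- How far each table's oldest unfrozen XID lags behind the current XID,
-- i.e. how close it is to a forced (anti-wraparound) freeze pass.
SELECT relname,
       age(relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::int AS forced_freeze_at
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;

-- Freeze eligible tuples during a manual vacuum and print per-table
-- statistics about the work performed (some_table is hypothetical).
VACUUM (VERBOSE, FREEZE) some_table;
```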