Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? )
From | Greg Stark |
---|---|
Subject | Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) |
Date | |
Msg-id | 407d949e0908131631j3a96f1dbl7c467b6faa790b44@mail.gmail.com |
In reply to | Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) (Tom Lane <tgl@sss.pgh.pa.us>) |
List | pgsql-hackers |
On Fri, Aug 14, 2009 at 12:21 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I was envisioning, if the page is already dirty and in memory *for any
>> reason*, then freeze rows at below some threshold.
>
> I believe we've had this discussion before. I do *NOT* want freezing
> operations pushed into any random page access, and in particular will
> do my best to veto any attempt to put them into the bgwriter.

It's possible Josh accidentally waved this red flag and really meant just to make it conditional on whether the page is dirty, rather than on whether vacuum dirtied it. However, he did give me a thought...

With the visibility map, vacuum currently only covers pages that are known to have in-doubt tuples. That's why we have the anti-wraparound vacuums. However, it could also check whether the pages it's skipping are in memory, and process them if they are, even if they don't have in-doubt tuples.

Or it could first go through RAM and process any pages that are in cache before going to the visibility map and starting from page 0. That would hopefully avoid having to read them in later, when we get to them and find they've been flushed out.

I'm just brainstorming here. I'm not sure whether either of these is actually worth the complexity and the danger of finding new bottlenecks in special-case optimization codepaths.

--
greg
http://mit.edu/~gsstark/resume.pdf
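To make the opportunistic-freezing idea above concrete, here is a minimal standalone sketch of the decision rule only. This is not PostgreSQL code: the PageState struct, its fields, and the helper functions are illustrative assumptions, standing in for what a real vacuum would learn from the visibility map and shared buffers.

/*
 * Sketch (assumed names, not PostgreSQL internals): pages with in-doubt
 * tuples are vacuumed and frozen past the age threshold as usual; pages
 * the visibility map says are all-visible get frozen only when they are
 * already resident and dirty in cache, so no extra I/O is incurred.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    bool     all_visible;    /* visibility map: no in-doubt tuples */
    bool     in_cache;       /* page currently resident in the buffer cache */
    bool     dirty;          /* page already dirtied by someone else */
    uint32_t oldest_xid_age; /* age of the oldest unfrozen xid on the page */
} PageState;

/* Would a visibility-map-driven vacuum visit this page at all? */
static bool
needs_vacuum(const PageState *p)
{
    return !p->all_visible;
}

/*
 * The opportunistic rule: freeze if we are touching the page anyway,
 * or if it is already dirty in cache and old enough to be worth it.
 */
static bool
should_freeze(const PageState *p, uint32_t freeze_min_age)
{
    if (needs_vacuum(p))
        return p->oldest_xid_age >= freeze_min_age;
    return p->in_cache && p->dirty && p->oldest_xid_age >= freeze_min_age;
}

int
main(void)
{
    PageState pages[] = {
        { false, true,  true,  200 },  /* normal vacuum target */
        { true,  true,  true,  150 },  /* skipped today; frozen here for free */
        { true,  false, false, 150 },  /* skipped: would require a read */
        { true,  true,  false,  10 },  /* skipped: too young to bother */
    };
    uint32_t freeze_min_age = 100;

    for (int i = 0; i < 4; i++)
        printf("page %d: vacuum=%d freeze=%d\n",
               i, needs_vacuum(&pages[i]),
               should_freeze(&pages[i], freeze_min_age));
    return 0;
}

The interesting trade-off is the third case: an all-visible page that is not in cache is left alone, which is exactly what defers the work to a later anti-wraparound vacuum.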