Re: Bgwriter behavior
From:        Tom Lane
Subject:     Re: Bgwriter behavior
Date:
Msg-id:      8110.1103643888@sss.pgh.pa.us
In reply to: Bgwriter behavior  (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses:   Re: Bgwriter behavior
             Re: Bgwriter behavior
List:        pgsql-hackers

Bruce Momjian <pgman@candle.pha.pa.us> writes:
> First, we remove the GUC bgwriter_maxpages because I don't see a good
> way to set a default for that.  A default value needs to be based on a
> percentage of the full buffer cache size.

This is nonsense.  The admin knows what he set shared_buffers to, so maxpages and percent-of-shared-buffers are not really distinct ways of specifying things.  A percent spec is useful only when (a) it is a percent of a non-constant number (e.g., the total number of dirty pages, as in the current code), or (b) it is defined in a way that lets it limit the amount of scanning work done (which it is not, in the current code).  But a maxpages spec works for (b) too.  More to the point, maxpages sets a hard limit on the amount of I/O generated by the bgwriter, and I think people will want to be able to do that.

> Now, to control the bgwriter frequency we multiply the percent of the
> list it had to span by the bgwriter_delay value to determine when to
> run bgwriter next.

I'm less than enthused about this.  The idea of the bgwriter is to trickle out writes in a way that doesn't affect overall performance too much, not to write everything in sight at any cost.  I like the hybrid approach better: keep the bottom of the ARC list clean, plus do a slow clock scan over the main buffer array.  I can see how that directly serves both of the bgwriter's goals.  I don't see how a variable I/O rate really improves life on either score; it just makes things harder to predict.

			regards, tom lane
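
To make the maxpages-versus-percent point concrete, here is a toy sketch of a single bgwriter round (illustrative only, not PostgreSQL source; Buffer, buf_is_dirty, write_buffer, and bgwriter_round are made-up names).  The percent spec only yields a moving target because the dirty-buffer count changes from round to round; the maxpages spec is what puts a hard ceiling on the I/O issued per round.

    /*
     * Toy sketch of one bgwriter round: write out at most
     * bgwriter_percent % of the currently dirty buffers, but never
     * more than bgwriter_maxpages in a single round.
     */
    #include <stdbool.h>

    typedef struct Buffer { bool dirty; } Buffer;

    /* Hypothetical stand-ins for real buffer-manager calls. */
    static bool buf_is_dirty(const Buffer *b) { return b->dirty; }
    static void write_buffer(Buffer *b)       { b->dirty = false; }

    int
    bgwriter_round(Buffer *buffers, int nbuffers,
                   int bgwriter_percent, int bgwriter_maxpages)
    {
        int ndirty = 0;
        int target;
        int written = 0;
        int i;

        for (i = 0; i < nbuffers; i++)
            if (buf_is_dirty(&buffers[i]))
                ndirty++;

        /* percent of a non-constant number: the current dirty count */
        target = ndirty * bgwriter_percent / 100;

        /* maxpages is the hard cap on I/O generated this round */
        if (bgwriter_maxpages > 0 && target > bgwriter_maxpages)
            target = bgwriter_maxpages;

        for (i = 0; i < nbuffers && written < target; i++)
        {
            if (buf_is_dirty(&buffers[i]))
            {
                write_buffer(&buffers[i]);
                written++;
            }
        }
        return written;
    }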
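
For the scheduling question, a minimal sketch of the two policies being contrasted, as read from the thread (names and units are assumptions, not code from the discussion): the proposal scales the next sleep by the fraction of the list the bgwriter had to span, whereas a fixed bgwriter_delay keeps the wakeup rate, and hence the worst-case write rate under a maxpages cap, predictable.

    /*
     * Variable delay as proposed: scale the next sleep by the fraction
     * of the buffer list spanned this round.  E.g. spanning 25% of the
     * list with bgwriter_delay = 200ms gives a 50ms sleep.
     */
    int
    next_delay_variable(int buffers_spanned, int total_buffers,
                        int bgwriter_delay_ms)
    {
        return bgwriter_delay_ms * buffers_spanned / total_buffers;
    }

    /*
     * Fixed delay: wake at a constant rate, so the I/O issued per unit
     * of time stays easy to predict.
     */
    int
    next_delay_fixed(int bgwriter_delay_ms)
    {
        return bgwriter_delay_ms;
    }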