Re: Checkpointer on hot standby runs without looking checkpoint_segments
From: Florian Pflug
Subject: Re: Checkpointer on hot standby runs without looking checkpoint_segments
Date:
Msg-id: 91EF619D-8C38-4ABB-8F29-33FEEF600DEE@phlo.org
In reply to: Re: Checkpointer on hot standby runs without looking checkpoint_segments (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Checkpointer on hot standby runs without looking checkpoint_segments
           Re: Checkpointer on hot standby runs without looking checkpoint_segments
List: pgsql-hackers
On Jun8, 2012, at 15:47 , Robert Haas wrote:
> On Fri, Jun 8, 2012 at 5:02 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> On 8 June 2012 09:14, Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
>>
>>> The requirement for this patch is as follows.
>>>
>>> - What I want to get is similarity of the behaviors between
>>> master and (hot-)standby concerning checkpoint
>>> progression. Specifically, checkpoints for streaming
>>> replication running at the speed governed with
>>> checkpoint_segments. The work of this patch is avoiding to get
>>> unexpectedly large number of WAL segments stay on standby
>>> side. (Plus, increasing the chance to skip recovery-end
>>> checkpoint by my another patch.)
>>
>> Since we want wal_keep_segments number of WAL files on master (and
>> because of cascading, on standby also), I don't see any purpose to
>> triggering more frequent checkpoints just so we can hit a magic number
>> that is most often set wrong.
>
> This is a good point. Right now, if you set checkpoint_segments to a
> large value, we retain lots of old WAL segments even when the system
> is idle (cf. XLOGfileslop). I think we could be smarter about that.
> I'm not sure what the exact algorithm should be, but right now users
> are forced between setting checkpoint_segments very large to achieve
> optimum write performance and setting it small to conserve disk space.
> What would be much better, IMHO, is if the number of retained
> segments could ratchet down when the system is idle, eventually
> reaching a state where we keep only one segment beyond the one
> currently in use.

I'm a bit sceptical about this. It seems to me that you wouldn't actually be able to do anything useful with the conserved space, since postgres could re-claim it at any time. At which point it'd better be available, or your whole cluster comes to a screeching halt...

best regards,
Florian Pflug
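[Editor's note: for readers who want to see the retention rule under discussion spelled out, below is a minimal standalone sketch, not PostgreSQL source. It assumes the 9.2-era definition XLOGfileslop = 2 * CheckPointSegments + 1 from xlog.c; the adaptive_file_slop() variant is purely hypothetical and only illustrates Robert's "ratchet down when idle" idea, while should_recycle() illustrates the kind of within-slop check that decides whether an old segment is recycled rather than removed.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static int CheckPointSegments = 32;    /* stand-in for the checkpoint_segments GUC */

/* Fixed slop, as in the era under discussion: 2 * checkpoint_segments + 1. */
static uint64_t
xlog_file_slop(void)
{
    return 2 * (uint64_t) CheckPointSegments + 1;
}

/*
 * Hypothetical variant (not an existing function): shrink the slop toward
 * one spare segment while the system writes little WAL between checkpoints.
 */
static uint64_t
adaptive_file_slop(uint64_t segs_since_last_checkpoint)
{
    uint64_t slop = xlog_file_slop();

    if (segs_since_last_checkpoint < slop)
        slop = segs_since_last_checkpoint + 1;   /* ratchet down when idle */
    return slop;
}

/* Keep (recycle) an old segment only if it is still within the slop window. */
static bool
should_recycle(uint64_t old_segno, uint64_t current_segno, uint64_t slop)
{
    return old_segno + slop >= current_segno;
}

int
main(void)
{
    uint64_t current = 1000;    /* segment currently being written */

    printf("fixed slop: keep segment 940? %d\n",
           should_recycle(940, current, xlog_file_slop()));
    printf("adaptive slop after idle period: keep segment 940? %d\n",
           should_recycle(940, current, adaptive_file_slop(2)));
    return 0;
}
```

With the fixed slop of 65 segments the old segment is kept; with the hypothetical adaptive slop it would be removed, which conserves disk space but, as Florian points out, only until the next burst of activity forces the segments to be recreated.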