Re: Load Distributed Checkpoints, take 3
| From | Tom Lane |
|---|---|
| Subject | Re: Load Distributed Checkpoints, take 3 |
| Date | |
| Msg-id | 4749.1182384833@sss.pgh.pa.us |
| In reply to | Load Distributed Checkpoints, take 3 (Heikki Linnakangas <heikki@enterprisedb.com>) |
| Responses | Re: Load Distributed Checkpoints, take 3 |
| | Re: Load Distributed Checkpoints, take 3 |
| List | pgsql-patches |
Heikki Linnakangas <heikki@enterprisedb.com> writes:
> I added a spinlock to protect the signaling fields between bgwriter and
> backends. The current non-locking approach gets really difficult as the
> patch adds two new flags, and both are more important than the existing
> ckpt_time_warn flag.

That may be, but you could minimize the cost and notational ugliness by
not using the spinlock where you don't have to. Put the sig_atomic_t
fields back the way they were, and many of the uses of the spinlock go
away. All you really need it for is the places where the additional
flags are set or read.

> In fact, I think there's a small race condition in CVS HEAD:

Yeah, probably --- the original no-locking design didn't have any side
flags. The reason you need the lock is for a backend to be sure that a
newly-started checkpoint is using its requested flags. But the detection
of checkpoint termination is still the same.

Some other comments:

I tend to agree with whoever said upthread that the combination of GUC
variables proposed here is confusing and ugly. It'd make more sense to
have min and max checkpoint rates in KB/s, with the max checkpoint rate
only honored when we are predicting we'll finish before the next
checkpoint time.

The flow of control and responsibility between xlog.c, bgwriter.c and
bufmgr.c seems to have reached the breaking point of unintelligibility.
Can you think of a refactoring that would simplify it? We might have to
resign ourselves to this much messiness, but I'd rather not...

			regards, tom lane