Re: Change BgWriterCommLock to spinlock
From: Qingqing Zhou
Subject: Re: Change BgWriterCommLock to spinlock
Date:
Msg-id: Pine.LNX.4.58.0601081945110.4796@eon.cs
In reply to: Re: Change BgWriterCommLock to spinlock (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Change BgWriterCommLock to spinlock
List: pgsql-patches
On Sun, 8 Jan 2006, Tom Lane wrote:

> If you want the bgwriter to keep working in the face of an out-of-memory
> condition in the hashtable, I think you'd have to change the coding so
> that it takes requests one at a time from the queue.

The patched version will issue ERROR instead of PANIC in this condition, so the bgwriter can still keep running. I don't quite understand what you mean by "want the bgwriter to keep working" -- do you mean not issuing an ERROR but retrying instead? An ERROR is not avoidable unless we change the out-of-memory handling logic inside the hashtable.

> Another issue to keep in mind is that correct operation requires that
> the bgwriter not declare a checkpoint complete until it's completed
> every fsync request that was queued before the checkpoint started.
> So if the bgwriter is to try to keep going after failing to absorb
> all the pending requests, there would have to be some logic addition
> to keep track of whether it's OK to complete a checkpoint or not.

As above, if the bgwriter fails to absorb the requests, it will quit the job (and the checkpoint will not be finished). Do you think it makes sense to continue working on the patch, or should we let it be?

Regards,
Qingqing