Re: Fix overflow of bgwriter's request queue
From | ITAGAKI Takahiro
Subject | Re: Fix overflow of bgwriter's request queue
Date |
Msg-id | 20060126124821.48A8.ITAGAKI.TAKAHIRO@lab.ntt.co.jp
In reply to | Re: Fix overflow of bgwriter's request queue (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Fix overflow of bgwriter's request queue
List | pgsql-patches
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> ITAGAKI Takahiro <itagaki.takahiro@lab.ntt.co.jp> writes:
> > Attached is a revised patch. It became very simple, but I worry that
> > one magic number (BUFFERS_PER_ABSORB) is still left.
>
> Have you checked that this version of the patch fixes the problem you
> saw originally? Does the problem come back if you change
> BUFFERS_PER_ABSORB to too large a value?

The problem on my machine was resolved by this patch. I tested it and
logged the maximum of BgWriterShmem->num_requests for each checkpoint.
The test conditions were:
  - shared_buffers = 65536
  - connections = 30
The average of the per-checkpoint maximums was 25857 and the overall
maximum was 31807; they never exceeded max_requests (= 65536).

> I suspect it'd probably be sufficient to absorb requests every few times
> through the fsync loop, too, if you want to experiment with that.

In the above test, smgrsync took 50 seconds to sync 32 files. That means
an absorb is requested about every 1.5 seconds, which is less frequent
than the absorbs done by the bgwriter's normal activity
(bgwriter_delay = 200ms). So I assume that absorbing requests in the
fsync loop would not be a problem.

BUFFERS_PER_ABSORB = 10 (one absorb per 1/10 of shared_buffers) is
enough, at least on my machine, but it doesn't necessarily work well in
all environments. If we need to set BUFFERS_PER_ABSORB to a reasonable
value, I think the number of active backends might be useful; for
example, half the number of backends.

---
ITAGAKI Takahiro
NTT Cyber Space Laboratories