Re: [PERFORM] 8.3beta1 testing on Solaris
| From | Jignesh K. Shah |
|---|---|
| Subject | Re: [PERFORM] 8.3beta1 testing on Solaris |
| Date | |
| Msg-id | 4721ED75.7090607@sun.com |
| In reply to | Re: [PERFORM] 8.3beta1 testing on Solaris (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: [PERFORM] 8.3beta1 testing on Solaris |
| List | pgsql-hackers |
I agree with Tom: increasing NUM_CLOG_BUFFERS just pushes the symptom out to a later point, so I will look into it more before recommending any increase. Although "iGen" showed improvements in that area when num_clog_buffers was raised, EAStress showed no improvement. Also, I don't think this is the problem in 8.3beta1, since the lock output clearly does not show CLOGControlFile as the issue, the way it did in the earlier case. So I don't think increasing NUM_CLOG_BUFFERS will change things here.

I don't understand the code all that well yet, but I see three hotspots, and I am not sure whether they are related to each other:

* ProcArrayLock waits - causing waits, as reported by the 83_lockwait.d script
* SimpleLruReadPage - causing read I/Os, as reported by iostat/rsnoop.d
* GetSnapshotData - causing CPU utilization, as reported by hotuser

But I will shut up and do more testing.

Regards,
Jignesh


Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
>
>> Actually, 32 made a significant difference as I recall ... do you still have
>> the figures for that, Jignesh?
>>
>
> I'd want to see a new set of test runs backing up any call for a change
> in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that
> benchmarks using code from a few months back shouldn't carry a lot of
> weight.
>
>             regards, tom lane
>
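[Editor's note: for illustration, here is a minimal pid-provider DTrace sketch of the kind of per-backend measurement described above. It is not the 83_lockwait.d or hotuser script (their contents are not shown in this thread); the function names LWLockAcquire, SimpleLruReadPage, and GetSnapshotData are taken from the PostgreSQL source tree, and whether they are traceable in a given binary depends on how it was built. Attach it to one backend with: dtrace -s backend_hotspots.d -p <backend-pid>]

    #!/usr/sbin/dtrace -s
    /* Hypothetical sketch: total wall-clock time and call counts spent
     * inside three suspect functions of a single PostgreSQL backend.
     * Run as: dtrace -s backend_hotspots.d -p <backend-pid>
     */

    pid$target::GetSnapshotData:entry,
    pid$target::LWLockAcquire:entry,
    pid$target::SimpleLruReadPage:entry
    {
            /* remember when this thread entered the function */
            self->ts[probefunc] = timestamp;
    }

    pid$target::GetSnapshotData:return,
    pid$target::LWLockAcquire:return,
    pid$target::SimpleLruReadPage:return
    /self->ts[probefunc]/
    {
            /* accumulate elapsed nanoseconds and call counts per function */
            @time[probefunc]  = sum(timestamp - self->ts[probefunc]);
            @calls[probefunc] = count();
            self->ts[probefunc] = 0;
    }

    END
    {
            printa("%-20s %@d ns total\n", @time);
            printa("%-20s %@d calls\n", @calls);
    }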