Re: Spinlocks, yet again: analysis and proposed patches
From: Douglas McNaught
Subject: Re: Spinlocks, yet again: analysis and proposed patches
Date:
Msg-id: m2slw99brv.fsf@Douglas-McNaughts-Powerbook.local
In reply to: Re: Spinlocks, yet again: analysis and proposed patches (Greg Stark <gsstark@mit.edu>)
Responses: Re: Spinlocks, yet again: analysis and proposed patches
List: pgsql-hackers
Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>
>> No; that page still says specifically "So a process calling
>> sched_yield() now must wait until all other runnable processes in the
>> system have used up their time slices before it will get the processor
>> again."  I can prove that that is NOT what happens, at least not on
>> a multi-CPU Opteron with current FC4 kernel.  However, if the newer
>> kernels penalize a process calling sched_yield as heavily as this page
>> claims, then it's not what we want anyway ...
>
> Well it would be no worse than select or any other random i/o syscall.
>
> It seems to me what you've found is an outright bug in the linux scheduler.
> Perhaps posting it to linux-kernel would be worthwhile.

People have complained on l-k several times about the 2.6 sched_yield()
behavior; the response has basically been "if you rely on any particular
sched_yield() behavior for synchronization, your app is broken--it's not
a synchronization primitive."

-Doug
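[Editor's note: a minimal sketch of the spin-then-yield pattern the thread is arguing about, NOT PostgreSQL's actual s_lock.c code; the lock type and the SPINS_BEFORE_YIELD constant are made up for illustration.]

/*
 * Illustrative test-and-set spinlock that spins a bounded number of
 * times and then calls sched_yield().  How well this behaves depends
 * entirely on what the kernel's scheduler does with the yielding
 * process, which is the point under debate above.
 */
#include <sched.h>
#include <stdatomic.h>

#define SPINS_BEFORE_YIELD 100      /* arbitrary value for this sketch */

typedef atomic_flag slock_t;

static void
spin_lock(slock_t *lock)
{
    int spins = 0;

    /* atomic_flag_test_and_set returns true while someone else holds the lock */
    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
    {
        if (++spins >= SPINS_BEFORE_YIELD)
        {
            /*
             * Give up the CPU.  Under the 2.6 scheduler the caller may be
             * placed behind every other runnable task, which is why the
             * linux-kernel answer is "sched_yield() is not a
             * synchronization primitive."
             */
            sched_yield();
            spins = 0;
        }
    }
}

static void
spin_unlock(slock_t *lock)
{
    atomic_flag_clear_explicit(lock, memory_order_release);
}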