Re: LWLock contention: I think I understand the problem
From | Tom Lane
---|---
Subject | Re: LWLock contention: I think I understand the problem
Date | |
Msg-id | 13571.1010079672@sss.pgh.pa.us
In reply to | Re: LWLock contention: I think I understand the problem (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses | Re: LWLock contention: I think I understand the problem
List | pgsql-hackers
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> OK, so now we know that while the new lock code handles the select(1)
> problem better, we also know that on AIX the old select(1) code wasn't
> as bad as we thought.

It still seems that the select() blocking method should be a loser. I notice that for AIX, s_lock.h defines TAS() as a call on a system routine cs(). I wonder what cs() actually does and how long it takes. Tatsuo or Andreas, any info?

It might be interesting to try the pgbench tests on AIX with s_lock.c's SPINS_PER_DELAY set to different values (try 10 and 1000 instead of the default 100).

> I believe we don't see improvement on SMP machines using pgbench because
> pgbench, at least at high scaling factors, is really testing disk i/o,
> not backend processing speed.

Good point. I suspect this is even more true on the PC-hardware setups that most of the rest of us are using: we've got these ridiculously fast processors and consumer-grade disks (with IDE interfaces, yet). Tatsuo's AIX setup might have a better CPU-to-I/O throughput balance, but it's probably still ultimately I/O bound in this test. Tatsuo, can you report anything about CPU idle time percentage while you are running these tests?

> It would be interesting to test pgbench
> using scaling factors that allowed most of the tables to sit in shared
> memory buffers. Then, we wouldn't be testing disk i/o and would be
> testing more backend processing throughput. (Tom, is that true?)

Unfortunately, at low scaling factors pgbench is guaranteed to look horrible because of contention for the "branches" rows. I think that it'd be necessary to adjust the ratios of branches, tellers, and accounts rows to make it possible to build a small pgbench database that didn't show a lot of contention.
BTW, I realized over the weekend that the reason performance tails off for more clients is that if you hold tx/client constant, more clients means more total updates executed, which means more dead rows, which means more time spent in unique-index duplicate checks. We know we want to change the way that works, but not for 7.2.

At the moment, the only way to make a pgbench run that accurately reflects the impact of multiple clients, and not the inefficiency of dead index entries, is to scale tx/client down as #clients increases, so that the total number of transactions is the same for all test runs.

regards, tom lane