Re: Spinlock performance improvement proposal
From | Tom Lane
---|---
Subject | Re: Spinlock performance improvement proposal
Date | 
Msg-id | 23098.1001773518@sss.pgh.pa.us
In reply to | Re: Spinlock performance improvement proposal ("Vadim Mikheev" <vmikheev@sectorbase.com>)
List | pgsql-hackers
"Vadim Mikheev" <vmikheev@sectorbase.com> writes: >> I have committed changes to implement this proposal. I'm not seeing >> any significant performance difference on pgbench on my single-CPU >> system ... but pgbench is I/O bound anyway on this hardware, so that's >> not very surprising. I'll be interested to see what other people >> observe. (Tatsuo, care to rerun that 1000-client test?) > What is your system? CPU, memory, IDE/SCSI, OS? > Scaling factor and # of clients? HP C180, SCSI-2 disks, HPUX 10.20. I used scale factor 10 and between 1 and 10 clients. Now that I think about it, I was running with the default NBuffers (64), which probably constrained performance too. > BTW1 - shouldn't we rewrite pgbench to use threads instead of > "libpq async queries"? At least as option. I'd say that with 1000 > clients current pgbench implementation is very poor. Well, it uses select() to wait for activity, so as long as all query responses arrive as single packets I don't see the problem. Certainly rewriting pgbench without making libpq thread-friendly won't help a bit. > BTW2 - shouldn't we learn if there are really portability/performance > issues in using POSIX mutex-es (and cond. variables) in place of > TAS (and SysV semaphores)? Sure, that'd be worth looking into on a long-term basis. regards, tom lane