On 2014-10-10 16:41:39 +0200, Andres Freund wrote:
> FWIW, the profile always looks like
> - 48.61% postgres postgres [.] s_lock
>    - s_lock
>       + 96.67% StrategyGetBuffer
>       + 1.19% UnpinBuffer
>       + 0.90% PinBuffer
>       + 0.70% hash_search_with_hash_value
> + 3.11% postgres postgres [.] GetSnapshotData
> + 2.47% postgres postgres [.] StrategyGetBuffer
> + 1.93% postgres [kernel.kallsyms] [k] copy_user_generic_string
> + 1.28% postgres postgres [.] hash_search_with_hash_value
> - 1.27% postgres postgres [.] LWLockAttemptLock
>    - LWLockAttemptLock
>       - 97.78% LWLockAcquire
>          + 38.76% ReadBuffer_common
>          + 28.62% _bt_getbuf
>          + 8.59% _bt_relandgetbuf
>          + 6.25% GetSnapshotData
>          + 5.93% VirtualXactLockTableInsert
>          + 3.95% VirtualXactLockTableCleanup
>          + 2.35% index_fetch_heap
>          + 1.66% StartBufferIO
>          + 1.56% LockReleaseAll
>          + 1.55% _bt_next
>          + 0.78% LockAcquireExtended
>       + 1.47% _bt_next
>       + 0.75% _bt_relandgetbuf
>
> to me. Now that's with a client count of 496, but it's similar with lower
> counts.
>
> BTW, that profile *clearly* indicates we should make StrategyGetBuffer()
> smarter.
That's nearly trivial now that atomics are in. Check out the attached
WIP patch, which eliminates the spinlock from StrategyGetBuffer() unless
there are buffers on the freelist.
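To illustrate the idea (a minimal sketch only, not the attached patch;
it assumes the pg_atomic_uint32/pg_atomic_fetch_add_u32 API from the
atomics work and the usual BufferStrategyControl fields in
src/backend/storage/buffer/freelist.c): advance the clock hand with an
atomic fetch-and-add, and only take buffer_strategy_lock after an
unlocked peek suggests the freelist is non-empty.

    /*
     * Sketch: advance the clock sweep without buffer_strategy_lock.
     * Assumes nextVictimBuffer has been changed to a pg_atomic_uint32.
     */
    static inline uint32
    ClockSweepTick(void)
    {
        uint32      victim;

        /*
         * Atomic increment; the counter is allowed to wrap past NBuffers
         * and is mapped back into range.  (Keeping completePasses exact
         * for the bgwriter needs extra care, elided here.)
         */
        victim = pg_atomic_fetch_add_u32(&StrategyControl->nextVictimBuffer, 1);
        return victim % NBuffers;
    }

    /*
     * In StrategyGetBuffer(): only take the spinlock if an unlocked
     * (and thus racy) peek says the freelist might be non-empty; the
     * common case skips the lock entirely.
     */
    if (StrategyControl->firstFreeBuffer >= 0)
    {
        SpinLockAcquire(&StrategyControl->buffer_strategy_lock);
        /* ... recheck and pop from the freelist under the lock ... */
        SpinLockRelease(&StrategyControl->buffer_strategy_lock);
    }

The racy firstFreeBuffer peek is safe because it is rechecked under the
spinlock; a stale read only costs a wasted lock acquisition or a missed
freelist hit, never a correctness problem.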
Test:
pgbench -M prepared -P 5 -S -c 496 -j 496 -T 5000
on a scale=1000 database, with 4GB of shared buffers.
Before:
progress: 40.0 s, 136252.3 tps, lat 3.628 ms stddev 4.547
progress: 45.0 s, 135049.0 tps, lat 3.660 ms stddev 4.515
progress: 50.0 s, 135788.9 tps, lat 3.640 ms stddev 4.398
progress: 55.0 s, 135268.4 tps, lat 3.654 ms stddev 4.469
progress: 60.0 s, 134991.6 tps, lat 3.661 ms stddev 4.739
After:
progress: 40.0 s, 207701.1 tps, lat 2.382 ms stddev 3.018
progress: 45.0 s, 208022.4 tps, lat 2.377 ms stddev 2.902
progress: 50.0 s, 209187.1 tps, lat 2.364 ms stddev 2.970
progress: 55.0 s, 206462.7 tps, lat 2.396 ms stddev 2.871
progress: 60.0 s, 210263.8 tps, lat 2.351 ms stddev 2.914
Yes, no kidding.
The results are similar, but less extreme, for smaller client counts
like 80 or 160.
Amit, since your test currently seems to be completely bottlenecked
inside StrategyGetBuffer(), could you compare that patch applied to
HEAD against the LW_SHARED patch, for one client count? That may allow
us to see a more meaningful profile...
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services