Re: pgbench unable to scale beyond 100 concurrent connections
| From | Craig Ringer |
|---|---|
| Subject | Re: pgbench unable to scale beyond 100 concurrent connections |
| Date | |
| Msg-id | CAMsr+YEjmG=T78-wU+nLy4nVoH9THM_NTP2p8MMFRxq+c8Jb8Q@mail.gmail.com |
| In reply to | pgbench unable to scale beyond 100 concurrent connections (Sachin Kotwal <kotsachin@gmail.com>) |
| Responses | Re: pgbench unable to scale beyond 100 concurrent connections |
| List | pgsql-hackers |
On 29 June 2016 at 18:47, Sachin Kotwal <kotsachin@gmail.com> wrote:
> I am testing pgbench with more than 100 connections. I have also set max_connections in postgresql.conf to more than 100. Initially pgbench tries to scale to nearly 150, but later it comes down to 100 connections and stays stable there. Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?
What makes you think this is a pgbench limitation?
It sounds like you're benchmarking the client and server on the same system. Couldn't this be a limitation of the backend PostgreSQL server?
It also sounds like your method of counting concurrent connections is probably flawed: you're not allowing for setup and teardown time. If you want over 200 connections really running at very high rates of connection and disconnection, you'll probably need to raise max_connections a bit to allow for the ones that are starting up or tearing down at any given moment.
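For example, here's a rough way to see the setup/teardown effect for yourself (a sketch only; the database name "bench" and the client count and headroom figures are assumptions, not from your report):

    # Allow headroom over the client count for backends that are still
    # starting up or tearing down at any instant, e.g. in postgresql.conf:
    #   max_connections = 200

    # Run 150 clients, reconnecting for every transaction (-C), which is
    # what makes connection setup/teardown time visible:
    pgbench -c 150 -j 8 -T 60 -C bench

    # Meanwhile, in another session, sample how many backends actually
    # exist right now:
    psql -d bench -c "SELECT count(*) FROM pg_stat_activity;"

If the sampled count only hovers around your -c value when you allow that headroom, the "limit" you're seeing is just connections in transit, not pgbench capping itself.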
Really, though, why would you want to do this? I can measure my car's speed as it falls off a cliff, but that's not a very interesting benchmark for a car. I can't imagine any sane use of the database this way, with incredibly rapid setup and teardown of lots of connections. Look into connection pooling instead, either client-side or in a proxy like pgbouncer.
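To illustrate the pooling route, a minimal pgbouncer.ini sketch (the paths, database name, and pool sizes here are assumptions, adjust to taste):

    [databases]
    bench = host=127.0.0.1 port=5432 dbname=bench

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets many short-lived client connections
    ; share a small, stable set of server backends:
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 50

Then point pgbench (or your application) at port 6432 instead of 5432, and the pooler absorbs the rapid connect/disconnect churn while the server keeps a steady backend count.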