Re: Monitoring number of backends
From: Marko Kreen
Subject: Re: Monitoring number of backends
Date:
Msg-id: 20131023184048.GA676@gmail.com
In reply to: Re: Monitoring number of backends (Andy Colson <andy@squeakycode.net>)
List: pgsql-general
On Tue, Oct 22, 2013 at 09:45:24PM -0500, Andy Colson wrote:
> On 10/22/2013 12:59 PM, Stephen Frost wrote:
> > Andy,
> >
> > * andy (andy@squeakycode.net) wrote:
> >> My website is about to get a little more popular.  I'm trying to add
> >> in some measurements to determine an upper limit of how many
> >> concurrent database connections I'm currently using.
> >
> > PG is really *much* happier if you have only one backend per CPU in your
> > system.  The way to get there is by using a connection pooler like
> > pg_bouncer and configuring it based on how many CPUs you have.
> >
> > pg_bouncer can also provide stats for you.
> >
> > Thanks,
> >
> > Stephen
>
> Ahh, bummer, man.  PgBouncer doesn't work so well when you have lots of
> databases.  I have about 90 databases, and the website could connect to
> any one of them on any request.  (They are all about equally likely to
> be hit.)
>
> In my pgbouncer.ini I set:
>
> [databases]
> * =
> [pgbouncer]
> max_client_conn = 200
> default_pool_size = 20
>
> but that made each database its own pool, so 90*20 possible server
> connections (capped at 200, I assume).  Postgres has a max_connections
> of only 40, so I started getting "FATAL: sorry, too many clients
> already" errors.
>
> I set max_client_conn = 40 and default_pool_size = 3, so it should stop
> erroring out, but is this type of setup even going to help me?  If
> pgbouncer has 40 connections open to 40 different databases, and then a
> request comes in for a db it doesn't have, it'll have to drop one and
> build a new connection, yes?  Won't that be slower than just connecting
> right to postgres in the first place?
>
> I need a most-recently-used pool, not a per-database pool.  Is there a
> way to do this with pgbouncer?  (With a large number of databases.)

PgBouncer does not do any cross-pool limiting, so you can still have 3*90
server connections open.  max_client_conn is meant as a rescue limit for
when something is completely broken; it should not be hit in normal
operation.  40000 would be a reasonable number...
With that many databases, you just need to accept that only a few
connections to each db will stay open; otherwise you won't get much
benefit from pooling.  So your Postgres max_connections should indeed be
>= 3*90.  And you can limit idle server connections via
server_idle_timeout.

-- 
marko
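Marko's advice could be sketched as a pgbouncer.ini along these lines.
This is only an illustrative sketch, not a config from the thread: the
listen settings, pool_mode, and the timeout value are assumptions, and
only the arithmetic (3 server connections per pool across 90 databases,
so Postgres needs max_connections >= 270) comes from the discussion above.

```ini
[databases]
; one wildcard entry: each database still gets its own pool
* =

[pgbouncer]
; assumed listen settings; adjust for your environment
listen_addr = 127.0.0.1
listen_port = 6432

; 3 server connections per pool; with ~90 databases that allows
; up to 3*90 = 270 server connections, so set Postgres
; max_connections >= 270 (plus some superuser headroom)
default_pool_size = 3

; rescue limit only; should never be reached in normal operation
max_client_conn = 40000

; close server connections idle longer than this (seconds), so
; pools for rarely-hit databases release their backends
server_idle_timeout = 60
```

The key point is that default_pool_size, not max_client_conn, is what
bounds connections to Postgres, and it is enforced per pool (per
database/user pair), not globally.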