Re: Millisecond-precision connect_timeout for libpq
From: Merlin Moncure
Subject: Re: Millisecond-precision connect_timeout for libpq
Date:
Msg-id: CAHyXU0xyj53agUgYUV2WUR9O7wLcnMg+RHZM-x39rcW+Z-vgbA@mail.gmail.com
In reply to: Re: Millisecond-precision connect_timeout for libpq (Josh Berkus <josh@agliodbs.com>)
List: pgsql-hackers
On Fri, Jul 5, 2013 at 3:01 PM, Josh Berkus <josh@agliodbs.com> wrote:
> On 07/05/2013 12:26 PM, Tom Lane wrote:
>> ivan babrou <ibobrik@gmail.com> writes:
>>> If you can figure out that postgresql is overloaded then you may
>>> decide what to do faster. In our app we have very strict limit for
>>> connect time to mysql, redis and other services, but postgresql has
>>> minimum of 2 seconds. When processing time for request is under 100ms
>>> on average sub-second timeouts matter.
>>
>> If you are issuing a fresh connection for each sub-100ms query, you're
>> doing it wrong anyway ...
>
> It's fairly common with certain kinds of apps, including Rails and PHP.
> This is one of the reasons why we've discussed having a kind of
> stripped-down version of pgbouncer built into Postgres as a connection
> manager. If it weren't valuable to be able to relocate pgbouncer to
> other hosts, I'd still say that was a good idea.

for the record, I think this is a great idea.

merlin
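[Editor's note: the sub-second timeout ivan asks for can already be approximated on the client side with libpq's non-blocking connection API (PQconnectStart / PQconnectPoll), since the synchronous connect_timeout parameter only accepts whole seconds. The sketch below illustrates that approach on a POSIX platform; the helper name connect_with_ms_timeout, the 250 ms budget, and the connection string are illustrative assumptions, not something proposed in the thread.]

/*
 * Sketch: a client-side sub-second connection timeout using libpq's
 * non-blocking API (PQconnectStart / PQconnectPoll).
 * Build (assumption): cc connect_ms.c -o connect_ms -lpq
 */
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>
#include <libpq-fe.h>

/* Hypothetical helper: returns an established connection, or NULL if it
 * could not be completed within timeout_ms milliseconds. */
static PGconn *connect_with_ms_timeout(const char *conninfo, long timeout_ms)
{
    PGconn *conn = PQconnectStart(conninfo);

    if (conn == NULL)
        return NULL;
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        PQfinish(conn);
        return NULL;
    }

    /* Absolute deadline for the whole connection attempt. */
    struct timeval deadline;
    gettimeofday(&deadline, NULL);
    deadline.tv_sec += timeout_ms / 1000;
    deadline.tv_usec += (timeout_ms % 1000) * 1000;
    if (deadline.tv_usec >= 1000000)
    {
        deadline.tv_sec++;
        deadline.tv_usec -= 1000000;
    }

    /* After PQconnectStart, wait until the socket is writable, then let
     * PQconnectPoll say what to wait for next. */
    PostgresPollingStatusType status = PGRES_POLLING_WRITING;

    while (status != PGRES_POLLING_OK)
    {
        if (status == PGRES_POLLING_FAILED)
        {
            PQfinish(conn);
            return NULL;
        }

        /* Time left until the deadline, in microseconds. */
        struct timeval now;
        gettimeofday(&now, NULL);
        long remaining_us = (deadline.tv_sec - now.tv_sec) * 1000000L
                            + (deadline.tv_usec - now.tv_usec);
        if (remaining_us <= 0)
        {
            PQfinish(conn);             /* deadline already passed */
            return NULL;
        }
        struct timeval tv;
        tv.tv_sec = remaining_us / 1000000L;
        tv.tv_usec = remaining_us % 1000000L;

        int sock = PQsocket(conn);
        fd_set rset, wset;
        FD_ZERO(&rset);
        FD_ZERO(&wset);
        if (status == PGRES_POLLING_READING)
            FD_SET(sock, &rset);
        else
            FD_SET(sock, &wset);

        if (select(sock + 1, &rset, &wset, NULL, &tv) <= 0)
        {
            PQfinish(conn);             /* timed out or select() failed */
            return NULL;
        }
        status = PQconnectPoll(conn);
    }
    return conn;
}

int main(void)
{
    /* 250 ms budget; the connection string values are placeholders. */
    PGconn *conn = connect_with_ms_timeout("host=db.example dbname=app", 250);

    if (conn == NULL)
    {
        fprintf(stderr, "could not connect within 250 ms\n");
        return 1;
    }
    printf("connected\n");
    PQfinish(conn);
    return 0;
}

[One caveat, per the libpq documentation: PQconnectStart itself may still block while resolving the host name, so the millisecond deadline only reliably covers the steps after name resolution.]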