Re: Millisecond-precision connect_timeout for libpq
From: Robert Haas
Subject: Re: Millisecond-precision connect_timeout for libpq
Msg-id: 12157C19-315E-4392-9D21-840D6555273B@gmail.com
In reply to: Re: Millisecond-precision connect_timeout for libpq (ivan babrou <ibobrik@gmail.com>)
List: pgsql-hackers
On Jul 8, 2013, at 1:31 PM, ivan babrou <ibobrik@gmail.com> wrote:

> On 8 July 2013 20:40, David E. Wheeler <david@justatheory.com> wrote:
>> On Jul 8, 2013, at 7:44 AM, ivan babrou <ibobrik@gmail.com> wrote:
>>
>>>> Can you tell me why having the ability to specify a more accurate
>>>> connect timeout is a bad idea?
>>>
>>> Nobody has answered my question yet.
>>
>> From an earlier post by Tom:
>>
>>> What exactly is the use case for that? It seems like extra complication
>>> for something with little if any real-world usefulness.
>>
>> So the answer is: extra complication.
>>
>> Best,
>>
>> David
>
> I don't see any extra complication in a backwards-compatible patch that
> removes more lines than it adds. Can you tell me what exactly is extra
> complicated?
>
> About pooling connections: we have 150 application servers and 10
> PostgreSQL servers. Each app connects to each server, so that's 150
> connections per server if I run a pooler on each application server.
> That's more than the default setting, and right now we usually have no
> more than 10 connections per server. What would happen if we had 300 app
> servers? Connections consume memory. Running a pooler on only some of
> the app servers gives no advantage: I can still hit a network blackhole
> and the 2-second delay. Moreover, right now I can infer that PostgreSQL
> is overloaded when it stops accepting connections; with a pooler I could
> simply blow up the disks with heavy IO.
>
> Seriously, I don't get why running 150 poolers is easier. And my
> problem is still here: when the server (the pooler, in this case) is
> down, there is a 2-second delay. 2000% slower.
>
> Where am I wrong?

I agree with you.

...Robert
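[Editor's note: the complaint above hinges on libpq's connect_timeout being limited to whole seconds, while the select()/poll() primitives that a non-blocking connect is built on already accept sub-second timeouts. A minimal Python sketch of that point (this is not libpq code, and the wait_readable helper is hypothetical):]

```python
# Sketch: select() takes a fractional-second timeout, so millisecond
# granularity is not limited by the platform. The helper below is
# hypothetical, for illustration only.
import select
import socket
import time

def wait_readable(sock: socket.socket, timeout_ms: int) -> bool:
    """Wait up to timeout_ms milliseconds for sock to become readable.
    Returns True if readable, False on timeout."""
    readable, _, _ = select.select([sock], [], [], timeout_ms / 1000.0)
    return bool(readable)

# Demonstrate millisecond granularity with a socket pair that never
# receives data, so the call must run out the full 250 ms timeout.
a, b = socket.socketpair()
start = time.monotonic()
got_data = wait_readable(a, 250)
elapsed = time.monotonic() - start
print(f"got_data={got_data} elapsed={elapsed:.3f}s")  # elapsed is ~0.25 s, not 2 s
a.close()
b.close()
```

libpq's own async API (PQconnectStart plus PQconnectPoll driven by the caller's own select/poll loop) is the existing workaround: the caller supplies the timeout to select/poll and can make it as fine-grained as desired.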