Re: Urgent: 10K or more connections
From | Greg Stark
---|---
Subject | Re: Urgent: 10K or more connections
Date |
Msg-id | 87lluvhy6q.fsf@stark.dyndns.tv
In reply to | Re: Urgent: 10K or more connections (Sean Chittenden <sean@chittenden.org>)
Responses | Re: Urgent: 10K or more connections
List | pgsql-general
Sean Chittenden <sean@chittenden.org> writes:

> Some lightweight multi-threaded proxy that relays active connections
> to the backend and holds idle connections more efficiently than
> PostgreSQL...

What excuse is there for postgres connections being heavyweight to begin with? The only real resource they ought to represent is a single TCP connection. Servers that manage 10,000 TCP connections are a dime a dozen these days.

Any database context that has to be stored for the connection (the state of binary/text mode or autocommit, or whatever) will have to be maintained by any pooling interface anyway. And I think both of those examples are now much cleaner, more or less stateless per-request flags anyway.

Basically what I'm asking is: hypothetically, if postgres were implemented using threads instead of processes, are there any per-connection resources that really couldn't be completely disposed of while the connection is completely idle between transactions (i.e. at the start of a transaction)?

Ideally, if every per-connection resource could be disposed of whenever the connection is completely idle, you wouldn't need a whole extra layer for the communication to traverse, nor a whole extra layer of complexity for the protocol semantics to be maintained. A multithreaded server could easily handle 10k-40k mostly idle connections without any unusual resource needs.

--
greg
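As a rough sketch of the "10,000 idle TCP connections are cheap" point, the following minimal event loop holds any number of mostly idle clients in a single thread using Linux epoll. The port number, buffer size, and omission of error handling are illustrative assumptions, not anything from the thread; the point is only that an idle connection costs little beyond a file descriptor until data arrives.

```c
/* Sketch: one thread holding many mostly-idle TCP connections via epoll.
 * Port and buffer size are arbitrary; error handling is omitted. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(54320);        /* arbitrary example port */
    bind(listener, (struct sockaddr *) &addr, sizeof(addr));
    listen(listener, 128);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev);

    for (;;)
    {
        struct epoll_event events[64];

        /* Sleeps here while every client is idle; idle clients use no CPU. */
        int n = epoll_wait(epfd, events, 64, -1);

        for (int i = 0; i < n; i++)
        {
            int fd = events[i].data.fd;

            if (fd == listener)
            {
                /* New client: register it and go back to waiting. */
                int client = accept(listener, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            }
            else
            {
                /* A previously idle connection became active; this is where
                 * per-connection state would be re-attached and the request
                 * dispatched to a worker. */
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));

                if (len <= 0)
                {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
            }
        }
    }
}
```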