Re: Connections per second?
From:        wsheldah@lexmark.com
Subject:     Re: Connections per second?
Date:
Msg-id:      200204231627.MAA00155@interlock2.lexmark.com
In reply to: Connections per second? (Alejandro Fernandez <ale@electronic-group.com>)
Responses:   Re: Connections per second?
List:        pgsql-general
Depending on the size of the table, how much RAM you have, and your OS, you
may find the entire table cached in RAM, which would be ideal. You should use
persistent connections if at all possible. You didn't mention which web server
you're using, but if it's Apache, you may want to write an Apache module that
maintains a persistent connection for each Apache child process; that will
also keep your program loaded in memory so it doesn't have to be reloaded on
each request. I would also be concerned about write speed to the log file; I'm
not sure where that will peak.

Hope this helps,

Wes Sheldahl


Alejandro Fernandez <ale%electronic-group.com@interlock.lexmark.com> on
04/23/2002 10:51:40 AM

To:      pgsql-general%postgresql.org@interlock.lexmark.com
cc:      (bcc: Wesley Sheldahl/Lex/Lexmark)
Subject: [GENERAL] Connections per second?

Hi,

I'm writing a small but must-be-fast CGI program that, for each hit it gets,
reads an indexed table in a Postgres database and writes a log entry to a
file based on the result. Any idea how many hits per second I can handle
before things start crashing or queuing up too much? And will Postgres be one
of the first to fall? Do any of you think it can handle 2000 hits a second
(what I think I could get at peak times), and what would it need to do so?
Persistent connections? Are there any examples or old threads on writing a
similar program in C with libpq?

Thanks,

Ale

--
Alejandro Fernandez
Electronic Group Interactive
--+34-65-232-8086--

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
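
For the libpq question, a minimal sketch of the persistent-connection pattern
Wes describes, opening one connection per process and reusing it for every
hit, might look like the following. The table name "hits", its "key"/"value"
columns, the connection string, and the log path are all hypothetical
illustrations, not anything from this thread:

    /* One persistent libpq connection per process (e.g. per Apache
       child), reused across requests instead of reconnecting per hit. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static PGconn *conn = NULL;

    /* Open the connection once, at process startup.
       The connection string is a placeholder; adjust for your setup. */
    static int db_init(void)
    {
        conn = PQconnectdb("dbname=mydb");
        return PQstatus(conn) == CONNECTION_OK;
    }

    /* Handle one hit: look up the indexed row, append to the log. */
    static int handle_hit(const char *key, FILE *logfp)
    {
        char query[256];
        PGresult *res;

        /* Real code must escape 'key' (see PQescapeString) before
           interpolating it into SQL. */
        snprintf(query, sizeof(query),
                 "SELECT value FROM hits WHERE key = '%s'", key);

        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            PQclear(res);
            PQreset(conn);   /* try to revive a dropped connection */
            return 0;
        }

        if (PQntuples(res) > 0)
            fprintf(logfp, "%s %s\n", key, PQgetvalue(res, 0, 0));

        PQclear(res);
        return 1;
    }

    int main(void)
    {
        FILE *logfp;

        if (!db_init()) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        logfp = fopen("hits.log", "a");
        if (logfp == NULL) {
            perror("fopen");
            PQfinish(conn);
            return 1;
        }

        /* In a real CGI or module setting this would be driven by
           incoming requests; the point is that the connection is
           opened once per process, not once per hit. */
        handle_hit("example-key", logfp);

        fclose(logfp);
        PQfinish(conn);
        return 0;
    }

Built with something like: cc -o hits hits.c -I$(pg_config --includedir)
-L$(pg_config --libdir) -lpq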