Max backend limits cleaned up
From | Tom Lane |
---|---|
Subject | Max backend limits cleaned up |
Date | |
Msg-id | 13829.919407499@sss.pgh.pa.us |
List | pgsql-hackers |
I have just checked in code changes (no doc updates yet :-() that address our recent discussions about how many backend processes can be used. Specifically:

configure takes a --with-maxbackends=N switch that sets the hard limit on the maximum number of backends per postmaster. (It's still a hard limit because several arrays are sized by MAXBACKENDS; I didn't think it was worth trying to change that.) The default is still 64.

The postmaster can be started with a "-N backends" switch that sets a smaller limit on the number of backends for this postmaster. The only cost of having a large MAXBACKENDS constant is a few dozen bytes of shared memory per array slot, so if you want, you can configure MAXBACKENDS pretty large and then set the effective limit with -N at postmaster startup.

When the postmaster is started, it will immediately acquire enough semaphores to support min(MAXBACKENDS, -N) backend processes. If your kernel semaphore parameters are too low to allow that, you get an immediate failure rather than a failure under peak load. The postmaster just refuses to start up, with a log message like this:

    IpcSemaphoreCreate: semget failed (No space left on device) key=5440026,num=16, permission=600

(Right at this instant, it looks like it fails to release whatever semaphores it did acquire. Ugh. Think I can fix that, though.)

I have verified that I can start more than 64 backends after suitable configuration, but I am not in a position to check that things work smoothly with a really large number of backends. I found one parameter (MAX_PROC_SEMS) that was hard-wired at 128 rather than set equal to MaxBackendIds, so I am a little worried that there might be others. If anyone has the time and interest to push the envelope with a few hundred backends, please report back!

			regards, tom lane
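For anyone who wants to try this out: with the switches described above, a build configured with, say, --with-maxbackends=256 could be started with -N 100 so that only 100 backends' worth of semaphores are grabbed at startup (those particular numbers are just for illustration). The following is a minimal standalone C sketch of the fail-fast idea, not the actual postmaster/IpcSemaphoreCreate code: grab every System V semaphore set the configured backend limit needs right at startup, report the semget error in the same style as the log line above, and release whatever was already acquired if the kernel limits turn out to be too low. SEMS_PER_SET, MAX_SETS, and the key arithmetic are assumptions made for the sketch.

    /*
     * Minimal sketch (not the real PostgreSQL code) of acquiring all
     * needed SysV semaphore sets up front, so a kernel limit that is
     * too low fails immediately rather than under peak load.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    #define SEMS_PER_SET 16        /* assumption: semaphores per set */
    #define MAX_SETS     64        /* assumption: cap for this sketch */

    int
    main(int argc, char **argv)
    {
        int     nbackends = (argc > 1) ? atoi(argv[1]) : 64;
        int     nsets = (nbackends + SEMS_PER_SET - 1) / SEMS_PER_SET;
        key_t   basekey = 5440026; /* base key, echoing the log line above */
        int     semids[MAX_SETS];
        int     i;

        if (nsets > MAX_SETS)
            nsets = MAX_SETS;

        /* Acquire every semaphore set now, not lazily under load. */
        for (i = 0; i < nsets; i++)
        {
            semids[i] = semget(basekey + i, SEMS_PER_SET,
                               IPC_CREAT | IPC_EXCL | 0600);
            if (semids[i] < 0)
            {
                fprintf(stderr,
                        "semget failed (%s) key=%d,num=%d, permission=600\n",
                        strerror(errno), (int) (basekey + i), SEMS_PER_SET);
                /* release whatever we did manage to acquire, then bail out */
                while (--i >= 0)
                    semctl(semids[i], 0, IPC_RMID);
                return 1;
            }
        }

        printf("acquired %d semaphore set(s) for up to %d backends\n",
               nsets, nbackends);

        /* a real postmaster would keep these; the sketch just cleans up */
        for (i = 0; i < nsets; i++)
            semctl(semids[i], 0, IPC_RMID);
        return 0;
    }

Failing here, at startup, is what makes an undersized kernel semaphore configuration (SEMMNS/SEMMNI too low) show up right away instead of when the connection count peaks.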