Re: performance tuning: shared_buffers, sort_mem; swap
From        | Thomas O'Connell
Subject     | Re: performance tuning: shared_buffers, sort_mem; swap
Date        |
Msg-id      | tfo-225E19.11143913082002@news.hub.org
In reply to | Re: performance tuning: shared_buffers, sort_mem; swap (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses   | Re: performance tuning: shared_buffers, sort_mem; swap
            | Re: performance tuning: shared_buffers, sort_mem; swap
List        | pgsql-admin
In article <200208131556.g7DFuH008873@candle.pha.pa.us>, pgman@candle.pha.pa.us (Bruce Momjian) wrote:

> Well, it doesn't really matter who is causing the swapping. If you have
> more of a load on your machine than RAM can hold, you are better off
> reducing your PostgreSQL shared buffers.

So the idea would be:

1. start with the numbers above.
2. benchmark postgres on the machine with those numbers set (creating enough load to require plenty of resource use in shared_buffers/sort_mem).
3. monitor swap.
4. if heavy swapping occurs, reduce the amount of shared memory allocated to shared_buffers/sort_mem.

right?

on sort of a side note, here's the situation i've got: i'm currently running postgres on a couple of boxes with decent RAM and processors. each postgres box, though, is also running several Apache servers. the Apache servers are running web applications that hit postgres, so when load on the box is high, it's caused by both Apache and postgres.

we've had the issue before where postgres will die under heavy load (meaning Apache is logging several requests per minute and stressing postgres, too) with the error about how we probably don't have shared memory configured appropriately.

is it possible to set the kernel resources and shared_buffers such that postgres won't be the point of failure when trying to access more shared memory than is currently available? i guess the issue is: when kernel resources are maxed out, does postgres' architecture mean that when an IPC call fails, it will be the piece of the system to go down?

e.g., if SHMALL/SHMMAX are configured to allow 128MB of shared memory on a box with 512MB RAM, plus a little extra to provide for Apache, and postgres is set to use 128MB of shared memory, is it a problem with our settings if postgres crashes when load is high? meaning, could it be that Apache is using up the extra SHMALL/SHMMAX and postgres doesn't really have 128MB of shared memory to work with?

the trick, then, would seem to be to monitor swapping, but also to monitor overall shared memory usage at the upper limits of available resources (see the sketches below).

sorry to ramble on. i'm just trying to get a high performance database running in a stable environment... :)

-tfo
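For reference, a rough sketch of what the "monitor swap" step above might look like on a Linux box; these are standard tools, but the exact commands and intervals are illustrative rather than anything prescribed in the thread:

    # watch swap activity while the benchmark runs; the si/so columns show
    # swap-in/swap-out activity per second -- sustained nonzero values under
    # load are the cue to back off shared_buffers/sort_mem
    vmstat 5

    # snapshot of overall memory and swap usage
    free -m

    # SysV shared memory segments actually allocated; the postgres segment
    # shows up here, along with anything else on the box that has grabbed one
    ipcs -m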
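And a sketch of the kernel-vs-postgres arithmetic for the 128MB scenario described above. The numbers are illustrative only, assuming Linux and the default 8KB PostgreSQL block size; they are not recommendations from the thread:

    # kernel side: allow up to roughly 128MB of SysV shared memory
    echo 134217728 > /proc/sys/kernel/shmmax   # max single segment, in bytes
    echo 32768     > /proc/sys/kernel/shmall   # system-wide total, counted in 4KB pages on Linux

    # postgresql.conf side: stay comfortably under the kernel limit, since the
    # postgres segment also includes overhead beyond the buffers themselves
    shared_buffers = 15000   # 8KB blocks, roughly 117MB
    sort_mem = 4096          # KB, per sort, per backend -- scales with concurrency

If ipcs -m shows other processes holding segments, the effective headroom under SHMALL is smaller than the raw numbers suggest, which is one way the shared memory failure described above could arise.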