Re: how to estimate shared_buffers...
From | Scott Marlowe
---|---
Subject | Re: how to estimate shared_buffers...
Date |
Msg-id | dcc563d10807120605j7dbfe12ek4dc6b7cfdfdee0a5@mail.gmail.com
In reply to | how to estimate shared_buffers... (Jessica Richard <rjessil@yahoo.com>)
List | pgsql-performance
On Sat, Jul 12, 2008 at 5:30 AM, Jessica Richard <rjessil@yahoo.com> wrote:
> On a running production machine, we have 900M configured on a 16G-memory
> Linux host. The db size for all dbs combined is about 50G. There are many
> transactions going on all the time (deletes, inserts, updates). We do not
> have a testing environment with the same setup and the same amount of
> workload. I want to evaluate on the production host whether this 900M is
> enough. If not, we still have room to go up a little to speed up all
> Postgres activities. I don't know enough about the SA side. I just imagine
> that something like the "top" command or another tool could measure how
> much total memory Postgres is actually using (against the configured 900M
> shared buffers), and if Postgres is using almost all 900M all the time, I
> would take that as an indication that shared_buffers can go up by another
> 100M...
>
> What is the best way to tell how much memory Postgres (all Postgres-related
> things) is actually using?

If you've got a 50G data set, then PostgreSQL is most likely using whatever
memory you give it for shared buffers, and top should show that easily. I'd
say start at 25%, i.e. ~4G (this is a 64-bit machine, right?). That leaves
plenty of memory for the OS to cache data, and for PostgreSQL to allocate
work_mem and the like from.
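For a more direct read than top, a sketch along these lines using the
pg_buffercache contrib module can show how much of shared_buffers actually
holds data pages and how hot they are (this assumes the module is installed
on the server; it ships with PostgreSQL but is not loaded by default, and
the exact interpretation of the numbers is workload-dependent):

  -- How much of shared_buffers currently holds data pages.
  SELECT count(*) AS buffers_in_use,
         pg_size_pretty(count(*) * current_setting('block_size')::bigint)
           AS data_cached
  FROM pg_buffercache
  WHERE relfilenode IS NOT NULL;

  -- Distribution of buffer usage counts: many buffers at the high end
  -- suggests the working set is larger than shared_buffers.
  SELECT usagecount, count(*) AS buffers
  FROM pg_buffercache
  GROUP BY usagecount
  ORDER BY usagecount;

If nearly every buffer is occupied and the usage counts skew high, that
points the same way as the top reading: the working set doesn't fit, and
raising shared_buffers is worth trying.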