Re: Reading data in bulk - help?
From: William Yu
Subject: Re: Reading data in bulk - help?
Date:
Msg-id: bjo7db$1v53$1@news.hub.org
In reply to: Re: Reading data in bulk - help? (Chris Huston <chuston@bangjafwac.com>)
List: pgsql-performance
> 1) Memory - clumsily adjusted shared_buffer - tried three values: 64,
> 128, 256 with no discernible change in performance. Also adjusted,
> clumsily, effective_cache_size to 1000, 2000, 4000 - with no discernible
> change in performance. I looked at the Admin manual and googled around
> for how to set these values and I confess I'm clueless here. I have no
> idea how many kernel disk page buffers are used nor do I understand what
> the "shared memory buffers" are used for (although the postgresql.conf
> file hints that it's for communication between multiple connections).
> Any advice or pointers to articles/docs is appreciated.

The standard procedure is 1/4 of your memory for shared_buffers. The easiest way to calculate it is ###MB / 32 * 1000. E.g. if you have 256MB of memory, your shared_buffers should be 256 / 32 * 1000 = 8000.

The memory you have left over should be "marked" as OS cache via the effective_cache_size setting. I usually just multiply the shared_buffers value by 3 on systems with a lot of memory. With less memory, OS/Postgres/etc. takes up a larger percentage of memory, so values of 2 or 2.5 would be more accurate.
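To make the arithmetic concrete, here is a minimal sketch (not part of the original mail) that applies the rule of thumb above. It assumes both settings are expressed in 8KB buffer pages, as they were in the PostgreSQL 7.x era, and the function and variable names are purely illustrative.

    # Rule-of-thumb calculator for the settings described above.
    # Assumption: shared_buffers and effective_cache_size are both
    # counted in 8KB buffer pages (PostgreSQL 7.x style).
    def suggest_settings(total_mb, cache_multiplier=3):
        # ~1/4 of RAM for shared_buffers, via the "MB / 32 * 1000" shortcut
        shared_buffers = int(total_mb / 32 * 1000)
        # leftover memory "marked" as OS cache: 3x shared_buffers on
        # machines with plenty of RAM, 2-2.5x when the OS and other
        # processes eat a larger share
        effective_cache_size = shared_buffers * cache_multiplier
        return shared_buffers, effective_cache_size

    print(suggest_settings(256))   # -> (8000, 24000)

The multiplier is the only knob: pass cache_multiplier=2 or 2.5 on smaller boxes, per the reasoning above.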