Re: Configuration Recommendations
From: Greg Smith
Subject: Re: Configuration Recommendations
Date:
Msg-id: 4F9892AB.1010100@2ndQuadrant.com
In reply to: Configuration Recommendations (Jan Nielsen <jan.sture.nielsen@gmail.com>)
List: pgsql-performance
On 04/23/2012 10:56 PM, Jan Nielsen wrote:
> We are planning to rebuild our production 50GB PG 9.0 database serving
> our application platform on the new hardware below. The web-applications
> are 80/20 read/write and the data gateways are an even 50/50 mix of
> read/write; one of the gateways nightly exports & imports ~20% of our
> data.

With enough RAM to hold the database, but that much churn in the nightly
processing, you're most likely to run into VACUUM issues here. The trigger
point for autovacuum to kick off is just around 20% of a table, so you might
see problems come and go based on the size of the changed set. You might
consider making your own benchmark test out of a change like the one the
gateway introduces. Consider doing your own manual VACUUM, or maybe even
VACUUM FREEZE, cleanup in sync with the nightly processing if you want that
to be predictable.

> If there are "obviously correct" choices in PG configuration, this would
> be tremendously helpful information to me. I'm planning on using pgbench
> to test the configuration options.

The info at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server is
as useful a checklist for getting started as any.

Note that pgbench is a very insensitive tool for usefully testing
configuration changes. Results there will bounce around if you change
shared_buffers and checkpoint_segments, but not much else. And even the
changes that test positive with it don't necessarily translate into better
real-world performance. For example, you might set shared_buffers to 8GB
based on pgbench TPS numbers going up as it increases, only to find that it
allows far too much memory to get dirty between checkpoints in production,
resulting in slow periods on the server. And many of the more interesting
and tricky parameters to tweak in production, such as work_mem, don't even
matter to what pgbench does. It's easy to spend weeks lost in pgbench tests
without making clear forward progress. Once you've validated that the
hardware seems to be delivering reasonable performance, consider running
your own, more application-like benchmarks instead.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com
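
To make the nightly-cleanup idea above concrete, here is a minimal sketch of
a script run right after the gateway import finishes; the database and table
names are placeholders, not details from this thread:

    -- nightly_cleanup.sql: run immediately after the nightly gateway import,
    -- e.g. with: psql -d appdb -f nightly_cleanup.sql
    -- ("appdb" and "gateway_data" are hypothetical names.)
    -- Cleaning up on our own schedule beats waiting for autovacuum's roughly
    -- 20% change threshold to trip at an unpredictable time during the day.
    VACUUM ANALYZE gateway_data;
    -- If the same rows get rewritten every night, VACUUM FREEZE is also
    -- worth testing here:
    -- VACUUM FREEZE ANALYZE gateway_data;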
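
Likewise, a sketch of a more application-like pgbench run: instead of the
built-in test, feed it a custom script that imitates one of your own
transactions. The table, columns, and numbers below are invented for
illustration:

    -- bench_webapp.sql: crude stand-in for one web-application transaction.
    -- Run with something like:
    --   pgbench -n -c 16 -j 4 -T 300 -f bench_webapp.sql appdb
    -- (-n skips pgbench's own vacuum of its built-in test tables; the
    --  "accounts" table and its columns are placeholders.)
    \setrandom aid 1 100000
    BEGIN;
    SELECT abalance FROM accounts WHERE account_id = :aid;
    UPDATE accounts SET abalance = abalance + 1 WHERE account_id = :aid;
    COMMIT;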