One more question: I could not set wal_sync_method to anything other than fsync. Is that expected, or should other choices also be available? I am not sure how SSD cache flushing is handled on EC2, but I hope the whole cache is flushed on every sync.

As a side note, when first running my tests I got corrupted databases (errors about pg_xlog directories not found, etc.), and I suspect it was because of vfs.zfs.cache_flush_disable=1, though I cannot prove it for sure.
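For what it's worth, the set of wal_sync_method values a given build accepts is compiled in per platform (e.g. open_datasync only appears where O_DSYNC is available), and it can be read out of the pg_settings view. A sketch, assuming a local psql connection to the server in question:

```shell
# List the wal_sync_method choices this PostgreSQL build actually accepts.
# enumvals shows the compiled-in options, which vary by OS; the current
# value is in the setting column.
psql -At -c "SELECT setting, enumvals FROM pg_settings WHERE name = 'wal_sync_method'"
```

If fsync is the only entry in enumvals, the build simply lacks the other sync primitives on that platform.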
Sébastien
On Wed, Sep 12, 2012 at 8:49 PM, Sébastien Lorion
<sl@thestrangefactory.com> wrote:
Is dedicating 2 drives to the WAL too much? Since my whole RAID is composed of SSDs, should I just put the WAL in the main pool?
Sébastien

On Wed, Sep 12, 2012 at 8:28 PM, Sébastien Lorion
<sl@thestrangefactory.com> wrote:
Ok, that makes sense. I will update that as well and report back. Thank you for your advice.
Sébastien

On Wed, Sep 12, 2012 at 8:04 PM, John R Pierce
<pierce@hogranch.com> wrote:
On 09/12/12 4:49 PM, Sébastien Lorion wrote:
You set shared_buffers way below what is suggested in Greg Smith's book (25% or more of RAM). What is the rationale behind deviating from that rule of thumb? The other values are more or less what I set, though I could lower effective_cache_size and vfs.zfs.arc_max and see how it goes.
I think those 25% rules were typically formulated when machines had no more than 4-8GB of RAM.
For our highly transactional workload, at least, too large a shared_buffers seems to slow us down, perhaps due to the higher overhead of managing that many 8k buffers. I've heard other, read-mostly workloads, such as data warehousing, can take advantage of larger buffer counts.
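To make the trade-off concrete, here is a hedged sketch of the two approaches being compared in postgresql.conf; the absolute numbers below are illustrative assumptions (a 64GB box), not recommendations:

```
# postgresql.conf -- illustrative values only

# The "25% of RAM" rule of thumb on a 64GB machine:
#shared_buffers = 16GB

# A deliberately smaller setting of the kind described above for a
# highly transactional workload, leaving most of the RAM to the
# OS page cache / ZFS ARC instead:
shared_buffers = 4GB

# effective_cache_size is only a planner hint about total cache
# (shared_buffers + OS cache), not an allocation:
effective_cache_size = 32GB
```

With ZFS in the picture, the ARC caches data independently of shared_buffers, which is one argument for keeping the PostgreSQL-side buffer pool modest and avoiding double-caching.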