Re: performance for high-volume log insertion
From: Greg Smith
Subject: Re: performance for high-volume log insertion
Date:
Msg-id: alpine.GSO.2.01.0904211955040.23035@westnet.com
In reply to: Re: performance for high-volume log insertion (david@lang.hm)
List: pgsql-performance
On Tue, 21 Apr 2009, david@lang.hm wrote:

>> 1) Disk/controller has a proper write cache. Writes and fsync will be
>> fast. You can insert a few thousand individual transactions per second.
>
> in case #1 would you expect to get significant gains from batching? doesn't
> it suffer from problems similar to #2 when checkpoints hit?

Typically controllers with a write cache are doing elevator sorting across a
much larger chunk of working memory (typically >=256MB instead of <32MB on
the disk itself), which means a mix of random writes will average better
performance--on top of being able to absorb a larger chunk of them before
blocking on writes. You get some useful sorting in the OS itself, but every
layer of additional cache helps significantly here. Batching is always a win
because even a write-cached commit is still pretty expensive, from the server
on down the chain.

> I'll see about setting up a test in the next day or so. should I be able to
> script this through psql? or do I need to write a C program to test this?

You can easily compare things with psql, like in the COPY BINARY vs. TEXT
example I gave earlier; that's why I was suggesting you run your own tests
here, just to get a feel for things on your data set.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
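
A minimal sketch of the kind of psql test being discussed, assuming a
throwaway table named log_test and a 1000-row workload (both made up for
illustration; connection options are omitted). It compares per-statement
autocommit against the same INSERT statements wrapped in one transaction:

    psql -c "CREATE TABLE log_test (msg text);"

    # Case 1: autocommit -- each INSERT is its own transaction, so each
    # one pays the full commit (fsync / write-cache flush) cost.
    seq 1 1000 | sed "s/^/INSERT INTO log_test VALUES ('line /; s/\$/');/" > single.sql
    time psql -f single.sql

    # Case 2: identical statements batched inside a single transaction,
    # so the commit cost is paid once for all 1000 rows.
    (echo "BEGIN;"; cat single.sql; echo "COMMIT;") > batched.sql
    time psql -f batched.sql

On hardware with a write cache, case 1 should already be reasonably fast,
but case 2 would typically still be noticeably faster; COPY would likely
beat both, as in the earlier COPY BINARY vs. TEXT comparison.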