Re: performance for high-volume log insertion
From | Simon Riggs
Subject | Re: performance for high-volume log insertion
Date |
Msg-id | 1240413547.3978.48.camel@ebony.fara.com
In reply to | performance for high-volume log insertion (david@lang.hm)
List | pgsql-performance
On Mon, 2009-04-20 at 14:53 -0700, david@lang.hm wrote:

> the big win is going to be in changing the core of rsyslog so that it can
> process multiple messages at a time (bundling them into a single
> transaction)

That isn't necessarily true as a single "big win".

The reason there is an overhead per transaction is because of commit delays,
which can be removed by executing SET synchronous_commit = off; after
connecting to PostgreSQL 8.3+. You won't need to do much else. This can also
be enabled for a PostgreSQL user without even changing the rsyslog source
code, so it should be easy enough to test. And this type of application is
*exactly* what it was designed for.

Some other speedups should also be possible, but this is easiest.

I would guess that batching inserts will be a bigger win than simply using
prepared statements because it will reduce network roundtrips to a
centralised log server. Preparing statements might show up well on tests
because people will do tests against a local database, most likely.

--
Simon Riggs           www.2ndQuadrant.com
PostgreSQL Training, Services and Support
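
For concreteness, a minimal sketch of how the synchronous_commit setting could
be applied per role, so rsyslog itself needs no changes; the role name
"rsyslog" is only an assumed example:

    -- Assumes rsyslog connects as a dedicated "rsyslog" role (hypothetical name).
    -- Every new session opened by that role then commits asynchronously.
    ALTER ROLE rsyslog SET synchronous_commit = off;

    -- Alternatively, per session, immediately after connecting:
    SET synchronous_commit = off;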
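
And a rough illustration of the batching idea, bundling several log records
into one statement and one commit; the table and column names here are
hypothetical:

    -- One transaction and one network round trip for several messages,
    -- instead of one commit per message. Table and columns are made up.
    BEGIN;
    INSERT INTO syslog_events (received_at, host, message) VALUES
        ('2009-04-20 14:53:00-07', 'web1', 'first message'),
        ('2009-04-20 14:53:01-07', 'web1', 'second message'),
        ('2009-04-20 14:53:01-07', 'db1',  'third message');
    COMMIT;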