Re: performance for high-volume log insertion
From | James Mansion
Subject | Re: performance for high-volume log insertion
Date |
Msg-id | 49EEAA6F.9030003@mansionfamily.plus.com
In reply to | Re: performance for high-volume log insertion (Stephen Frost <sfrost@snowman.net>)
Responses | Re: performance for high-volume log insertion
List | pgsql-performance
Stephen Frost wrote:
> apart again. That's where the performance is going to be improved by
> going that route, not so much in eliminating the planning.

Fine. But like I said, I'd suggest measuring the fractional improvement from this when sending multi-row inserts before writing something complex. I think the big win will be doing multi-row inserts at all.

If you are going to prepare, then you'll need a collection of different prepared statements for different batch sizes (say 1, 2, 3, 4, 5, 10, 20, 50) and things will get complicated. A multi-row insert with unions and dynamic SQL is actually rather universal. Personally I'd implement that first (and it should be easy to do across multiple DBMS types), and then return to it to build a more complex client side with prepared statements and so on if (and only if) necessary AND the performance improvement were measurably worthwhile, given the indexing and storage overheads.

There is no point optimising away the CPU cost of the simple parse if you are just going to get hit with a lot of latency from round trips, and forming a generic multi-insert SQL string is much, much easier to get working as a first step. Server CPU isn't a bottleneck all that often - and with something as simple as this you'll hit IO performance bottlenecks rather easily.

James
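[Editor's note: a minimal sketch of the kind of generic multi-row insert string being discussed, in Python with psycopg2. The table and column names (log_entries; ts, level, msg) are hypothetical, and the function name is made up for illustration; this is one way to build such a statement, not code from the thread.]

```python
# Build one parameterised multi-row INSERT instead of issuing one
# INSERT (parse + round trip) per log record.
# Hypothetical schema: log_entries(ts, level, msg).
import psycopg2

def insert_batch(conn, rows):
    """rows is a list of (ts, level, msg) tuples."""
    if not rows:
        return
    # One placeholder group per row:
    #   VALUES (%s,%s,%s),(%s,%s,%s),...
    # The UNION-based spelling mentioned above, which ports to DBMSs
    # without multi-row VALUES, would instead be:
    #   SELECT %s,%s,%s UNION ALL SELECT %s,%s,%s ...
    groups = ",".join(["(%s,%s,%s)"] * len(rows))
    sql = "INSERT INTO log_entries (ts, level, msg) VALUES " + groups
    params = [v for row in rows for v in row]  # flatten row tuples
    with conn.cursor() as cur:
        cur.execute(sql, params)
    conn.commit()
```

Note that the statement text depends only on the batch size, so a small cache of strings keyed by len(rows) would recover much of what per-size prepared statements buy, without the complexity.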