Multiple insert performance trick or performance misunderstanding?
From | Ron Mayer
---|---
Subject | Multiple insert performance trick or performance misunderstanding?
Date | 
Msg-id | dh4e88$25qe$1@news.hub.org
Responses | Re: Multiple insert performance trick or performance misunderstanding?
List | pgsql-performance
When I need to insert a few hundred or a few thousand rows into a table from a 3-tier application, it seems I'm much better off building one big string of semicolon-separated insert statements than sending them one at a time - even when I do the obvious things like wrapping the statements in a transaction and using the library's prepared statements. I tried both Ruby/DBI and C#/Npgsql, and in both cases a set of inserts that took about 3 seconds when run individually took about 0.7 seconds when concatenated together.

Is it expected that I'd be better off sending big concatenated strings like
"insert into tbl (c1,c2) values (v1,v2);insert into tbl (c1,c2) values (v3,v4);..."
instead of sending them one at a time?

    db.ExecuteSQL("BEGIN");
    sql = new System.Text.StringBuilder(10000);
    for ([a lot of data elements]) {
        sql.Append(
            "insert into user_point_features (col1,col2)" +
            " values (" + obj.val1 + "," + obj.val2 + ");"
        );
    }
    db.ExecuteSQL(sql.ToString());
    db.ExecuteSQL("COMMIT");
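For comparison, here is a minimal sketch of the "one statement at a time" path I'm measuring against - a single prepared statement executed once per row inside a transaction. It assumes current Npgsql and integer columns; names like connString and dataElements are placeholders for illustration, not taken from my real code.

    using System.Collections.Generic;
    using Npgsql;
    using NpgsqlTypes;

    class SlowPathSketch
    {
        static void InsertOneAtATime(string connString,
                                     IEnumerable<(int Val1, int Val2)> dataElements)
        {
            using (var conn = new NpgsqlConnection(connString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var cmd = new NpgsqlCommand(
                    "insert into user_point_features (col1,col2) values (@v1,@v2)",
                    conn, tx))
                {
                    cmd.Parameters.Add("v1", NpgsqlDbType.Integer);
                    cmd.Parameters.Add("v2", NpgsqlDbType.Integer);
                    cmd.Prepare();  // prepared once, executed once per row

                    foreach (var obj in dataElements)
                    {
                        cmd.Parameters["v1"].Value = obj.Val1;
                        cmd.Parameters["v2"].Value = obj.Val2;
                        cmd.ExecuteNonQuery();  // one round trip per insert
                    }
                    tx.Commit();
                }
            }
        }
    }

Even with the transaction and the prepared statement, this path is the one that takes roughly 3 seconds for me, while the concatenated-string version above takes roughly 0.7 seconds, which is what prompted the question.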