Re: performance of insert/delete/update
From: Josh Berkus
Subject: Re: performance of insert/delete/update
Date:
Msg-id: 200211231125.45663.josh@agliodbs.com
In reply to: Re: performance of insert/delete/update (Ron Johnson <ron.l.johnson@cox.net>)
Responses: Re: performance of insert/delete/update
List: pgsql-performance
Ron,

> As for bundling multiple statements into a transaction to increase
> performance, I think the questions are:
> - how much disk IO does one BEGIN TRANSACTION do?  If it *does*
>   do disk IO, then "bundling" *will* be more efficient, since
>   less disk IO will be performed.
> - are, for example, 500 COMMITs of small amounts of data more or
>   less efficient than 1 COMMIT of a large chunk of data?  On the
>   proprietary database that I use at work, efficiency goes up,
>   then levels off at ~100 inserts per transaction.

That's because some commercial databases (MS SQL, Sybase) use an
"unwinding transaction log" method of updating.  That is, during a
transaction, changes are written only to the transaction log, and those
changes are "played" to the database only on a COMMIT.  This approach
is more efficient for large transactions, but has the unfortunate side
effect of *requiring* read and write row locks for the duration of the
transaction.

In Postgres, with MVCC, changes are written to the database immediately
with a new transaction ID, and the new rows are "activated" on COMMIT.
So the changes are written to the database as the statements are
executed, regardless of how they are bundled.  This is less efficient
for large transactions than the "unwinding log" method, but has the
advantage of eliminating read locks entirely, along with most deadlock
situations.

Under MVCC, then, I am not convinced that bundling a bunch of writes
into one transaction is faster until I see it demonstrated.  I
certainly see no performance gain on my system.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco
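P.S.  If anyone wants to measure this for themselves, a rough sketch
along these lines would do it.  This is only a sketch, assuming Python
with psycopg2; the DSN, the row count, and the "bench_test" table name
are placeholders, not anything from a real setup:

    # Rough benchmark: N single-statement commits vs. one bundled commit.
    import time
    import psycopg2

    DSN = "dbname=test"   # placeholder connection string
    N = 500

    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS bench_test (id int, note text)")
    conn.commit()

    # Case 1: one COMMIT per INSERT -- N tiny transactions.
    start = time.time()
    for i in range(N):
        cur.execute("INSERT INTO bench_test VALUES (%s, 'x')", (i,))
        conn.commit()
    print("%d single-row commits: %.3f sec" % (N, time.time() - start))

    # Case 2: the same INSERTs bundled into a single transaction.
    start = time.time()
    for i in range(N):
        cur.execute("INSERT INTO bench_test VALUES (%s, 'x')", (i,))
    conn.commit()
    print("one %d-row commit:     %.3f sec" % (N, time.time() - start))

    cur.close()
    conn.close()

Run it a few times and vary N; the interesting question is whether the
two timings actually diverge on your hardware, not what they are in
absolute terms.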
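The MVCC behavior itself is easy to see from two sessions: a row
written inside an open transaction is already in the table, but it is
not visible to anyone else until the COMMIT "activates" it.  Another
sketch, under the same assumptions as above (the "mvcc_demo" table is
likewise made up):

    import psycopg2

    DSN = "dbname=test"   # placeholder, as above

    writer = psycopg2.connect(DSN)
    reader = psycopg2.connect(DSN)
    reader.autocommit = True   # each reader query gets a fresh snapshot

    wcur = writer.cursor()
    rcur = reader.cursor()

    wcur.execute("DROP TABLE IF EXISTS mvcc_demo")
    wcur.execute("CREATE TABLE mvcc_demo (id int)")
    writer.commit()

    # Written to the table immediately, under a new transaction ID ...
    wcur.execute("INSERT INTO mvcc_demo VALUES (1)")

    rcur.execute("SELECT count(*) FROM mvcc_demo")
    print("before COMMIT:", rcur.fetchone()[0])   # 0 -- not yet visible

    # ... and "activated" for other sessions only here.
    writer.commit()

    rcur.execute("SELECT count(*) FROM mvcc_demo")
    print("after COMMIT: ", rcur.fetchone()[0])   # 1

    writer.close()
    reader.close()

Note that the reader never blocks while the writer's transaction is
open; that's the "no read locks" point above.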