Re: Savepoints in transactions for speed?
From | Steve Atkins
---|---
Subject | Re: Savepoints in transactions for speed?
Date |
Msg-id | 332684FA-4859-4B13-9EC7-94B75EE7BB99@blighty.com
In reply to | Savepoints in transactions for speed? (Mike Blackwell <mike.blackwell@rrd.com>)
Responses | Re: Savepoints in transactions for speed?
List | pgsql-performance
On Nov 27, 2012, at 2:04 PM, Mike Blackwell <mike.blackwell@rrd.com> wrote:

> I need to delete about 1.5 million records from a table and reload it in one transaction. The usual advice when loading with inserts seems to be group them into transactions of around 1k records. Committing at that point would leave the table in an inconsistent state.

I'd probably just do the whole thing in one transaction. Do you have specific reasons you want to avoid a long transaction, or just relying on rules of thumb? PostgreSQL isn't going to run out of resources doing a big transaction, in the way some other databases will.

Long-running transactions will interfere with vacuuming, but inserting a couple of million rows shouldn't take that long.

> Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?

Savepoints are going to increase overhead and have no effect on the length of the transaction. If you want to catch errors and not have to redo the entire transaction, they're great, but that's about it.

> The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.

COPY works nicely from perl:

  $dbh->do("COPY foo FROM STDIN");
  $dbh->pg_putcopydata("foo\tbar\tbaz\n");
  $dbh->pg_putcopyend();

The details are in DBD::Pg. I use this a lot for doing big-ish (tens of millions of rows) bulk inserts. It's not as fast as you can get, but it's probably as fast as you can get with perl.

Cheers,
  Steve
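[Editor's note: not part of the original post. A minimal sketch of how the advice above might fit together with DBD::Pg: delete and reload the table in a single transaction, streaming rows from another DBI source via COPY, with no savepoints. The table name foo, its columns, the source DSN, and the source query are placeholders, not anything from the thread.]

  use strict;
  use warnings;
  use DBI;

  my ($user, $pass) = ('user', 'secret');   # placeholders

  # Source database (any DBI driver) and the PostgreSQL target.
  # AutoCommit => 0 keeps the delete and the reload in one transaction.
  my $src = DBI->connect('dbi:SomeOtherDB:source', $user, $pass, { RaiseError => 1 });
  my $pg  = DBI->connect('dbi:Pg:dbname=mydb',     $user, $pass,
                         { RaiseError => 1, AutoCommit => 0 });

  my $sth = $src->prepare('SELECT a, b, c FROM source_table');
  $sth->execute;

  $pg->do('DELETE FROM foo');                # same transaction as the reload
  $pg->do('COPY foo (a, b, c) FROM STDIN');

  while (my $row = $sth->fetchrow_arrayref) {
      # COPY text format: tab-separated columns, newline-terminated rows.
      # Real code would also escape backslashes, tabs and newlines in the
      # data, and emit \N for NULL columns.
      $pg->pg_putcopydata(join("\t", @$row) . "\n");
  }
  $pg->pg_putcopyend();

  $pg->commit;                               # one transaction, no savepoints

If anything fails partway through, RaiseError aborts the script and the whole transaction rolls back, so the table is never left half-loaded.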