Re: Savepoints in transactions for speed?
| From | Claudio Freire |
| --- | --- |
| Subject | Re: Savepoints in transactions for speed? |
| Date | |
| Msg-id | CAGTBQpZ-NLHeZPnD9m2O-UZfMba14t1aU2K73c0tOvz_w232LQ@mail.gmail.com |
| In reply to | Re: Savepoints in transactions for speed? (Mike Blackwell <mike.blackwell@rrd.com>) |
| Responses | Re: Savepoints in transactions for speed?; Re: Savepoints in transactions for speed? |
| List | pgsql-performance |
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell <mike.blackwell@rrd.com> wrote:
>
> > Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.
>
> I thought I had read something at one point about keeping the transaction size on the order of a couple thousand because there were issues when it got larger. As that apparently is not an issue, I went ahead and tried the DELETE and COPY in a transaction. The load time is quite reasonable this way.

Updates are faster if batched, if your business logic allows it, because batching creates less bloat and more opportunities for HOT updates. I don't think that applies to inserts, though, and I haven't heard that it does. In any case, if your business logic doesn't allow it (and your case seems to suggest it doesn't), there's no point in worrying.
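For reference, a minimal sketch of the DELETE-plus-COPY-in-one-transaction approach discussed above; the table name and file path are hypothetical, not from the thread:

```sql
-- Replace the table's contents in a single transaction: concurrent readers
-- see either the old rows or the new rows, never an empty table.
BEGIN;
DELETE FROM staging_orders;                       -- clear out the old rows
COPY staging_orders FROM '/tmp/orders.csv' WITH (FORMAT csv, HEADER true);
COMMIT;
```

If the file lives on the client rather than the database server, psql's \copy runs the equivalent load without requiring server-side file access.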