Re: COPY enhancements
From: Tom Lane
Subject: Re: COPY enhancements
Date:
Msg-id: 26693.1255022770@sss.pgh.pa.us
In reply to: Re: COPY enhancements (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: COPY enhancements
List: pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> On Thu, Oct 8, 2009 at 12:21 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Another approach that was discussed earlier was to divvy the rows into
>> batches.  Say every thousand rows you sub-commit and start a new
>> subtransaction.  Up to that point you save aside the good rows somewhere
>> (maybe a tuplestore).  If you get a failure partway through a batch,
>> you start a new subtransaction and re-insert the batch's rows up to the
>> bad row.  This could be pretty awful in the worst case, but most of the
>> time it'd probably perform well.  You could imagine dynamically adapting
>> the batch size depending on how often errors occur ...

> Yeah, I think that's promising.  There is of course the possibility
> that a row which previously succeeded could fail the next time around,
> but most of the time that shouldn't happen, and it should be possible
> to code it so that it still behaves somewhat sanely if it does.

Actually, my thought was that failure to reinsert a previously good
tuple should cause us to abort the COPY altogether.  This is a
cheap-and-easy way of avoiding sorcerer's apprentice syndrome.  Suppose
the failures are coming from something like out of disk space,
transaction timeout, whatever ... a COPY that keeps on grinding no
matter what is *not* ideal.

			regards, tom lane
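A minimal, runnable sketch of the batch-and-retry behavior described above,
in plain Python rather than backend C.  FakeTable, copy_rows, and the
"None marks a bad row" convention are hypothetical stand-ins chosen for
illustration; a real implementation would use internal subtransactions and
a tuplestore for the saved-aside rows.  It only sketches the control flow:
insert a batch inside a subtransaction, on failure roll back and re-insert
the rows up to the bad one, skip the bad row, and abort the whole COPY if a
previously good row fails on re-insert.

class CopyAborted(Exception):
    """Raised when a previously good row fails on re-insert."""


class FakeTable:
    """Hypothetical target table with subtransaction-style commit/rollback."""

    def __init__(self):
        self.committed = []      # rows from sub-committed batches
        self.pending = []        # rows inserted in the open subtransaction

    def insert(self, row):
        if row is None:          # simulate a row the backend rejects
            raise ValueError("bad row")
        self.pending.append(row)

    def subcommit(self):
        self.committed += self.pending
        self.pending = []

    def subabort(self):
        self.pending = []


def copy_rows(rows, table, batch_size=1000):
    """Insert rows in batches, skipping bad rows; abort the whole COPY if a
    row that previously succeeded fails when it is re-inserted."""
    skipped = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        to_insert = list(range(len(batch)))   # batch indices still wanted
        known_good = set()                    # indices proven insertable once
        while True:
            try:
                for i in to_insert:
                    table.insert(batch[i])
                table.subcommit()             # whole batch went in
                break
            except Exception:
                failed_at = i                 # index of the row that failed
                table.subabort()              # roll back this attempt
                if failed_at in known_good:
                    # A row that worked before now fails: stop grinding.
                    raise CopyAborted(
                        "row %d failed on re-insert" % (start + failed_at))
                # Rows before the bad one are now known to be insertable.
                known_good.update(j for j in to_insert if j < failed_at)
                skipped.append(batch[failed_at])
                to_insert.remove(failed_at)   # retry the batch without it
    return skipped


if __name__ == "__main__":
    table = FakeTable()
    data = [1, 2, None, 4, 5, None, 7]        # None marks rows that fail
    bad = copy_rows(data, table, batch_size=3)
    print(table.committed)                    # [1, 2, 4, 5, 7]
    print(bad)                                # [None, None]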