Re: [HACKERS] GSOC'17 project introduction: Parallel COPY execution with errors handling
From:        Nicolas Barbier
Subject:     Re: [HACKERS] GSOC'17 project introduction: Parallel COPY execution with errors handling
Date:
Msg-id:      CAP-rdTY_=n6hbe2Shg9qu0PxSw2NcA0yiWBwEwDCkzbBMh7tEA@mail.gmail.com
In reply to: Re: [HACKERS] GSOC'17 project introduction: Parallel COPY execution with errors handling (Robert Haas <robertmhaas@gmail.com>)
Responses:   Re: [HACKERS] GSOC'17 project introduction: Parallel COPY execution with errors handling
List:        pgsql-hackers
2017-04-11 Robert Haas <robertmhaas@gmail.com>:

> There's a nasty trade-off here between XID consumption (and the
> aggressive vacuums it eventually causes) and preserving performance in
> the face of errors - e.g. if you make k = 100,000 you consume 100x
> fewer XIDs than if you make k = 1000, but you also have 100x the work
> to redo (on average) every time you hit an error.

You could make it dynamic: commit the subtransaction when no error has been
encountered after N lines (N starts out at 1), then double N and continue.
When an error is encountered, roll back the current subtransaction,
re-insert all the known-good rows that were rolled back in one new
subtransaction (and maybe put the erroneous row into a separate table or
whatever) and commit; then reset N to 1 and continue processing the rest of
the file. (A rough sketch of this loop follows the message.)

That would work reasonably well whenever the ratio of erroneous rows is not
extremely high, whether the erroneous rows are all clumped together, spread
entirely randomly over the file, or a combination of both.

> If the data quality is poor (say, 50% of lines have errors) it's
> almost impossible to avoid runaway XID consumption.

Yup, that seems difficult to work around with anything similar to what has
been proposed. So the docs might need to suggest not to insert a 300 GB file
with 50% erroneous lines :-).

Greetings,

Nicolas
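[Editor's sketch] The following is a minimal Python illustration of the
doubling/reset strategy described in the message above, not PostgreSQL code.
The helpers begin_subtransaction(), commit_subtransaction(),
rollback_subtransaction(), insert_row(), and the InsertError exception are
hypothetical placeholders for whatever mechanism a real COPY implementation
would use (e.g. savepoints); only the batch-sizing logic is the point.

    import itertools

    class InsertError(Exception):
        """Raised by insert_row() when a row cannot be inserted (placeholder)."""

    def copy_with_error_handling(conn, rows, bad_rows):
        """Insert `rows` in subtransactions whose size doubles after each
        successful commit and resets to 1 after every error, keeping XID
        consumption low on clean data while limiting redone work on errors."""
        n = 1                       # current subtransaction size
        it = iter(rows)
        while True:
            batch = list(itertools.islice(it, n))
            if not batch:
                break               # input exhausted
            failed_at = None
            begin_subtransaction(conn)            # placeholder, e.g. SAVEPOINT
            for i, row in enumerate(batch):
                try:
                    insert_row(conn, row)         # placeholder per-row insert
                except InsertError:
                    failed_at = i
                    break
            if failed_at is None:
                commit_subtransaction(conn)       # placeholder, e.g. RELEASE SAVEPOINT
                n *= 2                            # grow the batch: fewer XIDs consumed
            else:
                rollback_subtransaction(conn)     # placeholder, e.g. ROLLBACK TO SAVEPOINT
                # Redo the rows known to be good in one fresh subtransaction,
                # and divert the erroneous row.
                begin_subtransaction(conn)
                for row in batch[:failed_at]:
                    insert_row(conn, row)
                commit_subtransaction(conn)
                bad_rows.append(batch[failed_at])
                # Re-queue the untried tail of the batch ahead of the rest.
                it = itertools.chain(batch[failed_at + 1:], it)
                n = 1                             # restart with a small batch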