Re: Bulkloading using COPY - ignore duplicates?
From | Lee Kindness
---|---
Subject | Re: Bulkloading using COPY - ignore duplicates?
Date | 
Msg-id | 15385.58057.662013.530035@elsick.csl.co.uk
In reply to | Re: Bulkloading using COPY - ignore duplicates? (Peter Eisentraut <peter_e@gmx.net>)
List | pgsql-hackers
Peter Eisentraut writes:
> I think allowing this feature would open up a world of new
> dangerous ideas, such as ignoring check constraints or foreign keys
> or magically massaging other tables so that the foreign keys are
> satisfied, or ignoring default values, or whatever. The next step
> would then be allowing the same optimizations in INSERT. I feel
> COPY should load the data and that's it. If you don't like the
> data you have then you have to fix it first.

I agree that PostgreSQL's checks during COPY are a bonus and I wouldn't dream of not having them. Many database systems provide a fast bulkload by ignoring these constraints and cross references - that's a tricky/horrid situation. However, the real question is whether such 'invalid data' should abort the whole transaction; that seems a bit drastic... I suppose I'm not really after an IGNORE DUPLICATES option, but rather a CONTINUE ON ERROR kind of thing.

Regards, Lee.
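(One way to get the duplicate-skipping behaviour discussed above without any new COPY option is to load into an unconstrained staging table and then filter on the way into the real table. A minimal sketch, assuming a hypothetical target table `items` with unique key `id` and a data file `/tmp/items.dat`:)

```sql
-- Hypothetical target table with a unique key.
CREATE TABLE items (id integer PRIMARY KEY, name text);

-- Staging table with no constraints: COPY into it can never hit
-- the unique index, so a duplicate cannot abort the load.
CREATE TABLE items_load (id integer, name text);

COPY items_load FROM '/tmp/items.dat';

-- Insert only rows whose key is not already in the target table;
-- DISTINCT ON collapses duplicates within the file itself (which
-- of the duplicate rows survives is arbitrary without an ORDER BY).
INSERT INTO items
SELECT DISTINCT ON (id) id, name
FROM items_load
WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.id = items_load.id);

DROP TABLE items_load;
```

This trades a second pass over the data for the ability to keep the transaction alive, which is roughly the CONTINUE ON ERROR behaviour for the duplicate-key case.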