Re: Bulkloading using COPY - ignore duplicates?
From: Peter Eisentraut
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: Pine.LNX.4.30.0112131724100.647-100000@peter.localdomain
In reply to: Re: Bulkloading using COPY - ignore duplicates? (Lee Kindness <lkindness@csl.co.uk>)
Responses: Re: Bulkloading using COPY - ignore duplicates?
List: pgsql-hackers
Lee Kindness writes:

> Yes, in an ideal world the input to COPY should be clean and
> consistent with defined indexes. However this is only really the case
> when COPY is used for database/table backup and restore. It misses the
> point that a major use of COPY is in speed optimisation on bulk
> inserts...

I think allowing this feature would open up a world of new dangerous ideas, such as ignoring check constraints or foreign keys, magically massaging other tables so that the foreign keys are satisfied, ignoring default values, or whatever. The next step would then be allowing the same optimizations in INSERT.

I feel COPY should load the data and that's it. If you don't like the data you have, then you have to fix it first.

--
Peter Eisentraut   peter_e@gmx.net
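One common way to do that "fix it first" step without any special COPY behaviour is to COPY into an unconstrained staging table, deduplicate in SQL, and insert the surviving rows into the real table. A minimal sketch, assuming a hypothetical target table "items" with an integer primary key and a made-up input file path (DISTINCT ON is PostgreSQL-specific):

    -- Hypothetical schema and file path, for illustration only.
    CREATE TABLE items (id integer PRIMARY KEY, payload text);

    -- Stage the raw file in a temporary table with no constraints,
    -- so COPY never has to skip or reject rows itself.
    CREATE TEMP TABLE items_staging (id integer, payload text);
    COPY items_staging FROM '/tmp/items.dat';

    -- Keep one row per key, and skip keys already present in the target.
    INSERT INTO items (id, payload)
    SELECT DISTINCT ON (id) id, payload
    FROM items_staging s
    WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.id = s.id)
    ORDER BY id;

This keeps the fast path (COPY) dumb and puts the cleanup in ordinary SQL, which is the division of labour argued for above.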