Re: Bulkloading using COPY - ignore duplicates?

From: Jim Buttafuoco
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: 200112161412.fBGECER20364@dual.buttafuoco.net
In reply to: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
Responses: Re: Bulkloading using COPY - ignore duplicates?  (Peter Eisentraut <peter_e@gmx.net>)
List: pgsql-hackers
I agree with Lee. I also like Oracle's option of a discard file, so
you can look at what was rejected, fix your problem, and if necessary
reload just the rejects.
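
The discard-file idea can be sketched outside the database, assuming a
client-side preprocessing step (this is a hypothetical helper, not part of
any PostgreSQL or Oracle tool): rows whose key was already seen are set
aside for inspection and a later reload, while unique rows go on to COPY.

```python
def split_rejects(rows, key_index=0):
    """Partition rows into (loadable, rejects) on duplicate keys.

    rows      - iterable of tuples, e.g. parsed from a COPY input file
    key_index - column position of the would-be unique key
    """
    seen = set()
    loadable, rejects = [], []
    for row in rows:
        key = row[key_index]
        if key in seen:
            rejects.append(row)   # would be written to the discard file
        else:
            seen.add(key)
            loadable.append(row)  # safe to feed to COPY
    return loadable, rejects
```

The rejects list plays the role of Oracle's discard file: fix those rows
and run them through the same loader again.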

Jim


> Peter Eisentraut writes:
>  > I think allowing this feature would open up a world of new
>  > dangerous ideas, such as ignoring check contraints or foreign keys
>  > or magically massaging other tables so that the foreign keys are
>  > satisfied, or ignoring default values, or whatever.  The next step
>  > would then be allowing the same optimizations in INSERT.  I feel
>  > COPY should load the data and that's it.  If you don't like the
>  > data you have then you have to fix it first.
> 
> I agree that PostgreSQL's checks during COPY are a bonus and I
> wouldn't dream of not having them. Many database systems provide a
> fast bulkload by ignoring these constraints and cross-references -
> that's a tricky/horrid situation.
> 
> However, I suppose the question is whether such 'invalid data' should
> abort the transaction; that seems a bit drastic...
> 
> I suppose I'm not really after an IGNORE DUPLICATES option, but rather
> a CONTINUE ON ERROR kind of thing.
> 
> Regards, Lee.
> 
> 



