Re: Avoiding duplicates (or at least marking them as such) in a "cumulative" transaction table.
From | Allan Kamau
Subject | Re: Avoiding duplicates (or at least marking them as such) in a "cumulative" transaction table.
Date |
Msg-id | ab1ea6541003072231n25c49ab8m4ea1f1c83621662e@mail.gmail.com
In reply to | Re: Avoiding duplicates (or at least marking them as such) in a "cumulative" transaction table. (Scott Marlowe <scott.marlowe@gmail.com>)
Responses | Re: Avoiding duplicates (or at least marking them as such) in a "cumulative" transaction table.
List | pgsql-general
On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau <kamauallan@gmail.com> wrote:
>> Hi,
>> I am looking for an efficient and effective solution to eliminate
>> duplicates in a continuously updated "cumulative" transaction table
>> (no deletions are envisioned as all non-redundant records are
>> important). Below is my situation.
>
> Is there a reason you can't use a unique index and detect failed
> inserts and reject them?

I think it would have been possible to make use of a unique index as you suggested, and to silently trap the uniqueness violation. But in my case (as pointed out in my previous lengthy mail) I am inserting multiple records at once, which implicitly means a single transaction. In this scenario a uniqueness violation by even a single record will cause all the other records in the batch to be rejected as well. Is there perhaps a way to single out only the record(s) that violate the unique constraint, without having to perform individual record inserts? I am following the example found here:
http://www.postgresql.org/docs/8.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Allan.
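For reference, the per-record error-trapping pattern from the linked documentation can be sketched roughly as follows. This is only a sketch: the table names (`tx_staging`, `tx_cumulative`) and column names (`tx_id`, `payload`) are hypothetical stand-ins for the poster's schema, and it assumes the target table has a unique constraint on `tx_id`.

```sql
-- Sketch: merge a staged batch into the cumulative table, skipping
-- any row that would violate the unique constraint instead of
-- aborting the whole batch.
CREATE OR REPLACE FUNCTION merge_batch() RETURNS void AS $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN SELECT * FROM tx_staging LOOP
        BEGIN
            INSERT INTO tx_cumulative (tx_id, payload)
            VALUES (rec.tx_id, rec.payload);
        EXCEPTION WHEN unique_violation THEN
            -- Duplicate detected: ignore (or mark) this record and
            -- continue with the rest of the batch.
            NULL;
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Note that the inner `BEGIN ... EXCEPTION` block establishes a subtransaction per row, so this is still effectively row-by-row insertion. A set-based alternative (not discussed in the thread) would be to filter duplicates before inserting, e.g. `INSERT INTO tx_cumulative SELECT s.* FROM tx_staging s WHERE NOT EXISTS (SELECT 1 FROM tx_cumulative c WHERE c.tx_id = s.tx_id)`, which avoids triggering the constraint violation at all for duplicates already present in the target table.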