Re: [NOVICE] Normalizing Unnormalized Input
From: David G. Johnston
Subject: Re: [NOVICE] Normalizing Unnormalized Input
Date:
Msg-id: CAKFQuwbEMiORC8cAm3AmvQGdSYG9usBA541DsDC1zKN1JvV-Ww@mail.gmail.com
In reply to: [NOVICE] Normalizing Unnormalized Input (Stephen Froehlich <s.froehlich@cablelabs.com>)
Responses: Re: [NOVICE] Normalizing Unnormalized Input
List: pgsql-novice
On Tue, Jun 20, 2017 at 3:50 PM, Stephen Froehlich <s.froehlich@cablelabs.com> wrote:
> The part of the problem that I haven't solved conceptually yet is how to
> normalize the incoming data.

The specifics of the data matter, but if at all possible I do something like:

    BEGIN
    CREATE TEMP TABLE tt
    COPY tt FROM STDIN
    INSERT NEW RECORDS into t FROM tt - one statement (per target table)
    UPDATE EXISTING RECORDS in t USING tt - one statement (per target table)
    END

I don't get why (or how) you'd "rename the table into a temp table"...

It's nice that we've added upsert, but it seems more useful for streaming compared to batch. At scale you should try to avoid collisions in the first place.

Temporary table names only need to be unique within the session. The need for indexes on the temporary table is usually limited, since the goal is to move large subsets of it around all at once.

David J.
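[Editor's note: the following is a minimal SQL sketch of the staging pattern outlined above, not the poster's exact code. It assumes a hypothetical target table t(id, val); the table and column names are illustrative only.]

    BEGIN;

    -- Stage the raw batch in a session-local temporary table.
    -- ON COMMIT DROP ties its lifetime to this transaction; the name
    -- only needs to be unique within the session.
    CREATE TEMP TABLE tt (id integer, val text) ON COMMIT DROP;

    -- Bulk-load the incoming, unnormalized batch (from psql, terminate
    -- the data stream with a line containing only "\.").
    COPY tt FROM STDIN;

    -- Insert records that do not yet exist in the target table
    -- (one statement per target table).
    INSERT INTO t (id, val)
    SELECT tt.id, tt.val
    FROM tt
    LEFT JOIN t ON t.id = tt.id
    WHERE t.id IS NULL;

    -- Update records that already exist in the target table
    -- (one statement per target table).
    UPDATE t
    SET val = tt.val
    FROM tt
    WHERE t.id = tt.id;

    COMMIT;

As a design note, this keeps the two set-based statements separate instead of relying on INSERT ... ON CONFLICT; as mentioned above, upsert tends to be more useful for streaming single rows than for batch loads, where collisions are better avoided up front.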