Re: continuous copy/update one table to another
From | Terry
Subject | Re: continuous copy/update one table to another
Date |
Msg-id | 8ee061011002282121j4f4c1327i1f2b29c91e031b8d@mail.gmail.com
In reply to | Re: continuous copy/update one table to another (Terry <td3201@gmail.com>)
Responses | Re: continuous copy/update one table to another
List | pgsql-general
On Sun, Feb 28, 2010 at 10:23 PM, Terry <td3201@gmail.com> wrote:
> On Sun, Feb 28, 2010 at 7:12 PM, John R Pierce <pierce@hogranch.com> wrote:
>> Terry wrote:
>>>
>>> One more question.  This is a pretty decent sized table.  It is
>>> estimated to be 19,038,200 rows.  That said, should I see results
>>> immediately pouring into the destination table while this is running?
>>>
>>
>> SQL transactions are atomic.  You won't see anything in the 'new' table
>> until the INSERT finishes committing, then you'll see it all at once.
>>
>> You will see a fair amount of disk write activity while it's running.
>> 20M rows will take a while to run the first time, and probably a fair
>> amount of memory.
>
> This is working very well.  The initial load worked great.  Took a
> little while, but fine after that.  I am using this:
>
> INSERT INTO client_logs
> SELECT * FROM clients_event_log as t1
> where t1.ev_id > (select max(t.ev_id) from client_logs as t);
>
> However, I got lost in this little problem and overlooked another.  I
> need to convert the unix time in the ev_time column to a timestamp.  I
> have the idea with this little bit but am not sure how to integrate it
> nicely:
>
> select timestamptz 'epoch' + 1267417261 * interval '1 second'

I love overcomplicating things:

SELECT *, to_timestamp(ev_time)
FROM clients_event_log AS t1
WHERE t1.ev_id > (SELECT max(t.ev_id) FROM client_logs AS t);
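For the record, a minimal sketch of how the conversion could be folded into the incremental copy itself, assuming client_logs stores the converted value in a timestamptz column. The column name ev_timestamp and the explicit column lists are assumptions, not taken from the real schema; the remaining columns are elided:

-- Hypothetical sketch: incremental copy that converts the unix epoch
-- ev_time on the way in.  ev_timestamp is an assumed column name.
INSERT INTO client_logs (ev_id, ev_timestamp /* , remaining columns */)
SELECT t1.ev_id,
       to_timestamp(t1.ev_time)  -- same result as timestamptz 'epoch' + ev_time * interval '1 second'
       /* , remaining columns */
FROM clients_event_log AS t1
WHERE t1.ev_id > (SELECT coalesce(max(t.ev_id), 0) FROM client_logs AS t);

The coalesce(..., 0) is only a guard for the very first run: max() over an empty client_logs returns NULL, and the comparison would otherwise match no rows. It assumes ev_id values are positive.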