Re: optimising data load
From | John Taylor |
---|---|
Subject | Re: optimising data load |
Date | |
Msg-id | 02052310100201.31556@splash.hq.jtresponse.co.uk |
In response to | Re: optimising data load (Ron Johnson <ron.l.johnson@cox.net>) |
Responses | Re: optimising data load
Full Outer Joins |
List | pgsql-novice |
On Thursday 23 May 2002 08:22, you wrote:
> On Thu, 2002-05-23 at 02:01, John Taylor wrote:
> > On Thursday 23 May 2002 03:40, Ron Johnson wrote:
> > > On Wed, 2002-05-22 at 16:05, John Taylor wrote:
> > > >
> > > > So, how would I merge the data from the view with the data to copy in ?
> > >
> > > ??? Unless I am misunderstanding you:
> >
> > I think you are misunderstanding me.
> > I have 300,000 rows to insert.
> > They each require a different query to obtain the correct data to insert.
>
> Oh, ok. How about creating an intermediary, indexless table
> that the 300,000 records will be inserted into. Then, after
> the intermediary table is populated, drop the index(es) on
> orderlines and INSERT INTO orderlines SELECT * FROM intermediary;

That sounds like a good idea.
Would it be quicker to copy out from the intermediary, and then copy in to the real one?
I guess I'll have to experiment.

I have a feeling a lot of the problems may be due to the indexes.
I've been experimenting with an easier table:
copy in to the live table takes 5 minutes for 850,000 records;
copy in to the updates table takes 2 hours!
It is the same data with 3 extra columns, but there are an additional 2 indexes.
I think it must be the indexes making the difference. I'm just checking now.

Thanks for the ideas!

Regards
JohnT
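For reference, the staging-table approach Ron suggests might be sketched roughly as the SQL below. The table names `intermediary` and `orderlines` are from the thread; the index names, column names, and file path are invented for illustration:

```sql
-- Sketch only: index names, columns, and the data file path are hypothetical.

-- 1. Create a staging table with the same columns as orderlines, but no indexes.
CREATE TABLE intermediary AS SELECT * FROM orderlines LIMIT 0;

-- 2. Bulk-load into the indexless staging table: no per-row index maintenance.
COPY intermediary FROM '/tmp/orderlines.dat';

-- 3. Drop the indexes on the real table before the big insert ...
DROP INDEX orderlines_order_idx;      -- hypothetical index name
DROP INDEX orderlines_product_idx;    -- hypothetical index name

-- 4. ... move the rows across in a single statement ...
INSERT INTO orderlines SELECT * FROM intermediary;

-- 5. ... then rebuild each index once, instead of updating it row by row.
CREATE INDEX orderlines_order_idx ON orderlines (orderid);      -- hypothetical
CREATE INDEX orderlines_product_idx ON orderlines (product);    -- hypothetical
```

The rebuild step is why this can beat loading into an indexed table directly: a single CREATE INDEX sorts the data once, whereas a COPY into an indexed table updates every index for every row inserted, which matches the 5-minute vs 2-hour timings described above.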