Re: Optimizing huge inserts/copy's
From | Jie Liang
---|---
Subject | Re: Optimizing huge inserts/copy's
Date | |
Msg-id | 39AC65E6.851BBC64@ipinc.com
In response to | Optimizing huge inserts/copy's (Webb Sprague <wsprague100@yahoo.com>)
Responses | Re: Optimizing huge inserts/copy's
List | pgsql-sql
Hi, there,

1. Use COPY ... FROM '.....';
2. Write a PL/pgSQL function and pass multiple records as an array.

However, if your table has a foreign key constraint, the load cannot be sped up that way. I have the same question as you: my table involves 9-13 million rows, and I don't know how I can add a foreign key to them either.

Webb Sprague wrote:
> Hi all,
>
> Does anybody have any thoughts on optimizing a huge
> insert, involving something like 3 million records all
> at once? Should I drop my indices before doing the
> copy, and then create them after? I keep a
> tab-delimited file as a buffer, copy it, then do it
> again about 400 times. Each separate buffer is a few
> thousand records.
>
> We do this at night, so it's not the end of the world
> if it takes 8 hours, but I would be very grateful for
> some good ideas...
>
> Thanks
> W
>
> __________________________________________________
> Do You Yahoo!?
> Yahoo! Mail - Free email you can access from anywhere!
> http://mail.yahoo.com/

--
Jie LIANG
Internet Products Inc.
10350 Science Center Drive
Suite 100, San Diego, CA 92121
Office: (858) 320-4873
jliang@ipinc.com
www.ipinc.com
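For reference, the bulk-load pattern being discussed (drop the indexes and the foreign key constraint before COPY, then restore them afterwards) might look like the following sketch. All object names here (`big_table`, `big_table_idx`, `big_table_fk`, `parent_table`, the file path) are hypothetical, and `ALTER TABLE ... DROP CONSTRAINT` assumes a reasonably recent PostgreSQL release:

```sql
BEGIN;

-- Drop the index and the FK constraint so COPY does not pay
-- per-row index-maintenance and referential-check costs.
DROP INDEX big_table_idx;
ALTER TABLE big_table DROP CONSTRAINT big_table_fk;

-- Bulk-load the tab-delimited buffer file in one pass.
COPY big_table FROM '/path/to/buffer.tsv';

-- Rebuild the index and re-validate the FK in one scan each,
-- usually far cheaper than millions of row-level checks.
CREATE INDEX big_table_idx ON big_table (parent_id);
ALTER TABLE big_table ADD CONSTRAINT big_table_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);

COMMIT;
```

Re-adding the constraint at the end validates all existing rows in a single pass, so any rows that violate the foreign key will make the final ALTER TABLE fail and roll the transaction back.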
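Jie's second suggestion, a PL/pgSQL function that takes many records in one call, could be sketched as below. This is a hypothetical example using an array parameter and `unnest` (available in modern PostgreSQL), not the exact function Jie had in mind:

```sql
-- Hypothetical sketch: insert a whole batch of ids in one call
-- instead of issuing thousands of single-row INSERTs.
CREATE OR REPLACE FUNCTION insert_batch(ids integer[]) RETURNS void AS $$
BEGIN
    INSERT INTO big_table (id)
    SELECT unnest(ids);
END;
$$ LANGUAGE plpgsql;

-- Usage: one round trip for the whole batch.
SELECT insert_batch(ARRAY[1, 2, 3]);
```

The benefit is mostly in round trips and parse/plan overhead; for truly large volumes, COPY remains the faster path.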