Re: Best practice to load a huge table from ORACLE to PG
| From | Jonah H. Harris |
|---|---|
| Subject | Re: Best practice to load a huge table from ORACLE to PG |
| Date | |
| Msg-id | 36e682920804261814w3508b232n6cf935874b19bf31@mail.gmail.com |
| In reply to | Best practice to load a huge table from ORACLE to PG ("Adonias Malosso" <malosso@gmail.com>) |
| Responses | Re: Best practice to load a huge table from ORACLE to PG |
| List | pgsql-performance |
On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <malosso@gmail.com> wrote:
> I'd like to know what's the best practice to LOAD a 70 million row,
> 101 column table from ORACLE to PGSQL.

The fastest and easiest method would be to dump the data from Oracle into CSV/delimited format using something like ociuldr (http://www.anysql.net/en/ociuldr.html) and load it back into PG using pg_bulkload (which is a helluva lot faster than COPY). A sketch of this two-step pipeline appears after this message.

Of course, you could try other things as well... such as setting up generic connectivity to PG and inserting the data into a PG table over the database link.

Similarly, while I hate to see shameless self-plugs in the community, the *fastest* method you could use is dblink_ora_copy, contained in EnterpriseDB's PG+ Advanced Server; it uses an optimized OCI connection to COPY the data directly from Oracle into Postgres, which also saves you the intermediate step of dumping the data.

--
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah.harris@enterprisedb.com
Edison, NJ 08837 | http://www.enterprisedb.com/
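For illustration, here is a minimal sketch of the dump-and-load approach described above, assuming Python with the cx_Oracle and psycopg2 drivers instead of the ociuldr and pg_bulkload tools the post recommends; the connection strings, table name, and file path are placeholders, and plain COPY stands in for pg_bulkload.

```python
# Minimal sketch: dump an Oracle table to CSV, then bulk-load it into
# PostgreSQL. All credentials and names below are placeholders.
import csv
import cx_Oracle
import psycopg2

ORA_DSN = "user/password@//oracle-host:1521/ORCL"    # placeholder
PG_DSN = "dbname=target user=postgres host=pg-host"  # placeholder
TABLE = "big_table"                                  # placeholder
DUMP_FILE = "big_table.csv"

# Step 1: dump the Oracle table to a delimited file (ociuldr does this
# much faster; this only illustrates the same idea).
ora = cx_Oracle.connect(ORA_DSN)
cur = ora.cursor()
cur.arraysize = 10000  # fetch in large batches to cut round trips
cur.execute(f"SELECT * FROM {TABLE}")
with open(DUMP_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    while True:
        rows = cur.fetchmany()  # fetches cur.arraysize rows at a time
        if not rows:
            break
        writer.writerows(rows)
ora.close()

# Step 2: bulk-load the file into Postgres. COPY is the built-in path;
# pg_bulkload bypasses more server overhead, but COPY is portable.
pg = psycopg2.connect(PG_DSN)
with pg, pg.cursor() as pcur, open(DUMP_FILE) as f:
    pcur.copy_expert(f"COPY {TABLE} FROM STDIN WITH CSV", f)
pg.close()
```

For a 70-million-row, 101-column table, the dedicated tools the post names should still win on speed; the sketch only shows the shape of the dump-then-load pipeline.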