Re: Inserting a large number of records
From | Oliver Jowett
Subject | Re: Inserting a large number of records
Date |
Msg-id | 42D774DF.6010508@opencloud.com
In response to | Re: Inserting a large number of records (Steve Wampler <swampler@noao.edu>)
List | pgsql-jdbc
Steve Wampler wrote:
> Oliver Jowett wrote:
>> Greg Alton wrote:
>>> What is the most efficient way to insert a large number of records into
>>> a table?
>>
>> I use a PreparedStatement INSERT and addBatch() / executeBatch() with
>> autocommit off and no constraints or indexes present.
>
> Does anyone have an idea as to how the performance of this would compare
> to using a COPY? I've used the COPY patches with jdbc and 7.4.x with
> impressive results, but if the above is 'nearly' as good then I don't have
> to put off upgrading to 8.x while waiting on jdbc to officially include
> support for COPY. (I can't test the above right now. Maybe soon, though.)

I have one dataset that is about 20 million rows and takes about 40 minutes
to import via batched INSERTs, including translation from the original format
(I'd guess perhaps 10-15% overhead). The same dataset dumped by pg_dump in
COPY format takes about 15 minutes to restore (using psql, not JDBC, though).

-O
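[For readers following along, a minimal sketch of the batched-INSERT approach described above: one PreparedStatement, addBatch()/executeBatch(), autocommit off. The table name, column names, batch size, and connection details below are illustrative assumptions, not taken from the thread.]

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false);  // commit once at the end, not per row

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO items (id, name) VALUES (?, ?)")) {
                int count = 0;
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();

                    // Flush periodically to keep client-side memory bounded;
                    // 10,000 is an arbitrary example batch size.
                    if (++count % 10_000 == 0) {
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();  // flush any remaining rows
            }

            conn.commit();
        }
    }
}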