Re: Populating large tables with occasional bad values
From:        Oliver Jowett
Subject:     Re: Populating large tables with occasional bad values
Date:
Msg-id:      48507AA6.6060902@opencloud.com
In reply to: Re: Populating large tables with occasional bad values (Craig Ringer <craig@postnewspapers.com.au>)
List:        pgsql-jdbc
Craig Ringer wrote:
> John T. Dow wrote:
>
>> If I did this regularly, three options seem easiest.
>>
>> 1 - Load locally to get clean data and then COPY. This requires the
>> server to have local access to the file to be copied, and if the
>> server is hosted by an ISP, it depends on them whether you can do
>> this easily.
>
> You can COPY over a PostgreSQL network connection. See the \copy support
> in psql for one example of how it works.
>
> I've never had cause to try to use it with JDBC so I don't know how/if
> it works in that context.

It's not supported in the standard JDBC driver, unfortunately.

> However, batched inserts can be quite fast enough. If you do one INSERT
> INTO per 1000 rows you'll already be seeing a massive performance boost:
>
> INSERT INTO mytable
> VALUES
>     (blah, blah),
>     (blah, blah),
>     (blah, blah),
>     -- hundreds of rows later
>     (blah, blah);
>
> ... will be a *LOT* faster. If you have to do special processing or
> error handling you can do it once you have the data in a server-side
> staging table - and you can get it there quickly with multi-row inserts
> or (at least using psql) a \copy.

You can get the same effect via JDBC batched inserts (addBatch() /
executeBatch()) without having to actually build a multi-row INSERT
statement. That's probably the most portable approach if you're using
JDBC.

-O
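For illustration, a minimal sketch of the addBatch()/executeBatch() approach
described above. The table name, column names, connection URL and credentials
are all invented for the example; adjust them to your own schema and setup:

    // Sketch only: table "mytable(col_a, col_b)", URL and credentials
    // are hypothetical placeholders, not taken from the original thread.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchInsertExample {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://localhost:5432/mydb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                conn.setAutoCommit(false); // commit once, not per row

                String sql = "INSERT INTO mytable (col_a, col_b) VALUES (?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < 100000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();

                        if (i % 1000 == 0) {   // send a batch every 1000 rows
                            ps.executeBatch();
                        }
                    }
                    ps.executeBatch();         // flush any remaining rows
                }
                conn.commit();
            }
        }
    }

If one of the rows fails (the "occasional bad values" case from the subject),
executeBatch() throws a BatchUpdateException; its getUpdateCounts() shows how
far the batch got, and with autocommit off the simplest recovery is usually to
roll back and retry or clean up the offending chunk.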