Re: Inserting large BLOBs via JDBC - OutOfMemoryError
From | Barry Lind |
---|---|
Subject | Re: Inserting large BLOBs via JDBC - OutOfMemoryError |
Date | |
Msg-id | 3D5D22CE.5070005@xythos.com |
In reply to | Re: Inserting large BLOBs via JDBC - OutOfMemoryError (hhaag@gmx.de) |
List | pgsql-jdbc |
Make sure you have read and understand the limitations in postgres for this approach. The java.sql.Blob support in the postgres jdbc driver uses the postgres LargeObject API. There is a section in the postgres jdbc documentation that explains the pros and cons of the LargeObject API as compared to using the bytea datatype. If you can live with the limitations, then this is a fine solution.

thanks,
--Barry

hhaag@gmx.de wrote:
> There might be a workaround:
>
> From JDBC 2.0 on, the interface java.sql.Blob allows BLOBs to be manipulated.
> AbstractJdbc2Statement.setBlob() (code posted below) seems to create the BLOB
> upfront, storing it in the database. The actual INSERT command will then only
> contain the OID, avoiding all memory problems.
>
> I'll let you know how it goes.
>
>     public void setBlob(int i, Blob x) throws SQLException
>     {
>         InputStream l_inStream = x.getBinaryStream();
>         int l_length = (int) x.length();
>         LargeObjectManager lom = connection.getLargeObjectAPI();
>         int oid = lom.create();
>         LargeObject lob = lom.open(oid);
>         OutputStream los = lob.getOutputStream();
>         try
>         {
>             // Could be buffered, but the OutputStream returned by LargeObject
>             // is buffered internally anyhow, so there would be no performance
>             // boost gained; if anything it would be worse.
>             int c = l_inStream.read();
>             int p = 0;
>             while (c > -1 && p < l_length)
>             {
>                 los.write(c);
>                 c = l_inStream.read();
>                 p++;
>             }
>             los.close();
>         }
>         catch (IOException se)
>         {
>             throw new PSQLException("postgresql.unusual", se);
>         }
>         // lob is closed by the stream, so don't call lob.close()
>         setInt(i, oid);
>     }
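For reference, here is a minimal client-side sketch of the same workaround: write the large object first, then insert only its OID into the row. It is an assumption-laden example, not driver code: the table images(name text, data oid) is hypothetical, the connection URL and credentials are placeholders, and the cast to org.postgresql.PGConnection may differ between driver versions. Large object calls must run inside a transaction, so autocommit is turned off.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LargeObjectInsert
    {
        public static void main(String[] args) throws Exception
        {
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            // Large object operations must run inside a transaction.
            conn.setAutoCommit(false);
            try
            {
                LargeObjectManager lom =
                    ((PGConnection) conn).getLargeObjectAPI();

                // Create the large object and stream the file into it
                // in chunks, so the whole BLOB is never held in memory.
                int oid = lom.create();
                LargeObject lob = lom.open(oid);
                OutputStream los = lob.getOutputStream();
                InputStream in = new FileInputStream(args[0]);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0)
                {
                    los.write(buf, 0, n);
                }
                in.close();
                los.close();  // also closes the underlying large object

                // The row itself stores only the OID.
                PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO images (name, data) VALUES (?, ?)");
                ps.setString(1, args[0]);
                ps.setInt(2, oid);
                ps.executeUpdate();
                ps.close();

                conn.commit();
            }
            finally
            {
                conn.close();
            }
        }
    }

Note one of the limitations Barry refers to: unlike a bytea column, deleting the row does not delete the large object itself; it has to be unlinked separately via the LargeObject API.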