In the 7.2.2 codebase, PreparedStatement.setBlob() loops reading a single
byte at a time from the input stream (the blob) and writing it to the output
stream (PG's LO routines).
This seems highly inefficient since most large objects are, well, large...
So if I want to insert a 1MB image, this will loop over a million times. Is
there a reason it's not read in chunks (even a 4096-byte array would reduce
such a loop down to roughly 256 iterations)?
This is much worse than the 7.1 code, which simply took my byte array and
wrote it all to the LargeObject stream in one call.
+++
public void setBlob(int i, Blob x) throws SQLException
{
    InputStream l_inStream = x.getBinaryStream();
    int l_length = (int) x.length();
    LargeObjectManager lom = connection.getLargeObjectAPI();
    int oid = lom.create();
    LargeObject lob = lom.open(oid);
    OutputStream los = lob.getOutputStream();
    try
    {
        // could be buffered, but then the OutputStream returned by LargeObject
        // is buffered internally anyhow, so there would be no performance
        // boost gained, if anything it would be worse!
        int c = l_inStream.read();
        int p = 0;
        while (c > -1 && p < l_length)
        {
            los.write(c);
            c = l_inStream.read();
            p++;
        }
        los.close();
    }
    catch (IOException se)
    {
        throw new PSQLException("postgresql.unusual", se);
    }
    // lob is closed by the stream so don't call lob.close()
    setInt(i, oid);
}
+++
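For what it's worth, the chunked approach suggested above could look roughly
like the sketch below. This is just an illustration using plain java.io
streams (the class and method names are mine, not the driver's); the same
loop shape would apply between getBinaryStream() and the LargeObject output
stream:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy
{
    // Copy the input stream to the output stream in fixed-size chunks
    // instead of one byte per call; returns the number of bytes copied.
    public static long copy(InputStream in, OutputStream out) throws IOException
    {
        byte[] buf = new byte[4096]; // the buffer size suggested above
        long total = 0;
        int n;
        while ((n = in.read(buf)) > -1)
        {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException
    {
        byte[] data = new byte[1024 * 1024]; // stand-in for a 1MB image
        for (int i = 0; i < data.length; i++)
            data[i] = (byte) i;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println(copied); // 1048576, in ~256 read() calls
    }
}
```

With a 1MB input this makes about 256 calls to read() rather than a million,
regardless of any buffering the destination stream does internally.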
Since getBinaryStream() returns an InputStream, should this routine close
that InputStream once it's done, or does the Blob itself have to somehow
know that a stream it created can be closed and discarded (and if so, how)?
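If the routine is meant to take ownership of the stream, the usual java.io
convention is for the consumer to close it in a finally block so it is
released even when the copy fails. A hypothetical sketch (again my own
names, not the driver's API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamConsumer
{
    // Hypothetical consumer that takes ownership of the InputStream it is
    // handed and guarantees close() is called, even if the copy throws.
    public static void consume(InputStream in, OutputStream out) throws IOException
    {
        try
        {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > -1)
                out.write(buf, 0, n);
        }
        finally
        {
            in.close(); // release the stream regardless of errors
        }
    }

    public static void main(String[] args) throws IOException
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        consume(new ByteArrayInputStream(new byte[]{1, 2, 3}), out);
        System.out.println(out.size()); // 3
    }
}
```

Whether setBlob() *should* do this, or whether the Blob implementation is
expected to track and discard the streams it hands out, is exactly the
contract question raised above.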