
Blob getBinaryStream issue.

From:
"Pete Lewin-Harris"
Date:
I'm getting an odd result when I try to get the length of a Blob after I
have closed its binary stream. In the snippet below, the final line throws
a PSQLException with the message 'FastPath call returned ERROR: invalid
large-object descriptor: 0'. If the final line is removed and
getBinaryStream is called again, the second stream fails on the first read
with a NullPointerException.

Statement stat = con.createStatement();
ResultSet rs = stat.executeQuery("SELECT blobdata FROM mytable WHERE id = 1");

if (rs.next())
{
    Blob blob = rs.getBlob(1);
    InputStream is = blob.getBinaryStream();
    is.close();

    long len = blob.length();  // throws: invalid large-object descriptor: 0
}
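Until the driver behaviour is resolved, one workaround sketch is to read the length before opening the stream and buffer the stream's contents, so the Blob is only touched once. The example below uses javax.sql.rowset.serial.SerialBlob as an in-memory stand-in for a database-backed Blob (it is not the PostgreSQL driver's implementation, just an illustration of the pattern), and the "hello blob" payload is invented:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import javax.sql.rowset.serial.SerialBlob;

public class BlobWorkaround {
    public static void main(String[] args) throws Exception {
        // In-memory stand-in for the Blob returned by rs.getBlob(1).
        SerialBlob blob = new SerialBlob("hello blob".getBytes("US-ASCII"));

        // Capture the length BEFORE opening the stream, then buffer the
        // contents so the stream only needs to be opened and closed once.
        long len = blob.length();
        byte[] data;
        InputStream is = blob.getBinaryStream();
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = is.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            data = buf.toByteArray();
        } finally {
            is.close();
        }

        System.out.println(len);          // 10
        System.out.println(data.length);  // 10
    }
}
```

Once the bytes are buffered, any number of streams can be opened over the byte array without going back to the large-object descriptor.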

Two questions, really.

Firstly, is it a bug that multiple calls to getBinaryStream do not return
independent InputStream objects, or is it just the way it works, and I need
to re-fetch the Blob if I want to read the stream again?

Secondly, if this is the correct behaviour, then shouldn't the exceptions
thrown be slightly friendlier? Perhaps calling getBinaryStream after the
stream has been closed should return null?


cheers, Pete




Re: Blob getBinaryStream issue.

From:
Kris Jurka
Date:

On Wed, 23 Jun 2004, Pete Lewin-Harris wrote:

> I'm getting an odd result when I try and get the length of a blob after I
> have closed it's binary stream. In the snippet below, the final line throws
> a PSQL Exception with the message 'FastPath call returned ERROR: invalid
> large-object descriptor: 0'. If the final line is removed and
> getBinaryStream called again, the second stream fails on first read with a
> null pointer exception.
>
> Firstly, is it a bug that multiple calls to getBinaryStream do not return
> separate InputStream objects or is it just the way it works and I need to
> re-get the blob if I want to read the stream again?

This looks like a bug to me.  The calls do actually return separate
InputStream objects, but each stream's close method closes the underlying
shared Blob.  Of course, because they are based on the same shared blob, it
would be difficult to use more than one stream at a time anyway: the actual
position information is stored in the shared object.  I'll investigate
further, but for now a potential simple fix would be to change
org.postgresql.largeobject.BlobInputStream#close() to not close the
underlying large object.
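The fix described above amounts to a stream wrapper whose close() does not propagate to the shared resource. The driver's actual BlobInputStream code isn't shown here, so this is an analogous sketch using only standard library types (the class and variable names are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.InputStream;

public class NonClosingDemo {
    /** A view over a shared stream whose close() deliberately does
     *  NOT close the wrapped stream, mirroring the proposed change
     *  to BlobInputStream#close(). */
    static class NonClosingInputStream extends FilterInputStream {
        NonClosingInputStream(InputStream in) { super(in); }
        @Override public void close() { /* leave the shared resource open */ }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the shared large object behind the Blob.
        InputStream shared = new ByteArrayInputStream(new byte[] {1, 2, 3});

        InputStream view = new NonClosingInputStream(shared);
        int first = view.read();
        view.close(); // no-op: does not invalidate the shared resource

        int second = shared.read(); // still readable after view.close()
        System.out.println(first + " " + second); // prints "1 2"
    }
}
```

With this pattern, closing one stream view no longer invalidates the descriptor, so a later blob.length() or a second getBinaryStream call would still have a live large object to work with; the large object itself would then be freed when the Blob (or its owning ResultSet/connection) is released.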

Kris Jurka