Re: TR: [BUGS] BUG #8842: lo_open/fastpath transaction inconsistency
From: Kevin Grittner
Subject: Re: TR: [BUGS] BUG #8842: lo_open/fastpath transaction inconsistency
Date:
Msg-id: 1399298097.47090.YahooMailNeo@web122305.mail.ne1.yahoo.com
In reply to: TR: [BUGS] BUG #8842: lo_open/fastpath transaction inconsistency (Ludovic POLLET <ludovic.pollet@kurmi-software.com>)
Replies: Re: TR: [BUGS] BUG #8842: lo_open/fastpath transaction inconsistency
List: pgsql-jdbc
Ludovic POLLET <ludovic.pollet@kurmi-software.com> wrote:

> We are facing a bug where our application cannot read a LOB
> because of concurrent removal, even though we are using the
> serializable isolation level and a correct transaction scope.
> We're using the latest JDBC driver snapshot (and Hibernate, but
> that does not really matter).
>
> The problematic use case is basically a thread that reads the LOB
> while another thread replaces it with a new one and releases the
> old one. In this case, the reader will get a "large object xxx
> does not exist" error.
>
> From what I understand, this is because the large object is
> always opened in read/write mode, which therefore behaves like
> read committed (as written in the docs).
>
> This behaviour is at least very surprising, especially when
> working at the SERIALIZABLE isolation level.
>
> The attached patch implements the following changes:
> - Opening the LOB in read mode until a write function is called,
>   in which case the LOB is reopened in read/write mode. This
>   completely solves my problem, since our application does not
>   modify LOBs, but creates new ones instead.
> - Deferring the first lo_open until a Blob function is actually
>   called (Hibernate seems to create Blobs even if it never
>   reads them...).
>
> What do you think about these behaviors?

As one of the developers of the implementation of serializable
transactions used by PostgreSQL, I remember that we needed to
exclude large objects from the transaction serialization because
they basically did not follow transactional semantics in any other
regard.

The only workaround I can think to suggest is to only insert and
delete large objects -- never update them -- and rely on the
serialization of modifications to the OID columns which *reference*
the large objects. Always add the large object before setting a
reference to it, and always eliminate references to a large object
before deleting it. A crash at the wrong time could leave a large
object which is unreferenced, so you might want to look at the
vacuumlo executable:

http://www.postgresql.org/docs/current/static/vacuumlo.html

Unfortunately, I don't know how that approach interacts with
Hibernate or its use of the JDBC driver.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
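
[Editor's sketch] A minimal illustration of the insert-new / unlink-old
pattern described above, using the pgjdbc LargeObjectManager API and
assuming a hypothetical table documents(id bigint primary key,
content_oid oid); the method and column names are illustrative only and
are not taken from the bug report or the attached patch.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class ReplaceLobSketch {
    /**
     * "Replace" the large object referenced by documents.content_oid for
     * one row without ever updating a large object in place: create a new
     * object, repoint the referencing column, then unlink the old object.
     */
    public static void replaceContent(Connection conn, long docId, byte[] newBytes)
            throws Exception {
        conn.setAutoCommit(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();

        // 1. Read (and lock) the current reference so we know what to unlink.
        long oldOid;
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT content_oid FROM documents WHERE id = ? FOR UPDATE")) {
            ps.setLong(1, docId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalStateException("no such document: " + docId);
                }
                oldOid = rs.getLong(1);
            }
        }

        // 2. Create and fill a brand-new large object before anything
        //    references it.
        long newOid = lom.createLO(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(newOid, LargeObjectManager.WRITE);
        lo.write(newBytes);
        lo.close();

        // 3. Repoint the referencing column. The idea of the workaround is
        //    that conflicts are detected on this ordinary row update, which
        //    serializable isolation does track, rather than on the large
        //    object itself, which it does not.
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE documents SET content_oid = ? WHERE id = ?")) {
            ps.setLong(1, newOid);
            ps.setLong(2, docId);
            ps.executeUpdate();
        }

        // 4. Only once nothing references it any more, unlink the old object.
        lom.unlink(oldOid);

        conn.commit();
    }
}

Everything runs in one SERIALIZABLE transaction, so an abort rolls the
whole sequence back, including the newly created object. Unreferenced
objects should only be able to accumulate when creation and referencing
are split across transactions and the application dies in between, which
is the case vacuumlo is meant to clean up.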