Re: Temporary memory peak
From | Barry Lind
---|---
Subject | Re: Temporary memory peak
Date |
Msg-id | 3E77CBA6.50406@xythos.com
In reply to | Temporary memory peak (Marcel Ruff <mr@marcelruff.info>)
List | pgsql-jdbc
Marcel,

I don't know of any workaround for this problem. I hope that 7.4 will work better with large column values like this. There are three projects going on that should help:

1) Writing directly from the stream (assuming you use setBinaryStream()) to the server; currently the value is pulled entirely into memory before being written to the server.

2) Improvements to the protocol between client and server; currently the protocol for binary data uses an octal escaping mechanism which can result in up to a fourfold expansion of a binary value.

3) Internal improvements in the driver that will reduce the number of copies of the data that need to be kept in memory; currently at least three copies exist: the raw value received from the server, the String version of that raw value, and the actual byte[].

thanks,
--Barry

Marcel Ruff wrote:
> Hi,
>
> I'm using the Postgres JDBC driver to put a 2 MB blob into the database
> (PostgreSQL 7.2.2).
>
> I have tried an old release, the current stable (for JDK 1.2/1.3 and for
> JDK 1.4), and the current snapshot for JDK 1.4 (2003-02-09), all with the
> same result.
>
> When inserting a 2 MB blob (bytea) into Postgres, the JVM memory
> consumption reaches approximately 20 to 40 times the blob size (40 MB).
> Inserting a 5 MB blob exhausts my memory.
>
> As a comparison:
> The Oracle JDBC driver temporarily consumes 5 times the size (~10 MB),
> which is still a lot.
>
> Is there any workaround?
>
> thanks
>
> Marcel
> http://www.xmlBlaster.org
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
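For reference, a minimal sketch of the setBinaryStream() insert path that point 1) refers to; the connection URL, table, and column names here are hypothetical, not taken from the thread:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobInsert {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        // Placeholder connection details.
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        try {
            // Hypothetical table: CREATE TABLE images (name text, data bytea)
            File file = new File("blob.bin");
            InputStream in = new FileInputStream(file);
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO images (name, data) VALUES (?, ?)");
            ps.setString(1, file.getName());
            // With the 7.2-era driver this stream is still read fully into
            // memory and octal-escaped before being sent, so it does not yet
            // avoid the peak discussed above; making this call truly
            // streaming is what project 1) aims to deliver.
            ps.setBinaryStream(2, in, (int) file.length());
            ps.executeUpdate();
            ps.close();
            in.close();
        } finally {
            con.close();
        }
    }
}
```

Even written this way, the memory peak remains until projects 1) and 2) land, since the driver buffers the whole value and the escaping can expand it up to fourfold on the wire.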