LargeObject API and OIDs
From | Christian Niles |
---|---|
Subject | LargeObject API and OIDs |
Date | |
Msg-id | ED2AAA08-25E2-11D9-AD96-000A9590B78E@unit12.net |
Responses | Re: LargeObject API and OIDs |
List | pgsql-jdbc |
I'm using PostgreSQL as the backend for a versioning content store, and after reading the JDBC docs, I'm planning on using the LargeObject API to store the actual data. I noticed that large objects are referenced using unsigned 4-byte integers, which for practical purposes should be fine, assuming there's no chance of data corruption if the limit is exceeded. However, since a versioning system will accumulate far more entries than a normal storage system, I'm curious whether there's any chance of data corruption in the case that the DB runs out of OIDs. Ideally, the database would raise an exception and leave the existing data untouched.

From what I've read in the documentation, OIDs aren't guaranteed to be unique and may cycle. In that case, would the first large object created after the limit overwrite the first object? Also, is the number of large objects available limited by other database objects that use OIDs?

The majority of the content stored in the system will be small files, with the exception of some images, PDFs, and so forth. So, if there is a chance of data corruption, I may implement a scheme where only files above some threshold are stored using large objects, and all others are stored in a bytea column. The JDBC docs mention the performance problems with large bytea values, but are there any implementation factors that might affect the threshold I choose?

I apologize that this isn't specifically JDBC related, but since I'm using JDBC for all of this, I thought I'd ask here first.

best,
christian.
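For what it's worth, the threshold scheme described above could be sketched roughly like this. The class name, method names, and the 512 KiB cutoff are all illustrative assumptions, not anything from the docs; the JDBC documentation only warns that large bytea values perform poorly without giving a specific number, so the actual cutoff would need to be measured.

```java
// Sketch of routing content to a bytea column vs. the LargeObject API
// by size. All names and the threshold value are hypothetical; the
// real cutoff should come from benchmarking your own workload.
public class ContentStore {

    public enum Storage { BYTEA, LARGE_OBJECT }

    // Hypothetical cutoff: anything at or above 512 KiB becomes a large object.
    static final long THRESHOLD_BYTES = 512L * 1024;

    // Decide which storage mechanism a piece of content should use.
    public static Storage chooseStorage(long sizeInBytes) {
        return sizeInBytes >= THRESHOLD_BYTES ? Storage.LARGE_OBJECT : Storage.BYTEA;
    }

    public static void main(String[] args) {
        System.out.println(chooseStorage(4 * 1024));          // small file -> BYTEA
        System.out.println(chooseStorage(10L * 1024 * 1024)); // big PDF   -> LARGE_OBJECT
    }
}
```

The writer side would then branch on the result: either bind the bytes to a bytea column via `PreparedStatement.setBytes`, or create a large object and record its OID in an `oid` column.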