Re: Large objects.
From | Robert Haas |
---|---|
Subject | Re: Large objects. |
Date | |
Msg-id | AANLkTinr5s-jKyESwAbX5qW9-Oh6WWUdZZODFNeKw0Kc@mail.gmail.com |
In reply to | Re: Large objects. (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Large objects. |
List | pgsql-hackers |
On Mon, Sep 27, 2010 at 10:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> According to the documentation, the maximum size of a large object is
>> 2 GB, which may be the reason for this behavior.
>
> In principle, since pg_largeobject stores an integer pageno, we could
> support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
> incompatible change in on-disk format. This'd require converting a lot
> of the internal LO access logic to track positions as int64 not int32,
> but now that we require platforms to have working int64 that's no big
> drawback. The main practical problem is that the existing lo_seek and
> lo_tell APIs use int32 positions. I'm not sure if there's any cleaner
> way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
> and have the existing ones throw error if asked to deal with positions
> past 2^31.
>
> In the particular case here, I think that lo_write may actually be
> writing past the 2GB boundary, while the coding in lo_read is a bit
> different and stops at the 2GB "limit".

Ouch. Letting people write data to where they can't get it back from
seems double-plus ungood.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
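For concreteness, here is a minimal client-side sketch of the failure mode
under discussion, using libpq's large object calls. The connection settings,
the exact seek offset, and the server's behavior at the boundary are
illustrative assumptions that vary by version; the point is only that the
int32 lo_tell API cannot represent a position past 2^31-1 even if lo_write
has already stored bytes there. (As a sanity check on the 4TB figure above:
LOBLKSIZE defaults to BLCKSZ/4 = 2048 bytes, and 2048 * 2^31 = 2^42 bytes.)

/*
 * lo2gb.c - sketch of writing across the 2GB large object boundary.
 * Build (paths are assumptions): cc lo2gb.c -lpq -I$(pg_config --includedir)
 */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

int
main(void)
{
    /* Connect using environment defaults (PGHOST, PGDATABASE, etc.). */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Large object operations must run inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));

    Oid loid = lo_creat(conn, INV_READ | INV_WRITE);
    int fd = lo_open(conn, loid, INV_READ | INV_WRITE);

    /* Seek just below 2^31, then write a buffer that crosses the boundary. */
    char buf[8192];
    memset(buf, 'x', sizeof(buf));
    lo_lseek(conn, fd, 0x7FFFF000, SEEK_SET);   /* 2147479552 */
    int written = lo_write(conn, fd, buf, sizeof(buf));

    /*
     * The position is now past 2^31-1, which does not fit in the int
     * returned by lo_tell: expect an error or a garbage value here,
     * even though the write itself may have succeeded.
     */
    int pos = lo_tell(conn, fd);
    printf("wrote %d bytes, lo_tell reports %d\n", written, pos);

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

The 64-bit variants proposed in this thread (lo_lseek64, lo_tell64, and
a matching lo_truncate64) did eventually land, in PostgreSQL 9.3, with
the existing int32 functions erroring out past the 2GB mark.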