Re: [GENERAL] Avoiding io penalty when updating large objects
From | Mark Dilger
Subject | Re: [GENERAL] Avoiding io penalty when updating large objects
Date |
Msg-id | 42C23209.7020607@markdilger.com
In response to | Re: [GENERAL] Avoiding io penalty when updating large objects (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
Tom Lane wrote:
> Alvaro Herrera <alvherre@surnet.cl> writes:
>> On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
>>> If, for a given row, the value of c is, say, approximately 2^30 bytes
>>> large, then I would expect it to be divided up into 8K chunks in an
>>> external table, and I should be able to fetch individual chunks of that
>>> object (by offset) rather than having to detoast the whole thing.
>
>> I don't think you can do this with the TOAST mechanism. The problem is
>> that there's no API which allows you to operate on only certain chunks
>> of data.
>
> There is the ability to fetch chunks of a toasted value (if it was
> stored out-of-line but not compressed). There is no ability at the
> moment to update it by chunks. If Mark needs the latter then large
> objects are probably the best bet.
>
> I'm not sure what it'd take to support chunkwise update of toasted
> fields. Jan, any thoughts?
>
> regards, tom lane

OK. If there appears to be a sane path to implementing this, I may be able to contribute engineering effort to it. (I manage a group of engineers and could spare perhaps half a man-year toward this.) But I would like direction as to how you all think this should be done, or whether it is just a bad idea.

I can also go with the large object approach; I'll look into that.

Mark Dilger