Re: Scanning a large binary field
From: John R Pierce
Subject: Re: Scanning a large binary field
Date:
Msg-id: 49BD6DDD.4090502@hogranch.com
In reply to: Scanning a large binary field (Kynn Jones <kynnjo@gmail.com>)
Responses: Re: Scanning a large binary field
List: pgsql-general
Kynn Jones wrote:
> I have a C program that reads a large binary file, and uses the read
> information plus some user-supplied arguments to generate an in-memory
> data structure that is used during the remainder of the program's
> execution. I would like to adapt this code so that it gets the
> original binary data from a Pg database rather than a file.
>
> One very nice feature of the original scheme is that the reading of
> the original file was done piecemeal, so that the full content of the
> file (which is about 0.2GB) was never in memory all at once, which
> kept the program's memory footprint nice and small.
>
> Is there any way to replicate this small memory footprint if the
> program reads the binary data from a Pg DB instead of from a file?

is this binary data in any way record- or table-structured, such that it could be stored as multiple rows and perhaps fields? if not, why would you want to put a 200MB blob of amorphous data into a relational database?
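that said, on the piecemeal-reading part of the question: if the data were stored as a PostgreSQL large object, libpq's lo_open/lo_read interface would let the C program stream it in fixed-size chunks, much as it reads the file today. a minimal sketch, assuming large-object storage (the connection string, the OID, and process_chunk are placeholders, not anything from the original program):

/*
 * Sketch: stream a large object from PostgreSQL in fixed-size chunks
 * so the full ~200MB value is never held in memory at once.
 */
#include <stdio.h>
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* for INV_READ */

#define CHUNK_SIZE 8192

static void process_chunk(const char *buf, int len)
{
    /* build the in-memory data structure incrementally here */
    (void) buf;
    (void) len;
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* large-object operations must run inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    Oid lobj_oid = 12345;                        /* placeholder OID */
    int fd = lo_open(conn, lobj_oid, INV_READ);
    if (fd < 0) {
        fprintf(stderr, "lo_open failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    char buf[CHUNK_SIZE];
    int  nread;
    while ((nread = lo_read(conn, fd, buf, CHUNK_SIZE)) > 0)
        process_chunk(buf, nread);   /* only CHUNK_SIZE bytes resident at a time */

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

the alternative, a single bytea column, normally arrives in one piece through PQexec, though it can be sliced server-side with substring() if you really need chunked access that way.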