Re: Storing a kazillion small blobs in postgresql. Smart?
| From | Steve Atkins |
|---|---|
| Subject | Re: Storing a kazillion small blobs in postgresql. Smart? |
| Date | |
| Msg-id | 7B6529FA-FAC1-4E10-A467-A62050762201@blighty.com |
| In reply to | Storing a kazillion small blobs in postgresql. Smart? (Stefan Arentz <stefan.arentz@norad.org>) |
| List | pgsql-general |
On Feb 11, 2006, at 9:41 AM, Stefan Arentz wrote:

> Hi there,
>
> I'm looking at the design for an application where I have to store
> potentially many gigabytes worth of small blobs of varying sizes
> (8K - 64K) in a database. I've always been told that it is smarter
> to store references to files in the database, and I think that is
> true in a lot of cases. But here I need this data to be
> transactional and probably replicated, so I'm looking into storing
> it directly in the db.
>
> I know this is all rather vague, but I'm interested in more
> experienced pgsql users' opinions about this. Is it smart? Would it
> work? Or would something like this blow up the db?

It would work just fine. PostgreSQL is pretty good at handling larger
data elements like this, and will transparently store them behind the
scenes in a look-aside table, compressed and pointed to by the main
table (look for TOAST in the manual for the full details).

For smaller chunks of data I'd use text fields if they're text, and
bytea fields if they're non-text. You can use "large objects" too, but
I suspect the additional features they offer (the ability to read just
part of the object, say) would be far outweighed by the inconvenience
of using them compared to bytea for your app. (They also tend to be
less well supported by some client interfaces.)

Cheers,
  Steve
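[Editor's note: a minimal sketch, not part of the original mail, of the bytea approach Steve describes, using psycopg2. The table name, column names, and connection string are hypothetical; TOAST handles compression and out-of-line storage of the values automatically on the server side.]

```python
import psycopg2

# Hypothetical connection string; adjust for your environment.
conn = psycopg2.connect("dbname=blobstore")
cur = conn.cursor()

# One row per blob. Values in the bytea column larger than ~2KB are
# compressed and moved to the TOAST table transparently by PostgreSQL.
cur.execute("""
    CREATE TABLE blobs (
        id      serial PRIMARY KEY,
        payload bytea NOT NULL
    )
""")

data = b"\x00\x01\x02" * 10000  # e.g. a chunk in the 8K-64K range

# psycopg2.Binary wraps raw bytes for use as a bytea parameter.
cur.execute(
    "INSERT INTO blobs (payload) VALUES (%s) RETURNING id",
    (psycopg2.Binary(data),),
)
blob_id = cur.fetchone()[0]
conn.commit()  # the insert is fully transactional, like any other row

# Reading it back; psycopg2 returns a memoryview for bytea columns.
cur.execute("SELECT payload FROM blobs WHERE id = %s", (blob_id,))
stored = bytes(cur.fetchone()[0])
assert stored == data
```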