Re: Caching Websites
From | scott.marlowe |
---|---|
Subject | Re: Caching Websites |
Date | |
Msg-id | Pine.LNX.4.33.0305120941040.26708-100000@css120.ihs.com |
In reply to | Re: Caching Websites (Doug McNaught <doug@mcnaught.org>) |
List | pgsql-general |
On 12 May 2003, Doug McNaught wrote:

> Adam Kessel <adam@bostoncoop.net> writes:
>
> > Based on the documentation, I don't immediately see any disadvantage to
> > using these large objects--does anyone else see why I might not want to
> > store archived websites in large objects?
>
> It's going to be (probably) a little slower than the filesystem
> solution, and backups are a little more involved (you can't use
> pg_dumpall) but everything works--I have been using LOs with success
> for a couple years now.

If the files aren't too big (under a meg or so each) you can either use bytea encoding / bytea field types, or you can base64 encode, escape, and store the data in a text field. Since pgsql autocompresses text fields, the fact that base64 is a little bigger is no big deal. The advantage of storing them in bytea or text with base64 is that pg_dump backs up your whole database.
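To illustrate the size trade-off mentioned above, here is a minimal sketch (in Python, with a made-up 16 KiB payload) of preparing binary data for a text column via base64. The ~33% inflation it shows is the overhead that PostgreSQL's automatic compression of text fields typically absorbs:

```python
import base64

# Hypothetical small binary "file" (assumption: any payload under a meg or so)
raw = bytes(range(256)) * 64  # 16 KiB of binary data

# Encode to base64 ASCII text, safe to store in a TEXT column
encoded = base64.b64encode(raw).decode("ascii")

# base64 output is 4 characters for every 3 input bytes, so roughly 4/3 the size
print(len(raw))      # 16384
print(len(encoded))  # 21848 -- about 33% larger before compression
```

The round trip back is just `base64.b64decode(encoded)`, which is why this scheme keeps the data fully recoverable from a plain pg_dump of the text column.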