Re: Re[2]: Blob question -(((
From | Brett W. McCoy |
---|---|
Subject | Re: Re[2]: Blob question -((( |
Date | |
Msg-id | Pine.LNX.4.30.0101011413110.28136-100000@chapelperilous.net |
In reply to | Re[2]: Blob question -((( (Boris <koester@x-itec.de>) |
Responses | Turn off referential checks for bulk copy |
List | pgsql-novice |
On Mon, 1 Jan 2001, Boris wrote:

> BWM> 50-100k COLUMNs per row? Or are you talking about binary files of
> BWM> 50-100K? You definitely need to use the large object features of
> BWM> PostgreSQL.
>
> Yes I need approx 50-100k to store ascii data for later
> fulltext-search -((

Ah, now I see. Large objects may not be the solution if you are storing text, because it won't be searchable (unless you build an external search engine like mnoGoSearch, but that is really meant for web content).

However, all is not lost -- you have two options. You can break your text up into distinct fields, like title, author, abstract, text paragraph 1, text paragraph 2, and so on (this will entail a good bit of analysis and design of proper data structures on your part), and use the full-text search module in the contrib directory of the source distribution. Or you can go the bleeding-edge route and use the beta TOAST project, which allows row sizes greater than the current limit; the latter may not be a good choice for a production database. See http://postgresql.readysetnet.com/projects/devel-toast.html for more details on TOAST.

-- Brett
   http://www.chapelperilous.net/~bmccoy/

---------------------------------------------------------------------------
How come everyone's going so slow if it's called rush hour?
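Below is a minimal sketch of the field-splitting approach Brett describes, assuming PostgreSQL 7.0-era SQL. The `document` and `doc_paragraph` tables, their columns, and the closing LIKE query are illustrative assumptions rather than anything from the original exchange; in practice the contrib full-text module would replace the naive LIKE scan.

```sql
-- Break each ~50-100K document into searchable pieces instead of
-- storing one huge blob.  All names here are illustrative assumptions.
CREATE TABLE document (
    doc_id   serial PRIMARY KEY,
    title    varchar(200),
    author   varchar(100),
    abstract text
);

-- A paragraph-level child table keeps each row well under the ~8K
-- per-row limit of pre-TOAST PostgreSQL (rows must fit in one block).
CREATE TABLE doc_paragraph (
    doc_id  int REFERENCES document,
    para_no int,
    body    text,
    PRIMARY KEY (doc_id, para_no)
);

-- A crude keyword search to show the idea; slow, since it scans
-- every paragraph body rather than using an index.
SELECT DISTINCT d.doc_id, d.title
  FROM document d, doc_paragraph p
 WHERE d.doc_id = p.doc_id
   AND lower(p.body) LIKE '%postgres%';
```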