Re: bigger blob rows?
| From | Doug McNaught |
|---|---|
| Subject | Re: bigger blob rows? |
| Date | |
| Msg-id | 874q41xxf0.fsf@asmodeus.mcnaught.org |
| In response to | bigger blob rows? (Eric Davies <Eric@barrodale.com>) |
| List | pgsql-general |
Eric Davies <Eric@barrodale.com> writes:

> Back in the days of 7.4.2, we tried storing large blobs (1GB+) in
> postgres but found them too slow because the blob was being chopped
> into 2K rows stored in some other table.
> However, it has occurred to us that if it was possible to configure
> the server to split blobs into bigger pieces, say 32K, our speed
> problems might diminish correspondingly.
> Is there a compile time constant or a run time configuration entry
> that accomplishes this?

I *think* the limit would be 8k (the size of a PG page) even if you
could change it. Upping that would require recompiling with PAGE_SIZE
set larger, which would have a lot of other consequences.

-Doug
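[Editor's note: the 2K chunking Eric describes comes from the large-object chunk size, LOBLKSIZE, which the server derives from its compile-time page size, BLCKSZ. The sketch below shows that relationship with the default 8K page; the values and the header location (src/include/storage/large_object.h) reflect the PostgreSQL sources, but treat the exact file path as an assumption for older releases.]

```c
/*
 * Minimal sketch (not part of the original mail): how the large-object
 * chunk size relates to the page size in the PostgreSQL source.
 * BLCKSZ is the compile-time page size (8192 bytes by default, changeable
 * only by rebuilding the server); LOBLKSIZE is defined as BLCKSZ / 4 in
 * src/include/storage/large_object.h, which yields the 2K chunks Eric saw.
 */
#include <stdio.h>

#define BLCKSZ     8192            /* default page size, fixed at build time */
#define LOBLKSIZE  (BLCKSZ / 4)    /* large-object chunk size: 2048 bytes */

int main(void)
{
    printf("page size: %d bytes, large-object chunk: %d bytes\n",
           BLCKSZ, LOBLKSIZE);
    return 0;
}
```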