Re: [HACKERS] compression in LO and other fields
From | Karel Zak - Zakkr
Subject | Re: [HACKERS] compression in LO and other fields
Date |
Msg-id | Pine.LNX.3.96.991112094645.14930A-100000@ara.zf.jcu.cz
In response to | Re: [HACKERS] compression in LO and other fields (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: [HACKERS] compression in LO and other fields
List | pgsql-hackers
On Fri, 12 Nov 1999, Tom Lane wrote:

> Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> >> LO is a dead end. What we really want to do is eliminate tuple-size
> >> restrictions and then have large ordinary fields (probably of type
> >> bytea) in regular tuples. I'd suggest working on compression in that
> >> context, say as a new data type called "bytez" or something like that.

--- cut ---

> The only thing LO would do for you is divide the data into block-sized
> tuples, so there would be a bunch of little WAL entries instead of one
> big one. But that'd probably be easy to duplicate too. If we implement
> big tuples by chaining together disk-block-sized segments, which seems
> like the most likely approach, couldn't WAL log each segment as a
> separate log entry? If so, there's almost no difference between LO and
> inline field for logging purposes.

I'm not sure that LO is a dead end for every user. Big (blob) fields go
through the SQL engine, but why, if I don't need to treat the data as
typical SQL data? (I don't need to index or search inside, for example,
GIF files.) It would be a pity if LO development stopped. I still think
LO compression is not a bad idea :-)

Other possible compression questions:

* Some applications allow a compressed stream over slow networks between
  client and server; what about PostgreSQL?
* MySQL's dump tool can produce a compressed dump file, which is good;
  what about PostgreSQL?

Karel
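[Editor's note: a minimal sketch, not PostgreSQL source code, of the idea
discussed above: a "bytez"-style field that is transparently compressed on
the way in and decompressed on the way out. The function names `bytez_in`
and `bytez_out` are hypothetical, and zlib stands in for whatever codec an
actual implementation would choose.]

```python
import zlib

def bytez_in(raw: bytes) -> bytes:
    """Compress a value on its way into storage (input function)."""
    return zlib.compress(raw, 6)

def bytez_out(stored: bytes) -> bytes:
    """Decompress a value on its way out of storage (output function)."""
    return zlib.decompress(stored)

# A highly repetitive blob (e.g. an image with long runs of identical bytes)
payload = b"GIF89a" + b"\x00" * 10_000

stored = bytez_in(payload)

# Round-trip must be lossless, and repetitive data should shrink a lot.
assert bytez_out(stored) == payload
assert len(stored) < len(payload)
print(f"raw: {len(payload)} bytes, stored: {len(stored)} bytes")
```

The same zlib-style stream compression would also answer the other two
questions in principle: the client/server wire protocol or a dump file is
just another byte stream that can be run through such a codec.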