Re: Compression and on-disk sorting
From:        Zeugswetter Andreas DCP SD
Subject:     Re: Compression and on-disk sorting
Date:
Msg-id:      E1539E0ED7043848906A8FF995BDA5790105450D@m0143.s-mxs.net
In reply to: Compression and on-disk sorting ("Jim C. Nasby" <jnasby@pervasive.com>)
List:        pgsql-hackers
> Unfortunately, the interface provided by pg_lzcompress.c is probably
> insufficient for this purpose. You want to be able to compress tuples
> as they get inserted and start a new block once the output reaches a

I don't think anything that compresses single tuples without context is
going to be a win under realistic circumstances. I would at least compress
whole pages: allow a maximum ratio of 1:n, keep the pg buffercache
uncompressed, and compress only on write (the filesystem cache then holds
the compressed pages).

The tricky part is predicting whether a tuple still fits in a page that is
n*8k uncompressed but must compress down to 8k. Since lzo is fast, though,
you might even trial-compress in the corner cases. (That logic probably
also needs to be part of the available-page-freespace calculation.)

Choosing a good n is also tricky; 2 (or 3?) is probably good. You probably
also want to keep the header part of the page uncompressed at all times.

Andreas
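[Editor's note: a minimal sketch of the trial-compression fit test described
above, using the real LZO lzo1x API. BLCKSZ, RATIO_N, PAGE_HDR, and
tuple_would_fit() are hypothetical stand-ins for the corresponding backend
constants and logic, not anything that exists in PostgreSQL; lzo_init() is
assumed to have been called once at startup.]

    /* Would the logical page, with new_tuple appended, still compress
     * into one physical 8k block?  A sketch, not backend code. */
    #include <string.h>
    #include <lzo/lzo1x.h>

    #define BLCKSZ   8192   /* physical (compressed) block size, assumed */
    #define RATIO_N  2      /* uncompressed page is RATIO_N * BLCKSZ     */
    #define PAGE_HDR 24     /* page header stays uncompressed, per above */

    static int
    tuple_would_fit(const unsigned char *page, size_t used,
                    const unsigned char *new_tuple, size_t tup_len)
    {
        static unsigned char scratch[RATIO_N * BLCKSZ];
        /* LZO worst case: output may slightly exceed the input size. */
        static unsigned char out[RATIO_N * BLCKSZ +
                                 (RATIO_N * BLCKSZ) / 16 + 64 + 3];
        static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
        lzo_uint out_len;

        if (used + tup_len > RATIO_N * BLCKSZ)
            return 0;   /* no room even in the uncompressed image */

        memcpy(scratch, page, used);
        memcpy(scratch + used, new_tuple, tup_len);

        /* Trial-compress the body only; the header is kept as-is. */
        if (lzo1x_1_compress(scratch + PAGE_HDR,
                             used + tup_len - PAGE_HDR,
                             out, &out_len, wrkmem) != LZO_E_OK)
            return 0;

        return PAGE_HDR + out_len <= BLCKSZ;
    }

Running such a check before each insertion is the "test it in corner cases"
idea above; in a real implementation its result would presumably have to
feed into the PageGetFreeSpace-style freespace accounting as well.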