data compression/encryption
From | Brett McCormick |
---|---|
Subject | data compression/encryption |
Date | |
Msg-id | 13639.57260.727612.373168@abraxas.scene.com |
List | pgsql-hackers |
From the TODO list: "Allow compression of large fields or a compressed field type."

I like this idea, and it should be pretty easy too. Are we interested in putting this in the distribution, or as a contrib? I could easily create a compressed field type like the text type. However, how do you actually get the data in there? Assuming you're trying to get around the 8k tuple limit, there's still the 8k query length. Does COPY handle tuples larger than 8k (assuming the resulting tuple size is under 8k)?

Compression of large objects is also a good idea, but I'm not sure how it would be implemented, or how it would affect reads and writes (you can't really seek with zlib, which is what I would use).

I've also been thinking about data encryption. Assuming it would be too hard and take too long to revamp or add a new storage manager and actually encrypt the pages themselves, we could encrypt what gets stored in the field, and either have a type for it or a function.

What about the idea of a 'data translator': a function that would act as a filter between the in/out functions and the actual storage of data on disk or in memory? It could then be applied to fields, which would be automagically compressed.