Question: merit / feasibility of compressing frontend <--> backend transfers w/ zlib
From    | pgsql-general
Subject | Question: merit / feasibility of compressing frontend <--> backend transfers w/ zlib
Msg-id  | 3D331BEF.9020600@commandprompt.com
Replies | Re: Question: merit / feasibility of compressing frontend <--> backend transfers w/ zlib
List    | pgsql-general
Hello, I'm new to the list, and I just started working as an intern at commandprompt.com. As one of my first projects I've been asked to compress, with zlib (www.gzip.org/zlib), the data flowing between PostgreSQL clients and the backend server, especially the data coming from the backend.

Our first idea was to write a sort of 'compression proxy' with a frontend and backend of its own. The PostgreSQL client would connect to the compression frontend on its local machine, which would compress the traffic and send it to the compression backend on the server; decompressed requests would then be forwarded to the PostgreSQL server. This idea was abandoned because:

1.) existing clients would have to be reconfigured to talk to their local machine, and
2.) it breaks host-based authentication, since all packets arriving at the server would appear to come from the local decompressor.

The current idea is to rewrite parts of PostgreSQL itself, both the frontend libpq and the backend, so that a "compress" option could be passed by the client. After the startup packet and authentication, all subsequent queries and responses would be compressed before sending (and decompressed on receipt).

My questions are: Is there any merit to this idea? That is, would compressing large result sets decrease the transfer time? And how easy or difficult would it be to incorporate such a change into the PostgreSQL frontend and backend source?

Any help appreciated,
Robert Flory
using psql-general@commandprompt.com
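As a rough feel for the potential win, here is a minimal sketch (not PostgreSQL code, and not the proposed libpq change itself) of the kind of per-message compression being discussed, using zlib's one-shot compress()/uncompress() API. The payload, buffer sizes, and repetition factor are illustrative assumptions; real result sets will compress more or less depending on their redundancy.

/* Sketch: round-trip a repetitive "result set" through zlib and
 * report the compression ratio.  Build with: cc demo.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Stand-in for a large, repetitive result set (an assumption;
     * text-heavy query results are often similarly redundant). */
    char payload[4096];
    for (size_t i = 0; i < sizeof(payload); i++)
        payload[i] = "SELECT * FROM pg_class;"[i % 23];

    /* 8192 is comfortably above compressBound(4096), the
     * worst-case compressed size zlib can produce. */
    unsigned char compressed[8192];
    uLongf clen = sizeof(compressed);
    if (compress(compressed, &clen,
                 (const Bytef *) payload, sizeof(payload)) != Z_OK)
        return 1;

    /* Round-trip to verify the data survives intact. */
    char restored[4096];
    uLongf rlen = sizeof(restored);
    if (uncompress((Bytef *) restored, &rlen,
                   compressed, clen) != Z_OK)
        return 1;

    printf("original %zu bytes, compressed %lu bytes (%.1f%%)\n",
           sizeof(payload), (unsigned long) clen,
           100.0 * clen / sizeof(payload));
    return memcmp(payload, restored, sizeof(payload)) == 0 ? 0 : 1;
}

For an actual protocol integration, the streaming deflate()/inflate() interface with Z_SYNC_FLUSH would be a closer fit, since it lets each message be flushed to the wire without ending the compression stream, so the shared dictionary keeps improving the ratio across messages.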