Discussion: Re: compression in replication
On Wed, 16 Jun 2021 at 13:41, Massimo Ortensi <mortensi@unimaticaspa.it> wrote:
Is there any way to compress the traffic for streaming replication?
--
I think this should help reduce traffic, since the compressed page images are only decompressed when the WAL is replayed.
I have not used it, but now that you asked, I will test it out.
Also, pg_basebackup supports a gzip-compressed stream, and so does COPY, but I do not see any explicit mention of compression support in the replication slot details.
But if you run a sidecar like envoyproxy (or any TCP proxy) per server, the communication between the sidecars could be compressed, even though the db-to-sidecar hop would not be.
Gzip Compressor, envoy 1.19.0-dev-069478 documentation (envoyproxy.io) (this is not pg stuff, but still):
[pg -> sidecar -> compressed wal stream] -> [sidecar -> decompress -> pg]
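To sketch that idea concretely (untested, and every name below is a placeholder): if a compressing tunnel or proxy were listening on the standby at localhost:6543 and forwarding to the primary's port 5432, the standby could point its replication connection at the local end:

    -- Sketch only: aim the standby's replication connection at a local
    -- compressing proxy/tunnel instead of directly at the primary.
    -- 'localhost', port 6543, and user 'replicator' are placeholders.
    ALTER SYSTEM SET primary_conninfo = 'host=localhost port=6543 user=replicator';
    -- PostgreSQL 13+ applies a changed primary_conninfo on reload;
    -- version 12 needs a restart.
    SELECT pg_reload_conf();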
I have not tried any of these options, FYI.
Thanks,
Vijay
Mumbai, India
Sorry, the link was broken. This was the param:

"wal_compression (boolean)

When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when full_page_writes is on or during a base backup. A compressed page image will be decompressed during WAL replay. The default value is off. Only superusers can change this setting.

Turning this parameter on can reduce the WAL volume without increasing the risk of unrecoverable data corruption, but at the cost of some extra CPU spent on the compression during WAL logging and on the decompression during WAL replay."
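If you want to try it, enabling it needs only superuser access and a reload; a minimal sketch:

    -- Compress full-page images written to WAL (superuser-only setting).
    ALTER SYSTEM SET wal_compression = on;
    -- No restart needed; wal_compression takes effect on reload.
    SELECT pg_reload_conf();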
Keep in mind that this can be accomplished in other ways, depending on what archive_command you're using. rsync and sftp (I don't recommend scp due to security concerns, but it does compression as well) both have parameters to do in-transit compression; see the sketch after the quoted message below. If I remember correctly, PostgreSQL compression must be enabled on both ends. I had a situation where that wasn't easy to do and opted for external (to PostgreSQL) compression.
On Wednesday, June 16, 2021, 2:56:09 PM UTC, Vijaykumar Jain <vijaykumarjain.github@gmail.com> wrote:
sorry, the link was broken.
this was the param.
"wal_compression (boolean)
When this parameter is on, the PostgreSQL server compresses a full
page image written to WAL when full_page_writes is on or during a base
backup. A compressed page image will be decompressed during WAL
replay. The default value is off. Only superusers can change this
setting.
Turning this parameter on can reduce the WAL volume without increasing
the risk of unrecoverable data corruption, but at the cost of some
extra CPU spent on the compression during WAL logging and on the
decompression during WAL replay."
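To make the rsync variant concrete, a sketch (the standby host and archive path are placeholders, and it assumes archive_mode is already on; note the caveats in the next reply before relying on this):

    -- Sketch: rsync -z compresses the WAL in transit only; the file lands
    -- uncompressed on the receiving side. Host and path are placeholders.
    ALTER SYSTEM SET archive_command =
        'rsync -z %p standby.example.com:/var/lib/pgsql/wal_archive/%f';
    -- archive_command changes take effect on reload.
    SELECT pg_reload_conf();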
Greetings,

We generally prefer to do in-line replies (not top-posting) on these lists. So, more below.

* LEROY TENNISON (leroy_tennison@prodigy.net) wrote:
> Keep in mind that this can be accomplished in other ways depending on what archive_command you're using. rsync and sftp (I don't recommend scp due to security concerns but it does compression as well) both have parameters to do in-transit compression. If I remember correctly, PostgreSQL compression must be enabled on both ends. I had a situation where that wasn't easy to do and opted for external (to PostgreSQL) compression.

In general, rsync, sftp, and scp are very poor choices for an archive command, as they don't do anything to ensure that the WAL is actually written out to disk, and they also don't do a variety of other sensible checks to make sure the WAL matches the PG database, that there's a history file for the timeline the WAL is on, etc.

Thanks,

Stephen
Hi everybody. I've got a large db where pg_largeobject has almost reached (or maybe already reached, before vacuumlo) the 32 TB size limit. If I query:

    select pg_table_size('pg_largeobject');          -- 35194084753408
    select pg_total_relation_size('pg_largeobject'); -- 35692575506432
    select pg_relation_size('pg_largeobject');       -- 35184358776832

I'm still running vacuumlo and then vacuum -F. How can I tell how near to the limit my db really is?
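One way to check (a sketch, assuming the default 8 kB block size; confirm with SHOW block_size): the 32 TB ceiling is really a cap of 2^32 pages in the table's main fork, which is exactly what pg_relation_size() reports, so the page counts can be compared directly:

    -- Pages used by pg_largeobject's main fork vs. the 2^32-page cap.
    -- Assumes the default 8192-byte block size.
    SELECT pg_relation_size('pg_largeobject') / 8192 AS pages_used,
           4294967296 AS pages_max,
           4294967296 - pg_relation_size('pg_largeobject') / 8192 AS pages_left;

With the pg_relation_size figure above, that works out to 4294965671 pages used, leaving only about 1625 pages (roughly 13 MB) of headroom.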