Re: pg_dump with compressible and non-compressible tables
From | Adrian Klaver |
---|---|
Subject | Re: pg_dump with compressible and non-compressible tables |
Date | |
Msg-id | 97666a97-03b3-2086-ee8b-c89557ec9843@aklaver.com |
In reply to | Re: pg_dump with compressible and non-compressible tables (Ron <ronljohnsonjr@gmail.com>) |
List | pgsql-general |
On 05/05/2018 12:41 PM, Ron wrote:
> On 05/05/2018 12:13 PM, Adrian Klaver wrote:
>> On 05/05/2018 07:14 AM, Ron wrote:
>>> Hi,
>>>
>>> v9.6
>>>
>>> We've got big databases where some of the tables are highly
>>> compressible, but some have many bytea fields containing PDF files.
>>
>> Can you see a demonstrable difference?
>
> Very much so. The ASCII hex representations of the PDF files are
> compressible, but take a *long* time to compress. Uncompressed backups
> are 50% faster.

Got it. The developers will need to comment on whether this is doable or
not. The thing is that this would be a new feature. At this point version
11 is closed to new features, so you are looking at version 12, which
means 1.5-2 years out.

If it were me, I would try piping pg_dump's plain-text output to a
compression program other than zlib (the library used for pg_dump's
built-in compression) and see if you can get better performance.

>> These are different critters than bytea.
>
> Ok. I need the data in my backups anyway, so excluding them is 100%
> contrary to what I need.

I understand. What I was trying to say was that the blob you are
referring to, bytea in a field, is not the same thing as what pg_dump is
referring to, a large object stored in the pg_largeobject table:

https://www.postgresql.org/docs/10/static/lo-intro.html

So if you want to pursue this feature, I think you need to come up with
another name for it to avoid the confusion I mentioned above.

> --
> Angular momentum makes the world go 'round.

--
Adrian Klaver
adrian.klaver@aklaver.com
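[Editor's note: the pipe approach suggested above can be sketched as below. The database name `mydb` and the choice of zstd as the external compressor are illustrative assumptions, not part of the original post.]

```shell
# Pipe a plain-format dump to an external compressor instead of using
# pg_dump's built-in zlib compression (hypothetical database "mydb",
# zstd chosen only as an example of a faster compressor):
#
#   pg_dump -Fp mydb | zstd --fast > mydb.sql.zst
#   zstd -dc mydb.sql.zst | psql mydb   # restore from the compressed dump
#
# Any streaming compressor fits the same pattern; here gzip stands in,
# round-tripping sample data to demonstrate the pipeline shape:
printf 'sample dump data\n' | gzip -c | gzip -dc
```

Because the compressor only sees a byte stream on stdin, swapping it out requires no changes on the PostgreSQL side.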