Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)
| From | Gregory Stark |
|---|---|
| Subject | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) |
| Date | |
| Msg-id | 87iqotshij.fsf@oxford.xeocode.com |
| In reply to | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) ("Robert Haas" <robertmhaas@gmail.com>) |
| Responses | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows); Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) |
| List | pgsql-hackers |
"Robert Haas" <robertmhaas@gmail.com> writes: > Regardless of whether we do that or not, no one has offered any > justification of the arbitrary decision not to compress columns >1MB, Er, yes, there was discussion before the change, for instance: http://archives.postgresql.org/pgsql-hackers/2007-08/msg00082.php And do you have any response to this point? I think the right value for this setting is going to depend on theenvironment. If the system is starved for cpu cycles thenyou won't want tocompress large data. If it's starved for i/o bandwidth but has spare cpucycles then you will. http://archives.postgresql.org/pgsql-hackers/2009-01/msg00074.php > and at least one person (Peter) has suggested that it is exactly > backwards. I think he's right, and this part should be backed out. Well the original code had a threshold above which we *always* compresed even if it saved only a single byte. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning