Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)
From | Tom Lane |
---|---|
Subject | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) |
Date | |
Msg-id | 24455.1231182332@sss.pgh.pa.us |
In reply to | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) ("Robert Haas" <robertmhaas@gmail.com>) |
Responses | Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows); Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) |
List | pgsql-hackers |
"Robert Haas" <robertmhaas@gmail.com> writes: > The whole thing got started because Alex Hunsaker pointed out that his > database got a lot bigger because we disabled compression on columns > > 1MB. It seems like the obvious thing to do is turn it back on again. I suggest that before we make any knee-jerk responses, we need to go back and reread the prior discussion. The current 8.4 code was proposed here: http://archives.postgresql.org/pgsql-patches/2008-02/msg00053.php and that message links to several older threads that were complaining about the 8.3 behavior. In particular the notion of an upper limit on what we should attempt to compress was discussed in this thread: http://archives.postgresql.org/pgsql-general/2007-08/msg01129.php After poking around in those threads a bit, I think that the current threshold of 1MB was something I just made up on the fly (I did note that it needed tuning...). Perhaps something like 10MB would be a better default. Another possibility is to have different minimum compression rates for "small" and "large" datums. regards, tom lane