Re: pg_lzcompress strategy parameters
From: Tom Lane
Subject: Re: pg_lzcompress strategy parameters
Date:
Msg-id: 26793.1186353032@sss.pgh.pa.us
In reply to: Re: pg_lzcompress strategy parameters (Gregory Stark <stark@enterprisedb.com>)
Responses: Re: pg_lzcompress strategy parameters
List: pgsql-hackers
Gregory Stark <stark@enterprisedb.com> writes:
> (Incidentally, this means what I said earlier about uselessly trying to
> compress objects below 256 is even grosser than I realized. If you have a
> single large object which even after compressing will be over the toast
> target, it will force *every* varlena to be considered for compression even
> though they mostly can't be compressed. Considering a varlena smaller than
> 256 for compression only costs a useless palloc, so it's not the end of the
> world, but still. It does seem kind of strange that a tuple which otherwise
> wouldn't be toasted at all suddenly gets all its fields compressed if you
> add one more field which ends up being stored externally.)

Yeah. It seems like we should modify the first and third loops so that if
(after compression, if any) the largest attribute is *by itself* larger than
the target threshold, then we push it out to the toast table immediately,
rather than continuing to compress other fields that might well not need to
be touched.

			regards, tom lane
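To make the proposed control flow concrete, here is a minimal, self-contained
sketch; it is not the tuptoaster.c code, and ToastAttr, model_compress,
model_save_external, toast_tuple_model, TOAST_TARGET and COMPRESS_MIN_SIZE are
all made-up stand-ins for the real attribute bookkeeping, for
toast_compress_datum/toast_save_datum, and for TOAST_TUPLE_TARGET. The only
point is the marked branch: a still-oversized attribute goes external right
away instead of forcing compression of every other field first.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-ins for TOAST_TUPLE_TARGET and the 256-byte compression minimum. */
#define TOAST_TARGET        2000
#define COMPRESS_MIN_SIZE    256

typedef struct
{
	size_t		size;			/* current inline size of this attribute */
	bool		compressed;		/* already run through the compressor? */
	bool		external;		/* already pushed out to the toast table? */
} ToastAttr;

/* Hypothetical stand-ins for toast_compress_datum / toast_save_datum. */
static void
model_compress(ToastAttr *a)
{
	a->compressed = true;
	a->size /= 2;				/* pretend compression halves the datum */
}

static void
model_save_external(ToastAttr *a)
{
	a->external = true;
	a->size = 18;				/* roughly the size of a toast pointer */
}

static size_t
tuple_size(const ToastAttr *attrs, int natts)
{
	size_t		total = 0;

	for (int i = 0; i < natts; i++)
		total += attrs[i].size;
	return total;
}

static void
toast_tuple_model(ToastAttr *attrs, int natts)
{
	while (tuple_size(attrs, natts) > TOAST_TARGET)
	{
		int			biggest = -1;

		/* Find the largest attribute still stored inline. */
		for (int i = 0; i < natts; i++)
			if (!attrs[i].external &&
				(biggest < 0 || attrs[i].size > attrs[biggest].size))
				biggest = i;
		if (biggest < 0)
			break;				/* nothing left to shrink */

		if (!attrs[biggest].compressed &&
			attrs[biggest].size >= COMPRESS_MIN_SIZE)
		{
			model_compress(&attrs[biggest]);

			/*
			 * Proposed tweak: if this attribute alone still exceeds the
			 * target after compression, store it externally right away
			 * instead of going on to compress the remaining, mostly small,
			 * fields.
			 */
			if (attrs[biggest].size > TOAST_TARGET)
				model_save_external(&attrs[biggest]);
		}
		else
		{
			/* Already compressed (or too small to bother): go external. */
			model_save_external(&attrs[biggest]);
		}
	}
}

int
main(void)
{
	/* One huge field plus small ones that should not need to be touched. */
	ToastAttr	attrs[] = {
		{100000, false, false}, {40, false, false}, {300, false, false}
	};

	toast_tuple_model(attrs, 3);
	for (int i = 0; i < 3; i++)
		printf("attr %d: size=%zu compressed=%d external=%d\n",
			   i, attrs[i].size, attrs[i].compressed, attrs[i].external);
	return 0;
}

With the tweak, the oversized first attribute is compressed once and then
stored externally, and the two small attributes are left alone, which is the
behaviour being argued for above.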