Re: Optimize partial TOAST decompression
From: Binguo Bao
Subject: Re: Optimize partial TOAST decompression
Date:
Msg-id: CAL-OGkuBn13N1=jBkiWs4oks_3SxVioiec9Sm9zPDKJFU4szyg@mail.gmail.com
In reply to: Re: Optimize partial TOAST decompression (Andrey Borodin <x4mmm@yandex-team.ru>)
Responses: Re: Optimize partial TOAST decompression
List: pgsql-hackers
Hi!
Andrey Borodin <x4mmm@yandex-team.ru> wrote on Saturday, June 29, 2019 at 9:48 PM:
> Hi!
> Please, do not use top-posting, i.e. the reply style where you quote the whole message below your response. It makes the archives tedious to read.
>> On 24 June 2019, at 7:53, Binguo Bao <djydewang@gmail.com> wrote:
>>
>>> This is not correct: L bytes of compressed data cannot always be decoded into at least L bytes of data. At worst we have one control byte per 8 literal bytes. This means at most we need (L*9 + 8) / 8 bytes with the current pglz format.
>>
>> Good catch! I've corrected the related code in the patch.
>> ...
>> <0001-Optimize-partial-TOAST-decompression-2.patch>
> I've taken a look at the code.
> I think we should extract a function for computing max_compressed_size and put it somewhere alongside the pglz code, so that anyone who later changes something about pglz does not forget about this assumption of the compression algorithm.
> I also suggest just using 64-bit computation to avoid overflows. And I think it is worth checking whether max_compressed_size covers the whole data and using the min of (max_compressed_size, uncompressed_data_size).
> You also declared needsize and max_compressed_size too far from where they are used, but that will be solved by the function extraction anyway.
>
> Thanks!
>
> Best regards, Andrey Borodin.
Thanks for the suggestion.
I've extracted a function to compute max_compressed_size and put it into pg_lzcompress.c.
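For reference, here is a minimal sketch of what such a helper could look like, assuming PostgreSQL's usual int32/int64 typedefs; the function name, signature and exact rounding are illustrative rather than necessarily what the attached patch does. It applies the worst-case bound discussed upthread (one control byte per 8 literal bytes, i.e. (rawsize * 9 + 8) / 8), does the arithmetic in 64 bits to avoid overflow, and never asks for more than the whole compressed datum:

/*
 * Sketch only: given the number of uncompressed bytes we want to decode
 * (rawsize) and the size of the whole compressed datum, return an upper
 * bound on how many compressed bytes must be fetched to decode rawsize
 * bytes.
 */
int32
pglz_maximum_compressed_size(int32 rawsize, int32 total_compressed_size)
{
	int64		compressed_size;

	/*
	 * Worst case for pglz is all-literal data: one control byte per 8
	 * literal bytes, i.e. (rawsize * 9 + 8) / 8 compressed bytes.  Do the
	 * multiplication in 64 bits so it cannot overflow for large values.
	 */
	compressed_size = ((int64) rawsize * 9 + 8) / 8;

	/* Never request more than the whole compressed datum. */
	if (compressed_size > (int64) total_compressed_size)
		compressed_size = (int64) total_compressed_size;

	return (int32) compressed_size;
}

With that bound, asking for the first 1000 uncompressed bytes would fetch at most (1000 * 9 + 8) / 8 = 1126 compressed bytes, or the whole compressed datum if it is smaller than that.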
Best regards, Binguo Bao.