Re: libpq compression

From Robert Haas
Subject Re: libpq compression
Date
Msg-id CA+TgmoZbVH4R5JA9_pyTSdnFTq-w2B1JvomeD+r0oo0xp1DPtQ@mail.gmail.com
In reply to Re: libpq compression (Daniil Zakhlystov <usernamedt@yandex-team.ru>)
Responses Re: libpq compression (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List pgsql-hackers
On Tue, Dec 22, 2020 at 6:24 AM Daniil Zakhlystov
<usernamedt@yandex-team.ru> wrote:
> When using bidirectional compression, Postgres resource usage correlates with the selected compression level. For
> example, here is the PostgreSQL application memory usage:
>
> No compression - 1.2 GiB
>
> ZSTD
> zstd:1 - 1.4 GiB
> zstd:7 - 4.0 GiB
> zstd:13 - 17.7 GiB
> zstd:19 - 56.3 GiB
> zstd:20 - 109.8 GiB - did not succeed
> zstd:21, zstd:22 - > 140 GiB - Postgres process crashes (out of memory)

Good grief. So, suppose we add compression and support zstd. Then, can
an unprivileged user capable of connecting to the database negotiate
for zstd level 1 and then choose to actually send data compressed at
zstd level 22, crashing the server if it doesn't have a crapton of
memory? Honestly, I wouldn't blame somebody for filing a CVE if we
allowed that sort of thing to happen. I'm not sure what the solution
is, but we can't leave a way for a malicious client to consume 140GB
of memory on the server *per connection*. I assumed decompression
memory was going to be measured in kB or MB, not GB. Honestly, even
at, say, level 7, if you've got max_connections=100 and a user who
wants to make trouble, you have a really big problem.
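
For what it's worth, zstd does give the receiving side a way to put a
hard cap on this that doesn't depend on what level the peer claims to
be using, because the decoder's memory is driven by the frame's window
size rather than by the negotiated level. Here's a minimal sketch,
assuming a zstd version where the advanced parameter API is available;
the 8 MB cap and the helper name are purely illustrative, not anything
taken from the proposed patch:

/*
 * Hypothetical helper, not from the patch: build a decompression
 * context that refuses frames needing more than ~8 MB of window,
 * no matter what level the sender actually compressed with.
 */
#include <stdio.h>
#include <zstd.h>

static ZSTD_DCtx *
create_capped_dctx(void)
{
	ZSTD_DCtx  *dctx = ZSTD_createDCtx();
	size_t		rc;

	if (dctx == NULL)
		return NULL;

	/*
	 * Reject any frame whose window exceeds 2^23 bytes (8 MB).
	 * ZSTD_decompressStream() then fails cleanly on oversized
	 * frames instead of allocating whatever the sender's level
	 * would otherwise demand.  The value 23 is illustrative only.
	 */
	rc = ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 23);
	if (ZSTD_isError(rc))
	{
		fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(rc));
		ZSTD_freeDCtx(dctx);
		return NULL;
	}

	return dctx;
}

With something along those lines on the receiving end, a client that
lies about its compression level can at worst get its own frames
rejected, rather than forcing the server to allocate gigabytes.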

Perhaps I'm being too pessimistic here, but man, that's a lot of memory.

-- 
Robert Haas
EDB: http://www.enterprisedb.com


