Re: Potential stack overflow in incremental base backup

From: Robert Haas
Subject: Re: Potential stack overflow in incremental base backup
Date:
Msg-id: CA+TgmoYoVLi7OrdF-PAxUMWbqT3WbDHY5bd12_FKnL0_oEbCbA@mail.gmail.com
In reply to: Potential stack overflow in incremental base backup  (Thomas Munro <thomas.munro@gmail.com>)
Responses: Re: Potential stack overflow in incremental base backup  (Thomas Munro <thomas.munro@gmail.com>)
Re: Potential stack overflow in incremental base backup  (Thomas Munro <thomas.munro@gmail.com>)
List: pgsql-hackers
On Wed, Mar 6, 2024 at 12:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:
> I'll have to move that sucker onto the heap (we banned C99 variable
> length arrays and we don't use nonstandard alloca()), but I think the
> coding in master is already a bit dodgy: that's 131072 *
> sizeof(BlockNumber) = 512kB with default configure options, but
> --with-segsize X could create a stack variable up to 16GB,
> corresponding to segment size 32TB (meaning no segmentation at all).
> That wouldn't work.  Shouldn't we move it to the heap?  See attached
> draft patch.
>
> Even on the heap, 16GB is too much to assume we can allocate during a
> base backup.  I don't claim that's a real-world problem for
> incremental backup right now in master, because I don't have any
> evidence that anyone ever really uses --with-segsize (do they?), but
> if we make it an initdb option it will be more popular and this will
> become a problem.  Hmm.

I don't see a problem with moving it from the stack to the heap. I
don't believe I had any particular reason for wanting it on the stack
specifically.
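
As a rough sketch of the kind of change I'd expect (illustrative only, not
the actual patch, and assuming the array in question is the per-segment
relative block number array used when sending a file):

    /*
     * Illustrative sketch: allocate the relative block number array on
     * the heap instead of declaring it as a fixed-size local array.
     * RELSEG_SIZE and BlockNumber are the usual backend definitions.
     */
    BlockNumber *relative_block_numbers =
        palloc(sizeof(BlockNumber) * RELSEG_SIZE);

    /* ... fill and use the array exactly as the stack version did ... */

    pfree(relative_block_numbers);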

I'm not sure what to do about really large segment sizes. As long as
the allocation here is < ~100MB, it's probably fine, both because (1)
100MB isn't an enormously large allocation these days and (2) if you
have a good reason to increase the segment size by a factor of 256 or
more, you're probably running on a big machine, and then you should
definitely have 100MB to burn.
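
For reference, with the default 8kB block size and a 4-byte BlockNumber,
the relative block number array scales roughly like this:

    segment size      blocks per segment    array size
    1GB (default)     131,072               512kB
    256GB             33,554,432            128MB
    32TB              4,294,967,296         16GB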

However, a 32TB segment size is another thing altogether. We could
avoid transposing relative block numbers to absolute block numbers
whenever start_blkno is 0, but that doesn't do a whole lot when the
segment size is 8TB or 16TB rather than 32TB; plus, in such cases, the
relative block number array is also going to be gigantic. Maybe what
we need to do is figure out how to dynamically size the arrays in such
cases, so that you only make them as big as required for the file you
actually have, rather than as big as the file could theoretically be.
But I think that's really only necessary if we're actually going to
get rid of the idea of segmented relations altogether, which I don't
think is happening at least for v17, and maybe not ever.
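
Purely as a sketch of the dynamic-sizing idea (the names here are made up,
and it assumes the file's size is already known from the stat we do
anyway), something like:

    /*
     * Hypothetical sketch, not a concrete proposal: size the array from
     * the file we actually have rather than from the largest file the
     * segment size would theoretically allow.  file_size is assumed to
     * come from an earlier stat() of the file being backed up.
     */
    BlockNumber nblocks = (BlockNumber) (file_size / BLCKSZ);
    BlockNumber *relative_block_numbers =
        nblocks > 0 ? palloc(sizeof(BlockNumber) * nblocks) : NULL;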

--
Robert Haas
EDB: http://www.enterprisedb.com


