Re: BufferAlloc: don't take two simultaneous locks

From Tom Lane
Subject Re: BufferAlloc: don't take two simultaneous locks
Date
Msg-id 1475696.1649950037@sss.pgh.pa.us
Whole thread Raw
In response to Re: BufferAlloc: don't take two simultaneous locks  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: BufferAlloc: don't take two simultaneous locks  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> On Thu, Apr 14, 2022 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> FWIW, I have extremely strong doubts about whether this patch
>> is safe at all.  This particular problem seems resolvable though.

> Can you be any more specific?

> This existing comment is surely in the running for terrible comment of the year:

>          * To change the association of a valid buffer, we'll need to have
>          * exclusive lock on both the old and new mapping partitions.

I'm pretty sure that text is mine, and I didn't really think it needed
any additional explanation, because of exactly this:

> It seems to me that whatever hazards exist must come from the fact
> that the operation is no longer fully atomic.

If it's not atomic, then you have to worry about what happens if you
fail partway through, or somebody else changes relevant state while
you aren't holding the lock.  Maybe all those cases can be dealt with,
but it will be significantly more fragile and more complicated (and
therefore slower in isolation) than the current code.  Is the gain in
potential concurrency worth it?  I didn't think so at the time, and
the graphs upthread aren't doing much to convince me otherwise.
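
For concreteness, the "two simultaneous locks" at issue are the old and
new buffer-mapping partition locks that the current BufferAlloc code takes
before moving a valid buffer's hash-table entry.  The sketch below is a
simplified paraphrase of that ordered-acquisition pattern, not the actual
bufmgr.c code, and the wrapper function name is purely illustrative:

    #include "postgres.h"
    #include "storage/buf_internals.h"
    #include "storage/lwlock.h"

    /*
     * Illustrative sketch: take the old and new buffer-mapping partition
     * locks in a consistent order so that two backends moving buffers
     * between the same pair of partitions cannot deadlock.
     */
    static void
    lock_old_and_new_partitions(BufferTag *oldTag, BufferTag *newTag)
    {
        uint32  oldHash = BufTableHashCode(oldTag);
        uint32  newHash = BufTableHashCode(newTag);
        LWLock *oldPartitionLock = BufMappingPartitionLock(oldHash);
        LWLock *newPartitionLock = BufMappingPartitionLock(newHash);

        if (oldPartitionLock < newPartitionLock)
        {
            LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);
            LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);
        }
        else if (oldPartitionLock > newPartitionLock)
        {
            LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);
            LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);
        }
        else
        {
            /* old and new tags hash to the same partition: one lock */
            LWLockAcquire(newPartitionLock, LW_EXCLUSIVE);
        }

        /*
         * With both partitions held, deleting the old hash entry and
         * inserting the new one looks atomic to every other backend.
         * Releasing the old partition before locking the new one opens
         * a window in which other backends can see intermediate state.
         */
    }
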

            regards, tom lane



In the pgsql-hackers list, by date sent:

Previous
From: Robert Haas
Date:
Message: Re: API stability [was: pgsql: Fix possible recovery trouble if TRUNCATE overlaps a checkpoint.]
Next
From: Robert Haas
Date:
Message: Re: TRAP: FailedAssertion("HaveRegisteredOrActiveSnapshot()", File: "toast_internals.c", Line: 670, PID: 19403)