Re: BUG #17725: Segfault when seg_in() called with a large argument
From | Tom Lane
Subject | Re: BUG #17725: Segfault when seg_in() called with a large argument
Date |
Msg-id | 1180064.1671555286@sss.pgh.pa.us
In reply to | Re: BUG #17725: Segfault when seg_in() called with a large argument (Robins Tharakan <tharakan@gmail.com>)
Responses | Re: BUG #17725: Segfault when seg_in() called with a large argument
List | pgsql-bugs
Robins Tharakan <tharakan@gmail.com> writes:
> On Tue, 20 Dec 2022 at 20:44, John Naylor <john.naylor@enterprisedb.com> wrote:
>> Neither query shows the reported problem in my environment on master
>> (as of today) or v14, so not sure

> After trying a few combinations, I see that passing
> CFLAGS="-Wuninitialized" (default for my test setup) causes this failure.
> Removing the flag gives the error you mention, and possibly why this
> may not be easy to reproduce on a production system (unsure).

I don't see a crash either, but I can't help observing that this input
leads to a "seg" struct with "-46" significant digits:

(gdb) p *seg
$3 = {lower = 31, upper = 31, l_sigd = -46 '\322', u_sigd = -46 '\322',
  l_ext = 0 '\000', u_ext = 0 '\000'}

So we're invoking sprintf with a fairly insane precision spec:

939			sprintf(result, "%.*e", n - 1, val);

(gdb) p n
$4 = -46
(gdb) p val
$5 = 31

POSIX says "a negative precision is taken as if the precision were
omitted", and our code seems to do that, but I wonder if this is
managing to overrun the output buffer on your platform.

IMO:

1. The seg grammar needs to constrain the result of significant_digits()
to something that will fit in the allocated "char" field width.  It looks
like some code paths there have clamps, but not all.

2. Because we might already have stored "seg" values with bogus sigd
values, restore() had better clamp the "n" value it's given to something
sane.  I see it clamps large positive values, but it's not worrying
about zero-or-negative.

			regards, tom lane