Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.
From: Tom Lane
Subject: Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.
Date:
Msg-id: 856857.1635627048@sss.pgh.pa.us
In reply to: Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB. (Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>)
Responses: Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.
List: pgsql-bugs
Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> writes:
> On Sat, Oct 30, 2021 at 6:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I think instead, we need to turn the subsequent one-off read() call into a
>> loop that reads no more than INT_MAX bytes at a time.  It'd be possible
>> to restrict that to Windows, but probably no harm in doing it the same
>> way everywhere.

> Seems reasonable to me, can such a change be back-patched?

Don't see why not.

>> A different line of thought is that maybe we shouldn't be letting the
>> file get so big in the first place.  Letting every backend have its
>> own copy of a multi-gigabyte stats file is going to be problematic,
>> and not only on Windows.  It looks like the existing logic just considers
>> the number of hash table entries, not their size ... should we rearrange
>> things to keep a running count of the space used?

> +1. There should be a mechanism to limit the effective memory size.

This, on the other hand, would likely be something for HEAD only.  But now
that we've seen a field complaint, it seems like a good thing to pursue.

			regards, tom lane