Re: 10.1: hash index size exploding on vacuum full analyze
From | Amit Kapila
Subject | Re: 10.1: hash index size exploding on vacuum full analyze
Date |
Msg-id | CAA4eK1Lurd5cP6=g1=prodxRgxyuorhzbr3UHfBHsyiYo53dgw@mail.gmail.com
In reply to | Re: 10.1: hash index size exploding on vacuum full analyze (Teodor Sigaev <teodor@sigaev.ru>)
Responses | Re: 10.1: hash index size exploding on vacuum full analyze
List | pgsql-hackers
On Tue, Dec 26, 2017 at 9:48 PM, Teodor Sigaev <teodor@sigaev.ru> wrote:
>> Initially, I had also thought of doing it in swap_relation_files, but
>> we don't have the stats values there. We might be able to pass them,
>> but I'm not sure there is any need for that. As far as the TOAST
>> table's case is concerned, I don't see a problem, because we copy the
>> data row-by-row only for the heap, where the values of num_tuples and
>> num_pages could differ. See copy_heap_data.
>
> Ok, agree. AP (sorry, I don't see your name), could you check that the
> patch fixes your issue?
>
> Nevertheless, I'm going to push this patch in any case, and I suppose it
> should be backpatched to version 10 too, although the bug is not about
> data loss or any corruption. The patch looks rather straightforward and
> has a low risk of introducing new bugs.
>

Ideally, we could backpatch this patch to prior versions as well, but I
think users will only see this problem from v10 onwards (since hash
indexes are primarily getting used from v10), so it seems okay to
backpatch just down to 10. In the future, if we see any other symptom in
prior branches, we can always backpatch it further.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
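For illustration only, here is a standalone C sketch (not the actual server code; the fill factor and row counts are made-up numbers) of why a stale, inflated tuple estimate left in pg_class before the index rebuild blows up a freshly built hash index: in this simplified model the initial bucket count is derived from the estimated tuple count and rounded up to a power of two, so overestimating the tuples overallocates bucket pages.

    /*
     * Hypothetical illustration of hash index initial sizing.
     * Not PostgreSQL source; ASSUMED_FFACTOR and the counts are invented.
     */
    #include <stdio.h>
    #include <stdint.h>

    /* Assumed fill factor: tuples expected to fit per bucket page. */
    #define ASSUMED_FFACTOR 300

    /* Round up to the next power of two (simplified bucket-count rule). */
    static uint64_t
    next_power_of_two(uint64_t n)
    {
        uint64_t    p = 1;

        while (p < n)
            p <<= 1;
        return p;
    }

    /* Estimate initial bucket pages from an estimated tuple count. */
    static uint64_t
    estimate_initial_buckets(double estimated_tuples)
    {
        double      dnumbuckets = estimated_tuples / ASSUMED_FFACTOR;

        if (dnumbuckets < 1.0)
            dnumbuckets = 1.0;
        return next_power_of_two((uint64_t) dnumbuckets);
    }

    int
    main(void)
    {
        /* Accurate count vs. a stale, inflated estimate. */
        double      actual_tuples = 1000000;    /* rows actually copied */
        double      stale_estimate = 50000000;  /* leftover old estimate */

        printf("buckets with accurate count: %llu\n",
               (unsigned long long) estimate_initial_buckets(actual_tuples));
        printf("buckets with stale estimate: %llu\n",
               (unsigned long long) estimate_initial_buckets(stale_estimate));
        return 0;
    }

Under these assumptions the stale estimate yields a bucket count dozens of times larger than the accurate one, which is why updating the new heap's stats during copy_heap_data, before the indexes are rebuilt, matters.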