Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index.
| From | Bruce Momjian |
|---|---|
| Subject | Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index. |
| Date | |
| Msg-id | 20191217215825.GA30116@momjian.us |
| In reply to | Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index. (Peter Geoghegan <pg@bowt.ie>) |
| Replies | Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index. |
| List | pgsql-hackers |
On Thu, Dec 12, 2019 at 06:21:20PM -0800, Peter Geoghegan wrote:
> On Tue, Dec 3, 2019 at 12:13 PM Peter Geoghegan <pg@bowt.ie> wrote:
> > The new criteria/heuristic for unique indexes is very simple: If a
> > unique index has an existing item that is a duplicate on the incoming
> > item at the point that we might have to split the page, then apply
> > deduplication. Otherwise (when the incoming item has no duplicates),
> > don't apply deduplication at all -- just accept that we'll have to
> > split the page. We already cache the bounds of our initial binary
> > search in insert state, so we can reuse that information within
> > _bt_findinsertloc() when considering deduplication in unique indexes.
>
> Attached is v26, which adds this new criteria/heuristic for unique
> indexes. We now seem to consistently get good results with unique
> indexes.

In the past we tried to increase the number of cases where HOT updates
can happen but were unable to. Would this help with non-HOT updates? Do
we have any benchmarks where non-HOT updates cause slowdowns that we
can test on this?

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
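For readers following along, here is a minimal C sketch of the decision Peter describes in the quoted text: using the cached binary-search bounds, deduplication is attempted only when at least one existing item on the page equals the incoming key. The type and function names (`InsertBounds`, `should_try_dedup_unique`) and the standalone layout are illustrative assumptions, not the actual patch code, which does this on real nbtree pages inside `_bt_findinsertloc()`.

```c
/*
 * Toy model of the unique-index deduplication heuristic described above.
 * All names and the in-memory layout are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

/* Cached result of the initial binary search on the target page. */
typedef struct InsertBounds
{
    int     low;        /* first offset whose item is >= the incoming key */
    int     stricthigh; /* first offset whose item is >  the incoming key */
    bool    valid;      /* were the bounds cached for this insertion? */
} InsertBounds;

/*
 * Decide whether a would-be page split in a unique index should first
 * attempt deduplication: only when at least one existing item equals the
 * incoming key.  With cached bounds this is just a range check --
 * duplicates, if any, occupy offsets [low, stricthigh).
 */
static bool
should_try_dedup_unique(const InsertBounds *bounds)
{
    if (!bounds->valid)
        return false;                        /* no cached search; skip */
    return bounds->stricthigh > bounds->low; /* at least one duplicate */
}

int
main(void)
{
    /* Incoming key collides with one existing duplicate on the page. */
    InsertBounds dup = { .low = 4, .stricthigh = 5, .valid = true };
    /* Incoming key is unique on the page. */
    InsertBounds nodup = { .low = 4, .stricthigh = 4, .valid = true };

    printf("duplicate present -> dedup? %d\n", should_try_dedup_unique(&dup));
    printf("no duplicate      -> dedup? %d\n", should_try_dedup_unique(&nodup));
    return 0;
}
```

The point of the sketch is that the check costs only a comparison of already-cached bounds, so the common case in a unique index (no duplicate, ordinary page split) pays essentially nothing.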