Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index.
From: Peter Geoghegan
Subject: Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index.
Date:
Msg-id: CAH2-Wz=7WU6O1dv941LaKLwoOS4k2G8xVCZG6gEHuYbp65E3mw@mail.gmail.com
In reply to: Re: [HACKERS] [WIP] Effective storage of duplicates in B-tree index. (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
On Thu, Jul 11, 2019 at 7:30 AM Bruce Momjian <bruce@momjian.us> wrote:
> Wow, I never thought of that. The only things I know we lock until
> transaction end are rows we update (against concurrent updates), and
> additions to unique indexes. By definition, indexes with many
> duplicates are not unique, so that doesn't apply.

Right. Another advantage of their approach is that you can make queries
like this work:

UPDATE tab SET unique_col = unique_col + 1

This will not throw a unique violation error on most/all other DB
systems when the updated column (in this case "unique_col") has a
unique constraint/is the primary key. This behavior is actually
required by the SQL standard. An SQL statement is supposed to be
all-or-nothing, which Postgres doesn't quite manage here.

The section "6.6 Interdependencies of Transactional Storage" from the
paper "Architecture of a Database System" provides additional
background information (I should have suggested reading both 6.6 and
6.7 together).

--
Peter Geoghegan
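A minimal SQL sketch of the behavior described above, assuming a freshly
created table and sample values (the table names, column type, and data
are illustrative, not taken from the thread). On current Postgres an
ordinary UNIQUE constraint is checked as each row is updated, so the
statement can fail partway through; declaring the constraint DEFERRABLE
moves the check to the end of the statement, which is the
standard-compliant, all-or-nothing behavior referred to above.

-- Illustrative setup: a plain UNIQUE constraint is checked per row.
CREATE TABLE tab (unique_col integer UNIQUE);
INSERT INTO tab VALUES (1), (2), (3);

UPDATE tab SET unique_col = unique_col + 1;
-- Typically fails with "duplicate key value violates unique constraint",
-- because the new value 2 collides with the not-yet-updated row 2.

-- Declaring the constraint DEFERRABLE (still INITIALLY IMMEDIATE) defers
-- the uniqueness check to the end of the statement.
CREATE TABLE tab2 (unique_col integer UNIQUE DEFERRABLE);
INSERT INTO tab2 VALUES (1), (2), (3);

UPDATE tab2 SET unique_col = unique_col + 1;
-- Succeeds: every row is updated before uniqueness is rechecked, matching
-- the behavior the SQL standard requires.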