Re: [RFC] Minmax indexes
From: Greg Stark
Subject: Re: [RFC] Minmax indexes
Msg-id: CAM-w4HOq6t0tkbCTZ3cWq7X0Ma4oXuKV6Dnp=X1eCOGX-_fMUA@mail.gmail.com
In reply to: [RFC] Minmax indexes (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses: Re: [RFC] Minmax indexes
List: pgsql-hackers
On Fri, Jun 14, 2013 at 11:28 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
> Re-summarization is relatively expensive, because the complete page range has
> to be scanned.

That doesn't sound too bad to me. It just means there's a downside to
having larger page ranges. I would expect the page ranges to be
something in the ballpark of 32 pages -- scanning 32 pages to
resummarize doesn't sound that painful, but it sounds large enough that
the resulting index would be a reasonable size.

But I don't understand why an insert would invalidate a tuple. An
insert can just update the min and max incrementally. It's a delete
that invalidates the range, but as you note it doesn't really
invalidate it, just marks it as needing a refresh -- and even then only
if the value being deleted is equal to either the min or max. (A sketch
of this logic follows below.)

> Same-size page ranges?
> Current related literature seems to consider that each "index entry" in a
> minmax index must cover the same number of pages. There doesn't seem to be a

I assume the reason for this in the literature is the need to quickly
find the summary for a given page when you're handling an insert or
delete. If you have some kind of metadata structure that lets you find
it (which I gather is what the validity map is?) then you wouldn't need
it. But that seems like a difficult cost to justify compared to just
having a 1:1 mapping from block to bitmap tuple. (See the second sketch
below.)

-- 
greg
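
A minimal sketch, in C, of the incremental maintenance argued for
above. The struct and function names are mine for illustration, not
anything from Alvaro's patch, and it assumes a single int32 indexed
column:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct RangeSummary
    {
        int32_t min;
        int32_t max;
        bool    needs_resummarize;  /* cleared by rescanning the range */
    } RangeSummary;

    /* An insert never invalidates a summary; just widen the bounds. */
    static void
    summary_note_insert(RangeSummary *s, int32_t value)
    {
        if (value < s->min)
            s->min = value;
        if (value > s->max)
            s->max = value;
    }

    /*
     * A delete leaves the stored bounds conservative (possibly looser
     * than necessary), so a refresh is only needed when the deleted
     * value matched one of the stored extremes.
     */
    static void
    summary_note_delete(RangeSummary *s, int32_t value)
    {
        if (value == s->min || value == s->max)
            s->needs_resummarize = true;
    }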
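
And a sketch of why same-size ranges keep the lookup side trivial: the
summary covering any heap block falls out of one division, with no side
structure needed to locate it. PAGES_PER_RANGE = 32 just matches the
ballpark above; variable-size ranges would instead need something like
the validity map to find the covering entry.

    #include <stdint.h>

    #define PAGES_PER_RANGE 32

    /* Index of the summary tuple covering a given heap block. */
    static inline uint32_t
    summary_index_for_block(uint32_t heap_block)
    {
        return heap_block / PAGES_PER_RANGE;
    }

    /* First block a re-summarization of range "idx" has to scan. */
    static inline uint32_t
    range_first_block(uint32_t idx)
    {
        return idx * PAGES_PER_RANGE;
    }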