Re: Hash Indexes
From | Robert Haas
---|---
Subject | Re: Hash Indexes
Date |
Msg-id | CA+TgmoZGbti7cQM1_AZeyNk6zNLj3AOJz6b5ALwoew=4HeccgA@mail.gmail.com
In reply to | Re: Hash Indexes (Amit Kapila <amit.kapila16@gmail.com>)
List | pgsql-hackers
On Fri, Dec 2, 2016 at 10:54 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Sat, Dec 3, 2016 at 12:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Fri, Dec 2, 2016 at 1:54 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>>> I want to split when the average bucket
>>>> contains 10 pages worth of tuples.
>>>
>>> oh, I think what you mean to say is hack the code to bump fill factor
>>> and then test it. I was confused that how can user can do that from
>>> SQL command.
>>
>> Yes, that's why I said "hacking the fill factor up to 1000" when I
>> originally mentioned it.
>>
>> Actually, for hash indexes, there's no reason why we couldn't allow
>> fillfactor settings greater than 100, and it might be useful.
>
> Yeah, I agree with that, but as of now, it might be tricky to support
> the different range of fill factor for one of the indexes. Another
> idea could be to have an additional storage parameter like
> split_bucket_length or something like that for hash indexes which
> indicate that split will occur after the average bucket contains
> "split_bucket_length * page" worth of tuples. We do have additional
> storage parameters for other types of indexes, so having one for the
> hash index should not be a problem.

Agreed.

> I think this is important because split immediately increases the hash
> index space by approximately 2 times. We might want to change that
> algorithm someday, but the above idea will prevent that in many cases.

Also agreed. But the first thing is that you should probably do some
testing in that area via a quick hack to see if anything breaks in an
obvious way.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
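For concreteness, a sketch of what the two ideas discussed above could look
like at the SQL level. split_bucket_length is only the hypothetical parameter
name floated in this thread, not an existing reloption, and current hash
index code rejects fillfactor values outside 10..100:

    -- Works today: hash indexes already accept fillfactor between 10 and 100.
    CREATE INDEX idx_hash_ff ON t USING hash (col) WITH (fillfactor = 90);

    -- Hypothetical, per the proposal above: defer splitting until the average
    -- bucket holds split_bucket_length pages' worth of tuples (here, 10 pages).
    CREATE INDEX idx_hash_sbl ON t USING hash (col) WITH (split_bucket_length = 10);

Raising the fillfactor ceiling would get a similar effect through the existing
parameter, while a dedicated reloption keeps fillfactor's 10..100 range uniform
across all index types.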