Re: possible patch to increase number of hash overflow pages?
From: Tom Lane
Subject: Re: possible patch to increase number of hash overflow pages?
Date:
Msg-id: 22796.992962301@sss.pgh.pa.us
In reply to: possible patch to increase number of hash overflow pages? (Stephen Ramsey <sramsey@internap.com>)
Responses: Re: possible patch to increase number of hash overflow pages?
List: pgsql-patches
Stephen Ramsey <sramsey@internap.com> writes:
> I was attempting to index an int4 column on a table with 6x10^7 rows using
> the "hash" index algorithm under PostgreSQL 7.1 on Debian Linux, and
> received the following error message:
> nubs=# create index throughput_index_service_fk on throughput_datum using
> hash (service_fk);
> ERROR: HASH: Out of overflow pages. Out of luck.

Just out of curiosity, what's the reason for using a hash index at all?
The btree index type is much better supported and will do everything that
a hash index could do (and more).

> Looking into the source code a bit, it looked (to my untrained eye) as if
> it might be possible to increase the number of overflow pages, with a
> patch to src/include/access/hash.h to use a 32-bit "overflow page address"
> data type rather than a 16-bit "overflow page address" data type.

I haven't looked much at hash either, but am I right to guess that
overflow pages are used when an individual hash bucket fills up? If so,
overrunning a 16-bit field would suggest that you've got more than 64K
index pages in a single hash bucket ... which does not bode well at all
for performance. Seems like the answer is to get the thing to use more
hash buckets, not to make it possible to support linear searches over
chains exceeding 64K pages...

			regards, tom lane
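
As a rough illustration of the btree alternative suggested above, the
equivalent command would look like this (a minimal sketch reusing the table
and column names from the quoted error message; the index name here is
hypothetical, chosen to avoid colliding with the failed hash index):

    -- btree is PostgreSQL's default index type, so "using btree" may be omitted
    create index throughput_service_fk_btree on throughput_datum
        using btree (service_fk);

For scale on the overflow-chain point: 64K overflow pages at the default
8KB page size would be roughly 512MB in a single bucket chain, all of which
a lookup in that bucket would have to walk linearly (this assumes the
default page size; BLCKSZ is configurable at build time).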