Re: [HACKERS] Page Scan Mode in Hash Index

From: Alexander Korotkov
Subject: Re: [HACKERS] Page Scan Mode in Hash Index
Msg-id: CAPpHfdtUrn5wtu7vkt9JNGm7ey28gCeN=YUWM1wchxwNMzF6fQ@mail.gmail.com
In reply to: [HACKERS] Page Scan Mode in Hash Index (Ashutosh Sharma <ashu.coek88@gmail.com>)
Responses: Re: [HACKERS] Page Scan Mode in Hash Index
List: pgsql-hackers
Hi, Ashutosh!
I've assigned myself to review this patch.
First of all, I'd like to note that I like the idea and the general design.
Secondly, the patch set doesn't apply cleanly to master. Please rebase it.
On Tue, Feb 14, 2017 at 8:27 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
patch rewrites the hash index scan module to work in page-at-a-time
mode. It basically introduces two new functions, _hash_readpage() and
_hash_saveitem(). The former is used to load all the qualifying tuples
from a target bucket or overflow page into an items array. The latter
one is used by _hash_readpage() to save all the qualifying tuples found
in a page into an items array. Apart from that, this patch basically
cleans up _hash_first(), _hash_next() and hashgettuple().
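Just to confirm my understanding of the flow: _hash_readpage() collects every qualifying tuple from the current page into the items array while it has the page, and later calls simply return entries from that array without revisiting the page. A minimal, self-contained sketch of that pattern (made-up names such as read_page() and next_item(); this is not the patch code):

/*
 * Toy illustration of page-at-a-time scanning: read_page() saves every
 * qualifying item from one "page" into an array, next_item() then returns
 * the saved items one by one without touching the page again.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ITEMS_PER_PAGE 32

typedef struct ScanPos
{
    int     items[MAX_ITEMS_PER_PAGE];  /* qualifying items saved from the page */
    int     nitems;                     /* number of saved items */
    int     cursor;                     /* next saved item to hand back */
} ScanPos;

/* Counterpart of _hash_saveitem() in spirit: remember one qualifying item. */
static void
save_item(ScanPos *pos, int value)
{
    if (pos->nitems < MAX_ITEMS_PER_PAGE)
        pos->items[pos->nitems++] = value;
}

/* Counterpart of _hash_readpage() in spirit: scan the whole page once. */
static bool
read_page(ScanPos *pos, const int *page, int page_len, int key)
{
    pos->nitems = 0;
    pos->cursor = 0;
    for (int off = 0; off < page_len; off++)
    {
        if (page[off] == key)           /* the "qualifying tuple" check */
            save_item(pos, page[off]);
    }
    return pos->nitems > 0;
}

/* Counterpart of hashgettuple()/_hash_next() in spirit. */
static bool
next_item(ScanPos *pos, int *value)
{
    if (pos->cursor >= pos->nitems)
        return false;                   /* page exhausted; move to the next page */
    *value = pos->items[pos->cursor++];
    return true;
}

int
main(void)
{
    int     page[] = {42, 7, 42, 13, 42, 5};
    ScanPos pos;
    int     v;

    read_page(&pos, page, (int) (sizeof(page) / sizeof(page[0])), 42);
    while (next_item(&pos, &v))
        printf("matched %d\n", v);
    return 0;
}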
I see that the forward and backward scan cases of _hash_readpage() contain a lot of duplicated code.
Could you please refactor this function to reduce that duplication?
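For instance, both directions could probably share a single loop whose start offset and step are computed up front. A rough, self-contained sketch of that shape (illustrative names and types only, not a drop-in for the actual hash index code):

/*
 * Rough sketch of removing the forward/backward duplication: compute the
 * start offset, stop sentinel and step once, then use one loop body for
 * both scan directions.  Names and types are illustrative only.
 */
typedef enum ScanDir { DIR_FORWARD, DIR_BACKWARD } ScanDir;

/* Collect the offsets of matching items, walking the page in either direction. */
static int
collect_matches(const int *page, int nitems, int key, ScanDir dir,
                int *out, int max_out)
{
    int     start = (dir == DIR_FORWARD) ? 0 : nitems - 1;
    int     stop = (dir == DIR_FORWARD) ? nitems : -1;
    int     step = (dir == DIR_FORWARD) ? 1 : -1;
    int     nsaved = 0;

    for (int off = start; off != stop; off += step)
    {
        if (page[off] == key && nsaved < max_out)
            out[nsaved++] = off;        /* identical "save" logic for both directions */
    }
    return nsaved;
}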
Also, I wonder whether there is a particular reason for inserting the data in test.sql with 1002 separate SQL statements of this form:
INSERT INTO con_hash_index_table (keycol) SELECT a FROM GENERATE_SERIES(1, 1000) a;
Each of those statements inserts the keys 1 through 1000, so the 1002 statements insert 1,002,000 rows in total. You can achieve the same result by executing a single SQL statement:
INSERT INTO con_hash_index_table (keycol) SELECT (a - 1) % 1000 + 1 FROM GENERATE_SERIES(1, 1002000) a;
------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company