Re: Hash Indexes
From | Jeff Janes |
---|---|
Subject | Re: Hash Indexes |
Date | |
Msg-id | CAMkU=1wZs-9VsLhGZ6MKn3CM1eZ7Wm8n3NbST6R27+8dBZ7LLg@mail.gmail.com |
In reply to | Re: Hash Indexes (Robert Haas <robertmhaas@gmail.com>) |
Responses | Re: Hash Indexes |
List | pgsql-hackers |
On Thu, Sep 15, 2016 at 7:13 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Sep 15, 2016 at 1:41 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I think it is possible without breaking pg_upgrade, if we match all
>> items of a page at once (and save them as a local copy), rather than
>> matching item-by-item as we do now. We are already doing something
>> similar for btree; see the explanation of BTScanPosItem and
>> BTScanPosData in nbtree.h.
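(For readers following along: the btree mechanism referenced above copies everything that matched on a page into backend-local memory in one pass. A hypothetical hash-side analogue, sketched below with invented names modeled on BTScanPosItem/BTScanPosData, shows the shape of that approach; it is not actual source.)

```c
/*
 * Sketch only: a hash analogue of btree's BTScanPosItem/BTScanPosData
 * (see nbtree.h).  The idea is that when a bucket page is examined,
 * every matching item is copied into this backend-local structure at
 * once, so the scan can return tuples from the local copy instead of
 * re-visiting the page item-by-item.  HashScanPosItem/HashScanPosData
 * are invented names for illustration.
 */
#include "postgres.h"
#include "access/itup.h"        /* MaxIndexTuplesPerPage */
#include "storage/itemptr.h"    /* ItemPointerData */
#include "storage/off.h"        /* OffsetNumber */

typedef struct HashScanPosItem
{
    ItemPointerData heapTid;     /* TID of referenced heap tuple */
    OffsetNumber    indexOffset; /* index item's location on the page */
} HashScanPosItem;

typedef struct HashScanPosData
{
    int             firstItem;   /* first valid slot in items[] */
    int             lastItem;    /* last valid slot in items[] */
    int             itemIndex;   /* current slot during the scan */
    HashScanPosItem items[MaxIndexTuplesPerPage];
} HashScanPosData;
```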
> If ever we want to sort hash buckets by TID, it would be best to do
> that in v10 since we're presumably going to be recommending a REINDEX
> anyway.
We are? I thought we were trying to preserve on-disk compatibility so that we didn't have to rebuild the indexes.
Is the concern that the lack of WAL logging has generated some subtle, unrecognized on-disk corruption?
If I were using hash indexes on a production system and I experienced a crash, I would surely reindex immediately after the crash, not wait until the next pg_upgrade.
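(For concreteness, "sorted by TID" would mean keeping a bucket's items ordered by heap block number and then offset. A minimal sketch follows, using the existing ItemPointerCompare() from storage/itemptr.h; the qsort wrapper itself is invented for illustration.)

```c
/*
 * Sketch only: sorting an array of heap TIDs the way a TID-ordered
 * hash bucket would require, i.e. by block number, then by offset.
 * ItemPointerCompare() is the existing comparison routine in the
 * core code; the surrounding helper is illustrative, not actual
 * source.
 */
#include "postgres.h"
#include "storage/itemptr.h"

static int
tid_qsort_cmp(const void *a, const void *b)
{
    return ItemPointerCompare((ItemPointer) a, (ItemPointer) b);
}

static void
sort_bucket_tids(ItemPointerData *tids, int ntids)
{
    qsort(tids, ntids, sizeof(ItemPointerData), tid_qsort_cmp);
}
```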
> But is that a good thing to do? That's a little harder to say.
How could we go about deciding that? Do you think anything short of coding it up and seeing how it works would suffice? I agree that if we want to do it, v10 is the time. But we still have about six months for that.
Cheers,
Jeff