Re: Preliminary notes about hash index concurrency (long)
| From | Tom Lane |
|---|---|
| Subject | Re: Preliminary notes about hash index concurrency (long) |
| Date | |
| Msg-id | 10525.1062450097@sss.pgh.pa.us |
| In reply to | Re: Preliminary notes about hash index concurrency (long) (Greg Stark <gsstark@mit.edu>) |
| List | pgsql-hackers |
Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> If multiple inserters failed to split, the index might still be overfull,
>> but eventually, the index will not be overfull and split attempts will stop.

> If one backend is executing a query but the client has paused reading records,
> is it possible the shared lock on the index bucket would be held for a long
> time?

Yes.

> If so wouldn't it be possible for an arbitrarily large number of records to be
> inserted while the lock is held, eventually causing the bucket to become
> extremely large?

Yes.

> Is there a maximum size at which the bucket split MUST succeed or is
> it purely a performance issue that the buckets be reasonably balanced?

AFAICS it's purely a performance issue.

Note also that a hash index will by definition have sucky performance on
large numbers of equal keys, so anyone who is using a hash index on such a
column deserves what they get.  Now you could possibly have this worst-case
scenario even on a column with well-scattered keys, but it seems improbable.

			regards, tom lane
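To make the split-deferral idea discussed above concrete, here is a toy, self-contained C sketch. It is not the actual PostgreSQL hash index code, and every name in it (ToyBucket, toy_insert, PAGE_CAPACITY, FILL_TARGET) is invented for illustration: an insert always succeeds by chaining overflow pages onto the bucket, while the split of an overfull bucket is attempted only opportunistically and is skipped whenever the bucket lock cannot be taken exclusively, which is why an overfull bucket is a performance problem rather than a correctness problem.

```c
/*
 * Toy model (NOT PostgreSQL source) of the behaviour described above:
 * inserts into a hash bucket always succeed by chaining overflow pages,
 * while the split of an overfull bucket is skipped whenever the bucket
 * lock cannot be acquired exclusively.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_CAPACITY 4     /* tuples per toy page */
#define FILL_TARGET   8     /* "overfull" threshold for the whole bucket */

typedef struct ToyPage
{
    int             ntuples;
    int             tuples[PAGE_CAPACITY];
    struct ToyPage *next;           /* overflow chain */
} ToyPage;

typedef struct
{
    pthread_rwlock_t lock;          /* stands in for the bucket lock */
    ToyPage          first;         /* primary page */
    int              total_tuples;
} ToyBucket;

static void
toy_insert(ToyBucket *b, int key)
{
    ToyPage *p = &b->first;

    /* Find a page with room, adding an overflow page if necessary:
     * the insert itself can always be made to succeed. */
    while (p->ntuples == PAGE_CAPACITY)
    {
        if (p->next == NULL)
            p->next = calloc(1, sizeof(ToyPage));
        p = p->next;
    }
    p->tuples[p->ntuples++] = key;
    b->total_tuples++;

    /* Splitting is opportunistic: if any reader holds the lock in shared
     * mode, trywrlock fails and the bucket simply stays overfull until
     * some later insert manages to split it. */
    if (b->total_tuples > FILL_TARGET)
    {
        if (pthread_rwlock_trywrlock(&b->lock) == 0)
        {
            printf("splitting bucket at %d tuples\n", b->total_tuples);
            /* real code would move roughly half the tuples elsewhere */
            pthread_rwlock_unlock(&b->lock);
        }
        else
            printf("split skipped (bucket is read-locked)\n");
    }
}

int
main(void)
{
    ToyBucket b = { .total_tuples = 0 };

    pthread_rwlock_init(&b.lock, NULL);

    /* Simulate a reader that has paused mid-scan by holding the shared
     * lock across the inserts (in a real system this would be another
     * backend whose client stopped fetching rows). */
    pthread_rwlock_rdlock(&b.lock);
    for (int i = 0; i < 12; i++)
        toy_insert(&b, i);          /* splits are skipped */
    pthread_rwlock_unlock(&b.lock);

    toy_insert(&b, 99);             /* now the split attempt succeeds */
    return 0;
}
```

Compiled with `cc -pthread`, this prints a few "split skipped" lines while the shared lock is held and a single split message once it is released, mirroring the "eventually the index will not be overfull" behaviour quoted above.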