Re: Batch update of indexes on data loading
From | ITAGAKI Takahiro
---|---
Subject | Re: Batch update of indexes on data loading
Date |
Msg-id | 20080228145814.5F49.52131E4D@oss.ntt.co.jp
In reply to | Re: Batch update of indexes on data loading (Simon Riggs <simon@2ndquadrant.com>)
Responses | Re: Batch update of indexes on data loading
List | pgsql-hackers
Simon Riggs <simon@2ndquadrant.com> wrote:

> The LOCK is only required because we defer the inserts into unique
> indexes, yes?

No, not in the present pg_bulkload. It creates a new relfilenode, as
REINDEX does, so an access exclusive lock is needed. If any unique
constraint is violated, the whole load is rolled back at the end.

BTW, why does REINDEX require an access exclusive lock? Read-only
queries are forbidden during the operation now, but I think they would
be safe, because REINDEX only reads existing tuples. Could we run
REINDEX holding only a shared lock on the index?

> I very much like the idea of index merging, or put another way: batch
> index inserts. How big do the batch of index inserts have to be for us
> to gain benefit from this technique?

Hmm, we need to know *why* COPY with indexes is slow. If the major cost
is searching for the position to insert, batch inserts will work well.
However, if the cost is index page splits and the random I/O that
follows, batch insertion cannot solve the problem; a "rebuild" is still
required.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
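[Editor's illustration, not PostgreSQL code.] To make the "per-row insert vs. batch merge" distinction above concrete, here is a minimal Python sketch that uses a sorted list as a stand-in for an index's key order. The function names are hypothetical, and the sketch deliberately ignores page splits and I/O: it only shows that per-row insertion pays a position search per key, while merging a pre-sorted batch is a single sequential pass, which is essentially what a rebuild or index merge does after a bulk load.

```python
# Sketch: per-row index insertion vs. sorted batch merge.
# A plain sorted Python list stands in for an index's key order.
import bisect
import random

def insert_one_by_one(index_keys, new_keys):
    """Per-row insertion: each key pays its own position search
    (and, in a real btree, possibly a page split and random I/O)."""
    for key in new_keys:
        pos = bisect.bisect_left(index_keys, key)
        index_keys.insert(pos, key)
    return index_keys

def merge_batch(index_keys, new_keys):
    """Batch insertion by merging: sort the pending entries once, then
    merge them with the existing keys in one sequential pass -- the same
    idea as rebuilding or merging the index after the load."""
    new_sorted = sorted(new_keys)
    merged = []
    i = j = 0
    while i < len(index_keys) and j < len(new_sorted):
        if index_keys[i] <= new_sorted[j]:
            merged.append(index_keys[i]); i += 1
        else:
            merged.append(new_sorted[j]); j += 1
    merged.extend(index_keys[i:])
    merged.extend(new_sorted[j:])
    return merged

if __name__ == "__main__":
    existing = sorted(random.sample(range(1_000_000), 50_000))
    incoming = random.sample(range(1_000_000), 10_000)
    a = insert_one_by_one(list(existing), incoming)
    b = merge_batch(list(existing), incoming)
    assert a == b  # both yield the same key order; only the cost differs
```

The sketch also hints at why batching alone may not be enough: if most of the pending keys scatter across the existing key range, even a batched approach touches nearly every "page", and a sequential rebuild becomes the cheaper option.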