Re: Batch update of indexes on data loading
| From | ITAGAKI Takahiro |
|---|---|
| Subject | Re: Batch update of indexes on data loading |
| Date | |
| Msg-id | 20080222094033.8B6B.52131E4D@oss.ntt.co.jp |
| In reply to | Re: Batch update of indexes on data loading (Alvaro Herrera <alvherre@commandprompt.com>) |
| Responses | Re: Batch update of indexes on data loading |
| List | pgsql-hackers |
Alvaro Herrera <alvherre@commandprompt.com> wrote:

> > The basic concept is spooling newly arriving data and merging the spool
> > and the existing indexes into a new index at the end of data loading.
> > It is 5-10 times faster than per-row index insertion, which is the
> > approach in 8.3. Please see
> > http://thread.gmane.org/gmane.comp.db.postgresql.general/102370/focus=102901

Yeah, a BEFORE INSERT FOR EACH ROW trigger is one of the problems. I think
it is enough to disallow bulk loading if there are any BEFORE INSERT
triggers. That is not a serious limitation, because DBAs often disable
triggers during bulk loading for performance anyway.

>> You could work around this if the indexscan code knew to go search in the
>> list of pending insertions, but that's pretty ugly and possibly slow too.

I heard that this approach is used in the Falcon storage engine in MySQL,
so it does not seem so unrealistic.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
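To make the spool-and-merge idea above concrete, here is a minimal sketch in C. It is not PostgreSQL code: index entries are reduced to plain integer keys, and `existing_index`, `spool`, and `merge_into_new_index` are illustrative names only. The point is that the spooled keys are sorted once and then combined with the already-ordered existing entries in a single sequential merge pass, instead of being inserted into the index row by row.

```c
/*
 * Minimal sketch of the "spool and merge" idea, not PostgreSQL code.
 * Index entries are reduced to plain integer keys; existing_index,
 * spool, and merge_into_new_index are illustrative names only.
 */
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a;
    int y = *(const int *) b;
    return (x > y) - (x < y);
}

/* Emit the merged, ordered key stream a new index would be built from. */
static void merge_into_new_index(const int *existing, size_t nexisting,
                                 const int *spool, size_t nspool)
{
    size_t i = 0, j = 0;

    while (i < nexisting || j < nspool)
    {
        if (j >= nspool || (i < nexisting && existing[i] <= spool[j]))
            printf("%d ", existing[i++]);
        else
            printf("%d ", spool[j++]);
    }
    printf("\n");
}

int main(void)
{
    /* Existing index entries are already in key order. */
    int existing_index[] = {10, 20, 30, 40, 50};

    /* Rows loaded during COPY are only spooled, not inserted per-row. */
    int spool[] = {35, 5, 45, 25};

    /* One sort of the spool, then a single sequential merge pass. */
    qsort(spool, sizeof(spool) / sizeof(spool[0]), sizeof(spool[0]), cmp_int);
    merge_into_new_index(existing_index,
                         sizeof(existing_index) / sizeof(existing_index[0]),
                         spool,
                         sizeof(spool) / sizeof(spool[0]));
    return 0;
}
```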
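The pending-insertions workaround quoted above can be sketched the same way: an index lookup first probes the keys already merged into the index and then falls back to scanning the still-unmerged spool. Again, this is only an assumption-laden illustration with made-up names (`index_keys`, `pending`, `lookup`), not how PostgreSQL or Falcon actually implement it.

```c
/*
 * Sketch of an index lookup that also consults the list of pending
 * insertions. Plain integer keys stand in for real index entries;
 * index_keys, pending, and lookup are illustrative names only.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a;
    int y = *(const int *) b;
    return (x > y) - (x < y);
}

/* Look in the already-built index first, then in the unsorted spool. */
static bool lookup(const int *index_keys, size_t nindexed,
                   const int *pending, size_t npending, int key)
{
    if (bsearch(&key, index_keys, nindexed, sizeof(int), cmp_int) != NULL)
        return true;

    /* Pending insertions have not been merged yet, so scan them linearly. */
    for (size_t i = 0; i < npending; i++)
        if (pending[i] == key)
            return true;

    return false;
}

int main(void)
{
    int index_keys[] = {10, 20, 30, 40, 50};   /* sorted, already indexed */
    int pending[]    = {35, 5, 45};            /* spooled during the load */

    printf("%d %d %d\n",
           lookup(index_keys, 5, pending, 3, 30),   /* found in index  */
           lookup(index_keys, 5, pending, 3, 45),   /* found in spool  */
           lookup(index_keys, 5, pending, 3, 99));  /* not found       */
    return 0;
}
```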