RE: SELECT performance drop v 6.5 -> 7.0.3
From | Creager, Robert S
---|---
Subject | RE: SELECT performance drop v 6.5 -> 7.0.3
Date |
Msg-id | 10FE17AD5F7ED31188CE002048406DE8514CD4@lsv-msg06.stortek.com
In reply to | SELECT performance drop v 6.5 -> 7.0.3 (Pascal Hingamp <hingamp@ciml.univ-mrs.fr>)
List | pgsql-general
I've a question. I have often seen the 'trick' of dropping an index, importing large amounts of data, then re-creating the index to speed up the import. The obvious problem with this is that from the time the index is dropped until it finishes being re-created, a large db is going to be essentially worthless to queries which use that index. I know nothing about the backend and how it does 'stuff', so I may be asking something absurd here. Why, when using transactions, are indexes updated on every insert? It seems logical (to someone who doesn't know better) that the indexes could be updated on the COMMIT. Please don't hurt me too bad...

Rob

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V  303.661.5379 F  888.912.4458 P
StorageTek
INFORMATION made POWERFUL

> -----Original Message-----
>
> As for the import process taking so long, you might want to try
> turning off fsync during the import. 7.1 improves the fsync on
> performance but it's still in beta. Dropping non-required indexes
> before doing the import then re-creating them after import will also
> help speed it up. Always make sure you vacuum analyze it after.
>
> Matt
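For concreteness, a rough sketch of the bulk-load 'trick' discussed above and in the quoted reply; the table, index, column, and file names are invented for illustration, and turning fsync off is a server configuration change (not part of the SQL itself):

```sql
-- Hypothetical example: 'orders', 'orders_customer_idx', 'customer_id',
-- and '/tmp/orders.dat' are made-up names, not from this thread.
BEGIN;

-- Drop the secondary index so the bulk load skips per-row index updates.
-- Caveat raised above: until the index is re-created, queries that
-- relied on it fall back to sequential scans.
DROP INDEX orders_customer_idx;

-- Load the data in one pass; COPY is much cheaper per row than INSERTs.
COPY orders FROM '/tmp/orders.dat';

-- Rebuild the index once, over the fully loaded table.
CREATE INDEX orders_customer_idx ON orders (customer_id);

COMMIT;

-- Refresh planner statistics after the load (VACUUM cannot run inside
-- a transaction block, so it goes after the COMMIT).
VACUUM ANALYZE orders;
```

Whether the DROP/CREATE pair belongs in the same transaction as the COPY is a judgment call: DROP INDEX takes an exclusive lock on the table, so one big transaction blocks other sessions for the duration of the load, while running the steps separately avoids the lock but widens the index-less window the post worries about.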