Re: Multiple indexes, huge table
From | Tom Lane
---|---
Subject | Re: Multiple indexes, huge table
Date |
Msg-id | 13407.1346968450@sss.pgh.pa.us
In reply to | Multiple indexes, huge table (Aram Fingal <fingal@multifactorial.com>)
Responses | Re: Multiple indexes, huge table; Re: Multiple indexes, huge table
List | pgsql-general
Aram Fingal <fingal@multifactorial.com> writes:

> I have a table which currently has about 500 million rows. For the most part, the situation is going to be that I will import a few hundred million more rows from text files once every few months but otherwise there won't be any insert, update or delete queries. I have created five indexes, some of them multi-column, which make a tremendous difference in performance for the statistical queries which I need to run frequently (seconds versus hours.) When adding data to the table, however, I have found that it is much faster to drop all the indexes, copy the data to the table and then create the indexes again (hours versus days.) So, my question is whether this is really the best way. Should I write a script which drops all the indexes, copies the data and then recreates the indexes, or is there a better way to do this?

Yes, that's actually recommended practice for such cases.

> There are also rare cases where I might want to make a correction. For example, one of the columns is sample name which is a foreign key to a samples table defined with "ON UPDATE CASCADE." I decided to change a sample name in the samples table which should affect about 20 million rows out of the previously mentioned 500 million. That query has now been running for five days and isn't finished yet.

That sounds like you lack an index on the referencing column of the foreign key constraint. Postgres doesn't require you to keep such an index, but it's a really good idea if you ever update the referenced column.

			regards, tom lane
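For concreteness, a minimal sketch of the two recommendations follows. All object names (a hypothetical `measurements` fact table that references `samples` on `sample_name`) are illustrative assumptions, not taken from the thread.

```sql
-- Bulk-load pattern: drop the indexes, COPY the new rows, rebuild the indexes.
-- Table, column, and index names are hypothetical placeholders.
DROP INDEX IF EXISTS measurements_value_idx;
DROP INDEX IF EXISTS measurements_sample_time_idx;

-- Server-side COPY shown here; psql's \copy works the same way from the client.
COPY measurements FROM '/path/to/new_batch.txt' WITH (FORMAT text);

-- Rebuild after the load: each CREATE INDEX scans the table once, which is
-- far cheaper than maintaining every index row-by-row during the COPY.
CREATE INDEX measurements_value_idx ON measurements (value);
CREATE INDEX measurements_sample_time_idx ON measurements (sample_name, measured_at);
```

The missing index Tom points to is on the referencing side of the foreign key. PostgreSQL requires a unique index on the referenced (`samples`) side, but creates nothing on the referencing side; without one, an `ON UPDATE CASCADE` has to scan the 500-million-row table for every changed key.

```sql
-- Hypothetical index on the referencing column, so the cascade can find the
-- ~20 million matching rows by index lookup instead of sequential scans.
CREATE INDEX measurements_sample_name_idx ON measurements (sample_name);
```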