Re: Tuning massive UPDATES and GROUP BY's?
From | fork
---|---
Subject | Re: Tuning massive UPDATES and GROUP BY's?
Date |
Msg-id | loom.20110310T185007-149@post.gmane.org
In reply to | Tuning massive UPDATES and GROUP BY's? (fork <forkandwait@gmail.com>)
List | pgsql-performance
Merlin Moncure <mmoncure <at> gmail.com> writes:

> > I am loathe to create a new table from a select, since the indexes
> > themselves take a really long time to build.
>
> you are aware that updating the field for the entire table, especially
> if there is an index on it (or any field being updated), will cause
> all your indexes to be rebuilt anyways?  when you update a record, it
> gets a new position in the table, and a new index entry with that
> position.
> insert/select to temp, + truncate + insert/select back is
> usually going to be faster and will save you the reindex/cluster.
> otoh, if you have foreign keys it can be a headache.

Hmph.  I guess I will have to find a way to automate it, since there
will be a lot of times I want to do this.

> > As the title alludes, I will also be doing GROUP BY's on the data, and
> > would love to speed these up, mostly just for my own impatience...
>
> need to see the query here to see if you can make them go faster.

I guess I was hoping for a blog entry on general guidelines given a DB
that is really only for batch analysis versus transaction processing.
Like "put all your temp tables on a different disk" or whatever.  I will
post specifics later.
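For the archive, the insert/select-to-temp pattern Merlin describes might look roughly like the sketch below.  All table, column, and expression names here are hypothetical placeholders, and it assumes no foreign keys reference the table (the headache he mentions):

```sql
-- Sketch of the temp-table swap instead of a full-table UPDATE.
-- "big_table" and "some_expr" are made-up names for illustration.
BEGIN;

-- Compute the post-"update" rows into a temp table in one pass.
CREATE TEMP TABLE big_table_new AS
SELECT id, some_expr(val) AS val   -- the "update" happens here
FROM big_table;

-- Remove all old rows (and their dead tuples) at once.
TRUNCATE big_table;

-- Reload the table from the computed rows.
INSERT INTO big_table
SELECT id, val FROM big_table_new;

COMMIT;

-- Rebuild or re-cluster afterwards if desired, e.g.:
-- REINDEX TABLE big_table;
```

The point of the pattern is that a whole-table UPDATE writes a new version of every row anyway, so loading fresh rows into a truncated table does the same work with far less bloat and index churn.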