Re: 10+hrs vs 15min because of just one index
From:        hubert depesz lubaczewski
Subject:     Re: 10+hrs vs 15min because of just one index
Date:
Msg-id:      9e4684ce0602100100x3d75554apb9decc3d08af52bd@mail.gmail.com
In reply to: 10+hrs vs 15min because of just one index (Aaron Turner <synfinatic@gmail.com>)
Replies:     Re: 10+hrs vs 15min because of just one index
List:        pgsql-performance
On 2/10/06, Aaron Turner <synfinatic@gmail.com> wrote:
> So I'm trying to figure out how to optimize my PG install (8.0.3) to
> get better performance without dropping one of my indexes.
> Basically, I have a table of 5M records with 3 columns:
> pri_key (SERIAL)
> data char(48)
> groupid integer
> there is an additional unique index on the data column.
> The problem is that when I update the groupid column for all the
> records, the query takes over 10hrs (after that I just canceled the
> update). Looking at iostat, top, vmstat shows I'm horribly disk IO
> bound (for data not WAL, CPU 85-90% iowait) and not swapping.
> Dropping the unique index on data (which isn't used in the query),

For such a large update I would suggest a different scenario: split the update into packets (10000 or 50000 rows at a time), and for each packet do:
  - update packet
  - vacuum table
Once all packets are done, reindex the table. Should work much nicer.

depesz
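The batched update-then-vacuum loop described above could be sketched roughly like this in Python with psycopg2. The table name (`mytable`), the new `groupid` value, the batch size, and the use of `pri_key` ranges to delimit packets are all assumptions for illustration, not code from the thread:

```python
# Sketch of the suggested approach: update groupid in fixed-size packets,
# vacuuming between packets so dead row versions are reclaimed, then
# reindex once at the end. Names and values below are placeholders.

def batch_ranges(max_id, batch_size):
    """Yield (low, high] id ranges covering 1..max_id in batch_size steps."""
    low = 0
    while low < max_id:
        high = min(low + batch_size, max_id)
        yield low, high
        low = high

def batched_update(conn, new_groupid, batch_size=10000):
    # VACUUM cannot run inside a transaction block, so use autocommit.
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("SELECT max(pri_key) FROM mytable")
        max_id = cur.fetchone()[0] or 0
        for low, high in batch_ranges(max_id, batch_size):
            cur.execute(
                "UPDATE mytable SET groupid = %s "
                "WHERE pri_key > %s AND pri_key <= %s",
                (new_groupid, low, high))
            cur.execute("VACUUM mytable")      # reclaim dead tuples per packet
        cur.execute("REINDEX TABLE mytable")   # rebuild bloated indexes once
```

On 8.0-era PostgreSQL (before HOT updates), every updated row leaves a dead version in both the heap and every index, which is why interleaving VACUUM and finishing with a single REINDEX keeps the bloat bounded.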