Re: Updating a very large table
From:        Ron Mayer
Subject:     Re: Updating a very large table
Date:
Msg-id:      49F29F05.30805@cheapcomplexdevices.com
In reply to: Re: Updating a very large table ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses:   Re: Updating a very large table
List:        pgsql-admin
Kevin Grittner wrote:
> Chris Browne <cbbrowne@acm.org> wrote:
>
>> I'd suggest adding an index
>
> The OP said the table had 15 indexes already. I would guess one of
> those could be used. Perhaps it has a primary key....
>
>> update table1 set new_column = [whatever calculation]
>>   where new_column is null and
>>     quasi_unique_column in
>>       (select quasi_unique_column from table1
>>         where new_column is null limit 1000);
>
> Or, if the primary key (or other unique or quasi-unique existing
> index) has multiple columns, this could still be done with:
>
> update table1 set new_column = [whatever calculation]
>   where new_column is null and
>     (col1, col2) in
>       (select col1, col2 from table1
>         where new_column is null limit 1000);

Would doing something with ctid be even better? Or does it have some
risks I'm missing? I'm thinking something like:

fli=# select max(ctid) from table1;
     max
-------------
 (183000,42)
(1 row)

Then

update table1 set new_column=[whatever] where ctid<'(10000,1)';
vacuum;
update table1 set new_column=[whatever] where ctid>='(10000,1)' and ctid<'(20000,1)';
vacuum;
...
update table1 set new_column=[whatever] where ctid>='(180000,1)';
vacuum;

and perhaps a final

update table1 set new_column=[whatever] where new_column is null;

to catch any rows this might have missed?

Seems this makes it easier to control how much the table will bloat
too -- if I only want it to bloat 5% I divide max(ctid) by 20 for
each group size....
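For example, with max(ctid) = (183000,42) here, a 5% bloat target
would mean chunks of roughly 183000 / 20 = 9150 pages, so the ranges
would step by 9150 blocks -- ctid < '(9150,1)', then
ctid >= '(9150,1)' and ctid < '(18300,1)', and so on -- with a vacuum
after each chunk.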
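The chunks could also be sized without scanning for max(ctid), since
pg_class already tracks the table's page count. A rough sketch,
assuming the statistics are reasonably current from a recent VACUUM
or ANALYZE (and ignoring schema qualification):

fli=# select relpages from pg_class where relname = 'table1';

relpages should land close to the block number in max(ctid) above.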