Re: Alter the column data type of the large data volume table.
From: Rich Shepard
Subject: Re: Alter the column data type of the large data volume table.
Date:
Msg-id: alpine.LNX.2.20.2012030915280.29996@salmo.appl-ecosys.com
In reply to: Re: Alter the column data type of the large data volume table. (Michael Lewis <mlewis@entrata.com>)
Responses:
  Re: Alter the column data type of the large data volume table.
  Re: Alter the column data type of the large data volume table.
List: pgsql-general
On Thu, 3 Dec 2020, Michael Lewis wrote:

> On Wed, Dec 2, 2020 at 11:53 PM charles meng <xlyybz@gmail.com> wrote:
>> I have a table with 1.6 billion records. The data type of the primary key
>> column is incorrectly used as integer. I need to replace the type of the
>> column with bigint. Is there any ideas for this?
>
> You can add a new column with NO default value and null as default and have
> it be very fast. Then you can gradually update rows in batches (if on
> PG11+, perhaps use do script with a loop to commit after X rows) to set the
> new column the same as the primary key. Lastly, in a transaction, update
> any new rows where the bigint column is null, and change which column is
> the primary key & drop the old one. This should keep each transaction
> reasonably sized to not hold up other processes.

Tell me, please, why

ALTER TABLE <tablename> ALTER COLUMN <columnname> SET DATA TYPE BIGINT

will not do the job?

I've found some varchar columns in a couple of tables too small and used the
above to increase their size. Worked perfectly.

Regards,

Rich
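[The batched approach Michael describes above could be sketched roughly as
follows. The table and column names ("big_table", "id", "id_big"), the batch
size, and the constraint name are all illustrative assumptions, not taken
from the thread; PostgreSQL 11+ is assumed, since COMMIT inside a DO block
requires it.]

```sql
-- Step 1: add the new column with no default, so this is a fast
-- catalog-only change (no table rewrite).
ALTER TABLE big_table ADD COLUMN id_big bigint;

-- Step 2: backfill in batches, committing after each batch so no single
-- transaction holds locks or bloats for long (PG11+ allows COMMIT in a
-- DO block run outside an explicit transaction).
DO $$
DECLARE
    n bigint;
BEGIN
    LOOP
        UPDATE big_table
           SET id_big = id
         WHERE id IN (SELECT id
                        FROM big_table
                       WHERE id_big IS NULL
                       LIMIT 10000);
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        COMMIT;
    END LOOP;
END $$;

-- Step 3: in one final transaction, catch any rows inserted meanwhile
-- and swap the columns.
BEGIN;
UPDATE big_table SET id_big = id WHERE id_big IS NULL;
ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;  -- assumed name
ALTER TABLE big_table DROP COLUMN id;
ALTER TABLE big_table RENAME COLUMN id_big TO id;
ALTER TABLE big_table ADD PRIMARY KEY (id);
COMMIT;
```

The contrast with Rich's question: a single ALTER COLUMN ... SET DATA TYPE
BIGINT would also work, but changing int to bigint forces a full table
rewrite under an ACCESS EXCLUSIVE lock, which on 1.6 billion rows can block
the table for a long time. Widening a varchar, by contrast, is a
catalog-only change, which is why it "worked perfectly" there.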