Re: Alter the column data type of the large data volume table.
From | Michael Lewis |
---|---|
Subject | Re: Alter the column data type of the large data volume table. |
Date | |
Msg-id | CAHOFxGpqi_=T2JQf+eNef3A5XR-Z9FeSB-hpZkG5aOr0PPvi0g@mail.gmail.com |
In reply to | Alter the column data type of the large data volume table. (charles meng <xlyybz@gmail.com>) |
Responses | Re: Alter the column data type of the large data volume table. |
List | pgsql-general |
On Wed, Dec 2, 2020 at 11:53 PM charles meng <xlyybz@gmail.com> wrote:
Hi all,

I have a table with 1.6 billion records. The data type of the primary key column is incorrectly set to integer, and I need to change it to bigint. Are there any ideas for this?

Solutions that have been tried:
Adding a temporary column was too time-consuming, so I gave up.
Using a temporary table: there is no good way to migrate the original table data to the temporary table.

Thanks in advance.
You can add a new nullable bigint column with no default, which is very fast since it is a metadata-only change. Then you can gradually update rows in batches (if on PG11+, perhaps use a DO script with a loop that commits after every X rows) to set the new column equal to the primary key. Lastly, in a single transaction, update any new rows where the bigint column is still null, then change which column is the primary key and drop the old one. This keeps each transaction reasonably sized so it does not hold up other processes.
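A minimal sketch of that approach follows. The table name "mytable", column names "id"/"id_big", the batch size, and the index/constraint names are all hypothetical; it assumes PostgreSQL 11+ (COMMIT inside a DO block) and that the DO block is run outside an explicit transaction.

-- 1. Add the new column: no default, nulls allowed, so it is
--    a metadata-only change and effectively instant.
ALTER TABLE mytable ADD COLUMN id_big bigint;

-- 2. Backfill in batches; each COMMIT ends one small transaction
--    (PG11+ only, and only when DO runs at the top level).
DO $$
DECLARE
    rows_updated bigint;
BEGIN
    LOOP
        UPDATE mytable
           SET id_big = id
         WHERE id IN (SELECT id
                        FROM mytable
                       WHERE id_big IS NULL
                       LIMIT 50000);
        GET DIAGNOSTICS rows_updated = ROW_COUNT;
        EXIT WHEN rows_updated = 0;
        COMMIT;  -- release locks between batches
    END LOOP;
END
$$;

-- 3. Build the unique index ahead of time so the final swap is short.
--    CONCURRENTLY cannot run inside a transaction block.
CREATE UNIQUE INDEX CONCURRENTLY mytable_id_big_idx ON mytable (id_big);

-- 4. Swap the primary key in one short transaction. Note the implicit
--    SET NOT NULL check still scans the table once.
BEGIN;
UPDATE mytable SET id_big = id WHERE id_big IS NULL;  -- rows added meanwhile
ALTER TABLE mytable DROP CONSTRAINT mytable_pkey;
ALTER TABLE mytable
    ADD CONSTRAINT mytable_pkey PRIMARY KEY USING INDEX mytable_id_big_idx;
ALTER TABLE mytable DROP COLUMN id;
ALTER TABLE mytable RENAME COLUMN id_big TO id;
COMMIT;

If a sequence feeds the old column, its ownership and the column default would also need to be moved to the new column before the swap.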