Discussion: How to have a smooth migration


How to have a smooth migration

From
veem v
Date:
Hi,
It's a Postgres database behind the scenes.

We have a use case in which the customer is planning to migrate data from an older version (V1) to a newer version (V2). For V2 the tables will be new, but their structure will be similar to V1, possibly with a few changes in the relationships. We want this migration to happen in multiple phases: in each phase the delta data from V1 will be moved to V2, and then the final cutover to V2 will happen if all looks good, or else we roll back to V1. The tables are small, at most ~100K records each.

My question is: is it a good idea to create stored procedures that move the delta data in each phase, scheduled as tasks for each table? Or is there another strategy we should follow?
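For context, the kind of procedure I have in mind would look roughly like this (a hedged sketch only: the table `v1.orders`/`v2.orders`, the `updated_at` column, and the `migration_watermark` bookkeeping table are hypothetical names, not our actual schema; it assumes `v2.orders(id)` has a unique constraint and that rows carry a reliable last-modified timestamp):

```sql
-- Bookkeeping table recording how far each table has been synced.
CREATE TABLE IF NOT EXISTS migration_watermark (
    table_name  text PRIMARY KEY,
    last_synced timestamptz NOT NULL DEFAULT '-infinity'
);

CREATE OR REPLACE PROCEDURE copy_orders_delta()
LANGUAGE plpgsql
AS $$
DECLARE
    cutoff timestamptz := now();
BEGIN
    -- Ensure a watermark row exists for this table.
    INSERT INTO migration_watermark (table_name)
    VALUES ('orders')
    ON CONFLICT (table_name) DO NOTHING;

    -- Copy only rows changed since the last phase, up to this run's cutoff.
    INSERT INTO v2.orders (id, customer_id, amount, updated_at)
    SELECT o.id, o.customer_id, o.amount, o.updated_at
    FROM   v1.orders o
    WHERE  o.updated_at >  (SELECT last_synced FROM migration_watermark
                            WHERE table_name = 'orders')
      AND  o.updated_at <= cutoff
    ON CONFLICT (id) DO UPDATE
        SET customer_id = EXCLUDED.customer_id,
            amount      = EXCLUDED.amount,
            updated_at  = EXCLUDED.updated_at;

    UPDATE migration_watermark
    SET    last_synced = cutoff
    WHERE  table_name = 'orders';
END;
$$;
```

Each scheduled run would then just `CALL copy_orders_delta();`. Deleted rows are not handled by this sketch; that would need soft deletes or a separate mechanism.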

Another thing to note: we have used sequences as primary keys in some tables, and those tables have FK relationships with other tables, so the same sequence numbers in V2 will cause conflicts. How should we handle this scenario? Should we just create new sequences with higher start values?
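To illustrate the "higher start value" idea we are considering (names here are hypothetical, and the headroom figure is arbitrary):

```sql
-- Move V2's sequence above anything V1 has issued, with headroom for
-- rows still being written to V1 during the phased migration.
SELECT setval('v2.orders_id_seq',
              (SELECT COALESCE(MAX(id), 0) + 100000 FROM v1.orders));

-- Or declare the start value up front when creating the V2 sequence:
CREATE SEQUENCE v2.orders_new_id_seq START WITH 1000000;
```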

Regards
Veem

Re: How to have a smooth migration

From
Achilleas Mantzios
Date:
On 5/15/25 07:09, veem v wrote:
> Hi,
> Its postgres database behind the scenes.
>
> We have a use case in which the customer is planning to migrate data 
> from an older version (V1) to a newer version (V2). For V2, the tables 
> will be new, but their structure will be similar to the V1 version 
> with few changes in relationship might be there. We want to have this 
> migration approach happen in multiple phases in which each time the 
> delta data from version V1 will be moved to version- V2 and then final 
> cutover will  happen to V2 if all looks good or else rollback to V1. 
> The tables are smaller in size like max ~100K records in tables.
>
> My question is, is it a good idea to have an approach in which we will 
> have procedures created to move the delta data in every phase and 
> schedule those using some tasks for each table. Or any other strategy 
> should we follow?
>
> Also another thing to note , we have used sequences as primary keys in 
> some tables and they have FK relationships with other tables, so the 
> same sequence number in version V2 will cause issues/conflict, so how 
> should we handle this scenario? Should we just create new sequences 
> with higher start values?

Yes, use logical replication, and do thorough testing using a test V1 
(publisher) and a test V2 (subscriber).

If it succeeds, as a final step, swap the roles of publisher and 
subscriber so that the new V2 becomes the publisher and the old V1 the 
subscriber.
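As a sketch of that publication/subscription setup (host names, database name, and the replication user are illustrative; this assumes V1 and V2 are separate Postgres instances, or at least separate databases, since logical replication cannot feed a table back into its own database):

```sql
-- On the V1 (publisher) instance:
CREATE PUBLICATION v1_to_v2 FOR ALL TABLES;

-- On the V2 (subscriber) instance:
CREATE SUBSCRIPTION v2_sub
    CONNECTION 'host=v1-host dbname=appdb user=repl'
    PUBLICATION v1_to_v2;

-- After cutover, drop the subscription on V2 and reverse the
-- direction so the old V1 keeps tracking the new V2:
-- on V2:
CREATE PUBLICATION v2_to_v1 FOR ALL TABLES;
-- on V1:
CREATE SUBSCRIPTION v1_sub
    CONNECTION 'host=v2-host dbname=appdb user=repl'
    PUBLICATION v2_to_v1;
```

Note that logical replication replicates DML, not sequence state, so sequence values on the subscriber still need to be advanced manually before it takes writes.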

After you have tested everything, you replay your steps on the production environment.

This way, even though V2 is now the production, if your tests turn out to have been inadequate, or you missed something and hit a critical problem that prevents you from staying on V2, you can still go back to V1, which will have up-to-date data.

I have done this migrating from 10.* to 16.*; it was the safest upgrade 
I have ever done. We kept the old 10 around for some months until no one 
cared anymore.

>
> Regards
> Veem



Re: How to have a smooth migration

From
Adrian Klaver
Date:
On 5/14/25 23:09, veem v wrote:
> Hi,
> Its postgres database behind the scenes.
> 
> We have a use case in which the customer is planning to migrate data 
> from an older version (V1) to a newer version (V2). For V2, the tables 
> will be new, but their structure will be similar to the V1 version with 
> few changes in relationship might be there. We want to have this 
> migration approach happen in multiple phases in which each time the 
> delta data from version V1 will be moved to version- V2 and then final 
> cutover will  happen to V2 if all looks good or else rollback to V1. The 
> tables are smaller in size like max ~100K records in tables.
> 
> My question is, is it a good idea to have an approach in which we will 
> have procedures created to move the delta data in every phase and 
> schedule those using some tasks for each table. Or any other strategy 
> should we follow?

This is what Sqitch (https://sqitch.org/) was designed for.

The biggest issue is that the data will be incrementing while you do the 
structural changes. How you handle that is going to depend on the 
question raised by Peter J. Holzer:
Is this  being done in place on one Postgres instance or between 
separate Postgres instances?

> 
> Also another thing to note , we have used sequences as primary keys in 
> some tables and they have FK relationships with other tables, so the 
> same sequence number in version V2 will cause issues/conflict, so how 
> should we handle this scenario? Should we just create new sequences with 
> higher start values?
> 
> Regards
> Veem

-- 
Adrian Klaver
adrian.klaver@aklaver.com




Re: How to have a smooth migration

From
veem v
Date:


This is what Sqitch(https://sqitch.org/) was designed for.

The biggest issue is that the data will be incrementing while you do the
structural changes. How you handle that is going to depend on the
question raised by Peter J. Holzer:
Is this  being done in place on one Postgres instance or between
separate Postgres instances?



Thank you. Yes, these tables are going to be part of the same database. I have never used Sqitch though, but I was wondering if we can do it with simple stored procedures, as the number of tables is very small (<20) and the max table size is <50MB. Also, I forgot to add that this is a cloud RDS database, so I'm not sure we can use that tool there.

To answer the questions raised by Peter J. Holzer specifically:

Are V1 and V2 different databases, or do you plan to do this in-place?
Answer:
Both versions of the tables are in the same database.

What is the purpose of doing it in multiple phases? Do you have lengthy acceptance tests during which new data will accumulate?
Answer:
Yes. There will be a test phase in which the new code points to the V2 tables, so we can get comfortable with it. But the delta data will be very small.

By "rollback" do you mean a transaction rollback or some other means of restoring the previous state?
Answer:
"Rollback" means pointing the old code back to the V1 tables.
 

Re: How to have a smooth migration

From
Adrian Klaver
Date:
On 5/15/25 09:29, veem v wrote:
> 
> 
>     This is what Sqitch(https://sqitch.org/ <https://sqitch.org/>) was
>     designed for.
> 
>     The biggest issue is that the data will be incrementing while you do
>     the
>     structural changes. How you handle that is going to depend on the
>     question raised by Peter J. Holzer:
>     Is this  being done in place on one Postgres instance or between
>     separate Postgres instances?
> 
> 
> 
> Thank you. Yes, these tables are going to be part of the same database. 
> Never use sqitch though , but was wondering if we can do it with the 
> stored simple proc as the number of table is very small <20 and also the 
> max size of table in <50MB. Also , missed to add , this is a cloud RDS 
> database and so not sure we can have this tool there.
> 

1) For Postgres, Sqitch uses psql as the client for making the changes, 
so you only need access to psql. The deployments can also be run 
from a machine that is not in the cloud, as long as you have remote 
access to the Postgres instance.

2) With Sqitch you have:

a) Deploy/verify/revert actions. The verify step helps keep out erroneous 
deployments, and revert can take you back to a known prior state. Caveat: 
the actions are based on SQL/psql scripts you create, so they are only 
as useful as you make them.

b) Targets, which are different instances of Postgres you can 
deploy/verify/revert against independently of each other. Useful to try 
your changes against a dev instance before deploying to production.

3) I would strongly suggest:

a) Breaking the changes down into smaller portions, probably grouped 
around tables having relationships.

b) Creating a dev/test Postgres instance to trial changes and test them.

Sqitch is not the only database change management system out there; it 
is just the one I found useful for my needs.




-- 
Adrian Klaver
adrian.klaver@aklaver.com