Thread: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL


Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: mahamood hussain
Date:

Hi Team,

We are in the process of migrating several DB2 databases to PostgreSQL, primarily to reduce the high licensing costs associated with DB2. These databases support retail applications (e.g., supermarkets and stores), and during peak hours, we anticipate over 100 concurrent connections.


Current Database Profile:

  • Approximately 3,000 tables in total

  • Around 100 tables contain active data

  • Most tables have low data volume

  • A few large tables range from 10 GB to 2 TB

  • The largest table contains approximately 80 billion rows


Migration Approach:

  • We are using Ispirer for code conversion (DB2 to PostgreSQL).

  • For data migration, we are evaluating Fivetran, but noted that it relies on the COPY method for data loading.


Questions & Areas Where We Need Guidance:

  1. Is Fivetran a suitable option for migrating very large datasets (e.g., tables with 80+ billion rows)?

  2. Are there any reliable open-source tools for DB2 to PostgreSQL data migration that we can use internally, without needing to invest in a tool like Fivetran?

  3. Are there more scalable or efficient alternatives for both the initial load and ongoing/incremental sync to PostgreSQL?


Additional Input Requested:

  • What are the key best practices (Do’s and Don’ts) to keep in mind during a large-scale DB2 → PostgreSQL migration?

  • Are there specific PostgreSQL settings or configurations we should pay attention to for optimizing performance, especially for large datasets and DB2-style workloads?


We are keen to ensure performance, data integrity, and scalability throughout this migration. Any insights—particularly from those with experience in similar large-scale PostgreSQL implementations—would be highly appreciated.

If this is not the right forum for these questions, please do let me know if there is a better place to seek this guidance.

Thanks in advance for your support!

Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: Ron Johnson
Date:
On Wed, Oct 15, 2025 at 5:14 PM mahamood hussain <hussain.ieg@gmail.com> wrote:

> • What are the key best practices (Do’s and Don’ts) to keep in mind during a large-scale DB2 → PostgreSQL migration?

Since this is retail, I bet the only two statements that ever touch the biggest tables are INSERT and SELECT.

In a similar situation (Oracle to PG, but that should not make a difference), I created per-month views on the source side, then exported them to CSV files, which were then COPY-loaded into PostgreSQL.

For each table, I tracked in a spreadsheet which month-CSV files had been exported, compressed, scp'd, loaded, and indexed. Iteratively, I developed a pretty slick "parallel assembly line" of cron jobs running continually looping shell scripts: one exported views to CSV files, one compressed CSV files whose export had completed, and one scp'd the files once compression had finished. On the VM running PG, a cron job running a continuously looping shell script picked up successfully transferred files, decompressed and loaded them into the appropriate table, and created secondary indices as soon as each file finished uploading. It was quite fast.

On cutover day, I just had to move the current month's data from those tables, along with the (relatively) small amount of data in tables that get UPDATEd, and then run the CREATE INDEX and ALTER TABLE ... ADD FOREIGN KEY statements.

Honestly, that 10GB table is small enough that I'd leave it until cutover day.

Note that I had six hours to do the final data moves.
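The load side of each iteration boiled down to something like this in psql (table and file names here are made up, not my actual schema):

    -- Load one transferred month-file, then build its secondary index.
    -- \copy reads the CSV on the client and streams it through COPY.
    \copy sales FROM 'sales_2024_01.csv' WITH (FORMAT csv)

    -- Secondary indices go in only after the bulk load, so COPY runs
    -- without index-maintenance overhead.
    CREATE INDEX IF NOT EXISTS sales_store_id_idx ON sales (store_id);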

> • Are there specific PostgreSQL settings or configurations we should pay attention to for optimizing performance, especially for large datasets and DB2-style workloads?

What's a DB2-style workload?
 

> We are keen to ensure performance

The default autovacuum settings are too conservative IMO, so I set them thusly:

    autovacuum_analyze_scale_factor = 0.03
    autovacuum_vacuum_scale_factor = 0.03
    autovacuum_vacuum_insert_scale_factor = 0.03
    default_statistics_target = 5000

And the memory configs for a system with 128GB RAM:

    effective_cache_size = 108GB  # RAM less room for Linux & Carbon Black
    shared_buffers = 32GB         # 25% of RAM
    work_mem = 164MB              # ECS/100 (expected max connections)
    maintenance_work_mem = 8GB

Your mileage will vary.
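For what it's worth, these can also be applied with ALTER SYSTEM instead of editing postgresql.conf by hand (a sketch; most of these take effect on reload, but shared_buffers needs a restart):

    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.03;
    ALTER SYSTEM SET default_statistics_target = 5000;
    SELECT pg_reload_conf();  -- shared_buffers changes still need a restart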

> , data integrity,


Build the PG instance with checksums enabled. It's 2025; the computational overhead is minuscule.
If you use foreign key constraints, verify that supporting indices exist. They probably already do, but it can't hurt to check...
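A rough catalog query for that check (approximate: it flags FK constraints whose columns aren't all contained in the leading columns of some index on the referencing table):

    SELECT c.conrelid::regclass AS fk_table,
           c.conname            AS fk_name
    FROM pg_constraint c
    WHERE c.contype = 'f'
      AND NOT EXISTS (
            SELECT 1
            FROM pg_index i
            WHERE i.indrelid = c.conrelid
              -- FK columns contained in the index's leading columns
              AND (i.indkey::int2[])[0:array_length(c.conkey, 1) - 1] @> c.conkey
          )
    ORDER BY 1;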

pgBackRest is my go-to backup and restore program: multi-threaded, with automatic PITR, compression, and encryption.

> and scalability throughout this migration. Any insights—particularly from those with experience in similar large-scale PostgreSQL implementations—would be highly appreciated.


You'll probably want to partition those big tables. Note, though, that PG requires the partition key to be part of the PK.

If your existing PKs include transaction_date, invoice_date, etc., then hurray; your job is easy. If not (which is likely if the PK is synthetic), then I'd partition by ranges of PK values, say every 10,000,000 values. I'd study the distribution patterns of the data: for example, if volume is lower in January through March and higher in summer and December, then maybe partition every 7,000,000 PK values in the typically slow months and every 14,000,000 in the busy months. Gaps between PK values due to sequence caching will also affect those numbers.
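A minimal sketch of that layout, with a made-up table (the point being that the partition key has to appear in the PK):

    CREATE TABLE sales (
        sale_id  bigint        NOT NULL,
        sale_ts  timestamptz   NOT NULL,
        store_id integer       NOT NULL,
        amount   numeric(12,2),
        PRIMARY KEY (sale_id)
    ) PARTITION BY RANGE (sale_id);

    -- ~10M ids for a slow period, a wider range for a busy one.
    CREATE TABLE sales_p001 PARTITION OF sales
        FOR VALUES FROM (1) TO (10000001);
    CREATE TABLE sales_p002 PARTITION OF sales
        FOR VALUES FROM (10000001) TO (24000001);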

--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!

Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: Brian Crockard
Date:
You may want to look into an IBM product called InfoSphere Data Replication (CDC).


It will replicate data from DB2 to PostgreSQL. You can set it up to actively replicate the data and keep it consistent, then perform a cutover at some point to the new system running against the newly replicated database. You will then have all of your historical data in the new system. We used this on an extremely active system and it had very few issues.


Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: Muhammet Kurtoğlu
Date:

Try SymmetricDS.



Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: mahamood hussain
Date:

Hi Brian, Ron, Kurtoglu,

Thank you all for the thoughtful responses. It took a bit of effort to pull this information together, but I'm glad to see the insights coming in — it's given me more confidence that this migration is indeed achievable with the right approach.

To clarify our setup:
We're currently running DB2 LUW v11.5.9 (Advanced Enterprise Server Edition). One of our key concerns is licensing cost, which is driving our move to PostgreSQL.

A few follow-up questions based on your responses:

1. InfoSphere Data Replication (CDC):
  We understand this is a strong option, but it likely incurs additional licensing costs. We’re exploring Fivetran for data replication between DB2 and PostgreSQL, with the flexibility to fall back to DB2 if performance or stability issues arise in production.

  • Can CDC be used for such a failback scenario (i.e., from PostgreSQL → DB2), similar to how Fivetran supports bidirectional sync?

2. Open-source Tools:
  Are there any robust open-source tools that support both forward and backward replication between DB2 and PostgreSQL? This would help us plan for both migration and rollback scenarios without heavy vendor lock-in.

3. Pitfalls to Avoid During Migration:
  Since our primary goal is cost reduction, we would ideally avoid needing to fall back to DB2. Based on your experience, what are some common pitfalls or gotchas we should look out for when moving from DB2 to PostgreSQL?

4. PostgreSQL Performance Tuning & Config:
  If there’s a list of standard PostgreSQL parameters that you'd recommend tweaking right after installation — especially for large datasets — that would be very helpful. We're anticipating around 100+ concurrent connections during peak hours and migrating some very large tables (up to 80B rows).

5. Community vs Enterprise Support:
  In production, having support with a guaranteed response time (e.g., under an hour) is crucial.

  • Does the PostgreSQL community offer any premium or paid support options with SLAs, or would we need to go with providers like EDB for this level of assurance?




Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: Muhammet Kurtoğlu
Date:
Hi,

The open-source SymmetricDS supports bidirectional replication between DB2 and PostgreSQL.

There are open-source tools such as PMM, Grafana, etc. for monitoring PostgreSQL performance.
I recommend getting professional support for PostgreSQL and the open-source technologies used in the migration.

There is no need to buy EDB support; there are lots of companies that provide professional PostgreSQL support with the required SLAs.

One of them is our company, BiSoft, located in Turkey.



Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: mahamood hussain
Date:
A few other points I’d like to highlight: we use the DB2 LOAD utility for our daily data loads into tables. Since this utility bypasses transaction logging for performance reasons, any replication tool that relies on transaction logs to replicate data into PostgreSQL will not capture these changes. Therefore, I'm looking for tools that can specifically address this limitation.
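One mitigation we are considering, independent of the replication tool, is periodic reconciliation: run an equivalent aggregate on both sides and diff the output. A rough sketch against a hypothetical sales table (PostgreSQL syntax; the DB2 side would use its own date functions):

    -- Per-day row counts plus a cheap checksum; a mismatched day points
    -- at rows that LOAD inserted without logging.
    SELECT sale_ts::date AS load_day,
           count(*)      AS row_count,
           sum(sale_id)  AS id_sum
    FROM sales
    GROUP BY 1
    ORDER BY 1;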

Re: Guidance Requested: Migrating Large-Scale DB2 Databases to PostgreSQL

From: Muhammet Kurtoğlu
Date:
SymmetricDS is trigger-based, not log-based.
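For illustration, the general shape of trigger-based capture looks like this (only a sketch in PostgreSQL syntax, not SymmetricDS's actual schema; table names are hypothetical):

    -- Change table populated by a row trigger; a separate process ships
    -- the captured rows to the target database.
    CREATE TABLE sales_changes (
        change_id  bigserial   PRIMARY KEY,
        op         char(1)     NOT NULL,              -- 'I', 'U', or 'D'
        changed_at timestamptz NOT NULL DEFAULT now(),
        row_data   jsonb       NOT NULL
    );

    CREATE FUNCTION capture_sales_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO sales_changes (op, row_data)
        VALUES (left(TG_OP, 1), to_jsonb(COALESCE(NEW, OLD)));
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER sales_capture
    AFTER INSERT OR UPDATE OR DELETE ON sales
    FOR EACH ROW EXECUTE FUNCTION capture_sales_change();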


