Re: Guidance Needed: Scaling PostgreSQL for 12 TB Data Growth - New Feature Implementation

From: Paul Smith
Subject: Re: Guidance Needed: Scaling PostgreSQL for 12 TB Data Growth - New Feature Implementation
Date:
Msg-id: ddc2adec-4fd1-4d92-8646-65227b9ff84c@pscs.co.uk
In response to: Re: Guidance Needed: Scaling PostgreSQL for 12 TB Data Growth - New Feature Implementation  (Motog Plus <mplus7535@gmail.com>)
List: pgsql-admin
On 26/06/2025 14:43, Motog Plus wrote:
> OLTP: This is our primary transactional workload and has a replication 
> setup with pgpool-II
> Reporting/DW: This is for reporting purposes.
>
> The growth figures I initially shared (8-9 TB) were a more 
> conservative estimate for OLTP.
>
> However, after a more focused rough estimate for our OLTP workload 
> alone, we anticipate it could reach 35-40 TB of data over the next 5-7 
> years.
>
>
> Specifically for our OLTP databases (which I listed in my initial email):
>
> Database C could reach 30-32 TB, with the acc schema within it 
> potentially growing to 13-15 TB.

The database size is largely irrelevant (once it's significantly bigger 
than the available RAM).

A Raspberry Pi could easily handle a 30TB database with 50 transactions 
an hour.

A 64 core Xeon with 64GB RAM couldn't handle a 500GB database with 
50,000 random insert/update transactions a second.

How many transactions per <time unit> is more important than size, as is 
what sort of transactions they are - e.g. an indexed SELECT of a single 
row is a lot less effort than an INSERT which fires triggers that update 
multiple other tables.
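To make the contrast concrete, here is a minimal sketch of the expensive case described above. The table, column, and trigger names are invented for illustration, not taken from the thread: a single INSERT into `ledger` also updates `accounts` and writes an audit row, so one transaction touches three tables plus their indexes and WAL.

```sql
-- Hypothetical schema; names are illustrative only.
CREATE TABLE accounts (
    id      bigint PRIMARY KEY,
    balance numeric NOT NULL
);

CREATE TABLE ledger (
    id         bigserial PRIMARY KEY,
    account_id bigint NOT NULL REFERENCES accounts(id),
    amount     numeric NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE account_audit (
    account_id  bigint,
    old_balance numeric,
    new_balance numeric,
    changed_at  timestamptz DEFAULT now()
);

-- Trigger function: every ledger INSERT also updates the account
-- balance and records an audit row.
CREATE FUNCTION apply_ledger_entry() RETURNS trigger AS $$
BEGIN
    UPDATE accounts
       SET balance = balance + NEW.amount
     WHERE id = NEW.account_id;

    INSERT INTO account_audit (account_id, old_balance, new_balance)
    SELECT NEW.account_id, balance - NEW.amount, balance
      FROM accounts
     WHERE id = NEW.account_id;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ledger_apply
AFTER INSERT ON ledger
FOR EACH ROW EXECUTE FUNCTION apply_ledger_entry();

-- Cheap by comparison: a single-row lookup on the primary key index.
-- SELECT balance FROM accounts WHERE id = 42;
```

At the same nominal "transactions per second", a workload made of the trigger-laden INSERT does several times the I/O and locking work of the indexed SELECT.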

That is what people mean by 'transactional workload'.
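One rough way to characterise an existing cluster's transaction rate (an assumption on my part, not something the thread prescribes) is the cumulative counters in `pg_stat_database`: sample them twice, some interval apart, and divide the difference by the elapsed seconds.

```sql
-- xact_commit / xact_rollback are cumulative since the last stats
-- reset; run this twice and diff the values to estimate TPS.
SELECT datname,
       xact_commit,
       xact_rollback,
       now() AS sampled_at
  FROM pg_stat_database
 WHERE datname = current_database();
```

Tools like pgbench can then be used to replay a comparable synthetic load against candidate hardware.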


If the quantity and nature of the transactions stay the same, but, for 
instance, you are simply keeping historical data for longer and not 
querying it, then the increase in database size may not be as 
significant as you fear.

Paul



