Discussion: Max tables in a cluster?
Hi admins,
Is there a limit to the number of tables that a cluster can support?
At the moment, one of my clusters has about 250k user tables (plus 150k toast tables) across a hundred schemas within a single database.
My application team wants to double the number of schemas (and with them the number of tables).
I am looking for the pros and cons of doing this, and for tuning that can help this cluster keep performing acceptably.
My questions are:
1. Is there a limit to the number of tables a cluster can support effectively?
2. My guess is that autovacuum would have to be tuned. The current GUCs for this are:
autovacuum_max_workers = 6
maintenance_work_mem = 1GB
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_cost_delay = 20ms
Are there any potential problems that I need to think about?
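For context, this is roughly how I inspect these settings and experiment with per-table overrides (just a sketch; the table name and the 0.01 value below are placeholders, not something I am recommending):

    -- current autovacuum-related settings
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('autovacuum_max_workers',
                   'maintenance_work_mem',
                   'autovacuum_vacuum_scale_factor',
                   'autovacuum_vacuum_cost_delay');

    -- example of a per-table override for one hot table
    -- (some_busy_table is a placeholder)
    ALTER TABLE some_busy_table
      SET (autovacuum_vacuum_scale_factor = 0.01);

Note that raising autovacuum_max_workers requires a server restart, not just a reload.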
My machine specs are below.
RAM 256GB
32-core CPU
7TB SSD
Thanks,
Karthik
On 4/5/23 02:07, Karthik Krishnakumar wrote:
> Hi admins,
> Is there a limit to the number of tables that a cluster can support?

https://www.postgresql.org/docs/13/limits.html
--
Born in Arizona, moved to Babylonia.
On Wed, Apr 05, 2023 at 12:37:41PM +0530, Karthik Krishnakumar wrote:
> Is there a limit to the number of tables that a cluster can support?

Well, yes. For starters: https://www.postgresql.org/docs/current/limits.html

But more important is: with a large number of tables, certain operations become
VERY slow. Specifically, getting a schema dump, which is used, for example, by
pg_upgrade. In my experience, hitting 1 million rows in pg_class made it
realistically impossible to use pg_dump. But there have been many improvements
to pg_dump over the years
(https://why-upgrade.depesz.com/show?from=9.5&to=15.2&keywords=pg_dump), so
it's possible that the problem is gone now.

Try taking `pg_dump -s` of the db, and see how long it takes...

Best regards,

depesz
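PS. To get a feel for how much catalog a schema-only dump has to walk, a rough
first check could be (a sketch; run it in psql against the database in question):

    -- catalog rows pg_dump has to read for a schema-only dump
    SELECT count(*) FROM pg_class;
    SELECT count(*) FROM pg_attribute;

and then time the dump itself with something like
`time pg_dump -s your_db > /dev/null`, where your_db stands in for the real
database name.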