Re: pg_dump fails when a table is in ACCESS SHARE MODE
From | Viral Shah
---|---
Subject | Re: pg_dump fails when a table is in ACCESS SHARE MODE
Date |
Msg-id | CAEVFvu3t5Vtg6PTbY9aB3Sj6-p2GsduCPVGDL4gN_X16wqHUyA@mail.gmail.com
In reply to | Re: pg_dump fails when a table is in ACCESS SHARE MODE (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: pg_dump fails when a table is in ACCESS SHARE MODE
List | pgsql-sql
Hello Tom,
I have about 7400 tables in my database. When I queried pg_locks while a pg_dump was in progress, it returned about 7400 rows, all originating from a single process id: the pg_dump backend that was running select pg_catalog.pg_get_statisticsobjdef().
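A query along these lines reproduces that per-backend count (a minimal sketch; the join to pg_stat_activity is only there to show which statement holds the locks):

  -- count granted locks per backend while pg_dump is running
  SELECT l.pid, count(*) AS locks_held, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
  WHERE l.granted
  GROUP BY l.pid, a.query
  ORDER BY locks_held DESC;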
Now, this does explain why I need to increase max_locks_per_transaction so that I can account for all the locks pg_dump acquires while the dump is in progress. I am still trying to understand why pg_get_statisticsobjdef() takes AccessShare locks on all the base tables at once, leading to this issue?
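For the record, the sizing arithmetic on my current settings, and the change I plan to try, look roughly like this (the new value of 128 is only an illustration, and max_locks_per_transaction takes effect only after a server restart):

  SHOW max_locks_per_transaction;   -- 64
  SHOW max_connections;             -- 100
  SHOW max_prepared_transactions;   -- 0
  -- lock-table capacity: 64 * (100 + 0) = 6400 slots, below the ~7400 tables
  ALTER SYSTEM SET max_locks_per_transaction = 128;  -- 128 * (100 + 0) = 12800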
On Fri, May 1, 2020 at 10:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Viral Shah <viralshah009@gmail.com> writes:
> Per the postgres documentation,
> max_locks_per_transaction * (max_connections + max_prepared_transactions)
> this formula determines the maximum number of objects that can be locked on
> the database. Currently my database has the following values:
> max_locks_per_transaction = 64 (default)
> max_connections = 100
> max_prepared_transactions = 0
> Using these values in the above formula tells us that the postgres server
> can/should handle 6400 locked objects at a time.
Right.
> What is surprising is why Postgres complains of insufficient locks per
> transaction if only 10 processes (parallel jobs in pg_dump) are running on
> the database while taking the dump.
They're taking more than 6400 locks, evidently. How many tables are
in your database? Have you tried looking into pg_locks while the dump
is running?
regards, tom lane