Re: pg_dump with lots and lots of tables

From: Tom Lane
Subject: Re: pg_dump with lots and lots of tables
Date:
Msg-id: 11774.1383412537@sss.pgh.pa.us
In response to: pg_dump with lots and lots of tables  (Andy Colson <andy@squeakycode.net>)
List: pgsql-general
Andy Colson <andy@squeakycode.net> writes:
> pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped it up several times (up to 600 so far)
> but not sure how many it needs.

> I'm merging 90 databases into a single database with 90 schemas.  Each schema can have 500'ish tables.  Do I need to
> set max_locks_per_transaction to (90*500) 45,000?  Will that even work?

The pg_dump will need about 45000 locks altogether, so anything north of
45000/max_connections should work (more if you have other sessions going
on at the same time).
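
For example, assuming the default max_connections = 100 (an assumption,
check your own setting) and no other lock-hungry sessions during the dump:

    90 schemas * ~500 tables   = ~45,000 locks held by the pg_dump session
    45,000 / 100 connections   = 450
    => max_locks_per_transaction of at least 450; 500-600 leaves some headroom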

Basically the lock table is sized at max_locks_per_transaction*max_connections,
and transactions can use as many entries as they want --- there's no
attempt to hold a session to its "fair share" of the table.  The parameter
is only defined as it is to ensure that if you bump up max_connections the
lock table will get bigger automatically, so you won't starve sessions of
locks accidentally.
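
If you want to sanity-check the estimate, one rough way (a sketch; run it
from a second session while the dump is in progress) is to count entries
in pg_locks:

    -- total lock-table entries currently in use
    SELECT count(*) FROM pg_locks;

    -- per-backend breakdown, largest consumers first
    SELECT pid, count(*) FROM pg_locks GROUP BY pid ORDER BY count(*) DESC;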

> Will I ever need to bump up sysctl kernel.shmmax?

If the postmaster fails to start with the larger setting, then yes.
But lock entries aren't that large so probably it won't matter.
If it does matter, and increasing shmmax is inconvenient, you could
back off shared_buffers to make room.
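
If you do hit the kernel limit, a quick way to see where most of the shared
memory is going (shared_buffers is normally by far the largest consumer) is:

    -- current value; lowering it in postgresql.conf frees shared memory
    -- if raising kernel.shmmax via sysctl is inconvenient
    SHOW shared_buffers;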

            regards, tom lane

