Re: pg_upgrade failing for 200+ million Large Objects

From Jan Wieck
Subject Re: pg_upgrade failing for 200+ million Large Objects
Date
Msg-id 4fbf92f2-ec91-f4fa-a259-f0968e34f3d7@wi3ck.info
Replies Re: pg_upgrade failing for 200+ million Large Objects  (Andrew Dunstan <andrew@dunslane.net>)
Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
Fix pg_upgrade to preserve datdba (was: Re: pg_upgrade failing for 200+ million Large Objects)  (Jan Wieck <jan@wi3ck.info>)
List pgsql-hackers
On 3/8/21 11:58 AM, Tom Lane wrote:
> The answer up to now has been "raise max_locks_per_transaction enough
> so you don't see the failure".  Having now consumed a little more
> caffeine, I remember that that works in pg_upgrade scenarios too,
> since the user can fiddle with the target cluster's postgresql.conf
> before starting pg_upgrade.
> 
> So it seems like the path of least resistance is
> 
> (a) make pg_upgrade use --single-transaction when calling pg_restore
> 
> (b) document (better) how to get around too-many-locks failures.

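(For reference, that workaround amounts to bumping the setting in the new
cluster's postgresql.conf before running pg_upgrade, e.g.

     max_locks_per_transaction = 5300    # default is 64

with the value sized to the number of large objects; 5300 is simply the
number from my test further down.)
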
That would first require fixing how pg_upgrade creates the databases. It 
uses "pg_restore --create", which is mutually exclusive with 
--single-transaction because we cannot create a database inside a 
transaction block. Along the way pg_upgrade also mangles 
pg_database.datdba (all databases are owned by postgres after an upgrade; 
I will submit a separate patch for that as I consider it a bug in itself).
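
(The restriction is server-side; for illustration, with "foo" as a
placeholder name:

     =# BEGIN;
     BEGIN
     =# CREATE DATABASE foo;
     ERROR:  CREATE DATABASE cannot run inside a transaction block

so pg_restore has no way to wrap the CREATE DATABASE emitted by --create
into the single transaction.)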

All that aside, the entire approach doesn't scale.

In a hacked-up pg_upgrade that does "createdb" first and then calls 
pg_restore with --single-transaction, I can upgrade 1M large objects with
     max_locks_per_transaction = 5300
     max_connections = 100
which contradicts the docs. I need to find out where that math went off 
the rails, because that config should only have room for 530,000 locks, 
not 1M. The same test fails with max_locks_per_transaction = 5200.
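
(Per the docs the shared lock table is sized as

     max_locks_per_transaction * (max_connections + max_prepared_transactions)
       = 5300 * (100 + 0) = 530,000 slots

while the single restore transaction should need roughly one lock per
large object, i.e. about 1M.)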

But this would mean that one has to modify postgresql.conf to something 
like max_locks_per_transaction = 530,000 at max_connections = 100 in 
order to actually run a successful upgrade of 100M large objects. That 
config requires 26GB of memory just for the lock table. Add to that the 
memory pg_restore needs to load the entire TOC before even restoring a 
single object.
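
(Back of the envelope, assuming on the order of ~500 bytes of shared
memory per lock table slot, which is a guess on my part rather than a
measured number:

     100M LOs  ->  max_locks_per_transaction = 5300 * 100   = 530,000
     lock table slots  =  530,000 * 100 backends            = 53,000,000
     memory           ~=  53,000,000 * ~500 bytes          ~= 26 GB

That is where the 26GB figure above comes from.)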

Not going to work. But tests are still ongoing ...


Regards, Jan


-- 
Jan Wieck
Principal Database Engineer
Amazon Web Services


