Re: pg_dump and thousands of schemas
From | Jeff Janes |
---|---|
Subject | Re: pg_dump and thousands of schemas |
Date | |
Msg-id | CAMkU=1zdr7eOEcbopM6c-+zT1aTaWXsTyA_5ZkZ4rgG7EkxMPQ@mail.gmail.com |
In response to | Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>) |
Responses | Re: pg_dump and thousands of schemas |
List | pgsql-performance |
On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>> management in the server. What I fixed so far on the pg_dump side
>> should be enough to let partial dumps run at reasonable speed even if
>> the whole database contains many tables. But if psql is taking
>> AccessShareLock on lots of tables, there's still a problem.
>
> Ok, I modified the part of pg_dump where a tremendous number of LOCK
> TABLE statements are issued. I replaced them with a single LOCK TABLE
> naming multiple tables. With 100k tables, the LOCK statements took 13
> minutes in total; now they take only 3 seconds. Comments?

Could you rebase this? I tried doing it myself, but must have messed it
up because it got slower rather than faster.

Thanks,

Jeff
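(Editor's note: a minimal sketch of the change Tatsuo describes, assuming hypothetical schema and table names. PostgreSQL's LOCK TABLE accepts a comma-separated list of tables, so the per-table statements can be batched into one.)

```sql
-- Before: one LOCK TABLE statement per table, paying per-statement
-- parse and round-trip overhead for every one of ~100k tables.
LOCK TABLE s0001.t1 IN ACCESS SHARE MODE;
LOCK TABLE s0001.t2 IN ACCESS SHARE MODE;
-- ... one statement per remaining table ...

-- After: a single statement naming multiple tables at once.
LOCK TABLE s0001.t1, s0001.t2, s0002.t1 IN ACCESS SHARE MODE;
```

Either form takes the same AccessShareLock on each table, so the server-side lock-management cost Tom mentions is unchanged; the batching only removes the per-statement overhead.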