Re: pg_dump dump catalog ACLs
| From | Robert Haas |
|---|---|
| Subject | Re: pg_dump dump catalog ACLs |
| Date | |
| Msg-id | CA+TgmoaU+741kAQSeU4PnqtL1UZoR3wsN6PAWAb-EkRanpDfzA@mail.gmail.com |
| Reply to | Re: pg_dump dump catalog ACLs (Peter Geoghegan <pg@heroku.com>) |
| List | pgsql-hackers |
On Fri, Apr 22, 2016 at 3:30 AM, Peter Geoghegan <pg@heroku.com> wrote:
> On Fri, Apr 22, 2016 at 12:25 AM, Noah Misch <noah@leadboat.com> wrote:
>> Folks run clusters with ~1000 databases; we previously accepted at least one
>> complex performance improvement[1] based on that use case. On the faster of
>> the two machines I tested, the present thread's commits slowed "pg_dumpall
>> --schema-only --binary-upgrade" by 1-2s per database. That doubles pg_dump
>> runtime against the installcheck regression database. A run against a cluster
>> of one hundred empty databases slowed fifteen-fold, from 8.6s to 131s.
>> "pg_upgrade -j50" probably will keep things tolerable for the 1000-database
>> case, but the performance regression remains jarring. I think we should not
>> release 9.6 with pg_dump performance as it stands today.
>
> As someone that is responsible for many such clusters, I strongly agree.

Stephen: This is a CRITICAL ISSUE. Unless I'm missing something, this hasn't gone anywhere in well over a week, and we're wrapping beta next Monday. Please fix it.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
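[Editor's note: the following is a minimal sketch, not part of the original message, of how one might reproduce the timing comparison Noah describes (a schema-only, binary-upgrade pg_dumpall over a cluster of one hundred empty databases). It assumes a local cluster reachable with default credentials and superuser access; the database names (bench_db_*) and the helper are illustrative, not from the thread.]

```python
#!/usr/bin/env python3
"""Rough benchmark sketch for the pg_dumpall slowdown discussed above."""

import subprocess
import time

N_DATABASES = 100  # Noah's test used one hundred empty databases

# Create the empty databases; ignore errors if they already exist.
for i in range(N_DATABASES):
    subprocess.run(["createdb", f"bench_db_{i}"],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# Time a schema-only, binary-upgrade dump of the whole cluster,
# matching the invocation quoted in the message.
start = time.monotonic()
subprocess.run(["pg_dumpall", "--schema-only", "--binary-upgrade",
                "--file=/dev/null"], check=True)
elapsed = time.monotonic() - start
print(f"pg_dumpall over {N_DATABASES} empty databases: {elapsed:.1f}s")
```

Running this once on a build before the commits in question and once after would show the per-database cost Noah reports (8.6s vs. 131s in his hundred-database case).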