Re: [HACKERS] pg_dump and thousands of schemas
From: Tom Lane
Subject: Re: [HACKERS] pg_dump and thousands of schemas
Date:
Msg-id: 8407.1352300557@sss.pgh.pa.us
In response to: Re: [HACKERS] pg_dump and thousands of schemas (Denis <socsam@gmail.com>)
Responses: Re: [HACKERS] pg_dump and thousands of schemas
List: pgsql-performance
Denis <socsam@gmail.com> writes:
> Tom Lane-2 wrote
>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's
>> that you've got 183924 tables.  That's going to take some time no matter
>> what.

> I wonder why pg_dump has to deal with all these 183924 tables, if I
> specified to dump only one schema: "pg_dump -n schema_name" or even like
> this to dump just one table "pg_dump -t 'schema_name.comments'"?

It has to know about all the tables even if it's not going to dump them
all, for purposes such as dependency analysis.

> We have a web application where we create a schema with a number of tables
> in it for each customer. This architecture was chosen to ease the process
> of backup/restoring data.

I find that argument fairly dubious, but in any case you should not
imagine that hundreds of thousands of tables are going to be cost-free.

			regards, tom lane
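[Editor's sketch of the per-schema and per-table invocations discussed above; the database name "mydb", the schema "customer_0042", and the output file names are hypothetical.]

    # Dump only one schema; pg_dump still reads catalog entries for every
    # table in the database, but only this schema's objects are emitted.
    pg_dump -n customer_0042 -Fc -f customer_0042.dump mydb

    # Dump a single table from that schema.
    pg_dump -t 'customer_0042.comments' -Fc -f comments.dump mydb

    # Restore the custom-format schema dump into another database.
    pg_restore -d restoredb customer_0042.dump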