Re: Fastest way to clone schema ~1000x
| | |
|---|---|
| From | Adrian Klaver |
| Subject | Re: Fastest way to clone schema ~1000x |
| Date | |
| Msg-id | 3064222d-bcf2-4eaf-ad9a-357a61c4348a@aklaver.com |
| In reply to | Re: Fastest way to clone schema ~1000x (Emiel Mols <emiel@crisp.nl>) |
| Responses | Re: Fastest way to clone schema ~1000x |
| List | pgsql-general |
On 2/26/24 01:06, Emiel Mols wrote:

> On Mon, Feb 26, 2024 at 3:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> There is a measurable overhead in connections, regardless of if they are used or not. If you are looking to squeeze out performance then doing more over already established connections, and reducing max_connections, is a good place to start.
>
> Clear, but with database-per-test (and our backend setup), it would have been *great* if we could have switched database on the same connection (similar to "USE xxx" in mysql). That would limit the connections to the amount of workers, not multiplied by tests.

That is because:

https://dev.mysql.com/doc/refman/8.3/en/glossary.html#glos_schema

"In MySQL, physically, a schema is synonymous with a database. You can substitute the keyword SCHEMA instead of DATABASE in MySQL SQL syntax, for example using CREATE SCHEMA instead of CREATE DATABASE."

> Even with a pooler, we're still going to be maintaining 1000s of connections from the backend workers to the pooler. I would expect this to be rather efficient, but still unnecessary. Also, both pgbouncer/pgpool don't seem to support switching database in-connection (they could have implemented the aforementioned "USE" statement I think!). [Additionally we're using PHP that doesn't seem to have a good shared memory pool implementation -- pg_pconnect is pretty buggy].
>
> I'll continue with some more testing. Thanks for now!

--
Adrian Klaver
adrian.klaver@aklaver.com
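[Editorial note: the parallel Adrian draws above is that a PostgreSQL schema inside one database plays the role a MySQL database plays, so a worker can keep a single connection and repoint it with `SET search_path` instead of reconnecting per test. A minimal sketch of that idea follows; the helper names (`clone_schema_sql`, `switch_schema_sql`) and the `test_N` naming scheme are hypothetical, and replaying the template DDL into each new schema is elided.]

```python
def clone_schema_sql(schema: str) -> list[str]:
    """Statements to create one per-test schema.

    Copying objects from a template schema is left out here; in practice
    you would replay your DDL (or a pg_dump of the template schema).
    """
    return [
        f'CREATE SCHEMA "{schema}"',
        # ... replay template DDL into the new schema here ...
    ]

def switch_schema_sql(schema: str) -> str:
    """Repoint an existing connection at a test schema -- the closest
    PostgreSQL analogue of MySQL's USE statement."""
    return f'SET search_path TO "{schema}"'

# One connection can serve many tests: create N schemas up front,
# then switch between them without reconnecting.
statements = []
for i in range(3):
    name = f"test_{i}"
    statements += clone_schema_sql(name)
    statements.append(switch_schema_sql(name))
```

Unqualified table names then resolve inside the active test schema, so the connection count stays at the number of workers rather than workers multiplied by tests.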