Re: [GENERAL] Question about loading up a table
From: vinny
Subject: Re: [GENERAL] Question about loading up a table
Date:
Msg-id: f8a27af49c1594c927de38373fde43dc@xs4all.nl
In reply to: [GENERAL] Question about loading up a table (Alex Samad <alex@samad.com.au>)
Responses: Re: [GENERAL] Question about loading up a table
List: pgsql-general
On 2017-07-31 11:02, Alex Samad wrote:
> Hi
>
> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
> 9.6 psql.
>
> The new DB server is setup as master replicating to a hot standby
> server.
>
> What I have noticed is that the rows don't get replicated over until
> the copy from stdin is finished... hard to test when you have M+ lines
> of rows.

If you are "just testing" then you could use the COPY command
https://www.postgresql.org/docs/9.2/static/sql-copy.html
to generate a smaller dataset.

> Is there a way to tell the master to replicate earlier

I highly doubt it, because the master cannot know what to replicate
until your transaction is ended with a COMMIT. If you end with a
ROLLBACK, or your last query is DELETE FROM <your_table>;, then there
isn't even anything to replicate at all...

> or is there a way to get pg_dump to bundle into say 100K rows at a
> time?

I'm not aware of such a feature; it would be quite tricky because of
dependencies between records. You cannot simply dump the first 100k
rows from table A and the first 100k from table B, because row #9 in
table A may have a relation to row #100,001 in table B.
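For the "100K rows at a time" idea, a minimal client-side sketch is possible even though pg_dump itself has no such option: slice each table into fixed windows and load each window in its own transaction, so the standby can start replicating before the whole load finishes. The table name, ordering column, and the `\copy` commands in the comments are assumptions for illustration only.

```python
# Hedged sketch, NOT a pg_dump feature: compute fixed-size batch windows
# that a loader could commit one at a time. Committing per batch lets the
# hot standby replicate incrementally instead of waiting for one huge
# COPY FROM stdin transaction to COMMIT.

def batch_ranges(total_rows, batch_size):
    """Yield (offset, limit) pairs covering total_rows in batch_size chunks."""
    for offset in range(0, total_rows, batch_size):
        yield offset, min(batch_size, total_rows - offset)

# Each window could then be exported and loaded in its own psql invocation
# (each invocation commits on its own), e.g. for a hypothetical table
# "bigtable" ordered by its primary key "id":
#
#   \copy (SELECT * FROM bigtable ORDER BY id OFFSET 0 LIMIT 100000) TO 'b0.csv' CSV
#   \copy bigtable FROM 'b0.csv' CSV

if __name__ == "__main__":
    for offset, limit in batch_ranges(250_000, 100_000):
        print(offset, limit)
```

Note that vinny's caveat still applies: committing per batch gives up single-transaction atomicity, so a row loaded in an early batch of table A may reference a row of table B that has not been loaded yet, which fails if foreign-key constraints are already enabled.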