Re: Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?
From:        David Johnston
Subject:     Re: Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?
Date:
Msg-id:      E52607E6-F352-434D-86EE-78A6EC46FA64@yahoo.com
In reply to: Re: Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS? (Craig Ringer <ringerc@ringerc.id.au>)
Responses:   Re: Is it possible to make a streaming replication faster using COPY instead of lots of INSERTS?
List:        pgsql-general
On Nov 30, 2011, at 18:44, Craig Ringer <ringerc@ringerc.id.au> wrote:

> On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
>> Would it be more compact from the point of view of streaming
>> replication if we make the application accumulate changes and do one
>> COPY instead of lots of INSERTS say once a minute? And if it will be
>> so how to estimate the effect approximately?
>
> Streaming replication works on a rather lower level than that. It records
> information about transaction starts, rollbacks and commits, and records
> disk block changes. It does not record SQL statements. It's not using
> INSERT, so you can't switch to COPY. Streaming replication basically just
> copies the WAL data, and WAL data is not all that compact.

I think a better way to phrase the question is whether these three constructs produce different results on the replication side:

    Insert into tbl values (...);                    [ times 50 ]
    Insert into tbl values (...), (...), (...), ...; [ once, with 50 value lists ]
    Copy                                             [ with 50 input rows provided ]

I would presume the first one performs badly, but I have no idea whether the multi-value version of insert would be outperformed by an equivalent Copy command (both on the main query and during replication).

Though, does auto-commit affect the results in the first case? I.e., without auto-commit, do the first two replicate equivalently?

David J
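For concreteness, a minimal sketch of the three variants (plus the explicit-transaction case from the last question) follows. The table tbl(id, val) and the CSV form of COPY are illustrative assumptions on my part, not something given in the thread:

    -- hypothetical table, for illustration only
    CREATE TABLE tbl (id int, val text);

    -- 1) fifty single-row INSERTs, each its own transaction under auto-commit
    INSERT INTO tbl VALUES (1, 'a');
    INSERT INTO tbl VALUES (2, 'b');
    -- ... repeated up to 50 rows ...

    -- 2) one multi-row INSERT carrying all 50 rows in a single statement
    INSERT INTO tbl VALUES
        (1, 'a'),
        (2, 'b');
    -- ... up to 50 value lists ...

    -- 3) one COPY fed the same 50 rows from the client (e.g. via psql)
    COPY tbl (id, val) FROM STDIN WITH (FORMAT csv);
    1,a
    2,b
    \.

    -- the auto-commit question: the same single-row INSERTs, but wrapped in
    -- one explicit transaction so they all commit together
    BEGIN;
    INSERT INTO tbl VALUES (1, 'a');
    INSERT INTO tbl VALUES (2, 'b');
    -- ... repeated up to 50 rows ...
    COMMIT;

One rough way to estimate the replication-side effect (my suggestion, not something raised in the thread) would be to note pg_current_xlog_location() before and after each variant and compare how much WAL each one generates.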