Re: Integrating Replication into Core
From: David Boreham
Subject: Re: Integrating Replication into Core
Date:
Msg-id: 4569C401.50902@boreham.org
In reply to: Re: Integrating Replication into Core (Markus Schiltknecht <markus@bluegap.ch>)
List: pgsql-hackers
Markus Schiltknecht wrote:
> LOL, I've just figured that netem is the project behind:
>
> tc qdisc ... netem ...
>
> I'm already using that, too ;-) Just wasn't aware it's called netem.
> Sounds silly, since the name is in the command line, I know...

Heh. AFAIK netem is the part of tc that isn't much use on a production router (e.g. introduce a 10ms packet delay on this kind of traffic...).

We used a mixture of netem and regular tc kernel modules in a Linux box that had 6 NICs, with Python driving it. Each replication node test machine was connected with a straight-through patch cable to one of the NICs on the 'spider' machine. The Python could set up netem/tc on the router so that various test scenarios with different bandwidth/delay values were implemented, and also, of course, loss of connectivity, by dropping all packets on an interface. Each test machine had two NICs, the second one being used to communicate with it out of band from the replication traffic and network emulation. Then on top of all this the actual replication tests were run. One of the things we were interested in was replication throughput vs. network latency, so we also measured performance and made acceptable performance a test pass condition.

If you want really fancy network emulation you'd need to use NISTNet. It can do some things that are not possible with netem (statistical packet drop, for example). However, IMHO that is only appropriate for testing TCP/IP stack implementations. Varying latency and throughput and introducing connectivity outages is good enough for user-mode code, I believe. NISTNet is not in the stock kernel, whereas netem is.
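As a rough illustration only (this is not the actual harness described above; the interface name, delay and rate values are invented), driving netem/tc from Python on the router box can look something like this:

    #!/usr/bin/env python
    # Hypothetical sketch: shape one of the spider machine's NICs with
    # netem (latency) plus a child tbf qdisc (bandwidth cap), or drop all
    # packets to simulate a connectivity outage. Must be run as root;
    # device names and numbers are made up for illustration.
    import subprocess

    def tc(cmd, check=True):
        # Run a tc command on the router box.
        call = subprocess.check_call if check else subprocess.call
        call("tc " + cmd, shell=True)

    def clear(dev):
        # Remove any existing qdisc; ignore the error if none is installed.
        tc("qdisc del dev %s root" % dev, check=False)

    def emulate(dev, delay_ms, rate):
        # netem introduces the artificial delay; tbf caps the bandwidth.
        clear(dev)
        tc("qdisc add dev %s root handle 1: netem delay %dms" % (dev, delay_ms))
        tc("qdisc add dev %s parent 1:1 handle 10: tbf rate %s burst 32kbit latency 400ms"
           % (dev, rate))

    def cut(dev):
        # Loss of connectivity: drop every packet on the interface.
        clear(dev)
        tc("qdisc add dev %s root netem loss 100%%" % dev)

    if __name__ == "__main__":
        # e.g. 50ms one-way delay and a 1mbit cap on the link to one test node
        emulate("eth1", 50, "1mbit")

The out-of-band NIC on each test machine is simply left without any qdisc, so control traffic is unaffected by the emulation.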