
Discussion on: Testing an application with a network delay

Frits Hoogland

Thank you, Dave, for your comments. As always, the devil is in the details, and this is no exception!

If I read your link correctly, the problem is that netem queues packets while it delays them, and once the queue reaches its default limit of 1000 packets it drops every additional packet.
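For what it's worth, netem does take a limit parameter, so the queue could be sized above the 1000-packet default when the delay is added. A minimal sketch (the interface name and the numbers are placeholders, not a recommendation):

```
# Illustrative only: add a 100 ms delay and raise netem's queue limit
# from the default 1000 packets to 10000.
# (eth0 and the chosen values are assumptions.)
tc qdisc add dev eth0 root netem delay 100ms limit 10000
```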

If that is happening while you expect the setting simply to delay packets, not drop them, the test is obviously flawed, because something other than what you think is happening is going on.

Would you deem a test correct if `tc -s qdisc show` does not report any drops?
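To be concrete, this is the kind of check I mean (the device name is a placeholder); the dropped counter in the per-qdisc statistics is what I would look at:

```
# Show per-qdisc statistics for the device carrying the delayed traffic;
# the "dropped" counter indicates whether netem ran out of queue space
# during the test. (eth0 is an assumed device name.)
tc -s qdisc show dev eth0
```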

In principle it is logical that delaying any form of packet delivery requires the packets to be stored while they wait out the delay. A network device and the network stack have limits to what they can buffer, and will drop any packet they cannot store once those buffers run out.
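As a rough back-of-the-envelope illustration (the numbers are assumed, not measured): at 1 Gbit/s with a 100 ms added delay, the delay queue has to hold the full bandwidth-delay product, which is far more than netem's default 1000 packets.

```
# Bandwidth-delay product, assuming 1 Gbit/s and a 100 ms netem delay:
#   1 Gbit/s / 8         = 125,000,000 bytes/s
#   * 0.1 s              = 12,500,000 bytes in flight
#   / ~1500-byte packets ≈ 8333 packets, well above the 1000-packet default
echo $(( 1000000000 / 8 / 10 / 1500 ))   # prints 8333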

(In the past I've seen a fair share of systems where simply too much traffic was sent over a (network) cluster interconnect, leading to packet drops because the network buffers were full.)

The link does mention that netem as a qdisc on the test machine itself doesn't work and requires a separate machine; how would that separate machine need to be configured to produce a predictable network delay?
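What I would naively picture is a dedicated delay box bridging two interfaces, with netem applied on the egress of each side so the delay is added in both directions. Purely a sketch with assumed interface names and values; is that roughly what you mean?

```
# Sketch of a dedicated delay box (eth1/eth2 and all values are assumptions):
# bridge the two interfaces that sit between client and server...
ip link add name br0 type bridge
ip link set dev eth1 master br0
ip link set dev eth2 master br0
ip link set dev br0 up
# ...and add the delay on the egress of each interface, so each direction
# gets 25 ms (roughly 50 ms added round-trip), with a larger queue limit.
tc qdisc add dev eth1 root netem delay 25ms limit 10000
tc qdisc add dev eth2 root netem delay 25ms limit 10000
```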