Networking articles by CCIE #37149/ CCDE #20160011
Building a WAN Impairment Device in Linux on VMware vSphere
In some scenarios it is really useful to be able to simulate a WAN in terms of latency, jitter, and packet loss, especially for those of us who work with SD-WAN and want to test our policies in a controlled environment. In this post I will describe how I built a WAN impairment device in Linux for a VMware vSphere environment and how I can simulate different conditions.
My SD-WAN lab is built on VMware vSphere using Catalyst SD-WAN with Catalyst 8000V as virtual routers and on-premises controllers. The goal of the WAN impairment device is to be able to manipulate each internet connection to a router individually. That way I can simulate that a particular connection or router is having issues while other connections/routers are not; I don't want to impose the same conditions on all connections/devices simultaneously. To do this, I have built a physical topology that looks like this:
All devices are connected to a management network that I can access via a VPN. This gives me "out of band" access to all devices so I can use SSH to configure my routers with a bootstrap configuration. To avoid having to create many unique VLANs in the vSwitch, a network using VLAN 4095 has been created, which allows many devices to connect to the same network while simulating point-to-point connections by tagging with different VLANs. This network can be seen below:
For example, VLAN 542 connects to one router at a site and VLAN 544 connects to the other router at that site. This is implemented on the Linux device using subinterfaces. My choice of Linux host is an Ubuntu device (22.04.2 LTS):
daniel@bridge:~$ uname -a
Linux bridge 5.19.0-45-generic #46~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 7 15:06:04 UTC 20 x86_64 x86_64 x86_64 GNU/Linux
This should work on other Linux distributions as well.
To create the needed subinterfaces, the ip utility is used. First, check that the 802.1Q module has been loaded:
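The check can be done with lsmod; a minimal sketch (the exact output depends on the kernel and which modules are loaded):

```shell
# List loaded kernel modules and filter for the 802.1Q VLAN module
lsmod | grep 8021q
```

If the module is loaded, a line starting with 8021q is printed; no output means it has not been loaded.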
If the module for some reason has not been installed or loaded into the kernel, use sudo apt-get install vlan and sudo modprobe 8021q to install and load the module into the kernel.
The next step is to start adding interfaces using the ip utility. Add the interface:
sudo ip link add link ens192 name ens192.542 type vlan id 542
The ip link add link ens192 part defines which interface to add the VLAN to. Then the subinterface is given a name, and finally the VLAN ID is defined. Then set the interface to be up:
sudo ip link set dev ens192.542 up
Then add an IP address to the interface:
daniel@bridge:~$ sudo ip addr add 192.0.2.57/30 dev ens192.542
To view information about the interface, use the ip -d link show command:
daniel@bridge:~$ ip -d link show ens192.542
13: ens192.542@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
daniel@bridge:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:79:43 brd ff:ff:ff:ff:ff:ff
altname enp3s0
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
altname enp11s0
4: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:ae:52 brd ff:ff:ff:ff:ff:ff
altname enp19s0
13: ens192.542@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
14: ens224.543@ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:ae:52 brd ff:ff:ff:ff:ff:ff
15: ens192.520@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
16: ens192.522@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
17: ens192.524@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
18: ens192.526@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
19: ens192.528@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
20: ens192.530@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
21: ens192.532@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
22: ens192.534@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
23: ens192.536@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
24: ens192.538@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
25: ens192.540@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
26: ens192.541@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
27: ens192.544@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:ad:3a:52 brd ff:ff:ff:ff:ff:ff
The Linux device will be used as a router, meaning that it routes packets between its interfaces. This is not done by default, as ip_forward is set to 0. First, verify what ip_forward is set to:
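A sketch of checking and enabling forwarding via sysctl (the persistent setting belongs in /etc/sysctl.conf):

```shell
# Check the current setting (0 = forwarding disabled, 1 = enabled)
sysctl net.ipv4.ip_forward
# Enable forwarding for the running system
sudo sysctl -w net.ipv4.ip_forward=1
# To make it persistent across reboots, uncomment or add this line
# in /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
```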
The device will now route packets between interfaces.
It is now time to install tcconfig, a Python project that uses tc (traffic control) on Linux to modify how an interface behaves with regard to latency, jitter, packet loss, and throughput:
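tcconfig is distributed on PyPI, so the installation likely looked something like this (a sketch; tcconfig itself relies on the iproute2 tc binary being present on the system):

```shell
# Install tcconfig (provides the tcset, tcdel, and tcshow commands)
sudo pip3 install tcconfig
# Verify that no traffic control rules are applied to the subinterface yet
sudo tcshow ens192.542
```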
Nothing is currently applied to the interface. Let’s try a ping towards one of the devices to see what the latency is:
daniel@bridge:~$ ping 192.0.2.58 -c 5
PING 192.0.2.58 (192.0.2.58) 56(84) bytes of data.
64 bytes from 192.0.2.58: icmp_seq=1 ttl=255 time=0.367 ms
64 bytes from 192.0.2.58: icmp_seq=2 ttl=255 time=0.295 ms
64 bytes from 192.0.2.58: icmp_seq=3 ttl=255 time=0.219 ms
64 bytes from 192.0.2.58: icmp_seq=4 ttl=255 time=0.241 ms
64 bytes from 192.0.2.58: icmp_seq=5 ttl=255 time=0.301 ms
--- 192.0.2.58 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4095ms
rtt min/avg/max/mdev = 0.219/0.284/0.367/0.051 ms
The latency is low, as expected, since these hosts are in the same virtual network. Now let's add some latency to the interface to make it look like the two devices are on a WAN on different continents, adding 150 ms of latency using the tcset command. Packet loss can be applied in the same way; the ping below was run with 30% packet loss configured on the interface:
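The exact commands are not shown here, but with tcconfig they would likely look something like this (a sketch, using the subinterface from the earlier examples):

```shell
# Add 150 ms of fixed delay to traffic leaving ens192.542
sudo tcset ens192.542 --delay 150ms
# Replace the delay rule with 30% packet loss
# (--overwrite replaces any existing rules on the interface)
sudo tcset ens192.542 --loss 30% --overwrite
# Inspect or remove the active rules
sudo tcshow ens192.542
sudo tcdel ens192.542 --all
```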
daniel@bridge:~$ ping 192.0.2.58 -c 10
PING 192.0.2.58 (192.0.2.58) 56(84) bytes of data.
64 bytes from 192.0.2.58: icmp_seq=1 ttl=255 time=0.227 ms
64 bytes from 192.0.2.58: icmp_seq=3 ttl=255 time=0.314 ms
64 bytes from 192.0.2.58: icmp_seq=4 ttl=255 time=0.272 ms
64 bytes from 192.0.2.58: icmp_seq=6 ttl=255 time=0.314 ms
64 bytes from 192.0.2.58: icmp_seq=7 ttl=255 time=0.308 ms
64 bytes from 192.0.2.58: icmp_seq=8 ttl=255 time=0.267 ms
64 bytes from 192.0.2.58: icmp_seq=10 ttl=255 time=0.190 ms
--- 192.0.2.58 ping statistics ---
10 packets transmitted, 7 received, 30% packet loss, time 9214ms
rtt min/avg/max/mdev = 0.190/0.270/0.314/0.044 ms
There is now severe packet loss. When modifying latency, we could see exactly how much latency was added, while with packet loss it is more difficult to see exactly how much loss was applied. This becomes clearer when using a protocol such as BFD to measure loss.
It's also possible to get even more advanced by only adding delay to traffic coming from certain networks. To demonstrate this, I will ping using different source IPs. Let's add delay if a packet is coming from 192.0.2.40/29 (the direction needs to be incoming):
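The command is not shown, but tcset supports filtering on source network and direction, so it likely looked something like this (a sketch):

```shell
# Delay only packets arriving on ens192.542 whose source is in 192.0.2.40/29
# (shaping in the incoming direction uses the ifb kernel module,
# which tcconfig sets up)
sudo tcset ens192.542 --delay 150ms --src-network 192.0.2.40/29 --direction incoming
```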
daniel@bridge:~$ ping 192.0.2.58 -c 5 -I 192.0.2.41
PING 192.0.2.58 (192.0.2.58) from 192.0.2.41 : 56(84) bytes of data.
64 bytes from 192.0.2.58: icmp_seq=1 ttl=255 time=150 ms
64 bytes from 192.0.2.58: icmp_seq=2 ttl=255 time=150 ms
64 bytes from 192.0.2.58: icmp_seq=3 ttl=255 time=150 ms
64 bytes from 192.0.2.58: icmp_seq=4 ttl=255 time=150 ms
64 bytes from 192.0.2.58: icmp_seq=5 ttl=255 time=150 ms
--- 192.0.2.58 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 150.315/150.340/150.363/0.017 ms
This can be used to simulate poor performance between two specific sites, which sometimes happens in real WANs.
Building a WAN impairment device is simple using Linux and tcconfig. I hope this post has demonstrated how easy it is to insert a WAN impairment device into a VMware vSphere environment. The same concepts apply to other environments as well. Happy building!