I have been running some QoS tests lately and wanted to share some of my results. Some of this behavior is described in various documentation guides, but it's not clearly described in any one place. I'll describe what I have found so far in this post.

QoS is only active during congestion. This is well known, but it's less well known how congestion is detected. The TX ring is used to hold packets before they get transmitted out an interface. This is a hardware FIFO queue, and when the queue fills up, the interface is congested. When buying a subrate circuit from an SP, something must be added to create backpressure so that the TX ring is considered full. This is done by applying a parent shaper and a child policy with the actual queue configuration.
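A minimal sketch of such a hierarchical policy looks like this. The policy and class names are illustrative (class-map definitions omitted), and the 90 Mbit/s shaper matches the subrate speed used in the tests below:

```
! Child policy holds the actual queue configuration
policy-map CHILD-QUEUES
 class VOICE
  priority percent 10
 class DATA
  bandwidth percent 40
!
! Parent shaper creates backpressure at the subrate speed
policy-map PARENT-SHAPER
 class class-default
  shape average 90000000
  service-policy CHILD-QUEUES
!
interface GigabitEthernet1
 service-policy output PARENT-SHAPER
```

The shaper's internal queue fills before the physical TX ring would, which is what makes the child policy's queues take effect on a link that is never physically congested.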

The LLQ is used for high priority traffic. When the interface is not congested, the LLQ can use all available bandwidth unless an explicit policer is configured under the LLQ.
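As a sketch, the difference between the implicit and an explicit policer looks like this (values hypothetical):

```
policy-map CHILD-QUEUES
 class VOICE
  ! Implicit policer: the priority bandwidth is only
  ! enforced while the interface is congested
  priority 7000
!
policy-map CHILD-QUEUES-POLICED
 class VOICE
  ! Explicit policer: caps the LLQ at all times,
  ! congested or not
  priority
  police cir 7000000 conform-action transmit exceed-action drop
```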

A normal queue can use more bandwidth than it is guaranteed when there is no congestion.

When a normal queue wants to use more bandwidth than it is guaranteed, it can, if there is bandwidth available from other classes not using their full amount.

If multiple queues want to use more bandwidth than they are guaranteed and there is bandwidth available from other classes not using their full amount, they can. The amount of “free” bandwidth seems to be distributed quite evenly although I don’t think there is any guarantee that this is always the case.

When there is congestion, the LLQ does not get assigned additional bandwidth even if there is bandwidth available from any of the normal classes. The only time the LLQ can use more bandwidth is if there is no congestion.

I hope this post gives you some insight into how LLQ and CBWFQ behave on Cisco devices.

Update:

Some of the readers asked if I could provide the CLI output as well, which I am doing below. My setup was two CSR1000v routers and two Ubuntu hosts running iperf3.

The first test measures the maximum bandwidth without any QoS applied, sending TCP traffic.

Without any shaping the routers were able to use the full bandwidth available. The next test uses UDP.

UDP could utilize the full bandwidth as well, but only after I had tweaked some buffer settings in Linux. With the default buffer size I could only achieve around 500 Mbit/s. UDP is more reliant on socket buffer size since it lacks the flow control and window scaling that TCP has.
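The relevant knobs are the kernel's socket buffer limits. Something along these lines in /etc/sysctl.conf raises them (the values are illustrative, not the exact ones I used):

```
# /etc/sysctl.conf -- raise the maximum socket buffer sizes
net.core.rmem_max = 26214400
net.core.wmem_max = 26214400
```

Apply with sysctl -p, and optionally pass a larger socket buffer to iperf3 with -w.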

I then applied the following QoS configuration.
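The configuration was originally shown as a screenshot. A sketch consistent with the numbers in the tests below (a 7 Mbit/s priority class, three 5 Mbit/s classes, and four 15 Mbit/s classes under a 90 Mbit/s shaper) would look roughly like this, with hypothetical class names and the class-map definitions omitted:

```
policy-map QOS-CHILD
 class PRIO
  priority 7000
 class DATA1
  bandwidth 5000
 class DATA2
  bandwidth 5000
 class DATA3
  bandwidth 5000
 class DATA4
  bandwidth 15000
 class DATA5
  bandwidth 15000
 class DATA6
  bandwidth 15000
 class DATA7
  bandwidth 15000
!
policy-map QOS-PARENT
 class class-default
  shape average 90000000
  service-policy QOS-CHILD
!
interface GigabitEthernet1
 service-policy output QOS-PARENT
```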

After applying the policy, the priority class can use the full shaped bandwidth of 90 Mbit/s when the link is otherwise idle. The first test uses TCP.

[email protected]:~$ sudo iperf3 -c x.x.x.x -f m -p 5001 -i 10 -t 120
Connecting to host x.x.x.x, port 5001
[  4] local x.x.x.x port 38949 connected to x.x.x.x port 5001
[ ID] Interval             Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00   sec    106 MBytes  88.6 Mbits/sec   33   624 KBytes
[  4]  10.00-20.00   sec    102 MBytes  86.0 Mbits/sec    3   706 KBytes
[  4]  20.00-30.00   sec    102 MBytes  86.0 Mbits/sec    3   631 KBytes
[  4]  30.00-40.00   sec    102 MBytes  86.0 Mbits/sec    6   556 KBytes
[  4]  40.00-50.00   sec    102 MBytes  86.0 Mbits/sec    0   697 KBytes
[  4]  50.00-60.00   sec    102 MBytes  86.0 Mbits/sec    6   617 KBytes
[  4]  60.00-70.00   sec    101 MBytes  84.9 Mbits/sec    6   667 KBytes
[  4]  70.00-80.00   sec    100 MBytes  83.9 Mbits/sec   37   591 KBytes
[  4]  80.00-90.00   sec    101 MBytes  84.9 Mbits/sec    3   624 KBytes
[  4]  90.00-100.00  sec    102 MBytes  86.0 Mbits/sec    9   540 KBytes
[  4] 100.00-110.00  sec    102 MBytes  86.0 Mbits/sec    0   680 KBytes
[  4] 110.00-120.00  sec    102 MBytes  86.0 Mbits/sec    6   612 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval             Transfer     Bandwidth       Retr
[  4]   0.00-120.00  sec   1.20 GBytes  85.9 Mbits/sec  112          sender
[  4]   0.00-120.00  sec   1.20 GBytes  85.6 Mbits/sec               receiver

This is the output from the CSR router.

The next test is then performed with UDP. Observe that there is some packet loss, since UDP has no acknowledgements or retransmissions.
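The UDP runs use the same iperf3 invocation as the TCP test above, with UDP mode and a target bandwidth added, along these lines:

```
sudo iperf3 -c x.x.x.x -u -b 90M -f m -p 5001 -i 10 -t 120
```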

The next test sends 90 Mbit/s of UDP in the priority class and 90 Mbit/s of TCP in another class.

The TCP traffic got 78 Mbit/s, which was everything that was available after the priority class got its share.

The priority class got around 8 Mbit/s. iperf3 is sending at 90 Mbit/s, but notice the 91% packet loss.

The CSR router shows a high drop rate. To get the actual rate, subtract the drop rate from the offered rate.
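As a sketch of that arithmetic, using approximate figures from this test (about 90 Mbit/s offered with roughly 91% loss):

```shell
# Delivered rate = offered rate minus drop rate
offered=90   # Mbit/s, the iperf3 target rate
dropped=82   # Mbit/s dropped by the policer (~91 % loss)
echo "$((offered - dropped)) Mbit/s delivered"
```

This matches the roughly 8 Mbit/s that the priority class actually received.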

The final test is sending 90 Mbit/s in all eight classes. This means that each class should only get its allotted bandwidth.

The priority class gets 7 Mbit/s.

Then we have three classes getting 5 Mbit/s each.

Then we have four classes with 15 Mbit/s each.

Each class only got the bandwidth that was assigned to it. Hopefully the CLI output makes it even more clear how QoS works on Cisco devices.

General – Behavior Of QoS Queues On Cisco IOS

6 thoughts on “General – Behavior Of QoS Queues On Cisco IOS”

  • November 16, 2015 at 3:06 pm

    Hello, Daniel!
    At first, thank you for the article. I really enjoy reading your blog.
    About the content, I don't actually agree with the sentence "QoS is only active during congestion." That leaves out an important part of QoS: congestion avoidance. As we know, that part works to _prevent_ congestion rather than manage it. So QoS may do its work before congestion occurs, basically using the RED/WRED mechanisms.

    I like how Juniper defines drop profiles to utilize congestion avoidance – http://www.juniper.net/documentation/en_US/junos15.1/topics/concept/red-drop-profile-overview-cos-config-guide.html

    Clearly understandable CLI terms. For example, at a 50% fill level, drop 10% of traffic at random to avoid congestion:
    edit class-of-service drop-profiles DROP1
    fill-level 50
    drop-probability 10
    top

    • November 16, 2015 at 3:10 pm

      Thanks Boris!

      I should have been a bit clearer that the management of queues is only active during congestion. WRED is a good tool for TCP flows if you can find the right values to use. The syntax on Juniper seems a bit simpler than on Cisco.

  • November 16, 2015 at 5:50 pm

    Nice article! Could you please share the CLI results as well?

    • November 16, 2015 at 10:00 pm

      I’ll try to add the CLI results within the next few days.

  • November 16, 2015 at 6:46 pm

    Hi Daniel
    Thanks for all your CCDE posts. I just subscribed and am loving it all the way. I'm currently studying for the CCDE, so having blogs like yours to read is definitely rewarding. Regarding this post, can you please share the CLI results with your findings when you can?

    • November 16, 2015 at 10:00 pm

      Thanks for the feedback. I’ll add the CLI results within the next few days.

