I’m back studying and I have already booked a new lab date.
I won’t make an announcement until I get back. This keeps a bit of the pressure off when taking the lab.

The last couple of days I have been studying Catalyst QoS. It can get a bit messy.

There are no Catalyst 3550s in the lab any longer, only the 3560, so when practicing,
forget about the 3550. If you have a 3750 that is fine, since it is basically the same
switch as the 3560 but with stacking.

The Catalyst has 2 ingress queues, one of which can be used as a priority queue. The switch also
has 4 egress queues, where one can be used as a priority queue. It is more likely to end
up with congestion on egress than on ingress, but we have the option of configuring QoS in both directions.

Let’s assume that we have a switch with a Cisco IP phone connected to it.
The switchport will have a general configuration like the one below.
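A typical voice access port for this scenario (interface and VLAN numbers are just placeholders) might look something like this:

```
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 100
 spanning-tree portfast
```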

Even though the port is configured as access, this is actually a form of trunk, since
voice and data use different VLANs. By default QoS is disabled. This means that
the switch is transparent to the QoS markings: whatever markings the phone or computers
set will be forwarded through the switch unchanged. As soon as we turn on QoS
with the mls qos command this behaviour is no longer true.
If we just enable QoS and do nothing more, then all markings will be reset to BE (0).
This is true both for CoS and DSCP. To check whether QoS is enabled, use show mls qos.
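A minimal sketch of enabling QoS globally and verifying it (output abbreviated):

```
Switch(config)# mls qos
Switch(config)# end
Switch# show mls qos
QoS is enabled
```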

If we trust the device connecting to a port, most likely a phone, then we set up
a trust boundary. We can trust CoS, IP precedence, or DSCP. CoS or DSCP will be
more common than precedence.
CoS is a layer 2 marking, sometimes also called the 802.1p priority bits.
CoS is only available in tagged frames, such as on an 802.1Q trunk. There is a risk
of losing the marking when the frame gets forwarded across different media, from
Ethernet to Frame Relay or PPP or whatever your links are running. Because of this
it makes much sense to either trust DSCP or use the CoS value to map to a DSCP value.
If we want to trust CoS on a port, we configure it like this.
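A sketch of trusting CoS on a port (hypothetical interface number):

```
interface FastEthernet0/1
 mls qos trust cos
```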

This means that the CoS marking coming into the port is trusted. Untagged frames
will receive BE treatment, since those frames carry no marking. If we want to
mark the untagged frames, we use the following configuration.
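Something like the following: tagged frames keep their trusted CoS, while untagged frames are assigned CoS 3 (interface number is a placeholder).

```
interface FastEthernet0/1
 mls qos trust cos
 mls qos cos 3
```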

All untagged frames will get a CoS marking of 3. What if we want the port to
mark all frames the same, no matter what comes in to the port?
We can use the override command for this.
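A sketch of the override configuration (hypothetical interface number):

```
interface FastEthernet0/1
 mls qos cos 1
 mls qos cos override
```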

This will effectively set the CoS value to 1 for all frames entering the port.
We can also use the switchport priority extend command to instruct the Cisco phone to set a
CoS marking on frames from the computer (data) passing through the IP phone.
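For example (interface number is a placeholder):

```
interface FastEthernet0/1
 switchport priority extend cos 1
```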

This will set all frames from the computer entering the phone to a
marking of 1, no matter what the computer tries to set them to.

It is important to know that the Catalyst switch uses the concept of an
internal QoS label. This is a DSCP value which is used internally and
defines which queues the traffic ends up in.
If you type show mls qos maps you will see a lot of different maps that
the Catalyst uses. The CoS-to-DSCP map is used by the switch, so if we trust
CoS then a DSCP value will be derived from it, and when the frame exits
the switch towards another switch the CoS value will be set according
to the DSCP-to-CoS mapping table. This effectively keeps the QoS labels synchronized.
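The default CoS-to-DSCP map looks like this (output slightly abbreviated; verify on your own platform):

```
Switch# show mls qos maps cos-dscp
   Cos-dscp map:
        cos:   0  1  2  3  4  5  6  7
     --------------------------------
       dscp:   0  8 16 24 32 40 48 56
```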

Now let’s take a look at the ingress queues. We have two of them.
By default queue 2 will be the priority queue. To see the default settings,
use the show mls qos input-queue command. We can manipulate which queue becomes
the priority queue, and this is done with the
mls qos srr-queue input priority-queue bandwidth command.
If you want to use queue 1 as the priority queue, then enter a 1 in the command.
The weight defines how much bandwidth the priority queue can use; by default it uses 10%.
You can set this value from 0 to 40, so that the priority queue does not starve all of the bandwidth.
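For example, to make queue 1 the priority queue with a 10% bandwidth guarantee:

```
Switch(config)# mls qos srr-queue input priority-queue 1 bandwidth 10
```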

The switch uses buffers if there is a need to queue packets. Remember the basic
premise of QoS: without congestion there is no queueing to begin with, only forwarding.
Unfortunately Cisco does not tell us much about how much buffer space is available
on the Catalyst platforms. To tune the buffers we use mls qos srr-queue input buffers.
We should not assign too much of the buffers to the priority queue.
Finding optimal values depends a lot on your network and takes a lot of testing.
The safest bet might be to use Auto QoS and look at what Cisco is using.
These values have been researched by Cisco and should be safe to use.
Let’s temporarily enable Auto QoS and look at which values we get.
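A sketch of what this could look like; the interface number is a placeholder, and the generated values shown are the ones the post reports (10% bandwidth, 33% buffers for the priority queue), so double-check against your own switch:

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# auto qos voip cisco-phone
Switch(config-if)# end
Switch# show running-config | include mls qos srr-queue input
mls qos srr-queue input bandwidth 90 10
mls qos srr-queue input threshold 1 8 16
mls qos srr-queue input threshold 2 34 66
mls qos srr-queue input buffers 67 33
mls qos srr-queue input priority-queue 2 bandwidth 10
```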

With Auto QoS configured, the priority queue gets 10% of the bandwidth
and 33% of the buffers. The thresholds for the non-priority queue are significantly
lower than the default settings. So the buffers command assigns buffer space to the
queues, but it does not say how much bandwidth is available to each queue.
We control this with mls qos srr-queue input bandwidth.
It is important to note here that the priority queue gets served first, and then a
Shared Round Robin (SRR) algorithm is used to divide the traffic between the
two queues according to the weights. These are just weights and not necessarily
percentages, although you could configure them that way.
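For example, the following weights would serve the two ingress queues 90:10 after the priority queue has been emptied:

```
Switch(config)# mls qos srr-queue input bandwidth 90 10
```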

If we look at show mls qos maps cos-input-q and show mls qos maps dscp-input-q
we can see the maps that are used to define which queue the traffic ends up in.
We can of course set these values according to our needs.
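The default CoS map looks roughly like this (a sketch from memory; the exact layout may differ per release):

```
Switch# show mls qos maps cos-input-q
   Cos-inputq-threshold map:
              cos:  0   1   2   3   4   5   6   7
     --------------------------------------------
  queue-threshold: 1-1 1-1 1-1 1-1 1-1 2-1 1-1 1-1
```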

Everything is by default mapped to queue 1 except for CoS 5 which is
mapped to queue 2. The general idea is to map VoIP to queue 2 and everything
else into queue 1. Let’s look at the DSCP table as well.
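An abbreviated sketch of the default DSCP map (rows elided; verify the exact output on your platform):

```
Switch# show mls qos maps dscp-input-q
   Dscp-inputq-threshold map:
     d1 :d2    0     1     2     3     4     5     6     7     8     9
     -----------------------------------------------------------------
      0 :    01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01 01-01
      ...
      4 :    02-01 02-01 02-01 02-01 02-01 02-01 02-01 02-01 01-01 01-01
      ...
```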

To read this table, start with the number in the left column (the tens digit, d1) and combine
it with a column number in that row (the ones digit, d2). Almost everything is mapped
to queue 1, except for DSCP 40-47, which are mapped to queue 2.

Up until now we have only discussed queues. The Catalyst switch also uses
a congestion avoidance mechanism called Weighted Tail Drop (WTD).
The switch has three thresholds for every queue, where the third threshold
is not configurable; it is always set to 100%.
We can set the other two thresholds to values of our liking. Now we will
map CoS 6 to queue 2, threshold 3. We don’t want this traffic to get dropped unless
there is no other option.
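The mapping would be done like this:

```
Switch(config)# mls qos srr-queue input cos-map queue 2 threshold 3 6
```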

Always confirm your result with the show mls qos maps command.

Now let’s try to map DSCP EF to queue 1, threshold 3.
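DSCP EF is decimal 46, so:

```
Switch(config)# mls qos srr-queue input dscp-map queue 1 threshold 3 46
```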

That covers the ingress queues. Note that all of these commands affect all
ports on the switch; there is no way of setting port-specific QoS for the input queues.

Now let’s look at our options for egress queues. We have four queues,
where every queue has three thresholds. We start by looking at the default settings.
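A sketch of the default per-port view (interface number is a placeholder, output from memory):

```
Switch# show mls qos interface fastEthernet0/1 queueing
FastEthernet0/1
Egress Priority Queue : disabled
Shaped queue weights (absolute) :  25 0 0 0
Shared queue weights  :  25 25 25 25
The port bandwidth limit : 100  (Operational Bandwidth:100.0)
The port is mapped to qset : 1
```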

The egress queueing is a bit more flexible. With the SRR algorithm we
can do some port-specific bandwidth control. When it comes to egress queues
we can either shape or share a queue. A shaped queue is guaranteed an amount
of bandwidth but is also policed to that value. Even if there is no
congestion, that queue still can’t use more bandwidth than it has been assigned.
The shaped value is calculated against the physical interface speed.
Look at the following command.
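Assuming a 100 Mbit port (interface number is a placeholder):

```
interface FastEthernet0/1
 srr-queue bandwidth shape 25 0 0 0
```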

How much bandwidth did we just assign to queue 1? 25 Mbit?
We assigned 4 Mbit, since (1/25) * 100 Mbit = 4 Mbit. When we set the other queues to 0
it means that they operate in shared mode instead of shaped mode.
Now we configure the three other queues.
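Continuing on the same hypothetical interface:

```
interface FastEthernet0/1
 srr-queue bandwidth share 33 33 33 33
```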

How much does every queue get? Notice that I put a 33 for queue 1, but
that will do nothing, since it is operating in shaped mode. That leaves
us with the other three queues. To calculate their share we use
(33/(33+33+33)) * 96 = 32 Mbit. So these values are just weights,
and we have to subtract the shaped queue’s bandwidth from the interface speed when calculating
how much the other queues get. When operating in shared mode, if one
queue is not using all of its bandwidth, the other queues may borrow from it.
This is different from shaped mode.

Assigning traffic to the egress queues based on CoS or DSCP works
the same way as for ingress.
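For example (the CoS and DSCP values here are just illustrations):

```
Switch(config)# mls qos srr-queue output cos-map queue 1 threshold 3 5
Switch(config)# mls qos srr-queue output dscp-map queue 1 threshold 3 46
```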

The egress queues also use buffers. These can be tuned by configuring a
queue-set. By default all ports use queue-set 1 with these settings.
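The defaults below are from memory of the 3560 platform, so verify them on your own switch:

```
Switch# show mls qos queue-set 1
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
```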

We can configure one of our own queue-sets and tell a port to use this instead.
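For example, giving queue 1 a larger share (the percentages here are hypothetical and must sum to 100):

```
Switch(config)# mls qos queue-set output 2 buffers 40 20 20 20
```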

We can also configure thresholds and how much of the buffers are reserved.
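The values below match the example discussed later in this post (queue 1: thresholds 50% and 200%, 50% reserved, 400% maximum):

```
Switch(config)# mls qos queue-set output 2 threshold 1 50 200 50 400
```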

Then we need to actually assign the queue-set to an interface.
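Like this (interface number is a placeholder):

```
interface FastEthernet0/1
 queue-set 2
```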

We can check our settings with show mls qos queue-set.
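A sketch of the output for our modified queue-set (the queue 1 row reflects the threshold command above; the other values are assumed defaults):

```
Switch# show mls qos queue-set 2
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      40      20      20      20
threshold1:      50     200     100     100
threshold2:     200     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
```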

The buffer values can be a bit confusing. First we define how big a share
of the buffers the queue gets, in percent. The thresholds then define when
traffic will be dropped. For queue 1 we start dropping traffic at threshold 1 at 50%.
Then we drop traffic at threshold 2 at 200%.
How can a queue get to 200%?! The secret here is that a queue can outgrow
the buffers we assign if there are buffers available in the common pool.
This is where the reserved values come into play.
Every queue gets assigned buffers, but we can define that only 50% of
these buffers are strictly reserved for the queue.
The other 50% goes into a common pool and can be used by the other queues as well.
We then set a maximum value for the queue, which says that it can grow
up to 400% but no more than that.

Earlier in this post I mentioned the priority queue for the egress queues.
This is how we enable it.
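It is a per-interface command (interface number is a placeholder):

```
interface FastEthernet0/1
 priority-queue out
```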

The egress priority queue is always queue 1; that is not configurable.

Now let’s move on to some other things we can do with QoS. Let’s assume that we
have a customer connecting to a switch, and internally they are using totally different
DSCP values than we want. We can use a DSCP mutation map for that.
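A sketch of the configuration (the map name and interface number are placeholders); note that the port must also trust DSCP for the mutation map to take effect:

```
Switch(config)# mls qos map dscp-mutation CUSTOMER-MAP 40 to 46
Switch(config)# interface FastEthernet0/1
Switch(config-if)# mls qos trust dscp
Switch(config-if)# mls qos dscp-mutation CUSTOMER-MAP
```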

So in this example we are mutating DSCP 40 (CS5) to DSCP 46 (EF). Remember that the port must be configured to trust DSCP, or the mutation map will not be applied.

We also have the option of using policy-maps, just like on routers,
and we can even police traffic. This policy-map will match all ICMP and police
it to 128k with a marking of EF; any exceeding traffic will be remarked
to DSCP 0.
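A sketch of that policy (ACL number, class and policy names, burst size, and interface are placeholders). On these switches the exceed remarking is done via the policed-dscp map:

```
access-list 101 permit icmp any any
!
class-map match-all ICMP
 match access-group 101
!
policy-map POLICE-ICMP
 class ICMP
  set dscp ef
  police 128000 8000 exceed-action policed-dscp-transmit
!
! remark exceeding EF (46) traffic down to DSCP 0
mls qos map policed-dscp 46 to 0
!
interface FastEthernet0/1
 service-policy input POLICE-ICMP
```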

If you are used to configuring MQC on routers, then you may be surprised
that show policy-map does not show statistics on these switches.
We need to use show mls qos interface statistics instead.
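A heavily abbreviated sketch of that output (counter values are made up):

```
Switch# show mls qos interface fastEthernet0/1 statistics
...
Policer: Inprofile:         1234 OutofProfile:           56
```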

This is a huge table showing how much traffic with different markings
is coming in and going out. At the very end you see the policer counters, which
show how much traffic is in profile and out of profile.

So that is one way of configuring policy-maps. The Catalyst
switches can use QoS in either VLAN-based mode or port-based mode.
If we use VLAN-based mode, we apply the policy to an SVI instead.
This might be more scalable depending on your setup.
The caveat with using a policy-map on an SVI is that you can’t police in the parent map;
you need a child map for that. Let’s look at an example using a parent and child map.
Any IP traffic from the trunks (Fa0/13 - 21) may use 256k, and traffic from Fa0/6 will
be restricted to 56k. This will all be configured for VLAN 146.
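A sketch of the hierarchical configuration (ACL number, class and policy names are placeholders). Note that VLAN-based QoS must first be enabled on the physical ports referenced at the interface level of the policy-map:

```
! enable VLAN-based QoS on the member ports
interface range FastEthernet0/13 - 21
 mls qos vlan-based
interface FastEthernet0/6
 mls qos vlan-based
!
access-list 100 permit ip any any
!
class-map match-all ALL-IP
 match access-group 100
class-map match-all TRUNK-PORTS
 match input-interface FastEthernet0/13 - FastEthernet0/21
class-map match-all PORT-FA6
 match input-interface FastEthernet0/6
!
! the child map does the policing per input interface
policy-map CHILD
 class TRUNK-PORTS
  police 256000 8000 exceed-action drop
 class PORT-FA6
  police 56000 8000 exceed-action drop
!
! the parent map classifies the traffic and calls the child
policy-map PARENT
 class ALL-IP
  service-policy CHILD
!
interface Vlan146
 service-policy input PARENT
```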

The next thing I want to show is how to use an aggregate policer.
We can use this if we want several classes to share a bandwidth limit instead
of setting bandwidth per class. Take a look at this.
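A sketch where two hypothetical classes share one 256k policer (ACL numbers, names, and interface are placeholders):

```
mls qos aggregate-policer AGG-256K 256000 8000 exceed-action drop
!
access-list 110 permit tcp any any eq www
access-list 111 permit tcp any any eq ftp
!
class-map match-all HTTP
 match access-group 110
class-map match-all FTP
 match access-group 111
!
! both classes reference the same aggregate policer and share its 256k
policy-map SHARED
 class HTTP
  police aggregate AGG-256K
 class FTP
  police aggregate AGG-256K
!
interface FastEthernet0/1
 service-policy input SHARED
```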

And before I leave you, there is one final thing I want to show:
how to limit the egress traffic on an interface with the SRR command.
Let’s say that we have a 100 Mbit interface, but the customer only pays for 4 Mbit.
We can use this command.
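Combining a lower port speed with the bandwidth limit gets us to 4 Mbit (interface number is a placeholder):

```
interface FastEthernet0/1
 speed 10
 srr-queue bandwidth limit 40
```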

The lowest we can set the limit to is 10 (percent), though. If we have a port running at
100 Mbit, that still leaves us with 10 Mbit. Instead we can set the port to 10 Mbit
and configure the limit to 40%. Then we will achieve the value that was requested.

This post turned out to be very long, but I hope it has been informative.
I know for sure it helped me solidify the concepts,
and I hope it will help a lot of other people as well.

Catalyst QoS

24 thoughts on “Catalyst QoS”

  • March 8, 2012 at 2:33 pm

    very very nice
    what sources did you use to compile all this info ?
    it would be really helpful to read that also

    • March 9, 2012 at 2:19 pm

      Daniel, I’ve got a plan for troubleshooting.
      Make your own lab and instead of breaking it yourself get another CCIE candidate to break it! We could do it section by section – so 1st we break OSPF and give 20 mins to fix it, then Multicast, etc, etc. You need to have a powerful enough PC that can run 20+ routers. We can set it up as a kind of study group where we are breaking each others labs..
      What do you think? If you’re interested we can exchange details.

      • March 9, 2012 at 2:29 pm

Yes, I think that would be interesting. If you break something yourself, you obviously know the answers. It would be good to have some kind of baseline config and then go from there with introducing faults.

      • April 9, 2012 at 8:25 pm

Can I join the group for troubleshooting break and fix? I am 4 months away from my lab date.
Thanks for the info about the Cisco 3550. I still have 2 Cisco 3550s in my home lab – I think it is time to replace them with a 3560.

        • April 10, 2012 at 6:33 am

          There is no group for TS, at least not yet but add me to Gtalk if you want to discuss something.

  • March 8, 2012 at 4:39 pm


    Really good summary.

    You are missing one thing: before applying the mutation map, you need to trust the DSCP on that port.

    • March 8, 2012 at 5:19 pm

      Yes, that is correct. I will add it to the post. Thanks for noticing 🙂

  • March 9, 2012 at 7:31 am

Hello Daniel,

QoS always gives me a hard time to understand, as I am very new to this. Can you tell me the best way to understand it?


    • March 9, 2012 at 7:51 am

      QoS is a large area and it takes time to master it. You need to do a lot of reading. The QoS guide by Odom is good. Read a lot of posts on INE and on the DOCCD of course. Then it is down to labbing and seeing for yourself how it works. I still want to do some more QoS before I feel totally comfortable with it.

  • March 10, 2012 at 1:12 pm

Thanks for a useful writeup of a somewhat secondary topic. But what is secondary with the new CCIE? An interesting post on the Cisco Learning Network was saying that the INE material in general favors a very deep knowledge of the routing aspect of the exam, often not required, leaving some of the many other subjects a bit uncovered. Certainly some of the redistribution scenarios of INE can be very convoluted and time consuming.
I would be interested in knowing your opinion. The only time I have taken the exam was before the changes.

    • March 10, 2012 at 5:54 pm

Well, as you never know what will be on the exam, you have to pretty much study it all. You need to have a very strong understanding of layer 2 and 3. Yes, INE can sometimes get a bit carried away with their scenarios and redistribution etc., but I guess you would rather see it now than on the lab?

It’s not possible to know it all, but you need to at least recognize everything and know where to find it on the DOCCD. QoS is guaranteed to be on the lab in some form, so those are points you will want to have.

  • April 11, 2012 at 10:07 am

    Really great post, very precise and informative. Thank you……

  • May 11, 2012 at 5:45 am

    Hi Daniel, great post! I’ve printed it off and I’ll be referencing it many times. One small thing – with the VLAN based qos config example, just above where you begin to discuss the aggregate policer, you’ve left out configuring ‘mls qos vlan-based’ on the trunk ports and fa0/6.

    • May 11, 2012 at 7:44 am

      Hi David,

      Thanks for the great feedback. I checked the post but I can’t find the error. I think in the example you are referencing I’m showing how to police by matching on input port. Generally it makes more sense to use VLAN based QoS like you say but you can match on the input interface as well.

  • May 11, 2012 at 9:36 am

    Hi Daniel,

    Take a quick look at this: http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_50_se/configuration/guide/swqos.html#wp1767120

    Under “Classifying, Policing, and Marking Traffic on SVIs by Using Hierarchical Policy Maps” check out the first bullet.

    “Before configuring a hierarchical policy map, you must enable VLAN-based QoS on the physical ports that are to be specified at the interface level of the policy map”

    • May 11, 2012 at 10:17 am

      Hi David,

You are 100% correct. I updated the post. It just shows how easy it is to miss something when studying for the CCIE because of all the detail. Good luck with your studies.

      • May 11, 2012 at 12:00 pm

        No worries Daniel, I know exactly what you mean. Thanks again for the great post! It’s helped to greatly improve my understanding of Catalyst QoS.

  • June 20, 2012 at 10:35 pm

    Hi Daniel,

Thanks so much for sharing this information. I am having trouble with QoS at work. We have one site where everything seems to be set up properly, but call quality is still not good. This is a site where they take and answer calls. We checked the firewall as well and it is not stripping any incoming or outgoing markings; we can see the DSCP EF marking in a packet capture. Any ideas?
I can post the config if you need it.


  • October 3, 2012 at 4:29 pm


    Excellent post. This really cleared up a lot for me.

  • December 26, 2012 at 12:23 pm

Hi Daniel,
It has been a really hard month for me, trying to implement VLAN-based QoS but unable to do so. I tried all the possibilities on a 3750 switch but no luck at all. My goal is to distribute internet bandwidth on a VLAN basis (SVI). Please guide me.

    • December 26, 2012 at 12:36 pm

      Hi Imran,

      Is this switch the gateway for all VLANs? VLAN based means you do it at SVI level instead of per port. Are you sure that you enabled QoS? Show me the config you used and I will try to help you.


  • Pingback:3560 QoS | np2ie

  • Pingback:Catalyst QoS | Serhii Maistrenko
