I wrote an earlier post on Catalyst QoS. That post described how to
configure the QoS features on the Catalyst, but it didn’t describe
in detail how the buffers work on the platform. In this
post I will go into more detail about the buffers and thresholds
that are used.
By default, QoS is disabled. When we enable QoS, all ports
are assigned to queue-set 1. We can configure up to two
different queue-sets.
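Enabling QoS and moving a port to the second queue-set looks something like this (GigabitEthernet1/0/25 is just an example interface):
conf t
 mls qos
 interface GigabitEthernet1/0/25
  queue-set 2
 end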
sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
These are the default settings. Every port on the Catalyst has
four egress (TX) queues. When a port is experiencing congestion
it needs to place the packet into a buffer. If a packet gets
dropped it is because there were not enough buffers to store it.
By default each queue gets 25% of the buffers. The value is
expressed in percent to make it usable across different Catalyst models,
since they may have different buffer sizes. The ASIC has
buffers of some size, maybe a couple of megabytes, but this size is not
exposed to us, so we have to work with percentages.
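There is also a per-interface view of the buffer allocation and of which queue-set a port belongs to (using the same example port as before):
show mls qos interface GigabitEthernet1/0/25 buffers
show mls qos interface GigabitEthernet1/0/25 queueing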
Of the buffers assigned to a queue, a portion can be made reserved.
This means that no other queue can borrow from these buffers. If we
compare it to CBWFQ, it is similar to the bandwidth percent command,
which guarantees X percent of the bandwidth but allows the class to use more
if bandwidth is available. The buffers work the same way. There is
a common pool of buffers, and the buffers that are not reserved go into the
common pool. By default 50% of a queue's buffers are reserved and the rest go
into the common pool.
There is also a maximum for how many buffers a queue may use, and by default this
is set to 400%. This means the queue may use up to four times the buffers
it was allocated (25% of the port's buffers) by borrowing from the common pool.
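Both the buffer allocation and the reserved/maximum levels are configured per queue-set. As a sketch, something like this would give queue 1 a bigger share of the buffers and a higher reservation (the numbers are only an example, not a recommendation):
mls qos queue-set output 1 buffers 40 20 20 20
mls qos queue-set output 1 threshold 1 100 100 80 400
The threshold command takes the queue-set, the queue, threshold1, threshold2, the reserved level and the maximum, in that order.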
To differentiate between packets assigned to the same queue, the thresholds
can be used. You can configure two thresholds, and there is also an implicit
third threshold (threshold3) that is not configurable. It is always set to the maximum the queue
can support. If a threshold is set to 100%, packets mapped to it can use 100% of
the buffers allocated to the queue. It is not recommended to set the thresholds
to a very low value. IOS enforces a minimum of 16 buffers assigned
to a queue, and since every buffer is 256 bytes this means that at least 4096 bytes are
reserved.
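The thresholds only come into play if traffic is actually mapped to them. As a sketch, this would put EF (DSCP 46) in queue 1, put best effort (DSCP 0) and CS3 (DSCP 24) in queue 2 on different thresholds, and lower the thresholds on queue 2 so that the best-effort traffic starts dropping earlier during congestion (the DSCP values and percentages are just examples):
mls qos srr-queue output dscp-map queue 1 threshold 3 46
mls qos srr-queue output dscp-map queue 2 threshold 1 0
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos queue-set output 1 threshold 2 40 80 50 400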
          Q1%  Q1buffer  Q2%  Q2buffer  Q3%  Q3buffer  Q4%  Q4buffer
buffers    25        50   25        50   25        50   25        50
Thresh1   100        50  100        50  100        50  100        50
Thresh2   100        50  100        50  100        50  100        50
Reserved   50        25   50        25   50        25   50        25
maximum   400       200  400       200  400       200  400       200
This table explains how the buffers work. Let's say that this port
on the ASIC has been assigned 200 buffers. Every queue gets 25% of the
buffers, which is 50 buffers. However, out of these 50 buffers only 50%
are reserved, which means 25 buffers. The rest of the buffers go to the
common pool. The thresholds are set to 100%, which means packets mapped to them can use 100%
of the buffers allocated to the queue, which is 50 buffers. Packets
that are mapped to threshold3 can use 400% of the buffers, which means 200 buffers.
This means that a single queue can use up all the non-reserved buffers
if the other queues are not using them.
To see which queue packets are being enqueued to, we can use the show
platform port-asic stats enqueue command.
Switch#show platform port-asic stats enqueue gi1/0/25
Interface Gi1/0/25 TxQueue Enqueue Statistics
  Queue 0
    Weight 0 Frames 2
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 1
    Weight 0 Frames 3729
    Weight 1 Frames 91
    Weight 2 Frames 1894
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 3
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 577
In this output we have the four queues with three thresholds each. Note that queue 0
here is actually queue 1, queue 1 is queue 2 and so on. Weight 0 is
threshold1, weight 1 is threshold2 and weight 2 is the maximum threshold (threshold3).
We can also list which frames are being dropped. To do this we use the
show platform port-asic stats drop command.
Switch-38#show platform port-asic stats drop gi1/0/25
Interface Gi1/0/25 TxQueue Drop Statistics
  Queue 0
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 1
    Weight 0 Frames 5
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 3
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
The queues are displayed in the same way here, where queue 0 = queue 1.
This command is useful for finding out whether important traffic,
such as IPTV, is being dropped in a certain queue.
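To know which queue and threshold a certain class of traffic should end up in, you can check the output maps:
show mls qos maps dscp-output-q
show mls qos maps cos-output-q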
The documentation for Catalyst QoS can be a bit lacking, and I hope that with this post
you now have a better understanding of how the egress queueing works.
Hi Daniel,
May I consult you on the following question:
I am considering limiting
1 Mbps to queue-1,
14 Mbps to queue-2,
111 Mbps to queue-3 and
nothing to queue-4 on interface gi0/1 of a switch, altogether 1+14+111 = 126 Mbps. Are the following settings correct?
int gi0/1
speed 1000
duplex full
srr-queue bandwidth shape 1000 71 9 0
srr-queue bandwidth limit 13 (set to 13 as that is closest to 126 Mbps)
On the other hand, do I have to adjust the buffers proportionally to the bandwidth assigned to queues 1 to 3? For example, set the reserved as 100%:
mls qos srr-queue output 1 threshold 1 100 100 100 400
mls qos srr-queue output 1 threshold 2 100 100 100 400
mls qos srr-queue output 1 threshold 3 90 100 100 400
mls qos queue-set output 1 buffers 1 12 87 0
How can I check the total buffers available for a queue on the switch?
Thank you so much!
Do you want to shape it to 126 Mbps at all times or only during congestion? Have a look at this document for the buffers on the 3560:
https://supportforums.cisco.com/docs/DOC-8093
It’s quite tricky to get the right values.
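As a rough sketch of what the shape weights mean (as I understand it, so verify against that document), each non-zero weight shapes the queue to roughly 1/weight of the port speed:
interface gi0/1
 speed 1000
 srr-queue bandwidth shape 1000 71 9 0
 ! queue 1: 1/1000 of 1000 Mbps, roughly 1 Mbps
 ! queue 2: 1/71 of 1000 Mbps, roughly 14 Mbps
 ! queue 3: 1/9 of 1000 Mbps, roughly 111 Mbps
 ! queue 4: weight 0 means the queue is not shaped (shared mode)
 srr-queue bandwidth limit 13
 ! caps the port at about 13% of line rate, roughly 130 Mbps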