A friend was looking for some input on low latency queuing yesterday. I thought the exchange we had could be useful to others, so I decided to write a quick post.

The question was where the rule about limiting the priority queue to 33% of the bandwidth came from. The follow-up question was how to handle dual priority queues.

This is one of those rules that is treated as a best practice and doesn’t really get challenged. The rule is based on Cisco internal testing within technical marketing. Their testing showed that data applications suffered when the LLQ was assigned too large a portion of the available bandwidth. The background to this rule is that you have a converged network running voice, video and data. It is possible to break this rule if you are delivering a pure voice or pure video transport where the other traffic in place is not business critical. Other applications are likely to suffer if the LLQ gets too big, and if everything is priority then essentially nothing is priority. I have seen implementations using around 50-55% LLQ for VoIP circuits, which is a reasonable amount in that kind of scenario.
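To make this concrete, here is a rough sketch of what a single-LLQ policy within the 33% guideline could look like using Cisco MQC. The class names, DSCP values and percentages are illustrative assumptions rather than a recommendation, and the exact syntax varies by platform and software version.

    class-map match-any VOICE
     match dscp ef
    class-map match-any VIDEO
     match dscp af41
    !
    policy-map WAN-EDGE
     class VOICE
      ! strict-priority LLQ, capped at 33% of the interface bandwidth
      priority percent 33
     class VIDEO
      ! video gets a guaranteed share via CBWFQ rather than the priority queue
      bandwidth percent 20
     class class-default
      ! everything else shares the remaining bandwidth
      fair-queue
    !
    interface GigabitEthernet0/1
     service-policy output WAN-EDGE

Under congestion the VOICE class is serviced first but is policed to its configured percentage, so it cannot starve the data classes.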

How should dual LLQs be deployed? The rule still applies: do not provision more than 33% of the bandwidth to the LLQs in total if there is a mix of voice, video and data.
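On platforms that support multi-level priority queuing (for example IOS-XE), a dual-LLQ policy could be sketched roughly as below, with the two priority classes policed so that together they stay within the 33% guideline. Again, the class names and percentages are assumptions for illustration only, and the exact commands differ between platforms and releases.

    policy-map WAN-EDGE-DUAL-PQ
     class VOICE
      ! level 1 (highest) priority queue, admission controlled to 20% of the link
      priority level 1
      police cir percent 20
     class VIDEO
      ! level 2 priority queue, admission controlled to 13% of the link
      priority level 2
      police cir percent 13
     class class-default
      ! together the two priority classes stay at roughly 33% of the bandwidth
      fair-queue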

TCP-based applications will likely suffer if they don’t have enough bandwidth. Don’t break this rule unless you have to, and be aware of the consequences.

Edit:

I had some feedback from Saku Ytti about the relevance of the LLQ. Many of these best practices around QoS came from a time when networks were much more fragile and we had poor-quality VoIP solutions that were very sensitive to jitter and packet loss. Modern VoIP solutions aren’t as sensitive to these conditions, and we can use codecs that predict audio when packets are lost. Modern network devices add very little jitter to packets, so jitter should not be a large issue today. For these reasons the importance of the LLQ has definitely decreased.

I would be interested to hear if any of my readers have data on placing VoIP and/or video in a standard queue vs. an LLQ and what the results were. Bonus points if you have graphs to show 🙂


3 thoughts on “QoS – Quick Post on Low Latency Queuing”

  • October 5, 2016 at 8:22 pm

    I don’t have a graph to back up my claim, but my previous company ran video traffic over DMVPN networks and we had no issues. Some voice traffic (from/to softphones) also traversed the DMVPN network and had no issues as long as the circuit was not saturated.

    • October 6, 2016 at 11:56 am

      QoS mechanisms only come into play when there is congestion on the network. If the circuits are not saturated, everything will work as expected.

  • October 6, 2016 at 1:39 am

    Regarding the comment from Saku Ytti that you mention: this is true. This rule was mostly relevant with low-bandwidth links and on older software-based platforms.

    In the past I’ve actually tested setups with even 50% of the link dedicated to the PQ (this becomes even more relevant on higher-end platforms, which have multiple PQs), and it just works.

    I would still adhere to this rule of thumb if all you have is a small CPE with a tiny uplink, but if your link is anything modern (i.e. at least tens of Mbps), I would ignore this rule.

