UDP Operations



Welcome back to another Hack And Tinker CCIE exam topic post.  Today’s topic: UDP.  In honor of the fact that UDP is smaller and simpler than TCP, this post is going to be significantly shorter than its TCP cousin.  As with the TCP post, I’m not going to cover basic concepts, since I’m assuming at least some familiarity with what UDP is and how it works.  Here’s our next set of exam blueprint topics:

1.1.f   Explain UDP operations
1.1.f (i)   Starvation
1.1.f (ii)   Latency
1.1.f (iii)   RTP/RTCP concepts

UDP Starvation

Truth be told, UDP is a really selfish protocol.  On top of that, it’s also very “hungry.”  We know this because it can starve TCP to death.  How so?  Well, the issue really comes into play when there is congestion on the network.  Whether because of bandwidth limitations, or due to QoS mechanisms such as WRED, which intentionally drops certain types of traffic, UDP will naturally tend to “win” the battle over TCP.  The reason for this is that TCP has congestion avoidance and loss detection mechanisms that let it know when it needs to slow things down a bit.  UDP, however, has no built-in congestion avoidance.  That means a UDP-based traffic flow will just keep on blasting its destination with traffic, with no regard to how that may be affecting other flows.  TCP’s backoff algorithms might force it to slow to a crawl, while UDP is enjoying the uncluttered highways.

Two related approaches can help mitigate this.  First, put critical TCP traffic flows into queues that ensure they always have a fair chance at sending their payloads.  Second, avoid putting TCP and UDP into the same queues.  While this isolates TCP from the UDP traffic placed into different queues, you’ll still experience starvation of your TCP applications if the queue is at its threshold, at which point you’ll need to increase the bandwidth available to the queue, which might in turn require increased bandwidth on the link altogether.
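To see why TCP slows to a crawl while UDP keeps cruising, here’s a toy simulation I put together: one TCP-like AIMD sender (additive increase, multiplicative decrease) sharing a link with a constant-rate UDP sender.  The link capacity and rates are made-up numbers, and this is a deliberately crude model of TCP, not the real thing:

```python
# Toy model of UDP starvation: one AIMD (TCP-like) sender shares a
# link with a constant-rate UDP sender. Illustrative only -- real TCP
# and router queueing behavior are far more complex.

LINK_CAPACITY = 100   # data units the link can carry per tick (assumed)
UDP_RATE = 90         # UDP keeps blasting at this rate no matter what

def simulate(ticks=1000):
    cwnd = 1.0                              # TCP congestion window
    tcp_delivered = udp_delivered = 0.0
    for _ in range(ticks):
        offered = cwnd + UDP_RATE
        if offered > LINK_CAPACITY:
            # Congestion: both flows lose traffic proportionally,
            # but only TCP reacts by halving its window.
            share = LINK_CAPACITY / offered
            tcp_delivered += cwnd * share
            udp_delivered += UDP_RATE * share
            cwnd = max(1.0, cwnd / 2)       # multiplicative decrease
        else:
            tcp_delivered += cwnd
            udp_delivered += UDP_RATE
            cwnd += 1.0                     # additive increase
    return tcp_delivered, udp_delivered

tcp, udp = simulate()
print(f"TCP got {100 * tcp / (tcp + udp):.1f}% of the delivered traffic")
```

Because UDP never backs off, it ends up with the overwhelming majority of the link, which is exactly the behavior that separate queues for TCP are meant to protect against.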


UDP Latency

I’m not gonna lie to you; I’m not exactly sure what Cisco is looking for with this exam topic.  Google “UDP latency” and you’ll get the whole gamut of hits, most simply noting that UDP can reduce latency because it is smaller and simpler than TCP: there’s no handshake to set up a connection, no acknowledgments, and no retransmissions to wait on.  Beyond that, my research produced no content-worthy information that I thought would help pass the CCIE exams, so I’m simply going to say this: please don’t hesitate to comment if you have something to share here.  We’d love to hear your thoughts and input if you have any!  Otherwise, our basic foundation knowledge of UDP is enough to say that we understand the implications that UDP’s use will have on reducing latency.

RTP/RTCP Concepts

As a “Voice Guy”, this particular topic is near and dear to my heart.  RTP helps pay my mortgage.  RTCP is a protocol that you may not be as familiar with, though, especially if you haven’t worked much with VoIP.  It’s as simple as this: RTP carries the actual data, such as voice or video.  RTCP carries information about that data – information such as jitter and delay statistics, or information about the QoS mechanisms involved with these conversations.  RTP streams use even-numbered UDP ports, and the accompanying RTCP stream uses the next-higher port number, which will obviously always be odd.  RTP fills in a couple of “missing pieces” in the UDP protocol that are necessary for VoIP.  One key missing piece is datagram sequencing.  UDP has no sequence numbers, and that will cause some REALLY funky things to happen if there is a significant amount of jitter in the path.  The reason we can use language is that we know how to make the right sounds, and we know how to make them IN THE RIGHT ORDER.  If those sounds aren’t delivered in the right order, they lose all meaning.  To help ensure they are indeed played back in the right order, RTP has a 16-bit sequence number field in its packet header, which allows the receiving device to put the packets back into the correct sequence before playing them back to the eventual listener.
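To make that sequence number concrete, here’s a minimal parser for the 12-byte RTP fixed header defined in RFC 3550.  This is just a sketch: it ignores CSRC lists and header extensions that a real capture may carry, and the example packet bytes are fabricated for illustration:

```python
# Minimal parser for the RTP fixed header (RFC 3550). Sketch only:
# ignores CSRC lists and header extensions found in some streams.
import struct

def parse_rtp_header(packet: bytes) -> dict:
    if len(packet) < 12:
        raise ValueError("RTP fixed header is 12 bytes")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":      b0 >> 6,      # should always be 2
        "payload_type": b1 & 0x7F,    # e.g. 0 = PCMU (G.711 u-law)
        "sequence":     seq,          # 16-bit; lets the receiver reorder
        "timestamp":    ts,           # sampling-clock timestamp
        "ssrc":         ssrc,         # identifies the stream
    }

# A fabricated packet from a PCMU stream: version 2, payload type 0,
# sequence 1000, timestamp 160, SSRC 0xDEADBEEF.
pkt = bytes([0x80, 0x00]) + struct.pack("!HII", 1000, 160, 0xDEADBEEF)
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["sequence"])   # → 2 1000
```

The receiver buffers packets briefly (the jitter buffer) and uses this sequence field to hand the codec an in-order stream, even when the network delivered it out of order.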

I don’t expect the CCIE Routing and Switching exams to go much deeper into this particular topic, so I’m not going much further here. (But all bets are off if I ever go after the CCIE Collaboration cert!)  One last thing I will say, though, is that it’s very cool that you can capture an RTP stream with Wireshark and actually play it back.  This can be tremendously helpful for analyzing voice quality, especially when capturing on a Voice Gateway in the path or one terminating a PRI connection.  It’s also a great way to get a look under the hood of RTP and the call signaling protocols in use.

Coming Exam Topics

It works out great that this ended up being a shorter post, because it makes it easy to fit in some comments about some coming exam topics.  Looking at our next exam blueprint topics, we have section 1.2, which consists of the following:

1.2   Network implementation and operation
1.2.a   Evaluate proposed changes to a network
1.2.a (i)   Changes to routing protocol parameters
1.2.a (ii)   Migrate parts of a network to IPv6
1.2.a (iii)   Routing protocol migration
1.2.a (iv)   Adding multicast support
1.2.a (v)   Migrate spanning tree protocol
1.2.a (vi)   Evaluate impact of new traffic on existing QoS design

As you probably know, routing protocols, QoS, multicast and Spanning-tree are all HUGE domain objectives.  It doesn’t make much sense to me to briefly comment on each of these topics in a single post, given that we will be broaching each of these topics in earnest when we get to them.  So we’ll jump straight to objective 1.3 – Network Troubleshooting.  In the meantime, thanks for stopping by at Hack and Tinker!


  1. Nuri

    From the link: “The latency is the end to end delay. As mentioned above, the UDP is connectionless, the real effect of the latency on the UDP stream is that there would be a great delay in between the sender and the receiver. The jitter is the variance in the latency. It causes problems with the UDP stream. The jitter can be smoothed by buffering.”

    1. Nuri

      Jitter means inter-packet delay variance. When multiple packets are sent consecutively from source to destination, for example, 10 ms apart, and if the network is behaving ideally, the destination should be receiving them 10 ms apart. But if there are delays in the network (like queuing, arriving through alternate routes, and so on) the arrival delay between packets might be greater than or less than 10 ms. Using this example, a positive jitter value indicates that the packets arrived greater than 10 ms apart. If the packets arrive 12 ms apart, then positive jitter is 2 ms; if the packets arrive 8 ms apart, then negative jitter is 2 ms. For delay-sensitive networks like VoIP, positive jitter values are undesirable, and a jitter value of 0 is ideal.

