General Network Challenges

My wife is sitting next to me at the moment, and after seeing the title of this thread, she told me “That’s a boring title…”  She’s right.  And it’s hard to come up with a creative theme to liven it up, because Cisco pretty much just took four concepts and lumped them together into a single category when they designed the exam blueprint.  I got the feeling that this was Cisco’s “I don’t know where else to put these” category.  But the bottom line is this: it’s on the test, so we need to learn it!  Here are the somewhat random-seeming exam objectives we’re working with:

1.1.c   Explain general network challenges
1.1.c (i)   Unicast flooding
1.1.c (ii)   Out of order packets
1.1.c (iii)   Asymmetric routing
1.1.c (iv)   Impact of micro burst

I’m going to take those slightly out of order and start by discussing asymmetric routing.  What is asymmetric routing?  Basically, it is what occurs when a packet takes one path from a source to its destination, but the response packet in that same conversation comes back along a different route.  To begin with, let’s first state that there are times when this happens and it is not a problem at all.  It happens far more often than we’d suspect out there on the internet, and one of the reasons for that is that load-balancing mechanisms, like CEF’s load sharing across equal-cost paths, can choose different routes to the same destination.  So if that’s the case, then why is it a problem when it happens inside our environments?  One big reason is stateful firewalls.  A stateful firewall watches outgoing traffic and opens up the correct ports so that the return traffic can come back in.  But what if the firewall is not in the path of the outgoing traffic?  Yep – that response traffic will get dropped all day long.  Another problem that asymmetric routing can cause is, in fact, another one of our main topics: unicast flooding.
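
To make that a bit more concrete, here is a minimal sketch of a router with two equal-cost paths to the same prefix (all of the addresses and interface names are made up for illustration).  CEF’s hashing decides which path a given flow takes, and nothing guarantees that the routers on the return side will hash the reverse flow onto the mirror-image path:

    ! Two equal-cost static routes to the same destination network
    ip route 10.20.0.0 255.255.0.0 192.168.1.2
    ip route 10.20.0.0 255.255.0.0 192.168.2.2
    !
    ! Ask CEF which of the two paths it will use for a specific source/destination pair
    show ip cef exact-route 10.10.10.10 10.20.20.20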

It’s tempting to start talking about out-of-order packets next, just for the irony.  But since we’ve already broached unicast flooding, that’s our next destination.  Unicast flooding is a perfectly normal behavior for switches; they have to do it whenever they receive a frame destined for a MAC address that is not in their CAM table.  Normally, the host ends up responding within a relatively short period of time, which causes the switch to update its CAM table so the traffic can then be switched without being flooded.  What if, though, the response traffic does not come back through the same switch because of asymmetric routing?  At that point, the traffic has to be flooded indefinitely.  If the asymmetric routing can’t be fixed, then you can address this issue with static MAC table entries using the mac address-table static mac-addr vlan vlan-id interface interface-id command.  Since that’s hardly scalable, it usually makes far more sense to fix the asymmetric routing.

Another thing that can cause this is an attack on your switch’s CAM table.  A malicious user can intentionally source TONS of different MAC addresses from their own computer in the hope of overflowing the CAM table of your switch, which would essentially turn your switch into a hub.  To help prevent this, you can use port-security to limit which addresses, and how many of them, can appear as source hosts on a switch interface.  And since you’ve already read Philip’s post all about port-security, I’m going to skip out on explaining just how to configure that.

Yet another cause of unicast flooding comes from STP.  Remember that by default, EVERY TIME there is a change in the topology, STP must reconverge.  That means that every time someone plugs in or unplugs a laptop, or turns a printer on or off, STP must recalculate the spanning tree.  That leads to two flooding issues: first, Topology Change Notifications (TCNs) get generated and flooded everywhere to make sure that all of the switches in the L2 domain are aware of the change.  Second, switches rapidly age out their MAC table entries after receiving these TCNs, so destinations that used to be known suddenly become unknown again and get flooded.  The best fix for this behavior: the interface spanning-tree portfast command.  It’s funny, because we generally think of portfast as a great idea because of the benefits for the directly attached host, namely that it can skip the listening and learning states before transitioning into forwarding.  But the truth is that the benefits are just as great for all of the other hosts in the domain!
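
To tie those fixes together, here is a minimal config sketch (the MAC address, VLAN, and interface names are made up for illustration): a static CAM entry so the destination is never treated as unknown, and portfast on an access port so host churn stops triggering topology changes:

    ! Nail the destination MAC into the CAM table so it is never flooded as unknown
    mac address-table static 0000.0c12.3456 vlan 10 interface GigabitEthernet0/1
    !
    ! Stop a host-facing port from generating TCNs every time the host bounces
    interface GigabitEthernet0/2
     spanning-tree portfast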

In addition to port-security, which prevents unknown source addresses from coming into the switch on protected ports, there is also Unknown Unicast Flood Blocking, or UUFB.  If you know that you don’t want flooded unknown-unicast traffic being sent out a particular port, you can activate UUFB by simply using the interface switchport block unicast command.  One great place for this to be a default is on isolated PVLAN ports.  It usually doesn’t make sense to flood traffic to these ports, as they are only supposed to be allowed to talk to a promiscuous port in the private VLAN’s associated primary VLAN, which is usually attached to a router.  Why bother flooding unknown traffic to these ports?
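
As a rough sketch (the VLAN numbers and interface name are made up, and this assumes the private VLANs themselves are already defined), UUFB on an isolated PVLAN host port might look like this:

    interface GigabitEthernet0/3
     switchport mode private-vlan host
     switchport private-vlan host-association 100 101
     switchport block unicast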

Next, let’s talk about microbursts.  We’ll make this short and quick.  (Get it?)  A microburst is a very short-lived spike in traffic that can fill an interface’s output buffers even while the average utilization looks low.  This can cause dropped packets and application failures, and it can cause major issues with delay-sensitive traffic like voice and video.  One of your biggest friends in dealing with these issues is QoS.  Since QoS is an exam topic unto itself, I’m going to wait until we cross that bridge to really dive in deep on that topic.
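
Even before we get to QoS, it helps to know how to spot a microburst, since it hides inside averaged counters.  A quick sketch (the interface name is made up for illustration): shorten the load interval so the rate counters average over 30 seconds instead of 5 minutes, then watch for output drops climbing while utilization still looks low:

    interface GigabitEthernet0/1
     load-interval 30
    !
    ! Output drops increasing while average utilization stays low is the classic symptom
    show interfaces GigabitEthernet0/1 | include drops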

Finally, let’s address out-of-order packets.  In our CEF section, we mentioned that performing per-packet load balancing rather than per-destination load balancing can cause our packets to arrive out of order.  This is especially likely when those different paths have differing QoS configurations.
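
If per-packet load sharing is causing reordering, CEF’s load-sharing mode can be set per interface.  A quick sketch (interface name made up for illustration):

    interface GigabitEthernet0/1
     ! Per-destination (the default) keeps each flow on a single path;
     ! per-packet sprays packets across paths and invites reordering
     ip load-sharing per-destination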

While this was a short post, it yet again gets us one step closer to the CCIE dream!  Thanks for tuning in again!

1 Comment

  1. Robert Adams

    Hi,
    I found three negative consequences of out-of-order packets that are worth noting:

    – Causes Unnecessary Retransmission: When the TCP receiver gets packets out of order, it sends duplicate ACKs to trigger the fast-retransmit algorithm at the sender. These ACKs make the TCP sender infer that a packet has been lost and retransmit it.
    – Limits Transmission Speed: When fast retransmission is triggered by duplicate ACKs, the TCP sender assumes it is an indication of network congestion. It reduces its congestion window (cwnd) to limit the transmission speed, which then has to grow larger from a “slow start” again. If reordering happens frequently, the congestion window stays small and can hardly grow. As a result, the TCP connection has to transmit packets at a limited speed and cannot efficiently utilize the bandwidth.
    – Reduces Receiver’s Efficiency: The TCP receiver has to deliver data to the upper layer in order. When reordering happens, TCP has to buffer all of the out-of-order packets until it has everything in order. Meanwhile, the upper layer gets data in bursts rather than smoothly, which also reduces the efficiency of the system as a whole.
