When is the Update Coming
Finally the announcement is here: CCIE RS v5 goes live on June 4, 2014. That means
that the last day to take both the written and the lab for v4 is June 3, 2014.
As expected, Cisco is giving candidates a six-month heads-up to prepare for
the new version.
Which Version Should I Prepare for
When I started studying for the CCIE, my goal was to become a networking expert and
by that also pass the CCIE certification. That meant that I sometimes studied things
in excess of what was needed for the lab but that would help with my overall career.
I don’t understand why people get stressed out by a few extra topics being added. Passing
the lab should verify you as an expert; the goal should not be to just squeak by with a PASS.
If you have a lab date coming up in the next few months, or think you can get ready by then,
give v4 a shot, but realize that lab dates are probably hard to get now that many people
are in panic mode. The new topics for v5 are things you could definitely use in your day job,
so don’t be afraid to learn those.
Changes to CCIE Written
There are some major changes. This document from CLN shows how the different technologies are weighted.
With Layer 3 Technologies at 40% that is the majority of the exam. What’s interesting is
that VPN Technologies and Infrastructure Security adds up to 30% which shows that security
is becoming an important part of the RS exam as well.
Cisco has done a great job of making the blueprint more detailed. Expanding the blueprint
shows just how granular it is:
I get the feeling that Cisco has tried to make the new blueprint more relevant to
what people use in production and run into on those networks. I draw this conclusion
from added items like Asymmetric routing and Impact of micro burst. These are things
that can commonly cause issues in real networks.
As expected IPv6 is getting more important as well. There is a section dedicated to
migrating to v6.
There is also a section added for troubleshooting. This section contains items like
Embedded Packet Capture (EPC) and the use of Wireshark. These are great additions as well.
The Layer 2 section is basically the same as before. There is a section about VSS
and Stackwise; those might be new topics for some.
The Layer 3 section hasn’t changed that much either. More focus on v6:
The addition of 4-byte ASNs is good since the 16-bit ones have pretty much run out:
It’s interesting to see that ISIS is back on the written. ISIS is not only useful in
itself; it is also used by other protocols like TRILL, which might be why Cisco added it back.
The VPN Technologies section is completely new, and IPSEC is now included as well as DMVPN.
Although these are security topics, they are important to know if you work with
routing/switching as well.
The Infrastructure Security section has mostly familiar topics, with some additions
for v6:
The Infrastructure Services section has mostly familiar topics as well. Some additional v6 topics
have been added:
Some people on Twitter were disappointed to see v6 NAT, and I agree; I don’t like
to see NAT for v6 unless it is used to migrate between v4 and v6.
Overall I think Cisco has done a great job. Topics are relevant and seem to be more
geared toward what people work on in their daily jobs.
Some topics have been removed as well, the two major ones being Frame Relay and
Catalyst QoS. This makes sense too: Frame Relay is rarely used now and Catalyst QoS
is very platform dependent.
Changes to CCIE Lab
There are some updates regarding the lab as well. The entire CCIE lab is now virtualized,
including the configuration section. Expect to see larger topologies in the configuration
section now that the topology is virtualized. A section called DIAG has also been added.
So the new format looks like this:
First up is the TS section. What’s interesting here is that 120 minutes is allotted to it,
as before. However, there is the possibility of using 30 minutes extra at the cost of having
less time for the configuration section. This should be good for people who feel pressed
for time on the TS. Be aware, though, that how fast you can solve the TS tickets is usually
a good indication of how prepared you are for the lab.
The DIAG section is completely new and is allotted 30 minutes. It seems to use a similar
content delivery to the CCDE practical. There are no devices to diagnose; instead the
candidate will read e-mails and look at diagrams, packet captures and logs. I am cautiously
optimistic about this section. I think Cisco added it both to make sure that CCIEs have
the qualities expected of them and to make it more difficult to pass by cheating.
The configuration section is the same; it is allotted 330 minutes, but if you used the extra 30
minutes for the TS then this section is 300 minutes. I’m not sure yet if the 30 minutes
is fixed or if it is dynamic; if you use 135 minutes for the TS, do you get 315 minutes
for the config? The configuration section is now virtualized, so expect to see larger topologies.
This is good news in my opinion, since it should make it more difficult for people to memorize labs.
It will also be easier to create larger topologies where we can see networks that have
routers for all roles, P, PE, CE and so on. That was difficult to do with only 5 routers.
Note that to pass the CCIE lab you must pass each section: TS, DIAG and Config. Each
section will have a minimum passing score, which I could not find a reference to, but
the passing score has been 80% before.
Summary of All Changes
This document describes all the updates from v4 to v5.
The big things being added are once again DMVPN and IPSEC. There is also a focus on IPv6
and on making the blueprint more realistic.
These things have been moved/removed:
Frame Relay is gone and Catalyst QoS has been moved to the written. To the joy of many
v4 candidates, PfR has been moved to the written as well.
The CCIE RS v5 lab blueprint is here.
Also this page at CLN is a portal for all documents relevant for the CCIE RS v5.
Good Work Cisco!
Overall I’m very happy with this announcement. Cisco has done a great job of making the
blueprint more relevant and has added topics that people should be seeing in today’s
networks. They have also taken steps to increase the integrity of the lab.
Virtualizing the entire lab is interesting and should help to create good topologies
and to provide more integrity of the CCIE.
The CCIE has never been more relevant than now.
I received my CCIE plaque a while back. This is what it looks like.
Good luck to everyone pursuing the CCIE; one day you will have
one of these as well.
While on IRC I had a request to describe my journey and the costs associated with becoming
a CCIE. Becoming a CCIE is not cheap but I’ve worked for great companies that have covered
all of my costs.
I first started studying for the written back in the summer of 2010. All my posts from back
then are still available in the archives. My strategy for the written was to build a strong
foundation to stand on before moving on to labs. I did not want to fast-forward through
the written just to get on to the labs. Remember that the CCIE lab is about thinking at a
CCIE level, it is not about commands. You need to read for the CCIE, a lot! If you don’t like
reading then I’m sorry but this exam is not for you. I’ve probably read close to the
amount of someone becoming a doctor if I count the pages of everything I’ve read so far.
Here are some of the books that I read for the written and the costs associated with them:
Interconnections: Bridges, Routers, Switches, and Internetworking Protocols
TCP/IP Illustrated, Vol. 1: The Protocols
Internetworking with TCP/IP Vol.1: Principles, Protocols, and Architecture (4th Edition)
CCIE Routing and Switching Certification Guide (4th Edition)
Routing TCP/IP, Volume 1 (2nd Edition)
Routing TCP/IP, Volume II (CCIE Professional Development)
Developing IP Multicast Networks, Volume I
Sum of books for the written: 382$
In January of 2011 I went to take the written exam. The exam went well and I passed. It
was a bit different from the NP-level exams, but that was to be expected. The cost for
the written is 350$. Add that to the cost of the books and you are looking at 732$
to get your ticket to the lab.
I needed to get some vendor workbooks and I decided to use INE due to their reputation and
the instructors they had in place. I was able to pick up all the workbooks for something like
399$ on some deal.
I read Petr at INE’s post on how to study for the CCIE lab exam.
I decided to use the 12 month program because I was in no hurry and time is scarce when
you have kids. Basically you start out with doing all the core labs like the essential
features of the routing protocols which makes up the core knowledge you must have before
starting to do the full scale Vol2 labs. I was able to do most of the labs in Dynamips.
I converted the INE configs to Dynamips with a sed script that I’ve shared on my site earlier.
If you look at IEOC (INE’s forum) you can find a user called relativitydrive who has already
converted all the configs for you if you want to run Dynamips.
For the switching tasks you need to either rent a rack or to buy your own switches and hook
them up to your Dynamips topology. My UK friend Darren has a nice post on how to connect
switches to your Dynamips topology.
I used rack rentals to practice the switching scenarios. I don’t know exactly how much I
spent on rentals, but maybe around 500$.
After I had done the Vol1 labs I started with Vol2. I was shocked: first of all the
diagrams; having to configure VLANs just from a diagram was a new experience for
me, as for most. Even with things like configuring OSPF, which I felt pretty comfortable
with, I could not complete all the tasks. Expect to be crushed! Everything you thought
you knew will be put to the test. CCIE is a whole different level than most of us are used
to, so keep your head up even though you will be crushed the first couple of times you
do a Vol2 lab.
There are a few different ways you can do a Vol2-type lab. Either you do all the tasks
you think you can solve in one run and then come back and look at the things you
could not solve, or you do the tasks you can and then peek at the SG for the
things that you could not solve yourself. You need to find what works best for you, but
don’t be too worried about speed in the beginning. That will come in time, trust me.
What you should do straight away is abandon Google. No more Google for you, my friend!
To find anything you want to reference you need to go to the DOCCD. You will eat, drink
and breathe the DOCCD until you pass the lab, so get used to it. Basically you will
be going to the IOS 12.4T section or to the 3560 switches. The DOCCD is located here.
INE has a free vSeminar on how to use the DOCCD.
Some people see the written and the lab as two entirely different beasts. I don’t think about
it that way, because you are still working towards one end goal, and that is to become a CCIE.
What you don’t want to do is stop reading just because you are labbing. You need to do
both. Don’t forget to use the RFCs as sources; they are a resource you should tap into.
I can’t remember every one that I read, but these are some major ones:
RFC 791 – Internet Protocol
RFC 826 – An Ethernet Address Resolution Protocol
RFC 2328 – OSPF version 2
RFC 4271 – A Border Gateway Protocol 4 (BGP-4)
RFC 3031 – Multiprotocol Label Switching Architecture
RFC 4594 – Configuration Guidelines for DiffServ Service Classes
RFC 4577 – OSPF as the Provider/Customer Edge Protocol for BGP/MPLS IP Virtual Private Networks (VPNs)
This is a free resource and the RFCs are written by some of the smartest people in
the industry so don’t forget to use them.
If you decide to go for INE then don’t forget to use IEOC, the
user community (forum) where you can ask questions about labs; most of what you
want to ask will already have been asked by someone previously. You will probably
find my face on a lot of threads in there.
When you do Vol2 labs, don’t be too strict about grading yourself. Your solution can be just as
valid as long as you don’t break any restrictions. Also try to get into the habit of doing
alternate solutions, and throw some extra stuff in there to make you think a bit more. When
you start a lab you should not start typing immediately. Read through the entire lab and
look for dependencies. Do you need to run IPv6 on the 3560? Might as well change the SDM
profile and reload at once. You don’t really want to reload when you have a stable
topology. While the switches are reloading you can do your VLAN config in Notepad or
something else. The CCIE lab is about being smart and effective; typing fast helps
but is not necessary to pass the lab.
Troubleshooting is a big part of the CCIE lab. You have a 2h session with just
troubleshooting, and expect to mess something up during your config section
as well. Many people ask: How do I learn troubleshooting? The answer is: You don’t!
You can’t just practice troubleshooting as if it were a separate skill. You need to
know the protocols! In some ways the troubleshooting is more difficult because you
already have a network running and you must understand what is going on in it.
You need to use the right tools and you need to know what the output looks like.
Sometimes you might have to match output to get something correct.
INE has some cool stuff coming up with their new TS racks. Other than that
I recommend that you make troubleshooting something you do regularly.
If you get stuck on something, try to figure it out by yourself first and
use the proper tools before looking for a simple solution. What I did before my
2nd lab attempt was to configure a lot of different technologies like OSPF, EIGRP,
MPLS, BGP, Multicast etc. I made a working topology; this in itself is
good practice. If you can’t configure a topology without someone holding your hand
then you are not ready. Then I would try to break things and look at what happened.
For MPLS, what happens if you disable CEF? What happens when you have a duplicate RID
in OSPF? Is the behaviour the same when you are running EIGRP? This worked very well
for me and for my last 2 attempts I had no issues with the TS section.
Always remember that the network was functioning and then something was altered
to make it break. You need to solve the core issue, not work around it.
As I mentioned earlier you don’t want to stop reading books just because you are labbing.
Here are some of the books I read for lab preparation:
OSPF: Anatomy of an Internet Routing Protocol
QOS-Enabled Networks: Tools and Foundations
Interdomain Multicast Routing: Practical Juniper Networks and Cisco Systems Solutions
MPLS-Enabled Applications: Emerging Developments and New Technologies
So that is another 268$ of books. Now, I did not actually buy all these books; I got a Safari
account as well, which is really nice. It costs a bit, but then you have all the books you need.
Every lab attempt costs around 1800$. I need to fly to Brussels and spend one night there.
Flying usually costs around 500$ and a room for a night maybe 250$. Then you need to eat
something and maybe get a cab, etc. So each attempt costs around 2600$.
I passed on my 3rd attempt, so that is 2600$ * 3 = 7800$.
If we sum it all together:
Written exam 350$
Rack rental 500$
3x lab attempts 7800$
I did not include the bootcamp in this since I consider that
optional. But everyone needs books/workbooks and of course to take the tests. If you
live nearer to a testing center you can save some on the lab attempts. Hopefully you can
pass on your first or second attempt, but the average is somewhere around two to four
attempts before passing. So before starting your journey you should budget 10-15k
to earn your CCIE. Hopefully, if you are as lucky as I have been, your employer will fund
some or all of the costs, but that is not a given.
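As a quick sanity check on the budget, here is a small Python sketch totaling the rough figures quoted throughout this post. The numbers are the approximate ones mentioned above; your own will differ.

```python
# Rough CCIE budget sketch using the approximate figures from this post.
# Treat it as an estimate, not a quote.
written_books = 382      # books read for the written
written_exam = 350       # written exam fee
workbooks = 399          # vendor workbooks (deal price)
rack_rentals = 500       # rack rentals for switching practice
lab_books = 268          # additional books for lab preparation
cost_per_attempt = 2600  # exam fee + flight + hotel + misc per attempt

fixed = written_books + written_exam + workbooks + rack_rentals + lab_books

def total_cost(attempts):
    """Total cost of the journey for a given number of lab attempts."""
    return fixed + attempts * cost_per_attempt

for attempts in (2, 3, 4):
    print(f"{attempts} attempts: {total_cost(attempts)}$")
```

For the average two to four attempts this lands in roughly the 7-12k range before incidentals, which is consistent with the 10-15k budget advice above.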
Finally, there is really no way of knowing when you are ready to go to the lab except
for going to the lab and finding out. Mock labs will give you some rough guidance,
but they are not 100% accurate because you can never fully simulate the stress. What
I do recommend is that you try to get as comfortable as possible by simulating the
test environment. Practice using only one monitor, use PuTTY, use a US keyboard.
Check out the lab exam demo before you go to the lab. Anything that can help
ease the stress a bit on lab day will be good.
I hope this post gave you some insight into studying, and showed that becoming a CCIE is
indeed expensive. Hopefully it is all worth it in the end.
So by now you know that I passed my lab in Brussels yesterday. Here is my story.
I arrived in Brussels on Monday around 13.30. I took a walk in the beautiful
weather to the lab location. By now I have no problems finding it; it’s
just routine. I spent the day doing some final reviews and then
visited the gym at the NH hotel. It’s good to clear your head, and it’s easier to
get sleep in the evening if your body is tired. I did not sleep that great, however.
I woke up at around 03.30, went back to sleep, and woke up at 5 AM
again. I got around 7h of sleep, so it wasn’t too bad anyway. It’s normal if
you don’t sleep that well. Don’t make too big a deal of it.
I arrived just before 8 AM at the Cisco building and checked in at the reception
as usual. I waited for the proctor to come get us. The proctor goes through
the guidelines for the exam and you get assigned a rack number. It was now time
for the TS section.
I put my earplugs in and went to work. I think it is good to use earplugs to
zone out from the environment around you. I always start by trying to solve the tickets
that look easier. These are usually the ones that involve only a few devices.
The reasoning behind this is to build your confidence and to get the feeling
that time is not running out on you. For the TS especially, time management is
everything. As engineers we have a narrow mindset when troubleshooting and
we want to solve something before moving on. This can be your pitfall in the
TS. You MUST move on after spending 10 minutes on a ticket. Usually, if you
think about something else for a while, your mind starts thinking more
creatively and you can find a solution to what seemed impossible earlier.
For the TS it is very important to have a good understanding of the protocols.
You are expected to know what show output looks like so that you can gather
information from it. You need to use the proper tools; don’t go hunting
with sh run. Sh run interface and sh run | section are useful, though. I solved
all the tickets with about 50 minutes to go and then spent 15 minutes verifying
that they were still all working. Pay close attention to the restrictions,
and don’t skip reading the guidelines at the beginning to save time!
It was now time for the configuration. I ate a banana to refuel some energy;
you are allowed to bring snacks to your desk if you like. I started by looking
through the entire lab for dependencies and to see if any devices would need
to be reloaded. Always do this at the beginning! I started with the L2 section
and things moved along smoothly. I used the L3 diagram to see which VLANs
I needed to configure where. You need to be comfortable with this; don’t expect
to have anything served, it’s all up to you! I did a lot more verification as
I moved along compared to my earlier attempts. Don’t blindly trust your config!
I then moved on to the L3 section and that went well. I just finished the L3
section before lunch.
Previously I had only done the L2 section before lunch, so I knew
I was in a much better position this time. I kept doing all of the tasks
and didn’t run into any major issues. I finished with a lot of time to spare,
and now comes the most important part: verification! You need some time at
the end to do extra verification, so account for this! You WILL make some mistakes
just due to stress or mistyping. I went through every task and every single
bullet point and made 100% sure that I was meeting the requirement. This took
a while but it was worth it. I still had an hour to go after this, so I asked
the proctor if it was possible to start the grading early, but he told me that
the grading is not done by them. I decided to stay the full time and did
an extra round of verification. I actually found a small mistake in this round,
so my advice is to stick around even if you finish early to
make sure you have done everything that you possibly can.
It was time to head home. I had a good feeling, but I did not want to think
too much about it, because if you get too high then you come crashing down hard
if you fail. After I landed in Gothenburg I checked my phone and saw that I had
received an e-mail. I rushed through the airport to check my mail on the computer
and to log in to the portal. To access the CCIE portal you need your CSCO number, written
date and passing score. I did not know this for my first attempt, and you don’t want
to be stranded, unable to log in to check your score.
I had received the e-mail around 19.30, and I had a feeling that I got the score
fast, but I have heard both good and bad examples of receiving a fast score. I logged
in and I saw PASS. At first I thought it might be the written, so I didn’t want to
take anything for granted, but then I clicked it and there it was! My number!
You all know I’ve worked hard for a long time for this and I am grateful to everyone
that has helped me on the way. I am not abandoning the blog, but it might not be only
CCIE-focused from now on. If you have things you want me to write about, make a suggestion,
and if it is interesting to me I might write about it. As I don’t have to focus only on
studies now, I can explore more interesting technologies and write about them.
Thanks for following on this great journey!
I’m back from Brussels and I passed the lab! I am now CCIE #37149. I’ll write a longer post tomorrow.
I’ve done an earlier post on Catalyst QoS. That described how to
configure the QoS features on the Catalyst, but I didn’t describe
in detail how the buffers work on the Catalyst platform. In this
post I will go into more detail about the buffers and thresholds
that are used.
By default, QoS is disabled. When we enable QoS, all ports
will be assigned to queue-set 1. We can configure up to two
queue-sets:

sh mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
These are the default settings. Every port on the Catalyst has
4 egress queues (TX). When a port is experiencing congestion
it needs to place the packet into a buffer. If a packet gets
dropped it is because there were not enough buffers to store it.
So by default each queue gets 25% of the buffers. The value is
in percent to make it usable across different versions of the Catalyst,
since they may have different buffer sizes. The ASIC will have
buffers of some size, maybe a couple of megs, but this size is not known
to us, so we have to use the percentages.
Of the buffers we assign to a queue, we can make some reserved.
This means that no other queue can borrow from these buffers. If we
compare it to CBWFQ, it would be the same as the bandwidth percent command,
because that guarantees X percent of the bandwidth but may use more
if there is bandwidth available. The buffers work the same way. There is
a common pool of buffers, and the buffers that are not reserved go into the
common pool. By default 50% of the buffers are reserved and the rest go
into the common pool.
There is a maximum for how many buffers a queue may use, and by default this
is set to 400%. This means that the queue may use up to 4x more buffers than
it has allocated (25%).
To differentiate between packets assigned to the same queue, the thresholds
can be used. You can configure two thresholds, and then there is an implicit
threshold that is not configurable (threshold3). It is always set to the maximum the queue
can support. If a threshold is set to 100%, that means it can use 100% of
the buffers allocated to a queue. It is not recommended to put a low value
for the thresholds. IOS enforces a limit of at least 16 buffers assigned
to a queue. Every buffer is 256 bytes, which means that at least 4096 bytes are
allocated per queue.

          Q1%  Q1buffer   Q2%  Q2buffer   Q3%  Q3buffer   Q4%  Q4buffer
buffers    25              25              25              25
Thresh1   100    50       100    50       100    50       100    50
Thresh2   100    50       100    50       100    50       100    50
Reserved   50    25        50    25        50    25        50    25
maximum   400   200       400   200       400   200       400   200
This table explains how the buffers work. Let’s say that this port
on the ASIC has been assigned 200 buffers. Every queue gets 25% of the
buffers, which is 50 buffers. However, out of these 50 buffers only 50%
are reserved, which means 25 buffers. The rest of the buffers go to the
common pool. The thresholds are set to 100%, which means they can use 100%
of the buffers allocated to the queue, which was 50 buffers. For packets
that go to threshold3, 400% of the buffers can be used, which means 200 buffers.
This means that a single queue can use up all the non-reserved buffers
if the other queues are not using them.
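The arithmetic in the example above can be sketched in a few lines of Python. The 200-buffer pool is the assumed figure from the example, not a documented constant, and the defaults match the queue-set table:

```python
# Sketch of the egress buffer arithmetic from the table above, assuming a
# hypothetical pool of 200 buffers for the port. Percentages are the
# queue-set defaults (25% share, 50% reserved, 100% thresholds, 400% max).
def queue_buffers(total, buffers_pct, reserved_pct=50,
                  threshold_pct=100, maximum_pct=400):
    """Translate the percentage-based settings into actual buffer counts."""
    share = total * buffers_pct // 100               # buffers allocated to the queue
    return {
        "share": share,
        "reserved": share * reserved_pct // 100,     # guaranteed, never lent out
        "threshold": share * threshold_pct // 100,   # thresholds 1 and 2
        "maximum": share * maximum_pct // 100,       # implicit threshold3 cap
    }

for q in range(1, 5):
    print(f"Q{q}:", queue_buffers(200, 25))
```

With the defaults every queue ends up with a 50-buffer share, 25 reserved, and a 200-buffer cap, matching the worked example.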
To see which queue packets are getting queued to we can use the show
platform port-asic stats enqueue command.
Switch#show platform port-asic stats enqueue gi1/0/25
Interface Gi1/0/25 TxQueue Enqueue Statistics
  Queue 0
    Weight 0 Frames 2
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 1
    Weight 0 Frames 3729
    Weight 1 Frames 91
    Weight 2 Frames 1894
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 3
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 577
In this output we have the four queues with three thresholds. Note that queue 0
here is actually queue 1, queue 1 is queue 2, and so on. Weight 0 is
threshold1, weight 1 is threshold2 and weight 2 is the maximum threshold (threshold3).
We can also list which frames are being dropped. To do this we use the
show platform port-asic stats drop command.
Switch-38#show platform port-asic stats drop gi1/0/25
Interface Gi1/0/25 TxQueue Drop Statistics
  Queue 0
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 1
    Weight 0 Frames 5
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 2
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
  Queue 3
    Weight 0 Frames 0
    Weight 1 Frames 0
    Weight 2 Frames 0
The queues are displayed in the same way here, where queue 0 = queue 1.
This command is useful for finding out whether you are having packet loss
in a certain queue for important traffic, like IPTV.
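Since the renumbering (queue 0 = queue 1, weight 0 = threshold1) is easy to trip over, here is a quick sketch of how you could post-process such output in Python. The parsing is my own, not an official tool, and the sample text mirrors the drop output above:

```python
import re

# Sample drop-statistics output in the style shown above.
output = """
Queue 0
  Weight 0 Frames 0
  Weight 1 Frames 0
  Weight 2 Frames 0
Queue 1
  Weight 0 Frames 5
  Weight 1 Frames 0
  Weight 2 Frames 0
"""

def parse_drops(text):
    """Parse Queue/Weight counters and renumber them into the 1-based
    queue and threshold numbering used elsewhere in IOS."""
    drops = {}
    queue = None
    for line in text.splitlines():
        m = re.match(r"\s*Queue (\d+)", line)
        if m:
            queue = int(m.group(1)) + 1        # queue 0 in the output = queue 1
            continue
        m = re.match(r"\s*Weight (\d+) Frames (\d+)", line)
        if m and queue is not None:
            threshold = int(m.group(1)) + 1    # weight 0 = threshold1, weight 2 = threshold3
            drops[(queue, threshold)] = int(m.group(2))
    return drops

print(parse_drops(output))  # (2, 1): 5 means 5 drops in queue 2, threshold 1
```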
The documentation for Catalyst QoS can be a bit sparse, and I hope that after this post
you now have a better understanding of how the egress queueing works.
This post will describe how NAT works. The reason for doing a blog post on NAT
is twofold: there is a lack of good documents out there describing NAT, and I
want to do some learning for myself as well.
When we are talking about NAT, there is some commonly used terminology.
Since the address is changed along the way, we need to describe which address
we are referring to. The terminology is this:
Inside local address – This is the address as seen on the LAN (inside) before
the translation occurs.
Inside global address – This is the address as seen by other hosts on the Internet.
This is the address after translation has occurred.
Outside local – This is the address on the LAN of the other side. Note that if the other side
is not running NAT, the outside local and outside global may be the same. In the diagram the other
side is running public IP addresses on the inside (a valid design).
Outside global – The IP address as seen by other hosts on the Internet; this may be the
same as the outside local depending on whether NAT is used or not.
When using NAT we need to define inside and outside interfaces (except if NVI is used).
The LAN interface(s) are the inside interfaces and the WAN interface(s) are the outside
interfaces. Translation is done when traffic is going from an inside interface to an
outside interface or vice versa.
The most basic NAT we can do is a 1:1 static NAT where the inside local address is
translated to an inside global address. We can map to an IP directly or to the
outside interface. To NAT to an interface the syntax is:
ip nat inside source static INSIDE_LOCAL interface OUTSIDE_INTERFACE
ip nat inside source static 192.168.1.11 interface f0/1

sh ip nat trans
Pro Inside global      Inside local       Outside local      Outside global
--- 18.104.22.168      192.168.1.11       ---                ---
Traffic sourced from the inside local address 192.168.1.11 will be translated
to 22.214.171.124. When we are doing static NAT there is a bidirectional translation,
so when traffic comes back in on the outside interface the destination is
translated from the inside global to the inside local address. We can see this with
debug ip nat detail:
NAT*: i: icmp (192.168.1.11, 6) -> (126.96.36.199, 6)
NAT*: s=192.168.1.11->188.8.131.52, d=184.108.40.206
NAT*: o: icmp (220.127.116.11, 6) -> (18.104.22.168, 6)
NAT*: s=22.214.171.124, d=126.96.36.199->192.168.1.11
First we see the ICMP packet going out and the source gets translated; then
the return packet comes in and the destination address is translated. This is
how the translation table looks:
R2#sh ip nat trans
Pro  Inside global      Inside local       Outside local      Outside global
icmp 188.8.131.52:6     192.168.1.11:6     184.108.40.206:6   220.127.116.11:6
---  18.104.22.168      192.168.1.11       ---                ---
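The bidirectional nature of a static mapping can be illustrated with a toy model. This is a Python sketch using example RFC 5737 addresses, not how IOS implements NAT:

```python
# Toy model of a 1:1 static NAT entry: outbound packets get their source
# rewritten (inside local -> inside global), and return packets get their
# destination rewritten back. Addresses are illustrative examples.
static_nat = {"192.168.1.11": "203.0.113.11"}   # inside local -> inside global
reverse = {g: l for l, g in static_nat.items()}

def outbound(src, dst):
    """Packet leaving the inside interface: translate the source."""
    return static_nat.get(src, src), dst

def inbound(src, dst):
    """Return packet arriving on the outside interface: translate the destination."""
    return src, reverse.get(dst, dst)

print(outbound("192.168.1.11", "198.51.100.1"))  # ('203.0.113.11', '198.51.100.1')
print(inbound("198.51.100.1", "203.0.113.11"))   # ('198.51.100.1', '192.168.1.11')
```

Hosts not in the table pass through untranslated, which is also what you would see for traffic not matching the static entry.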
We can also do a static NAT and choose what the inside global address should be. The syntax is this:
ip nat inside source static INSIDE_LOCAL INSIDE_GLOBAL
ip nat inside source static 192.168.1.11 22.214.171.124

R2#sh ip nat trans
Pro  Inside global      Inside local       Outside local      Outside global
---  126.96.36.199      192.168.1.11       ---                ---
We now see that the source is translated to 188.8.131.52.
NAT*: i: icmp (192.168.1.11, 8) -> (184.108.40.206, 8)
NAT*: s=192.168.1.11->220.127.116.11, d=18.104.22.168
NAT*: o: icmp (22.214.171.124, 8) -> (126.96.36.199, 8)
NAT*: s=188.8.131.52, d=184.108.40.206->192.168.1.11
The translation table looks like this:
R2#sh ip nat trans
Pro  Inside global      Inside local       Outside local      Outside global
icmp 220.127.116.11:8   192.168.1.11:8     18.104.22.168:8    22.214.171.124:8
---  126.96.36.199      192.168.1.11       ---                ---
When doing regular static NAT like this, we can only map one inside
local address to one inside global address. What if we want to map several
inside global addresses to the same inside local address? To do that we need to create
extendable NAT translations. The syntax is the same but with the keyword
extendable at the end.
ip nat inside source static 192.168.1.11 188.8.131.52 extendable
ip nat inside source static 192.168.1.11 184.108.40.206 extendable
The translation table now looks like this:
R2#sh ip nat trans
Pro  Inside global      Inside local       Outside local      Outside global
---  220.127.116.11     192.168.1.11       ---                ---
---  18.104.22.168      192.168.1.11       ---                ---
If we ping 22.214.171.124 from the other side we will see it being translated to the inside local address:
NAT*: s=192.168.1.11->126.96.36.199, d=188.8.131.52
NAT*: o: icmp (184.108.40.206, 0) -> (220.127.116.11, 0)
NAT*: s=18.104.22.168, d=22.214.171.124->192.168.1.11
The translation table is below.
R2#sh ip nat trans
Pro  Inside global      Inside local       Outside local      Outside global
icmp 126.96.36.199:0    192.168.1.11:0     188.8.131.52:0     184.108.40.206:0
icmp 220.127.116.11:10  192.168.1.11:10    18.104.22.168:10   22.214.171.124:10
---  126.96.36.199      192.168.1.11       ---                ---
---  188.8.131.52       192.168.1.11       ---                ---
Besides static NAT translations, we can also match on an access-list.
The good thing about matching on an ACL is that we can specify which hosts we
want to have translated and which we want to leave alone. We can create an ACL
so that traffic to 184.108.40.206 gets translated but traffic to 220.127.116.11 arrives
with its original source address. The syntax is:
ip nat inside source list LIST_NAME interface INTERFACE_NAME
We can only translate to an interface or a pool of addresses when using a list
as the source.
R2(config)#ip access-list extended NAT
R2(config-ext-nacl)#deny ip host 192.168.1.11 host 18.104.22.168
R2(config-ext-nacl)#permit ip any any
R2(config-ext-nacl)#ip nat inside source list NAT interface f0/1
We will debug on the destination to see which address the ICMP packet coming in has.
ICMP: echo reply sent, src 22.214.171.124, dst 192.168.1.11
ICMP: echo reply sent, src 126.96.36.199, dst 192.168.1.11
ICMP: echo reply sent, src 188.8.131.52, dst 184.108.40.206
ICMP: echo reply sent, src 220.127.116.11, dst 18.104.22.168
Once again we look at the translation table.
R2#sh ip nat trans
Pro  Inside global      Inside local      Outside local      Outside global
icmp 22.214.171.124:12  192.168.1.11:12   126.96.36.199:12   188.8.131.52:12
So we can see that when sending traffic to 184.108.40.206 it does not get
translated but traffic to 220.127.116.11 does. We can confirm by looking
at the ACL counters.
R2#sh ip access-lists NAT
Extended IP access list NAT
    10 deny ip host 192.168.1.11 host 18.104.22.168 (2 matches)
    20 permit ip any any (1 match)
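The key semantic shown above can be captured in a short Python sketch: in a NAT ACL, a deny entry means "do not translate this flow", not "drop it". This is my own model with illustrative RFC 5737 addresses, not router code:

```python
# First-match ACL evaluation gating dynamic NAT (hypothetical addresses).
acl = [
    ("deny",   "192.168.1.11", "203.0.113.99"),  # leave this flow untranslated
    ("permit", "any",          "any"),           # translate everything else
]

def should_translate(src: str, dst: str) -> bool:
    """Return True if this flow should be NAT-translated."""
    for action, a_src, a_dst in acl:
        if a_src in (src, "any") and a_dst in (dst, "any"):
            return action == "permit"   # first match wins
    return False                         # implicit deny: no translation

assert should_translate("192.168.1.11", "203.0.113.99") is False
assert should_translate("192.168.1.11", "198.51.100.1") is True
```

Either way the packet is forwarded; the ACL only decides whether its source gets rewritten.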
We can also configure NAT to NAT 1:1 for an entire network. This means that
the inside local address 192.168.1.11 will be translated to 22.214.171.124. If
we were sourcing traffic from .20 then that would be translated to .20 so the
addressing consistency is kept. This can be useful if we have, say, a web server
that is reachable at 192.168.1.30 on the inside; when we want to access
it from the outside we know that it will have the IP 126.96.36.199. We should
rely on DNS for reaching web servers but knowing the IP can be good in case
of a DNS failure. Use the following syntax.
ip nat inside source static network INSIDE_LOCAL_NETWORK INSIDE_GLOBAL_NETWORK PREFIX_LENGTH_OR_MASK
R2(config)#ip nat inside source static network 192.168.1.0 188.8.131.52 /24
Now when we ping we should see the source getting translated to 184.108.40.206.
NAT*: s=192.168.1.11->220.127.116.11, d=18.104.22.168
NAT*: s=22.214.171.124, d=126.96.36.199->192.168.1.11
And the translation table.
R2#sh ip nat trans
Pro  Inside global      Inside local      Outside local       Outside global
icmp 188.8.131.52:14    192.168.1.11:14   184.108.40.206:14   220.127.116.11:14
---  18.104.22.168      192.168.1.11      ---                 ---
---  22.214.171.124     192.168.1.0       ---                 ---
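The host-preserving behavior of 1:1 network NAT can be modeled in a few lines of Python using the standard `ipaddress` module. This is a sketch of the mapping with illustrative RFC 5737 addresses, not what the router actually executes:

```python
import ipaddress

def net_translate(ip: str, inside_net: str, global_net: str) -> str:
    """1:1 network NAT: keep the host bits, swap the network bits."""
    inside = ipaddress.ip_network(inside_net)
    glob = ipaddress.ip_network(global_net)
    host_bits = int(ipaddress.ip_address(ip)) - int(inside.network_address)
    return str(ipaddress.ip_address(int(glob.network_address) + host_bits))

# .11 stays .11 and .20 stays .20 in the new network:
assert net_translate("192.168.1.11", "192.168.1.0/24", "203.0.113.0/24") == "203.0.113.11"
assert net_translate("192.168.1.20", "192.168.1.0/24", "203.0.113.0/24") == "203.0.113.20"
```

This is why knowing a host's inside address is enough to know its outside address with this form of NAT.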
We can also do NAT for a pool of addresses. Say that we have been granted
a new pool of addresses from our ISP. The pool is 126.96.36.199/29. We create
a NAT pool matching this and then we enable NAT for the 192.168.1.11 address.
The syntax is:
ip nat inside source list ACL_NAME pool POOL_NAME
R2(config)#ip nat pool NAT_POOL 188.8.131.52 184.108.40.206 prefix-length 29
R2(config)#ip nat inside source list NAT pool NAT_POOL
We do a ping to look at the translation.
NAT*: s=192.168.1.11->220.127.116.11, d=18.104.22.168
NAT*: s=22.214.171.124, d=126.96.36.199->192.168.1.11
We can see that the source address got translated. As you can see
we can do NAT for networks that are not configured on the router.
This is the translation table.
R2#sh ip nat trans
Pro  Inside global      Inside local      Outside local       Outside global
icmp 188.8.131.52:16    192.168.1.11:16   184.108.40.206:16   220.127.116.11:16
---  18.104.22.168      192.168.1.11      ---                 ---
When doing NAT pools we can also make the host portion of the address match
if we want to. We do that like this.
ip nat pool POOL_NAME START_IP END_IP prefix-length LENGTH type match-host
ip nat pool NAT_POOL 22.214.171.124 126.96.36.199 prefix-length 27 type match-host
ip nat inside source list NAT pool NAT_POOL
Now when we ping the IP should get translated to 188.8.131.52.
NAT*: s=192.168.1.11->184.108.40.206, d=220.127.116.11
NAT*: s=18.104.22.168, d=22.214.171.124->192.168.1.11
Which it did. So even with pools we can match the host part of the address.
This is the translation table.
R2#sh ip nat trans
Pro  Inside global      Inside local      Outside local      Outside global
icmp 126.96.36.199:17   192.168.1.11:17   188.8.131.52:17    184.108.40.206:17
---  220.127.116.11     192.168.1.11      ---                ---
With NAT pools we can also do rotary assignments if we want or overload
the pool if we want to do Port Address Translation (PAT).
Now we will create a scenario using a route-map. With route-maps
we can build more advanced scenarios. For this scenario telnet traffic
going to 18.104.22.168 will get one source IP and HTTP traffic will get
another source address.
R2(config)#ip access-list extended ISP1
R2(config-ext-nacl)#permit tcp any host 22.214.171.124 eq telnet
R2(config-ext-nacl)#ip access-list extended ISP2
R2(config-ext-nacl)#permit tcp any host 126.96.36.199 eq www
R2(config-ext-nacl)#ip nat pool POOL_ISP1 188.8.131.52 184.108.40.206 prefix-length 24
R2(config)#ip nat pool POOL_ISP2 220.127.116.11 18.104.22.168 prefix-length 24
R2(config)#route-map RM_ISP1
R2(config-route-map)#match ip add ISP1
R2(config-route-map)#route-map RM_ISP2
R2(config-route-map)#match ip add ISP2
R2(config-route-map)#ip nat inside source route-map RM_ISP1 pool POOL_ISP1
R2(config)#ip nat inside source route-map RM_ISP2 pool POOL_ISP2
Now to verify the configuration, first we send telnet to 22.214.171.124.
NAT: map match RM_ISP1
NAT*: i: tcp (192.168.1.11, 29183) -> (126.96.36.199, 23)
NAT*: i: tcp (192.168.1.11, 29183) -> (188.8.131.52, 23)
NAT*: s=192.168.1.11->184.108.40.206, d=220.127.116.11
Now we send to port 80 instead.
NAT: map match RM_ISP2
NAT*: i: tcp (192.168.1.11, 15942) -> (18.104.22.168, 80)
NAT*: i: tcp (192.168.1.11, 15942) -> (22.214.171.124, 80)
NAT*: s=192.168.1.11->126.96.36.199, d=188.8.131.52
And the translation table.
R2#sh ip nat trans
Pro Inside global          Inside local         Outside local       Outside global
tcp 184.108.40.206:27143   192.168.1.11:27143   220.127.116.11:80   18.104.22.168:80
tcp 22.214.171.124:64511   192.168.1.11:64511   126.96.36.199:23    188.8.131.52:23
So using route-maps we can do more advanced scenarios. If we have multiple
inside interfaces we could even match on those.
NAT can also be used to do a form of basic load balancing. Several
inside local addresses will be mapped to one inside global address.
A pool of inside local addresses will be created and handed out in a
rotary fashion.
R2(config)#access-list 1 permit 184.108.40.206
R2(config)#ip nat pool ROTARY_POOL 10.0.0.1 10.0.0.3 prefix-length 24 type rotary
R2(config)#ip nat inside destination list 1 pool ROTARY_POOL
IP addresses 10.0.0.1, 10.0.0.2 and 10.0.0.3 will now be handed out in a rotary
fashion when someone tries to access the IP 220.127.116.11. We can see this when
debugging the NAT translation.
NAT*: s=18.104.22.168, d=22.214.171.124->10.0.0.2
NAT*: s=126.96.36.199, d=188.8.131.52->10.0.0.3
NAT*: s=184.108.40.206, d=220.127.116.11->10.0.0.1
So this performs a basic form of load balancing. The only thing different here is
that we are using the ip nat inside destination command. This translates the
destination of the packet. Usually we translate the source of the packet, but
since static NAT is bidirectional, traffic in the other direction has its
destination translated instead. When doing this form of NAT we need to trigger
it by sending TCP packets; just sending ICMP will not trigger the NAT translation.
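The rotary hand-out of real servers behind one virtual address can be sketched in Python with a simple round-robin iterator. This is a model of the behavior only; the virtual IP and server addresses are illustrative:

```python
from itertools import cycle

# Inside local pool handed out round-robin, as "type rotary" does.
pool = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def new_tcp_session(virtual_ip: str) -> str:
    """Each new TCP session to the virtual IP is mapped to the next
    real server in the pool (virtual_ip shown only for readability)."""
    return next(pool)

# Four sessions wrap around the three-server pool:
targets = [new_tcp_session("203.0.113.1") for _ in range(4)]
assert targets == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
```

Note that this is per-session load balancing, not per-packet, which matches why a TCP connection is needed to create each translation.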
We have been through a lot of scenarios so far. Almost all of the scenarios
describe how to translate the source IP of the packet going out from the local
network. What if we want to translate the source of packets coming in
on the outside interface instead? This can be useful in scenarios
where there are overlapping subnets, e.g. the same subnet is used at two different
companies and they need to connect through a VPN tunnel or similar. The syntax is:
ip nat outside source static OUTSIDE_GLOBAL OUTSIDE_LOCAL
NAT: s=18.104.22.168->22.214.171.124, d=10.0.0.1 
Here we see that the source of the packet is translated when coming in
on the outside. And this is the translation table.
R2#sh ip nat trans
Pro  Inside global  Inside local  Outside local      Outside global
---  ---            ---           126.96.36.199      188.8.131.52
icmp 10.0.0.1:1     10.0.0.1:1    184.108.40.206:1   220.127.116.11:1
The final scenario I want to describe is NAT on a stick. It’s not a very
common scenario but the idea is this. Look at the topology below.
R3 has only one interface which leads to a problem because we need to define
one interface as inside and the other as outside. How can we solve this?
We will use what is called NAT on a stick. R3 will do policy routing and
send traffic to its loopback to trigger the NAT process. R1 and R2 need
to have default routes towards R3 which will be doing the NAT. When R1
pings with a source of its loopback (10.1.1.1) that should be translated
to 18.104.22.168. When R2 pings from its loopback (10.2.2.2) then it should
be translated to 22.214.171.124. We start by setting up the policy routing.
We create an access-list matching traffic from 10.1.1.1 to 126.96.36.199.
Then we create a route-map matching the ACL and set the interface to
loopback0. The loopback interface will be the NAT inside interface.
R3(config)#access-list 100 permit ip host 10.1.1.1 host 188.8.131.52
R3(config)#route-map PBR
R3(config-route-map)#match ip add 100
R3(config-route-map)#set interface lo0
R3(config-route-map)#exit
R3(config)#int f0/0
R3(config-if)#no ip redirects
R3(config-if)#ip policy route-map PBR
R3#sh ip policy
Interface      Route map
Fa0/0          PBR
We also disable ICMP redirects so that R1 does not bypass R3 when
sending traffic to R2. We need to add a few routes on R3 for the
scenario to work. The network 10.1.1.0 is routed to R1. Then
10.2.2.0 and 184.108.40.206 is routed to R2. Why do we need to route
the 220.127.116.11 network to R2? This is because of the order of operations
in Cisco routers. From inside to outside, policy routing is done first, then routing,
and then NAT. From outside to inside, NAT is done first, then policy routing,
and after that routing. Before we add the rest of the configuration
let's think about the traffic flow.
R1 sends an ICMP packet with (S= 10.1.1.1 , D= 18.104.22.168). The packet comes
inbound on R3 on Fa0/0. The traffic coming in matches the policy and the packet
is looped through R3 lo0. Loopback0 has ip nat inside so this triggers the
NAT process. The source IP 10.1.1.1 is translated to 22.214.171.124 and the destination
is translated to 10.2.2.2, then the packet is sent out Fa0/0.
The packet reaches R2 with (S= 126.96.36.199, D= 10.2.2.2). R2 sends
an ICMP reply with (S= 10.2.2.2, D= 188.8.131.52). The packet comes in on R3
Fa0/0 which is the NAT outside interface. That triggers a translation of the
source from 10.2.2.2 to 184.108.40.206. The destination is also translated from
220.127.116.11 to 10.1.1.1. R3 then checks the routing table and
sends the packet back out Fa0/0 to R1. The packet reaches R1 with
(S= 18.104.22.168 , D=10.1.1.1). And that finishes the flow. Now to
configure it.
R3(config)#ip route 10.1.1.0 255.255.255.0 22.214.171.124
R3(config)#ip route 10.2.2.0 255.255.255.0 126.96.36.199
R3(config)#ip route 188.8.131.52 255.255.255.0 184.108.40.206
R3(config)#int lo0
R3(config-if)#ip nat inside
R3(config-if)#int fa0/0
R3(config-if)#ip nat outside
Take a look at the translation table.
R3#sh ip nat trans
Pro  Inside global   Inside local  Outside local    Outside global
---  ---             ---           220.127.116.11   10.2.2.2
---  18.104.22.168   10.1.1.1      ---              ---
Now to see if it works. We will debug NAT on R3 to see what is happening while
pinging from R1.
NAT: s=10.1.1.1->22.214.171.124, d=126.96.36.199
NAT: s=188.8.131.52, d=184.108.40.206->10.2.2.2
NAT*: s=10.2.2.2->220.127.116.11, d=18.104.22.168
NAT*: s=22.214.171.124, d=126.96.36.199->10.1.1.1
Finally, here is a drawing describing the traffic flow.
This has been
a very big post and I wrote it to have as a reference for my studies. You
don’t have to read the whole post at once but I hope that you find some
useful scenarios that you can try out for yourself. One final piece of advice
is that if you run NAT in Dynamips you should assign that router 256MB of
memory or you will see some strange things happening like sh run not working.
OSPF is one of the protocols where the details are very important. It has lots
of bits and pieces to make it run in a proper way. I have described the forwarding
address in an earlier post and this time I want to show how the IP that is used
as the forwarding address is selected. We start out with this simple topology.
It's a very basic config where R1 is redistributing a static route into an
NSSA area.
R1#sh run | s router ospf|ip route
router ospf 1
 router-id 188.8.131.52
 log-adjacency-changes
 area 10 nssa
 redistribute static subnets
ip route 184.108.40.206 255.0.0.0 Null0
Which IP will R1 use for its forwarding address? We look at R3.
R3#sh ip route ospf | i E2
O E2 220.127.116.11/8 [110/20] via 18.104.22.168, 00:57:59, FastEthernet0/0
R3#sh ip ospf data ex 22.214.171.124
            OSPF Router with ID (126.96.36.199) (Process ID 1)
                Type-5 AS External Link States
  Routing Bit Set on this LSA
  LS age: 120
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 188.8.131.52 (External Network Number )
  Advertising Router: 184.108.40.206
  LS Seq Number: 80000005
  Checksum: 0x4AC0
  Length: 36
  Network Mask: /8
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 20
        Forward Address: 220.127.116.11
        External Route Tag: 0
It has chosen its interface address towards R2. What if we enable OSPF on the other
Ethernet interface of R1?
R1(config)#int f0/1
R1(config-if)#ip ospf 1 area 10
We check R3 again.
R3#sh ip ospf data ex 18.104.22.168
            OSPF Router with ID (22.214.171.124) (Process ID 1)
                Type-5 AS External Link States
  Routing Bit Set on this LSA
  LS age: 25
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 126.96.36.199 (External Network Number )
  Advertising Router: 188.8.131.52
  LS Seq Number: 80000006
  Checksum: 0x6676
  Length: 36
  Network Mask: /8
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 20
        Forward Address: 184.108.40.206
        External Route Tag: 0
The forwarding address has changed. It selected the IP of the other Ethernet interface
of R1. We can see that it prefers to choose a higher IP address. What if we announce
the loopback of R1 in the NSSA area?
R1(config-if)#int lo0
R1(config-if)#ip ospf 1 area 10
R3#sh ip ospf data ex 220.127.116.11
            OSPF Router with ID (18.104.22.168) (Process ID 1)
                Type-5 AS External Link States
  Routing Bit Set on this LSA
  LS age: 27
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 22.214.171.124 (External Network Number )
  Advertising Router: 126.96.36.199
  LS Seq Number: 80000007
  Checksum: 0xAE53
  Length: 36
  Network Mask: /8
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 20
        Forward Address: 188.8.131.52
        External Route Tag: 0
Now the loopback IP is chosen instead. So since the loopback has a lower IP but still
is preferred we can see that loopbacks are preferred in the selection. To see this
clearly defined in words we reference RFC 3101 section 2.3.
When a router is forced to pick a forwarding address for a Type-7 LSA, preference should be given first to the router's internal addresses (provided internal addressing is supported). If internal addresses are not available, preference should be given to the router's active OSPF stub network addresses. These choices avoid the possible extra hop that may happen when a transit network's address is used. When the interface whose IP address is the LSA's forwarding address transitions to a Down state (see [OSPF] Section 9.3), the router must select a new forwarding address for the LSA and then re-originate it. If one is not available the LSA should be flushed.
So the selection process is to choose the highest IP of a loopback advertised
into the NSSA area. If no loopback is advertised then choose the highest
physical interface IP advertised into the NSSA area.
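The selection rule can be expressed as a short Python sketch. This is my own model of the observed behavior (interface addresses are hypothetical), not IOS source code:

```python
import ipaddress

def pick_forwarding_address(loopbacks, physicals):
    """Model of the observed selection: a loopback advertised into the
    NSSA area wins over any physical interface; ties are broken by
    choosing the highest IP."""
    candidates = loopbacks if loopbacks else physicals
    if not candidates:
        return None
    return max(candidates, key=lambda ip: int(ipaddress.ip_address(ip)))

# With only physical interfaces in the area, the highest IP is chosen.
assert pick_forwarding_address([], ["10.0.12.1", "10.0.13.1"]) == "10.0.13.1"
# A loopback wins even though it is numerically lower.
assert pick_forwarding_address(["1.1.1.1"], ["10.0.12.1", "10.0.13.1"]) == "1.1.1.1"
```

This matches RFC 3101's wording: internal (loopback) addresses first, then active stub network addresses.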
I hope that I have provided another piece of the OSPF puzzle and you now have
a good understanding of the forwarding address.
We start out with a basic topology of 3 routers.
R2 and R3 will peer to each other's loopbacks. I have set up OSPF for full reachability
in the network. First we test connectivity.
R2#ping 184.108.40.206 so lo0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 220.127.116.11, timeout is 2 seconds:
Packet sent with a source address of 18.104.22.168
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 40/53/80 ms
There is connectivity. We setup the peering and set ebgp-multihop to 2 since
this is what most people do. I will explain why this is not a good idea.
R2(config)#router bgp 1
R2(config-router)#nei 22.214.171.124 remote-as 3
R2(config-router)#nei 126.96.36.199 update-source loopback 0
R2(config-router)#nei 188.8.131.52 ebgp-multihop 2
R3(config)#router bgp 3
R3(config-router)#nei 184.108.40.206 remote-as 1
R3(config-router)#nei 220.127.116.11 update-source loopback 0
R3(config-router)#nei 18.104.22.168 ebgp-multihop 2
The session comes up.
%BGP-5-ADJCHANGE: neighbor 22.214.171.124 Up
All good so far. We are not advertising anything yet. We add another loopback
on R3 and advertise that into BGP. We check if R2 is receiving it.
R2#sh bgp ipv4 uni
BGP table version is 3, local router ID is 126.96.36.199
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop         Metric LocPrf Weight Path
*> 188.8.131.52/32  184.108.40.206        0             0 3 i
It looks good so far. Now let's think for a while about what ebgp-multihop
actually does. The default setting for eBGP is to check that incoming BGP
packets are destined for a directly connected interface. So the default is
to do a connected-check and ebgp-multihop = 1. When we set ebgp-multihop 2
the outgoing TTL is set to 2 and the connected-check is disabled. We confirm
this with a packet capture.
So the TTL is set to 2, is this really necessary? The common argument is that
because we are peering to a loopback the TTL must be set to 2 because the
TTL is decremented before reaching the loopback. When do routers modify packets
before transmitting them? On the egress interface right? We try this theory by
setting up a peering between R1 and R3. We will use no ebgp-multihop to begin
with and then we will debug ip icmp. We have to disable the connected-check
otherwise BGP will just stay idle because a loopback can never be directly
connected.
R1(config-router)#nei 220.127.116.11 remote-as 3
R1(config-router)#nei 18.104.22.168 update-source lo0
R1(config-router)#nei 22.214.171.124 disable-connected-check
R3(config-router)#nei 126.96.36.199 remote-as 1
R3(config-router)#nei 188.8.131.52 update lo0
R3(config-router)#nei 184.108.40.206 disable-connected-check
We can now see that R2 is sending ICMP time exceeded messages to R1 and R3.
R1: ICMP: time exceeded rcvd from 220.127.116.11
R3: ICMP: time exceeded rcvd from 18.104.22.168
This is because the TTL was set to 1. The TTL expired while in transit.
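The point that TTL is only decremented when a packet is forwarded, not when it is delivered to the router itself (including its loopback), can be illustrated with a toy model in Python. The hop names are hypothetical and this simplifies real IP forwarding considerably:

```python
def deliver(path, ttl):
    """Toy forwarding model: TTL is decremented at each *transit* router,
    never when the packet is delivered to the final node itself.
    path lists the nodes after the source; the last entry is the
    destination. Returns who got the packet or who dropped it."""
    for hop in path[:-1]:            # transit routers decrement TTL
        ttl -= 1
        if ttl == 0:
            return ("time-exceeded", hop)
    return ("delivered", path[-1])   # delivery: no further decrement

# Peering to a directly connected router's loopback: TTL 1 is enough.
assert deliver(["R3-lo0"], ttl=1) == ("delivered", "R3-lo0")
# R1 -> R3 through R2 with TTL 1: R2 drops it in transit.
assert deliver(["R2", "R3-lo0"], ttl=1) == ("time-exceeded", "R2")
```

This is exactly why the R1-R3 session (through R2) failed with TTL 1 while a directly connected loopback peering does not need ebgp-multihop at all.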
Now we set up a peering between R1 and R2 using the loopbacks. We will disable
the connected check again.
R1(config-router)#nei 22.214.171.124 remote-as 1
R1(config-router)#nei 126.96.36.199 update lo0
R1(config-router)#nei 188.8.131.52 disable-connected-check
R2(config-router)#nei 184.108.40.206 remote-as 1
R2(config-router)#nei 220.127.116.11 update lo0
R2(config-router)#nei 18.104.22.168 disable-connected-check
Some people say that the TTL must be 2 for the peering to come up;
we will now prove that this is wrong. The reason a peering does not come up when
using loopbacks is that BGP checks whether the neighbor is directly connected.
We take a look at a BGP packet sent when using disable-connected-check.
We clearly see that the TTL is 1 but the session still comes up. This proves
that it is not the TTL that is expiring when peering to loopbacks!
R1#sh bgp all sum
For address family: IPv4 Unicast
BGP router identifier 22.214.171.124, local AS number 1
BGP table version is 9, main routing table version 9
2 network entries using 240 bytes of memory
2 path entries using 104 bytes of memory
3/2 BGP path/bestpath attribute entries using 372 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
Bitfield cache entries: current 1 (at peak 2) using 32 bytes of memory
BGP using 772 total bytes of memory
BGP activity 5/3 prefixes, 5/3 paths, scan interval 60 secs

Neighbor       V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
126.96.36.199  4     2      83      80        9    0    0 00:02:45        1
Finally I want to bring up another disadvantage of using the ebgp-multihop
command when peering between directly connected routers using loopbacks.
We have a peering between R2 and R3. What happens when we shutdown the
interface on either router?
R2(config-router)#int f1/0
R2(config-if)#sh
R2(config-if)#
%OSPF-5-ADJCHG: Process 1, Nbr 188.8.131.52 on FastEthernet1/0 from FULL to DOWN, Neighbor Down: Interface down or detached
R2(config-if)#
%LINK-5-CHANGED: Interface FastEthernet1/0, changed state to administratively down
%LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet1/0, changed state to down
R2(config-if)#do sh bgp ipv4 uni
BGP table version is 11, local router ID is 184.108.40.206
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network            Next Hop        Metric LocPrf Weight Path
*  220.127.116.11/32  18.104.22.168        0             0 3 i
When we shut down the interface the peering still stays up. This is because when using
ebgp-multihop the fast-external-fallover feature cannot be used at the same time. This could
lead to blackholes since the peering stays up until the hold time expires (180s). In our
case we have no valid next-hop but what if we put in a default route?
R2(config)#ip route 0.0.0.0 0.0.0.0 22.214.171.124
R2(config)#int f1/0
R2(config-if)#sh
R2(config-if)#do
%OSPF-5-ADJCHG: Process 1, Nbr 126.96.36.199 on FastEthernet1/0 from FULL to DOWN, Neighbor Down: Interface down or detached
R2(config-if)#do
%LINK-5-CHANGED: Interface FastEthernet1/0, changed state to administratively down
%LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet1/0, changed state to down
R2(config-if)#do sh bgp ipv4 uni
BGP table version is 12, local router ID is 188.8.131.52
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network            Next Hop        Metric LocPrf Weight Path
*> 184.108.40.206/32  220.127.116.11       0             0 3 i
R2(config-if)#do sh bgp ipv4 uni
BGP table version is 12, local router ID is 18.104.22.168
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network            Next Hop        Metric LocPrf Weight Path
*> 22.214.171.124/32  126.96.36.199        0             0 3 i
Now the route stays in the BGP table until the holdtime expires which creates a
black hole. The default route now provides a valid next-hop, which keeps the route best.
With this post I hope you have gained a better understanding of these BGP features
and how a router handles control plane packets. As usual post in comments
if you have any feedback or questions.
I’m back from London and it’s been a great experience. Many readers are interested in what
the bootcamp is like. It is a big investment to go for so it is understandable that you
want to know if it will be worth it. I'll start by describing the teacher and his teaching style.
Brian Dennis is a well-known and respected man in the network industry. He is CCIE #2210
and has 5x CCIEs. That is among the very best in the world. Brian is not one of those
academic guys who only know what is written in a book. He has a solid background in the
industry which means he can explain WHY things are the way they are and not just stating
facts without any reasoning behind it. There will be NO powerpoints, it is CLI only and
although he has a topology he is using, the configurations are not prebuilt. He will do
them live which means there will be issues, which is GOOD. You get to see a 5x CCIE
troubleshooting and since he hasn’t prepared the faults before you will see how he would
troubleshoot a live problem which is very good practice for the TS lab in the CCIE lab.
Brian is a strong believer that there are no tips and tricks. If you have an
instructor teaching you all these tips and tricks then that instructor is a fake.
If you know the technology there are no tips and tricks. Sure he can teach you some
useful commands but there are no tips and tricks in routing protocols.
Jeremy Brown is the bootcamp coordinator. He’s a very nice guy and he will help you
with any queries you have about the bootcamp. If you are attending you will be
talking to him for sure.
When you start the class the first day you will be handed a folder with paper and
a pen and some contact information. Brian will introduce himself and give some
general guidelines and explain how the real lab works with TS section and
configuration section etc. Then everyone gets to introduce themselves. My class
had a lot of nationalities, Bolivia, France, Venezuela, Sweden, UK, Ireland,
Norway, Hungary were all represented.
The bootcamp runs from 9 AM to about 7-8 PM in the evening.
There will be some 15 minute breaks and a lunch break for 1.5h. It is long
days indeed so make sure to get enough sleep in the evening. This is a pure
learning experience, leave the partying for another time. If you want to
have some fun there will be time in the weekend for that.
The first day is about layer two. Since the configuration is built from
scratch it makes sense to start out with layer two. The topology used
is based on Cisco 360 with 5 routers and 4 switches. The routers are ISR
routers and the switches are 3560s. It is good that this topology is
used since that is very similar to what is being used in the real lab.
When attending the bootcamp you are expected to have a good knowledge
of protocols and that you have watched the INE ATC videos. This is so
that you don’t get overwhelmed by the information in the bootcamp.
The layer two section focused on MST, PPP and frame relay and
spanning tree features like BPDU guard, BPDU filter etc. One piece of advice
that Brian gave is to try to mix in things like PPP, PPPoE, PPPoFR
etc in your labs so that you get used to using these technologies.
Later in the week we moved on to IGPs. OSPF will be the main topic.
This is natural since OSPF is guaranteed to be in your lab and you
REALLY need to know OSPF to pass the lab. Brian is an OSPF
machine, he knows the LSDB like the back of his hand. He is very
methodical and will confirm each step and show you in the LSDB
what we are seeing and why we are seeing it. He’s not one of
those guys that clears the routing table when he runs into a
glitch, he will explain how and why it is there. He had a very
good section about the forwarding address, this is an important
part of OSPF and Brian explained why it is used. He had a very
good analogy with BGP where basically if the FA is not set then
you are using next-hop-self and if it is set then the next-hop
is preserved. He also had a good explanation of the capability
transit feature and he did some great diagrams showing which
LSAs go where. This is basic knowledge but he put it so well in
that diagram. We also talked about virtual links and things like
that. One good command he showed was the show ip ospf rib
command. EIGRP and RIP will be shorter sections, he will only
show some more advanced configuration since these protocols are
a bit simpler to understand. For EIGRP he showed how to do
unequal cost load balancing and how to calculate the metric
if you want to get a certain ratio. He showed how to do
offset-list, leak maps and authentication.
After we were done with IGPs we moved on to route redistribution.
This topic alone is enough to provide a good bootcamp experience.
Brian will in detail explain the difference between control plane
and data plane loops and why loops can occur. The important thing
to remember is that we are trying to protect the routes with a
high AD from being learned in a protocol with a lower AD. Usually
RIP is involved or EIGRP external routes since those have a high
AD. Brian will show you how to take any INE Vol2 lab topology
diagram and just look at it and identify potential issues.
This is a very good practice and when you can look at a diagram
and know what to do without even thinking about configuration
yet then you are in a good place. Brian will with his diagrams
show you where every command lives like the OSPF LSDB, OSPF RIB,
RIB, FIB etc. This is very good practice to make sure you have
a full understanding of what is going on.
BGP is of course an important topic and Brian is covering that
for sure. Brian starts by describing peering and goes through
some common misconceptions. BGP has no authentication,
wait for it…TCP has, this is a common misconception. It is
TCP providing the authentication of packets and not BGP.
He will explain concepts like hot potato vs cold potato routing.
He will show you the difference between disable-connected-check and
ebgp-multihop. He will teach you about route reflectors and
confederations and why you want to use the one or the other.
He will also explain MED in detail, something I found very useful,
explaining how deterministic MED works and always-compare-med.
He has such knowledge of everything and one thing I didn’t know
before is that networks in the BGP table are sorted by age where
the youngest network is listed first.
Building on BGP means MPLS comes naturally. These go hand in
hand and for the v4 CCIE lab you need to know MPLS. Brian
will of course explain the use of RD and RT. Remember that RD
only has a use in BGP. He shows where all the commands and
routes live and how to do troubleshooting for MPLS. The good
thing is that you will run into things that you didn’t maybe
think about and that will provide great troubleshooting. OSPF
is the most complicated PE-CE protocol and he will give you all
the details how to use Domain-ID, sham links and how the
external route tag and DN bit works.
First week is over. Time for some recovery. Have some fun and
go for some sightseeing or just do labs, the choice is yours.
Just make sure that you are well rested for when Monday comes.
The second week started out with multicast. This was maybe my
favourite topic and I learned a lot from this section.
As I mentioned earlier Brian doesn’t believe in tips and tricks
and multicast is one of those topics where people have a lack
of understanding and that is why they go looking for tricks.
Multicast is 90% about PIM, you need to know PIM if you want
to be good with multicast. Brian shows common errors like having
a broken SPT or RPF failures and things like that. These usually
occur when hub and spoke frame relay is involved. With just a
few commands you can become very good with analyzing multicast.
Show ip pim interface, show ip pim neighbor,
show ip rpf x.x.x.x and show ip pim rp mapping will give you most
of the information you will need. The best thing about the
multicast section was that when we ran into errors Brian was very
methodical, instead of just pinging over and over he showed us
what was wrong and then cleared the mroute table, this will
make the mtree build again so that you always go back to a
well known state. It is probably common to have the correct
configuration but move away from it due to lack of patience
or lack of understanding of what is really going on.
Time for the killer topic, probably the most hated topic in
the entire blueprint for most candidates. You guessed it, it is
time for PfR. Where does this hate come from? Well it comes
from the fact that the 12.4 implementation of PfR is just so
incredibly bad. If I were to select one topic that is difficult
to study on your own and where you really benefit from going
to a bootcamp, it would be PfR. Brian starts out with some
basic topologies and then moves on to some more advanced scenarios.
This topic runs for one day or even a bit more. You WILL run into
a lot of issues due to the implementation of PfR in 12.4. If you
have seen the PfR Vseminar then this will be a lot like that
with the added benefit that you can ask Brian questions of course.
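For orientation, a bare-bones setup might look like the sketch below. On older 12.4 code the keyword is oer rather than pfr, and all addresses, interfaces and the key string here are hypothetical:

```
! Hypothetical minimal OER/PfR setup: authentication key chain shared
! between the master controller and the border router
key chain PFR
 key 1
  key-string CISCO
!
! Master controller: defines the border and its external/internal links
oer master
 border 10.0.0.2 key-chain PFR
  interface Serial0/0 external
  interface FastEthernet0/0 internal
!
! Border router (often the same box in small labs): points at the master
oer border
 local FastEthernet0/0
 master 10.0.0.1 key-chain PFR
```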
The next big topic is QoS. Brian goes through Frame Relay
traffic shaping using both the legacy syntax and MQC. He will go
through how to use policing and shaping. The coolest thing
about this part was that we configured values for policing,
like Bc, and then Brian showed, by sending ICMP packets, how the
token buckets really work. You might be in for some
surprises here! No PowerPoints here for sure! He will explain
the difference between single-rate and dual-rate policers and
which scenarios call for each. Then he will
go through Catalyst QoS. This is a confusing section for
many, since Catalyst QoS is a bit convoluted. Brian shows
how the L2 QoS is very similar to MQC but the syntax is just
a bit strange. He shows how to use the priority queue and how
to run the SRR queues in shared and shaped mode.
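A minimal MQC sketch of the kind of policer we experimented with; the rate, Bc and class match are hypothetical values, not the numbers used in class:

```
! Hypothetical single-rate, two-color policer: 128 kbps CIR with a
! Bc of 16000 bytes; the Bc bucket depth decides how large a burst
! of back-to-back ICMP packets conforms before drops start
class-map match-all ICMP
 match protocol icmp
!
policy-map POLICE-TEST
 class ICMP
  police cir 128000 bc 16000
   conform-action transmit
   exceed-action drop
!
interface Serial0/0
 service-policy output POLICE-TEST
```

Sending pings of a known size and counting how many get through before the first drop is a simple way to see the token bucket refill behavior for yourself.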
Whatever time is left will be spent on topics like EEM and
whatever services you would like to go through. If you feel that
you are weak in some service, this would be a good time to
ask Brian to go through it. I left the bootcamp at 3 PM on
Friday and probably missed a couple of hours at the end.
If you can find a later flight, or go home on Saturday, that
could be a good option.
So now you have gone through a wall of text and you are
wondering what I think about it? Well, if it wasn't obvious
from my text: yes! Go for it! Yes, with everything to account
for, like living expenses and hotel, it is costly. However, if
you look just at the price of the bootcamp itself, around $5,990,
that is actually a good price; and if you consider that you can
get $1,500 paid toward your lab, the cost is actually $4,500.
Where I live, one week of training at Global Knowledge is usually
around $3,000, and then you often get some PowerPoint guy reading
slides, or you do labs while the instructor watches. The thing I
found best about the bootcamp was that you learn how to think
at a higher level. Being a CCIE is not about knowing a lot of
commands; it is about thinking at a high level. You get to pick
the brain of a 5x CCIE with real-world experience. You won't
find many guys like that in the world, and from what I've seen
I would rank Brian among the very best of them. The IGP, Multicast,
Redistribution and PfR sections were very good, and you will learn
a lot for sure even if you were strong in these areas before.
Hopefully you will meet some new friends in class. I met some
people I had only seen online before and also made
some new friends. I had a great time with David Rothera, Gian Paolo,
Jose Leitao, Susana and Harald. I also met Darren
for the first time; we had known each other online for a while
but never met. I also had the chance to meet Patrick Barnes, who is
another of my online friends.
I've tried to cover as much as I can remember, but feel free
to ask questions in the comments section if there is anything you
are still wondering about.