As you’ve noticed I’ve been studying SSM, and what better way to learn than to blog about it. I recently got a Safari subscription and have been reading the book Interdomain Multicast Routing: Practical Juniper Networks and Cisco Systems Solutions, which has been great so far.
We are still using the same topology and now we will look a bit more in detail what is happening.
R1 will be the source, sending traffic from its loopback. R3 will be the client, running IGMPv3 on its upstream interface to R2. As explained in the previous post, I am doing this to simulate an end host; otherwise I would configure it on R3's downstream interface and it would send a PIM Join upstream.
To run SSM we need IGMPv3, or some form of mapping as described in the previous post. It is important to note, though, that IGMPv3 is not specific to SSM. With SSM, an (S,G) pair is described as a channel, and instead of join/leave the operations are called subscribe and unsubscribe.
So the first thing that happens is that the client (computer or STB) sends an IGMPv3 membership report to the destination IP 224.0.0.22, the address used for IGMPv3 reports. This is how the packet looks in Wireshark.
The destination IP 224.0.0.22 corresponds to the multicast MAC 01-00-5E-00-00-16; the last octet 0x16 is 22 in decimal.
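The IP-to-MAC mapping above is mechanical: the low 23 bits of the multicast IP are copied into the fixed OUI 01-00-5E (RFC 1112). A minimal Python sketch of the rule:

```python
def multicast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC: the low 23
    bits of the IP are placed into the 01-00-5E OUI (RFC 1112)."""
    octets = [int(o) for o in group_ip.split(".")]
    ip = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    low23 = ip & 0x7FFFFF  # keep only the low 23 bits
    return "01-00-5E-%02X-%02X-%02X" % (
        (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("224.0.0.22"))  # 01-00-5E-00-00-16
```

Note that since only 23 of the 28 significant IP bits survive, 32 different groups share each MAC, which is why switches and hosts may still have to filter at the IP layer.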
We clearly see that it is version 3 and the type is Membership Report (0x22). Number of group records shows how many groups are being joined.
Then the actual group record is shown (188.8.131.52) and the type is Allow New Sources. The number of sources is 1, and then we see the channel (S,G) that is joined.
Then R2 sends a PIM Join towards the source.
We can see that it is a (S,G) join. The SPT is built.
R2 will send general IGMPv3 queries to see if there are still any receivers connected to the LAN segment.
The query is sent to all multicast hosts (224.0.0.1), and if a host is still receiving the multicast it will reply with a report.
The type is Membership Query (0x11). The Max Response Time is 10 seconds, which is the time within which the host has to reply.
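The 10 seconds come from the Max Resp Code field in the query. In IGMPv3 (RFC 3376) this field is in tenths of a second for values below 128, and uses a small floating-point encoding above that. A sketch of the decoding:

```python
def max_resp_time_seconds(max_resp_code: int) -> float:
    """Decode the IGMPv3 Max Resp Code field (RFC 3376, section 4.1.1).
    Values below 128 are tenths of a second; values of 128 and above
    use an exponent/mantissa encoding to reach larger timeouts."""
    if max_resp_code < 128:
        tenths = max_resp_code
    else:
        exp = (max_resp_code >> 4) & 0x7
        mant = max_resp_code & 0xF
        tenths = (mant | 0x10) << (exp + 3)
    return tenths / 10.0

print(max_resp_time_seconds(100))  # 10.0 -- the query in this capture
```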
We can see in this report that the record type is Mode Is Include (1), compared to Allow New Sources when the first report was sent.
Now R3 unsubscribes from the channel and an IGMP report is used once again.
The type is now Block Old Sources (6).
After this has been sent, the IGMP querier (router) has to make sure that there are no other subscribers to the channel, so it sends out a channel-specific query.
If no one responds to this, the router will send a PIM Prune upstream, as can be seen here.
Finally. How can we see which router is the IGMP querier? Use the show ip igmp interface command.
R2#show ip igmp interface fa0/0
FastEthernet0/0 is up, line protocol is up
  Internet address is 220.127.116.11/24
  IGMP is enabled on interface
  Current IGMP host version is 3
  Current IGMP router version is 3
  IGMP query interval is 60 seconds
  IGMP querier timeout is 120 seconds
  IGMP max query response time is 10 seconds
  Last member query count is 2
  Last member query response interval is 1000 ms
  Inbound IGMP access group is not set
  IGMP activity: 2 joins, 1 leaves
  Multicast routing is enabled on interface
  Multicast TTL threshold is 0
  Multicast designated router (DR) is 18.104.22.168 (this system)
  IGMP querying router is 22.214.171.124 (this system)
  Multicast groups joined by this system (number of users):
  126.96.36.199(1)
We can see some interesting things here. We can see which router is the designated router and which is the IGMP querier. By default the IGMP querier is the router with the lowest IP and the DR is the one with the highest IP. The DR election can be influenced by changing the DR priority. We can also see which timers are used for the query interval and max response time, among others.
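The two elections pick opposite ends of the address range, which is easy to forget. A tiny sketch of the default behavior (assuming equal DR priority, where the highest IP breaks the tie):

```python
import ipaddress

def elect(addresses):
    """On a shared segment, the IGMP querier is the router with the
    LOWEST IP; the PIM DR (at equal DR priority) is the router with
    the HIGHEST IP."""
    ips = sorted(ipaddress.ip_address(a) for a in addresses)
    return {"querier": str(ips[0]), "dr": str(ips[-1])}

print(elect(["10.0.0.2", "10.0.0.1", "10.0.0.3"]))
# {'querier': '10.0.0.1', 'dr': '10.0.0.3'}
```

In the show output above R2 is both, simply because it is the only PIM/IGMP router on that segment.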
So by now you should have a good grasp of SSM. It does not have a lot of moving parts which is nice.
This is a followup post to the first one on SSM. The topology is still the same.
If you want to find it in the documentation, it is in the IGMP configuration guide.
I guess the reason to place it under IGMP is that SSM requires IGMPv3. To find SSM mapping go to Products-> Cisco IOS and NX-OS Software-> Cisco IOS-> Cisco IOS Software Release 12.4 Family-> Cisco IOS Software Releases 12.4T-> Configure-> Configuration Guides-> IP Multicast Configuration Guide Library, Cisco IOS Release 12.4T-> IP Multicast: IGMP Configuration Guide, Cisco IOS Release 12.4T-> SSM mapping
So why would we use SSM mapping in the first place? IGMPv3 is not supported everywhere yet. Maybe the Set Top Box (STB) does not support IGMPv3 but your ISP wants to support SSM. Then some transition mechanism must be used. There are a few options available, like IGMPv3 lite, URD and SSM mapping. IGMPv3 lite is a daemon running on the host, supporting a subset of IGMPv3 until proper IGMPv3 has been implemented. With URD, a router intercepts URL requests from the user and joins the multicast stream towards the correct source even though the user is not sending IGMPv3 reports. This requires that the multicast group and source are coded into the web page with links to the multicast streams.
SSM mapping takes IGMPv2 reports and converts them to IGMPv3. We can either query a DNS server for the sources or use static mappings, as I will explain here. Static mapping is done on the Last Hop Router (LHR) and is fairly simple. This is how we configure it.
R2(config)#access-list 2 permit 188.8.131.52
R2(config)#ip igmp ssm-map enable
R2(config)#ip igmp ssm-map static 2 184.108.40.206
R2(config)#no ip igmp ssm-map query dns
The config is pretty self explanatory. First we create an access-list that defines the groups to be used for SSM mapping. Then we enable SSM mapping. Then we tie together the ACL with the sources that are allowed to send to those groups. Now we need to verify the mapping. First we take a look at R2 with show ip igmp ssm-mapping.
R2#show ip igmp ssm-mapping
SSM Mapping  : Enabled
DNS Lookup   : Disabled
Mcast domain : in-addr.arpa
Name servers : 255.255.255.255
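Conceptually, the static mapping is just a lookup table on the LHR: an ACL picks out which groups are subject to mapping, and each matched group is paired with a configured source. A minimal sketch with hypothetical group and source addresses:

```python
# Mirrors "ip igmp ssm-map static": group -> configured sources.
# The addresses here are hypothetical placeholders.
SSM_MAP = {
    "232.1.1.1": ["10.0.0.1"],
}

def convert_v2_report(group: str):
    """Turn an IGMPv2 (*,G) report into IGMPv3-style (S,G) channels,
    the way static SSM mapping does on the last hop router."""
    sources = SSM_MAP.get(group)
    if sources is None:
        return None  # no mapping configured: report is left as (*,G)
    return [(source, group) for source in sources]

print(convert_v2_report("232.1.1.1"))  # [('10.0.0.1', '232.1.1.1')]
```

The router can then send a proper (S,G) PIM join upstream even though the host only spoke IGMPv2.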
Looks good so far. We will use R3 to simulate a client joining 220.127.116.11 via IGMPv2, and we will debug IGMP to see the report coming in. R3 will join the group via the ip igmp join-group command. One thing is important to note here. Usually we configure ip igmp join-group on a downstream interface to simulate a LAN segment, and a PIM Join is then sent upstream. In this case we want only the IGMP join to be sent, so we configure ip igmp join-group on the upstream interface. Also, no PIM should be enabled. This makes the router act as a pure host and not do any multicast routing. Otherwise the router would have RPF failures when the source sends traffic, because for traffic not in SSM mode an RPF lookup is done against the RP. Since no RP is configured, the RPF check would fail; as a workaround we could configure a static RP, which even though it isn't really used would make the RPF check pass.
R3(config)#int fa0/0
R3(config-if)#ip igmp join-group 18.104.22.168
This is the debug output from R3.
IGMP(0): Send v2 Report for 22.214.171.124 on FastEthernet0/0
We can clearly see that IGMPv2 report was sent. Now we go to R2 to see if it is converting the IGMPv2 join to IGMPv3.
IGMP(0): Received v2 Report on FastEthernet0/0 from 126.96.36.199 for 188.8.131.52
IGMP(0): Convert IGMPv2 report (*, 184.108.40.206) to IGMPv3 with 1 source(s) using STATIC
It is clear that the conversion is taking place. We look in the MRIB as well.
R2#sh ip mroute | be \(
(*, 220.127.116.11), 03:18:48/00:02:54, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 03:18:48/00:02:54
    Serial0/0, Forward/Sparse, 03:18:48/00:02:44
(18.104.22.168, 22.214.171.124), 03:18:26/00:02:57, flags: sTI
  Incoming interface: Serial0/0, RPF nbr 126.96.36.199
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 03:18:26/00:02:57
We see that we now have (S,G) joins in R2! As a final step we will also verify in R1.
sh ip mroute | be \(
(*, 188.8.131.52), 03:20:44/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/1, Forward/Sparse, 03:20:44/00:00:49
(*, 184.108.40.206), 03:20:43/stopped, RP 0.0.0.0, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null
(220.127.116.11, 18.104.22.168), 00:01:01/00:02:28, flags: T
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse, 00:01:01/00:03:27
Now the ping should be successful.
R1#ping
Protocol [ip]:
Target IP address: 22.214.171.124
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: serial0/0
Time to live:
Source address: 126.96.36.199
Type of service:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 188.8.131.52, timeout is 2 seconds:
Packet sent with a source address of 184.108.40.206
Reply to request 0 from 220.127.116.11, 16 ms
Reply to request 1 from 18.104.22.168, 16 ms
Reply to request 2 from 22.214.171.124, 16 ms
Reply to request 3 from 126.96.36.199, 16 ms
Reply to request 4 from 188.8.131.52, 16 ms
So the important thing here is to make R3 act as a pure host otherwise it will not work. This is a bit overkill for verification but I just wanted to show how it could be done.
Regular multicast is known as Any Source Multicast (ASM). It is based on a many-to-many model where the source can be anyone and only the group is known. For some applications, like stock trading, this is a good choice, but for IPTV it makes more sense to use SSM as it scales better since there is no need for an RP.
ASM builds a shared tree (RPT) from the receiver to the RP and a Shortest Path Tree (SPT) from the sender to the RP. Everything must pass through the RP until switching over to the SPT, which builds a tree directly from receiver to sender. The RPT uses a (*,G) entry and the SPT uses an (S,G) entry in the MRIB.
SSM uses no RP; instead it uses IGMP version 3 to signal which channel (source) it wants to join for a group. IGMPv3 can use INCLUDE records, which specify that only the listed sources are allowed, or EXCLUDE records, which allow all sources except the listed ones.
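The INCLUDE/EXCLUDE semantics from RFC 3376 reduce to a simple membership test per source. A minimal sketch:

```python
def source_allowed(mode: str, source_list: set, source: str) -> bool:
    """IGMPv3 source filter semantics (RFC 3376): INCLUDE accepts only
    the listed sources; EXCLUDE accepts everything except them."""
    if mode == "INCLUDE":
        return source in source_list
    if mode == "EXCLUDE":
        return source not in source_list
    raise ValueError("unknown filter mode: " + mode)

# A hypothetical channel subscription: only 10.0.0.1 is wanted.
print(source_allowed("INCLUDE", {"10.0.0.1"}, "10.0.0.1"))  # True
print(source_allowed("INCLUDE", {"10.0.0.1"}, "10.0.0.9"))  # False
```

SSM effectively always uses INCLUDE mode with explicit sources, which is why an (S,G) pair can be treated as a channel.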
SSM has the IP range 232.0.0.0/8 allocated, and that is the default range in IOS, but we can also use SSM for other ranges. If we do, we need to specify that with an ACL.
SSM can be enabled on all routers that should work in SSM mode, but it is only really needed on the routers that have receivers connected, since that is where the behavior really changes. Instead of sending a (*,G) join towards the RP, the Last Hop Router (LHR) sends an (S,G) join directly towards the source.
This is the topology we are using.
It is really simple. R1 is acting as a multicast source, R2 will both simulate a client and do filtering, and R3 will simulate an end host. R1 will source the traffic from its loopback. OSPF has been enabled on all relevant interfaces.
We will start by enabling SSM for the range 220.127.116.11/24 on R2.
R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#access-list 1 permit 18.104.22.168 0.0.0.255
R2(config)#ip pim ssm range 1
R2 will now use SSM behavior for the 22.214.171.124/24 range. R2 will join the group 126.96.36.199.
We will debug IGMP and PIM to follow everything that happens. When using ip igmp join-group on an interface, the router simulates an IGMP report coming in on that interface. We will see later why this is important. So first we enable debugging to the buffer. We also must enable multicast routing and PIM sparse mode on the relevant interfaces.
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#ip multicast-routing
R1(config)#int s0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#do debug ip pim
PIM debugging is on
R1(config-if)#
R2(config)#ip multicast-routing
R2(config)#int s0/0
R2(config-if)#ip pim sparse-mode
R2(config-if)#int f0/0
R2(config-if)#ip pim sparse-mode
R2(config-if)#ip igmp version 3
R2(config-if)#
*Mar 1 00:18:37.595: %PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 188.8.131.52 on interface FastEthernet0/0
R2(config-if)#do debug ip igmp
IGMP debugging is on
R2(config-if)#do debug ip pim
PIM debugging is on
Then we join the group on the Fa0/0 interface and look at what happens.
R2(config)#int f0/0
R2(config-if)#ip igmp join-group 184.108.40.206 source 220.127.116.11
We take a look at the log.
IGMP(0): Received v3 Report for 1 group on FastEthernet0/0 from 18.104.22.168
IGMP(0): Received Group record for group 22.214.171.124, mode 5 from 126.96.36.199 for 1 sources
IGMP(0): Updating expiration time on (188.8.131.52,184.108.40.206) to 180 secs
IGMP(0): Setting source flags 4 on (220.127.116.11,18.104.22.168)
IGMP(0): MRT Add/Update FastEthernet0/0 for (22.214.171.124,126.96.36.199) by 0
PIM(0): Insert (188.8.131.52,184.108.40.206) join in nbr 220.127.116.11's queue
IGMP(0): MRT Add/Update FastEthernet0/0 for (18.104.22.168,22.214.171.124) by 4
PIM(0): Building Join/Prune packet for nbr 126.96.36.199
PIM(0): Adding v2 (188.8.131.52/32, 184.108.40.206), S-bit Join
PIM(0): Send v2 join/prune to 220.127.116.11 (Serial0/0)
IGMP(0): Building v3 Report on FastEthernet0/0
IGMP(0): Add Group Record for 18.104.22.168, type 5
IGMP(0): Add Source Record 22.214.171.124
IGMP(0): Add Group Record for 126.96.36.199, type 6
R2 receives an IGMP report (created by itself), generates a PIM join and sends it to R1. Let's look at how R1 receives it.
PIM(0): Received v2 Join/Prune on Serial0/0 from 188.8.131.52, to us
PIM(0): Join-list: (184.108.40.206/32, 220.127.116.11), S-bit set
PIM(0): RPF Lookup failed for 18.104.22.168
PIM(0): Add Serial0/0/22.214.171.124 to (126.96.36.199, 188.8.131.52), Forward state, by PIM SG Join
Then we verify by looking at the mroute table and by pinging.
R1#sh ip mroute 184.108.40.206 | be \(
(*, 220.127.116.11), 00:09:42/stopped, RP 0.0.0.0, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null
(18.104.22.168, 22.214.171.124), 00:01:49/00:01:40, flags: T
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse, 00:01:49/00:02:39
Now we do a regular ping which should fail since we are not sourcing traffic from the loopback.
R1#ping 126.96.36.199 re 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 188.8.131.52, timeout is 2 seconds:
...
This is expected. What is good about SSM is that it makes sending to groups from any source more difficult, which is nice from a security perspective.
Now we do an extended ping and source from the loopback.
R1#ping
Protocol [ip]:
Target IP address: 184.108.40.206
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: serial0/0
Time to live:
Source address: 220.127.116.11
Type of service:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 18.104.22.168, timeout is 2 seconds:
Packet sent with a source address of 22.214.171.124
Reply to request 0 from 126.96.36.199, 52 ms
Reply to request 1 from 188.8.131.52, 48 ms
Reply to request 2 from 184.108.40.206, 48 ms
Reply to request 3 from 220.127.116.11, 36 ms
Reply to request 4 from 18.104.22.168, 40 ms
So our SSM is working and we didn't even have to enable it on R1! What if we have clients that don't support IGMPv3? Then we could do SSM mapping; I could cover that in another post if there is interest. For now, let's look at filtering. If we were using ASM, we would use a standard ACL to match which multicast groups hosts are allowed to send joins for. The joins would be (*,G), which is the same as host 0.0.0.0 in an ACL.
To filter SSM we use an extended ACL, where the source in the extended ACL is the multicast source and the destination is the group to match. We will create an ACL permitting 22.214.171.124 as the source for the groups 126.96.36.199, 188.8.131.52 and 184.108.40.206. Anything else will be denied, which we will see by debugging IGMP.
When we are doing filtering, it is important to remember that the IGMP report generated by the router itself (ip igmp join-group) will also be subject to the ACL, so make sure to include that.
R2(config)#ip access-list extended IGMP_FILTER
R2(config-ext-nacl)#permit igmp host 220.127.116.11 host 18.104.22.168
R2(config-ext-nacl)#permit igmp host 22.214.171.124 host 126.96.36.199
R2(config-ext-nacl)#permit igmp host 188.8.131.52 host 184.108.40.206
R2(config-ext-nacl)#deny igmp any any
R2(config-ext-nacl)#int f0/0
R2(config-if)#ip igmp access-group IGMP_FILTER
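The matching logic of such an extended ACL can be sketched as a first-match-wins scan over (source, group) pairs, with the implicit deny at the end. The addresses below are hypothetical placeholders, not the lab addresses:

```python
# Each entry: (action, source, group). First match wins; anything that
# falls through is denied, like the implicit deny in an IOS ACL.
ACL = [
    ("permit", "10.0.0.1", "232.1.1.1"),
    ("permit", "10.0.0.1", "232.1.1.2"),
    ("deny", "any", "any"),
]

def igmp_filter(source: str, group: str) -> bool:
    """Return True if an (S,G) join is permitted by the ACL."""
    for action, acl_src, acl_grp in ACL:
        if acl_src in ("any", source) and acl_grp in ("any", group):
            return action == "permit"
    return False  # implicit deny

print(igmp_filter("10.0.0.1", "232.1.1.1"))  # True
print(igmp_filter("10.0.0.1", "232.9.9.9"))  # False
```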
Now we make R3 join a group not allowed and look at the debug output on R2.
R3(config)#int f0/0
R3(config-if)#ip igmp version 3
R3(config-if)#ip igmp join-group 220.127.116.11 source 18.104.22.168
This is from the log on R2.
IGMP(0): Received v3 Report for 1 group on FastEthernet0/0 from 22.214.171.124
IGMP(*): Source: 126.96.36.199, Group 188.8.131.52 access denied on FastEthernet0/0
R2#sh ip access-lists IGMP_FILTER
Extended IP access list IGMP_FILTER
    10 permit igmp host 184.108.40.206 host 220.127.116.11 (6 matches)
    20 permit igmp host 18.104.22.168 host 22.214.171.124
    30 permit igmp host 126.96.36.199 host 188.8.131.52
    40 deny igmp any any (7 matches)
As we can see, that group is not allowed, so the IGMP join will not make it through.
SSM can be very useful and it is not difficult to set up. In fact, it is mostly easier to set up than ASM.
I had some requests for the final configs so I have fixed those. You can download them here. Also I had some issues getting the traffic through but thanks to my helpful readers like zumzum I now have it figured out.
Let's start with R4 since this is the source of the traffic.
R4 wants to send traffic to 184.108.40.206 with a source of 220.127.116.11. We know that route via RIP and the next hop is 18.104.22.168. That network is directly connected (secondary). We need to find out the MAC address of 22.214.171.124 for our ARP entry. R3 has proxy ARP enabled, which is the default. However, it will not respond to R4's ARP request since it does not have the subnet 126.96.36.199/24 connected. R4 must therefore have a static ARP entry. I made an error here earlier by typing in R1's MAC, but this should be the MAC of R3's Fa0/0 since that is the link connecting us. We create the static ARP with arp 188.8.131.52 xxxx.xxxx.xxxx arpa. R4 now has all the info it needs.
The packet travels to R3. R3 does not have a route for 184.108.40.206, so we create a static route: ip route 220.127.116.11 255.255.255.0 18.104.22.168. We also need a static route back to R4 for the 22.214.171.124 IP: ip route 126.96.36.199 255.255.255.0 188.8.131.52.
The packet now travels to R2. R2 also needs to know about 184.108.40.206, so we add a route there as well: ip route 220.127.116.11 255.255.255.0 18.104.22.168. R2 also needs to find its way back to R4, so we add a static route: ip route 22.214.171.124 255.255.255.255 126.96.36.199.
The packet goes to R1, which will respond. It will send the packet out Fa0/0. R1 needs to know the MAC address for 188.8.131.52. R2 has proxy ARP enabled, so it will reply with its own MAC address. R1 will insert this into its ARP cache and adjacency table, and then we are good to go.
So besides learning a multicast feature, we also got to practice creating connectivity in an unusual way and thinking through the whole packet flow.
The multicast helper map is an interesting feature. It can be used in scenarios where we want to transport broadcasts. Routers don't forward broadcasts by default, but we can convert them to multicast, transport them across our network and then convert them back to broadcast. It might not be that common in real life if you don't work at a stock exchange, but it is fair game for the lab and a topic we should not be surprised to see there.
So the basic idea is to convert broadcast packets, transport them as multicast and then convert them back to broadcast. First, let's look at our topology.
The idea here is to take broadcast coming from R1 in on R2's Fa0/0, convert it to multicast, transport it to R3, which then converts it back to broadcast and sends it out to R4. Using this technique we can actually exchange routes between non-adjacent routers. Pretty cool, right?
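The rewrite the helper map performs on R2 can be sketched as a simple match-and-rewrite on the destination: a broadcast to a matched UDP port (RIP uses 520) gets its destination swapped for the configured group. The group address here is a hypothetical placeholder:

```python
def helper_map(dst_ip: str, udp_port: int, mcast_group: str,
               rip_port: int = 520) -> str:
    """Sketch of what the multicast helper map does on the ingress
    router: broadcasts to a matched UDP port (RIP = 520) have their
    destination rewritten to the configured multicast group."""
    if dst_ip == "255.255.255.255" and udp_port == rip_port:
        return mcast_group
    return dst_ip  # everything else is left untouched

print(helper_map("255.255.255.255", 520, "239.1.1.1"))  # 239.1.1.1
print(helper_map("255.255.255.255", 53, "239.1.1.1"))   # unchanged
```

On the egress router (R3 here) the same idea runs in reverse, with the matched group rewritten to the directed broadcast address of the outgoing segment.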
You can download a .net file and initial configs for the routers here.
This is our assigned task:
Create a Loopback99 interface on R1 and assign it an IP address of 184.108.40.206/24; advertise this network on R1 using RIP v1 via FastEthernet0/0.
Configure R4 to receive the RIP advertisements from R1. You must use a multicast solution for this. In other words you may not configure solutions involving tunnel interfaces, enable RIP elsewhere than R1 and R4, use bridging, IPsec, magic, etc.
· Use PIM dense mode only.
· You may use a secondary IP address of 220.127.116.11/24 as part of your solution.
· Use access list 120 for any lists you need.
· You do not need to be able to ping 18.104.22.168 from R4.
· Use the multicast group 22.214.171.124
So we start by configuring R1. RIP version 1 is broadcast only and it does not include the subnet mask. All we need to do on R1 is configure RIP. We simply announce the network and leave the default settings, which are to announce with version 1 and receive version 1 or 2. All the magic will happen on R2 and R3.
Then we proceed to configure R2. This is where most of the magic happens. We need to enable PIM dense mode on both Fa0/0 and S0/0. To be able to process switch the UDP RIP packets coming in we need to configure ip forward-protocol udp rip. Then we will configure the multicast helper map. Let's have a look.
So broadcast coming in on Fa0/0 matching access-list 120 is converted to multicast with a destination of 126.96.36.199. We are running PIM dense mode so later we should be able to see (S,G) entries in the mroute table.
We proceed to configure R3. The config will be very similar to R2.
On R3 we convert 188.8.131.52 back to the broadcast address of the segment between R3 and R4. We need to turn on directed broadcasts, otherwise we will not be allowed to send this packet out.
On R4 we simply turn on RIP. We were allowed to create a second subnet, but let's wait with that.
So now everything should be running. However, we will have some issues. I am not running any IGP between R2 and R3, so R3 will not know how to reach the source 184.108.40.206 and we will have an RPF failure, which you can see below.
We can also see this if we check the mroute counters.
So how can we fix the RPF failure? We have a few different options: run an IGP between R2 and R3, add a static route, or add a static mroute. This time I chose a static mroute because it is easy. Let's add that.
Now the traffic should be reaching R4. However, the RIP route will not be installed. This is because RIP validates the update source: if we receive an update from a source that is not locally connected, the update will not be accepted. We can configure RIP to not validate the update source, or we can configure a secondary IP address in the same subnet as the source (we were allowed to do this according to the task).
Now we have the route installed. What will the multicast routing table look like? Let's look at R2 and R3. Since we are running dense mode we should only have (S,G) entries.
R2 has the (S,G) entry as expected. It has the T flag set since this is an SPT. We don't have an incoming RPF neighbor since R1 is not running PIM.
Now we will look at R3.
We have R2 as the incoming RPF neighbor, with RPF done via the static mroute. We have no outgoing interface. This is usually bad, but since we are converting back to broadcast it is OK here. If we debug mpacket we will see an error message that the OIL is null.
We have completed our task, and if this were the lab we would stop here. To take it a step further, let's think about what we would need for reachability from R4 to R1's loopback.
As we can see, route recursion fails since we don't know how to reach 220.127.116.11. Earlier we used a hack to not validate the update source. Let's remove this and add the secondary subnet to R4 instead.
Now the route recursion is working. R4 is using outgoing interface Fa0/0. This is Ethernet, so we need to encapsulate the packet and send it to R1's MAC address. Usually we would send an ARP request, but the devices are not locally connected. Let's try adding a static ARP entry.
Traffic should now be able to reach R3. R3 will not know how to reach 18.104.22.168 though, so I'll add static routes on R3 and R2.
Finally I’ll also add a static ARP entry on R1.
Now let's try a ping. It didn't work. No matter how I tried, adding static ARP entries and the correct routes, I could not get the ping to succeed. I even tried sourcing traffic from 22.214.171.124 on R4, but that did not work either. I could see the traffic leaving R4 but not entering R3. Very strange. If you have any suggestions, post them in the comments section.
Almost all of us know how route selection works in unicast routing: the longest match wins, and if the prefix lengths are the same, administrative distance (AD) decides, and so on.
I have noticed that there are some misconceptions about the equivalent of ip route in multicast, the ip mroute command. One would assume it works the same way as ip route, but it does not.
First, multicast routing is pretty much the opposite of regular routing, since we care about the source of the packet instead of the destination. In multicast we check that incoming packets arrive on the interface we would use to send traffic back to the source, the RPF check. It's basically the same check that can be used for security reasons to prevent spoofing.
Sometimes we have asymmetric routing, meaning the source sends traffic in on an interface we are not using to send traffic back to the source. This means we will have an RPF failure and the traffic will be dropped. We can solve this by reconfiguring the IGP or putting in a static mroute.
So many people think, "Oh, I'll just put in an mroute and traffic MAY come in on that interface as well." The error here is that the mroute says traffic MUST come in on that interface. So it might not be the quick fix you thought it would be. Another common error is to use a broad mask or a default route like 0.0.0.0 0.0.0.0. In regular routing anything with a longer match would override the static default, but the static default mroute takes precedence over IGP routes. The reason is that its AD is lower, and longest match does not apply across the two sources. So maybe you had problems with one source before, and now you have problems with all sources except the one you just fixed.
If you put in several mroute statements, longest match applies among them since the AD is the same. The easiest way of checking which interface is the RPF interface for a source is the show ip rpf command. It shows the RPF interface and where the information was sourced from (IGP or static).
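The lookup order described above can be sketched as two separate tables, where the static mroute table is consulted first (it wins on AD) and longest match only applies *within* each table. The prefixes and interfaces are hypothetical:

```python
import ipaddress

# Hypothetical tables. Static mroutes beat IGP routes on AD, so they
# are checked first; longest match is only compared within a table.
MROUTES = [("0.0.0.0/0", "Serial0/0")]          # static default mroute
IGP = [("10.1.1.0/24", "FastEthernet0/0"),      # unicast routing table
       ("0.0.0.0/0", "Serial0/1")]

def rpf_interface(source: str) -> str:
    """Return the RPF interface for a multicast source."""
    src = ipaddress.ip_address(source)
    for table in (MROUTES, IGP):
        hits = [(ipaddress.ip_network(p), intf) for p, intf in table
                if src in ipaddress.ip_network(p)]
        if hits:
            # longest match within this table only
            return max(hits, key=lambda h: h[0].prefixlen)[1]
    raise LookupError("no route to source")

# The default mroute wins even though the IGP holds a /24 for the source:
print(rpf_interface("10.1.1.5"))  # Serial0/0
```

This is exactly the surprise described above: a 0.0.0.0/0 mroute silently overrides every more specific IGP route for RPF purposes.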
I hope this post has cleared some of the common misconceptions of the static mroute.
Added some multicast content to the flash deck. There is now a total of 151 questions in the deck. The next section that will be added when I have finished the labs is MPLS VPN. As usual, the deck is here.
In unicast routing we are interested in how to forward packets to their destination. In multicast routing we are interested in where the packet came from: multicast packets need to pass an RPF (Reverse Path Forwarding) check. Packets received on an interface are checked against the route back to the source; if that route is not through the same interface, the RPF check fails. RPF failure is one of the most common errors in multicast networks. Let's look at the topology.
The goal of the scenario is that R6 should be able to ping SW4, which has joined multicast group 126.96.36.199. PIM dense mode has been enabled from R6 to R4; between R4 and R5, PIM is only enabled on the frame-relay connection. PIM is enabled between R5 and SW2 and between SW2 and SW4. Dense mode is being used. This is the configuration from SW4.
Ping from R6 to SW4.
Not successful. Let's look at what multicast packets are being sent. We need to disable fast switching on the interface to see any packets.
Packets are not coming in on the RPF interface. Let's look at the multicast routing table.
We are interested in (188.8.131.52, 184.108.40.206), which is a dense mode group. Our RPF neighbor is 220.127.116.11, which is the address of R4 on the S0/1/0 interface, an interface not enabled for PIM. How do we reach 18.104.22.168?
Traffic to R6 is sent over the S0/1/0 interface, which is not enabled for PIM. This is a problem. How can we pass the RPF check? By adding a static mroute we can make the frame-relay interface a valid RPF interface.
The ping should now be successful.
Traffic is flowing; one final look at the mroute table.
The RPF neighbor is now 22.214.171.124, which is the next hop over frame-relay. When doing multicast we need to think more about traffic patterns and ensure that interfaces in the multicast transit path are PIM-enabled, or not used for multicast traffic.