Messin' Around With Multicast Boundary

December 30, 2021

Below we'll walk through a few multicast boundary commands and see how they behave, using this topology:

R5—R6—R1—R2—R3—R4
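
Not shown in the original configs, but assumed throughout: multicast routing is enabled globally and PIM sparse mode runs on the transit links. On R3, for example, that baseline would look something like this (the interface toward R2 is a guess):

ip multicast-routing
!
interface Serial1/0
 ip pim sparse-mode
!
interface Serial1/1
 ! hypothetical interface toward R2
 ip pim sparse-mode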

R1 = Auto-RP mapping agent (MA) and RP for 232/8, 233/8, and 234/8
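
R1's configuration isn't shown, but a minimal Auto-RP setup matching the mappings below would look roughly like this (the ACL number and TTL scope are assumptions):

! R1: announce itself as RP for 232/8, 233/8 and 234/8, and act as mapping agent
access-list 10 permit 232.0.0.0 0.255.255.255
access-list 10 permit 233.0.0.0 0.255.255.255
access-list 10 permit 234.0.0.0 0.255.255.255
!
ip pim send-rp-announce Loopback0 scope 16 group-list 10
ip pim send-rp-discovery Loopback0 scope 16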

R4#show ip pim rp mapping 
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
  RP 1.1.1.1 (?), v2v1
    Info source: 1.1.1.1 (?), elected via Auto-RP
         Uptime: 00:10:08, expires: 00:02:48
Group(s) 233.0.0.0/8
  RP 1.1.1.1 (?), v2v1
    Info source: 1.1.1.1 (?), elected via Auto-RP
         Uptime: 00:10:08, expires: 00:02:47
Group(s) 234.0.0.0/8
  RP 1.1.1.1 (?), v2v1
    Info source: 1.1.1.1 (?), elected via Auto-RP
         Uptime: 00:10:08, expires: 00:02:46
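
Since the links run plain sparse mode, the Auto-RP groups 224.0.1.39 and 224.0.1.40 need dense-mode flooding to reach every router. Presumably something like this global command is configured everywhere (an assumption, since it isn't shown):

! Flood the Auto-RP groups in dense mode even on sparse-mode-only interfaces
ip pim autorp listener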

R4 has the following on interface Loopback0:

interface Loopback0
 ip address 4.4.4.4 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 233.0.0.1
 ip igmp join-group 234.0.0.1
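
To confirm the joins took effect, a couple of quick checks on R4 (commands only, output omitted):

R4#show ip igmp groups Loopback0
R4#show ip mroute 233.0.0.1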

R3 has set up a multicast boundary as follows:

access-list 1 permit 232.0.0.0 0.255.255.255
access-list 1 permit 233.0.0.0 0.255.255.255
 
interface Serial1/0
 ip address 192.168.34.3 255.255.255.0
 ip pim sparse-mode
 ip multicast boundary 1

With the boundary in place, R3 only allows multicast traffic and PIM joins for groups in 232/8 or 233/8 across Serial1/0. R4's join for 234.0.0.1 is filtered, so R3 never builds state for it:

R3#sh ip mroute 234.0.0.1
Group 234.0.0.1 not found

Now let's ping 233.0.0.1, a group the boundary permits:

R6#ping 233.0.0.1 re 100

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:
......................................................................
..........

Remember, we only permitted two group ranges. What does Auto-RP use to propagate its discovery messages? Group 224.0.1.40! So even if traffic to 233.0.0.1 flows for a while after you enable the boundary, R3 will eventually lose state for the Auto-RP discovery group, R4 will lose its RP information, and all multicast traffic will then fail the RPF check.
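
The fix is one more permit entry, for the Auto-RP discovery group:

R3(config)#access-list 1 permit 224.0.1.40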

The ACL on R3 now looks like this:

R3#sho run | inc access
access-list 1 permit 224.0.1.40
access-list 1 permit 233.0.0.0 0.255.255.255
access-list 1 permit 232.0.0.0 0.255.255.255

224.0.1.39 is the group the mapping agents listen to for RP announcements; there is no MA on R4's side of the boundary, so we don't need to worry about it for this example.
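
If a mapping agent did sit on the far side of the boundary, the announce group would need permitting as well (a hypothetical addition, not needed in this topology):

! Only needed if an Auto-RP mapping agent lives beyond the boundary
access-list 1 permit 224.0.1.39

With 224.0.1.40 permitted, the ping now succeeds: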

R6#ping 233.0.0.1 re 2  

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.34.4, 212 ms
Reply to request 0 from 192.168.34.4, 216 ms
Reply to request 1 from 192.168.34.4, 184 ms
Reply to request 1 from 192.168.34.4, 184 ms

Why should R4 even know about the RP for 234/8 if R3 is going to prevent mroute state from being created for 234.0.0.1 on that interface? It would be cleaner to stop R4 from learning that RP information in the first place. On R3 we can modify the boundary as follows:

R3(config)#int s1/0               
R3(config-if)#ip multicast boundary 1 filter-autorp

Now R3 filters the Auto-RP discovery packets themselves and forwards RP information only for the group ranges permitted by the ACL. (An Auto-RP group range is passed only if the boundary ACL permits the entire range, which is why the /8 entries work here.)

R4#show ip pim rp mapping 
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
  RP 1.1.1.1 (?), v2v1
    Info source: 1.1.1.1 (?), elected via Auto-RP
         Uptime: 00:00:03, expires: 00:02:55
Group(s) 233.0.0.0/8
  RP 1.1.1.1 (?), v2v1
    Info source: 1.1.1.1 (?), elected via Auto-RP
         Uptime: 00:00:03, expires: 00:02:53
R4#
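
As a final sanity check (commands only, output omitted), traffic to 234.0.0.1 should still fail end to end, and R3 should still have no state for the group:

R6#ping 234.0.0.1 repeat 2
R3#show ip mroute 234.0.0.1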

Thanks for reading the blog!
