CCIE: Multicast
Preface: To clear old entries in the multicast table, use "clear ip mroute *". This command usually allows changes to sync, but not always; in the worst-case scenario you may have to reload the device. Modifications to a working multicast environment are not recommended if you cannot interrupt traffic forwarding. Be sure to schedule a maintenance window in a REAL production environment.
PIM: Signaling protocol that uses the unicast routing table to perform RPF checks.
Dense mode: Flood to all multicast-enabled interfaces and let downstream routers prune back. A pruned interface is excluded from the OIL for that group. This results in excessive flooding, because the prune state expires in 3 minutes by default and flooding out that interface then resumes. It is a plug & play method, but it is not scalable. State Refresh is enabled by default to send control messages (every 60 seconds) and keep interfaces pruned where necessary.
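A minimal dense-mode sketch, assuming a hypothetical Fa0/0 facing the downstream routers:
ip multicast-routing (global prerequisite before any PIM configuration)
interface fa0/0
ip pim dense-mode (floods every group out this interface until a prune is received)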
RPF Failure: This is one of the most common issues in a multicast routing domain, typically when the unicast shortest path does not match the multicast distribution tree. A simple fix is a static mroute: "ip mroute 0.0.0.0 0.0.0.0 x.x.x.x" (pointing at the correct RPF neighbor).
Be very careful with static mroutes in a multicast environment, as they change the local router's perception of the shortest path toward the source.
In the output of "sh ip mroute", look for an (S,G) entry whose incoming interface (towards the source) is Null. This is an indication that the multicast path is different from the unicast route table entry for the source.
To see the actual packets with "debug ip mpacket", enable process switching on the multicast interface with "no ip mroute-cache"; fast-switched packets do not show up in the debug output.
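A rough troubleshooting sequence for an RPF failure, with 10.1.1.1 standing in for the source, 239.1.1.1 for the group, and 172.16.0.2 for the desired upstream neighbor (all hypothetical):
show ip rpf 10.1.1.1 (which interface/neighbor passes the RPF check, and whether it comes from the unicast table or a static mroute)
show ip mroute 239.1.1.1 (look for Incoming interface: Null)
ip mroute 10.1.1.0 255.255.255.0 172.16.0.2 (static mroute pointing RPF at the correct neighbor)
no ip mroute-cache (on the incoming interface, so packets are process switched)
debug ip mpacket (RPF drops should now be visible)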
PIM Assert Message vs. PIM DR:
This is something that took me some time to fully understand. On a multi-access (LAN) network, one router may win the Assert process while another becomes the IGMP querier (the PIM DR handles queries with IGMPv1; with IGMPv2 the querier is elected). The Assert winner is responsible for forwarding multicast onto the LAN, and the IGMP querier is responsible for managing the IGMP process and sending IGMP query messages on the LAN.
IGMPv1 had no querier election, so it relied on the PIM DR to send queries.
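A few hedged commands for sorting out who holds which role on a segment (interface and priority values are only examples):
show ip pim interface (the DR column shows the PIM DR per interface)
show ip igmp interface fa0/0 (shows the current IGMP querier and version)
debug ip pim (Assert exchanges appear here when two routers forward onto the same LAN)
interface fa0/0
ip pim dr-priority 200 (PIMv2: highest priority wins the DR election, highest IP breaks ties)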
Auto-RP:
interface loopback 0
ip pim sparse-dense-mode (dense mode is required to flood the Auto-RP groups 224.0.1.39 and 224.0.1.40); make sure to use "no ip pim dm-fallback" in a live network. You could also define a static RP for the Auto-RP groups with the "override" option. The best option for Auto-RP is sparse mode with "ip pim autorp listener".
ip pim send-rp-announce loopback 0 scope 12 (cRP)
ip pim send-rp-discovery loopback 0 scope 12 (Mapping Agent); the MA selects the best RP for each group range (highest IP address wins)
Negative ACL: a "deny" entry in the group-list causes that group range to fall back to dense mode. Effectively, a single cRP announcing "deny any" could cause all groups to be treated as dense, because negative/deny entries are processed first.
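As a hedged illustration (ACL number and range are made up), a cRP announcing only a permitted range looks like this; swapping the permit for a "deny any" is what would push groups toward dense mode:
access-list 10 permit 239.1.0.0 0.0.255.255
ip pim send-rp-announce loopback 0 scope 12 group-list 10 (cRP for the 239.1.0.0/16 range only)
no ip pim dm-fallback (safety net so an accidental deny does not turn the domain dense)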
Filter Auto-RP messages with TTL scoping (a low TTL threshold at the boundary) or with "ip multicast boundary". The multicast boundary filters at the control plane (PIM/IGMP/Auto-RP) and the data plane (multicast route state). With IOS 12.3(17)T and higher, the in/out keywords are available: in affects the control plane and out affects the data plane.
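A sketch of a boundary at the domain edge, assuming Fa0/1 faces the neighboring domain and ACL 20 is arbitrary:
access-list 20 deny 224.0.1.39 (Auto-RP announce group)
access-list 20 deny 224.0.1.40 (Auto-RP discovery group)
access-list 20 permit 224.0.0.0 15.255.255.255 (everything else)
interface fa0/1
ip multicast boundary 20 filter-autorp (filter-autorp also strips denied ranges out of Auto-RP packets)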
Bootstrap Router (BSR):
Standards-based for PIMv2; unlike Auto-RP, it does not rely on any dense-mode groups.
Configure a candidate RP with "ip pim rp-candidate <interface> [group-list <acl>] [interval <seconds>] [priority <value>]".
Configure a candidate BSR (analogous to the Auto-RP mapping agent) with "ip pim bsr-candidate <interface> [hash-mask-length] [priority]".
Filtering BSR: block BSR messages from crossing with "ip pim bsr-border" on the edge of the multicast domain.
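Pulling the BSR pieces together, a hedged example (mask length, priority, and ACL values are arbitrary):
ip pim bsr-candidate loopback 0 32 100 (candidate BSR, hash-mask-length 32, priority 100)
ip pim rp-candidate loopback 0 group-list 30 priority 10 (candidate RP for the ranges in ACL 30)
access-list 30 permit 239.0.0.0 0.255.255.255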
Stub Multicast (IGMP Helper):
Head-end/Hub runs PIM sparse mode. The remote/stub runs dense mode and acts as a "dumb" packet forwarder, relaying IGMP reports upstream with an IGMP helper address.
R1: Stub/Remote
int fa0/0
ip pim dense-mode
ip igmp helper-address 10.0.0.5
int ser 0/0/0
ip pim dense-mode
ip add 10.0.0.1 255.255.255.0
R5: Hub
int ser 0/0/0
ip pim sparse-mode
ip add 10.0.0.5 255.255.255.0
ip pim neighbor-filter 7
access-list 7 deny 10.0.0.1
access-list 7 permit any
SW1: Client side
int fa 0/1
ip pim dense-mode
ip pim neighbor-filter 8 (prevents R1 and SW1 from becoming PIM neighbors)
access-list 8 deny any
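A few hedged checks once the stub design above is in place (group address is just an example):
show ip pim neighbor (on R5; R1 should be absent because of neighbor-filter 7)
show ip igmp groups (on R5; joins relayed by R1's helper-address appear as if the hosts were directly connected)
show ip mroute 239.1.1.1 (on R1; dense-mode state only, the stub never builds a shared tree toward an RP)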
IGMP Timers:
Reports are sent asynchronously, so some might be missed by the router. On a shared segment, one IGMP querier is elected and sends membership queries to hosts. The lowest IP address wins the election, which is confusing because the PIM DR is elected by highest IP. The default query interval is 60 seconds and the querier timeout is 2x that value (120 seconds). IGMPv1 has no Leave Group message, which introduces leave latency.
“ip igmp querier-timeout”
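If the defaults need tuning, a hedged interface-level example (values are only for illustration):
interface fa0/0
ip igmp version 2 (v2 adds the Leave Group message and querier election)
ip igmp query-interval 30 (default 60 seconds)
ip igmp querier-timeout 60 (default is 2x the query interval)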
MTRACE: Traces from the leaf back to the root. Example: "mtrace 150.17.10.10 239.1.1.1" (leaf, group); the output traces back to the root (RP). Perform this on the RP.