
Chapter 9. Configuring and Verifying Multicast Routing on Juniper Networks Routers

9.1 Configuring IGMP and PIM

The following sections describe how to configure and manage JUNOS software to support multicast routing within a domain.

9.1.1 Enabling Interfaces for IGMP and PIM

Without any configuration, none of the router's interfaces is enabled for PIM or IGMP. Enabling PIM on the router automatically enables IGMP on all LAN interfaces. Use the following configuration to enable IGMP and PIM-SM with the version 2 packet format on all nonmanagement interfaces:

protocols {
    pim {
        interface all {
            mode sparse;
            version 2;
        }
        interface fxp0.0 {
            disable;
        }
    }
}

The show igmp interface command displays the IGMP timers now in effect on the enabled interfaces:

IGMP Query Response Interval (1/10 secs): 100
IGMP Last Member Query Interval (1/10 secs): 10
IGMP Robustness Count: 2

Derived Parameters:
IGMP Membership Timeout (usecs): 260000000
IGMP Other Querier Present Timeout (usecs): 255000000

Notice that enabling PIM on the t3-1/0/0.0 interface does not automatically enable IGMP because this interface is a point-to-point interface and is most likely not connected to any end hosts that could become group members. If a point-to-point interface needs to speak IGMP, it can be explicitly enabled.
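As a sketch, the t3-1/0/0.0 interface mentioned above could be enabled for IGMP explicitly with a configuration along these lines:

```
protocols {
    igmp {
        interface t3-1/0/0.0;
    }
}
```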

Even if a router is attached to a LAN that has only IGMP-speaking hosts and no other PIM-speaking routers, the interface connected to that LAN must still run PIM for the router to function properly. If PIM is disabled on an interface, IGMP still shows the interface as Up, but group membership reports will not be processed correctly. Use the show igmp group command to display the groups joined by directly connected hosts.

user@m20-a> show igmp group

By default, Juniper Networks routers assume the SSM range is 232.0.0.0/8, but you can configure the SSM range to be something other than 232.0.0.0/8. For example, to configure the SSM range to be 235.0.0.0/8, use the following configuration:
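A sketch of such a configuration, using the ssm-groups statement found at the [edit routing-options multicast] hierarchy in later JUNOS releases (the exact hierarchy may vary by release):

```
routing-options {
    multicast {
        ssm-groups {
            235.0.0.0/8;
        }
    }
}
```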

With this configuration, shared tree behavior is prohibited for groups configured as SSM groups. The router rejects any control messages for the groups in the SSM range that are not source-specific. Examples of messages that would be discarded for groups in the SSM range are IGMP group-specific reports and PIM (*,G) Joins.

In the output of the following command, notice that ASM groups have 0.0.0.0 in the Source column, whereas SSM groups have the IP address of the source.

user@m20-a> show igmp group

9.1.3 The Tunnel PIC and the pe and pd Interfaces

On Juniper Networks routers, the encapsulation and decapsulation of data packets into tunnels (GRE or IP-in-IP) is executed in hardware. The data packets never touch the RE. Most other vendors' routers perform this function in the software running on the route processor. The advantage of doing it in hardware in the PFE is that packets can be encapsulated or decapsulated and forwarded at a much faster rate. Additionally, because the RE does not have to process these packets, routing control processes are not adversely affected.

In order to create tunnel interfaces on a Juniper Networks router, a Tunnel PIC must be installed. The Tunnel PIC serves as a placeholder for the packet memory in the FPC. If a native data packet comes into the router and its next hop resolves to a tunnel interface, the packet is forwarded by the packet-switching board to the Tunnel PIC. The packet is encapsulated in the appropriate header, and the Tunnel PIC loops it back to the packet-switching board. Then it is forwarded out a physical interface based on the destination IP address in the tunnel header.
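Tunnel interfaces are named after the position of the Tunnel PIC; for example, a Tunnel PIC in FPC slot 0, PIC slot 3 provides the gr-0/3/0 GRE interface. A minimal GRE sketch (the tunnel endpoints and the interface address here are hypothetical) looks like this:

```
interfaces {
    gr-0/3/0 {
        unit 0 {
            tunnel {
                source 10.0.1.1;
                destination 10.0.1.2;
            }
            family inet {
                address 10.9.9.1/30;
            }
        }
    }
}
```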

PIM Register messages encapsulate and decapsulate data packets similarly to GRE tunnels. The PIM Register function is also performed in hardware. A Tunnel PIC is required if a router is going to encapsulate or decapsulate data packets into or out of Register messages. Thus, all RPs and all PIM-SM designated routers (DRs) that are directly connected to a source require a Tunnel PIC.

To determine whether a Tunnel PIC is installed, use the following command:

user@m20-b> show chassis hardware | match "fpc|tunnel"

FPC 0            REV 01   710-001292   AB4751
  PIC 3          REV 01   750-002982   HF2515   1x Tunnel
FPC 2            REV 09   710-000175   AA4843

The preceding output shows that a Tunnel PIC is installed in PIC slot 3 of the FPC in FPC slot 0. If PIM-SM is configured and this router is serving as an RP, a virtual interface named pd (which is short for "PIM Register decapsulation interface") is created. The following two commands show that this router is acting as an RP:

user@m20-b> show pim rps

RP address       Type       Holdtime  Timeout  Active groups  Group prefixes
10.0.1.1         bootstrap  150       None     0              224.0.0.0/4
10.0.1.1         static     0         None     0              224.0.0.0/4

user@m20-b> show interfaces terse | match 10.0.1.1

lo0.0 up up inet 10.0.1.1 --> 0/0

The following command shows the newly created pd interface that is used for decapsulating Register messages. The 0/3/0 indicates the location of the Tunnel PIC.

user@m20-b> show pim interfaces

Name              Stat  Mode    V  State  Priority  DR address  Neighbors
ge-0/0/0.0        Up    Sparse  2  DR     1         10.0.5.2    0
lo0.0             Up    Sparse  2  DR     1         10.0.1.1    0
pd-0/3/0.32768    Up    Sparse  2  P2P    0

If a router running PIM-SM is using another router as an RP and it has a Tunnel PIC installed, it will create a virtual interface named pe (which is short for "PIM Register encapsulation interface"). The pe interface does not appear until the RP is discovered. The 4/1/0 in the following output indicates the position of the Tunnel PIC:

user@m40-a> show pim rps

RP address       Type       Holdtime  Timeout  Active groups  Group prefixes
10.0.1.1         bootstrap  150       108      0              224.0.0.0/4

user@m40-a> show chassis hardware | match "fpc|tunnel"

FPC 1 REV 10 710-000175 AA7674

To configure a router to be a statically defined RP, use the following configuration, where 10.0.1.1 is the IP address of one of this router's interfaces (preferably lo0.0):

protocols {
    pim {
        rp {
            local {
                address 10.0.1.1;
            }
        }
    }
}

To configure a non-local RP address for an RP that expects PIMv2 Register messages, use the following configuration:
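A sketch of such a static RP configuration, assuming the non-local RP address is 10.0.1.2:

```
protocols {
    pim {
        rp {
            static {
                address 10.0.1.2 {
                    version 2;
                }
            }
        }
    }
}
```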

By default, both the rp local and rp static statements set the RP for all groups in the 224.0.0.0/4 range. It is possible to specify a more specific group range for an RP. For example, the following configuration sets the local router to be RP for groups in 224.0.0.0/5 and a nonlocal RP address (10.0.1.2) for groups in the 234.0.0.0/8 group range:

protocols {
    pim {
        rp {
            local {
                address 10.0.1.1;
                group-ranges {
                    224.0.0.0/5;
                }
            }
            static {
                address 10.0.1.2 {
                    group-ranges {
                        234.0.0.0/8;
                    }
                }
            }
        }
    }
}

9.1.5 Configuring the PIM Bootstrap Mechanism

To configure the bootstrap mechanism in a PIM-SM domain, select which routers are candidate RPs and which routers are candidate BSRs. A single router can be both a candidate RP and candidate BSR. You need at least one candidate RP and one candidate BSR for the bootstrap mechanism to work. All the interfaces within the PIM-SM domain must run PIM version 2.

To configure a router as a candidate RP, use the following configuration:

protocols {
    pim {
        rp {
            local {
                address 10.0.1.1;
            }
        }
    }
}

To configure a router as a candidate BSR, use the following configuration:

protocols {
    pim {
        rp {
            bootstrap-priority 1;
        }
    }
}

The value of the bootstrap-priority can be 0–255. The default is 0, which means that the router is not a candidate BSR. Turning on PIM tracing with the option of flagging RP-related messages on a router that is the elected BSR yields the following messages:

Apr 5 06:26:36 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:27:32 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:27:36 PIM SENT 10.0.5.1 -> 224.0.0.13 V2 Bootstrap sum 0x69ed len 36
Apr 5 06:27:36 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:28:34 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:28:36 PIM SENT 10.0.5.1 -> 224.0.0.13 V2 Bootstrap sum 0x2ded len 36
Apr 5 06:28:36 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:29:34 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22
Apr 5 06:29:36 PIM SENT 10.0.5.1 -> 224.0.0.13 V2 Bootstrap sum 0xf1ec len 36
Apr 5 06:29:36 PIM RECV 10.0.1.1 -> 10.0.2.1 V2 CandidateRP sum 0xe963 len 22

The interfaces that connect to other PIM-SM domains should have BSR filters configured to prevent BSR messages from leaking across domain boundaries. The following configuration prevents BSR messages from entering or leaving the router through interface so-0/0/0:

policy-options {
    policy-statement bsr-import-filter {
        from interface so-0/0/0.0;
        then reject;
    }
    policy-statement bsr-export-filter {
        from interface so-0/0/0.0;
        then reject;
    }
}
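These policies take effect only once they are applied under PIM; a likely application, using the bootstrap-import and bootstrap-export statements at the [edit protocols pim rp] hierarchy, is:

```
protocols {
    pim {
        rp {
            bootstrap-import bsr-import-filter;
            bootstrap-export bsr-export-filter;
        }
    }
}
```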

9.1.6 Configuring Auto-RP

The steps to configure auto-RP are as follows:

1. On all routers, configure sparse-dense mode on all PIM-enabled interfaces:

protocols {
    pim {
        interface all {
            mode sparse-dense;
            version 2;
        }
    }
}

2. On all routers, configure 224.0.1.39 and 224.0.1.40 as dense groups. Only groups that are configured as dense groups are forwarded according to dense mode operation.

protocols {
    pim {
        dense-groups {
            224.0.1.39/32;
            224.0.1.40/32;
        }
    }
}

In a PIM domain running auto-RP, each router falls into one of four categories. The following lists each category along with its configuration:

o Discovery: Listen for auto-RP Mapping messages:

protocols {
    pim {
        rp {
            auto-rp discovery;
        }
    }
}

o Announce-only: Transmit auto-RP Announcement messages and listen for auto-RP Mapping messages:

protocols {
    pim {
        rp {
            local {
                address 10.0.1.1;
            }
            auto-rp announce;
        }
    }
}

o Mapping-only: Listen for auto-RP Announcement messages, perform the RP-to-group mapping function, and transmit auto-RP Mapping messages:

protocols {
    pim {
        rp {
            auto-rp mapping;
        }
    }
}

o Announce and mapping: Perform the combined tasks of both the announce-only and mapping-only categories:

protocols {
    pim {
        rp {
            local {
                address 10.0.1.1;
            }
            auto-rp mapping;
        }
    }
}

The following command shows the RPs learned through auto-RP as well as a static entry if the router is configured as a local RP:

user@m20-b> show pim rps

RP address       Type     Holdtime  Timeout  Active groups  Group prefixes
10.0.1.1         auto-rp  150       148      2              224.0.0.0/4
10.0.1.1         static   0         None     2              224.0.0.0/4

A domain running auto-RP needs at least one router providing the announce functionality and one router providing the mapping functionality. The same router can provide both announce and mapping functions. Here is a sample of messages in a PIM trace file on a router configured for announce-only:

Jan 30 04:59:21 PIM RECV 10.0.2.1+496 -> 224.0.1.40 AutoRP v1 mapping hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4
Jan 30 04:59:25 PIM SENT 10.0.1.1 -> 224.0.1.39+496 AutoRP v1 announce hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4
Jan 30 04:59:25 PIM RECV 10.0.1.1+496 -> 224.0.1.39 AutoRP v1 announce hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4
Jan 30 05:00:19 PIM RECV 10.0.2.1+496 -> 224.0.1.40 AutoRP v1 mapping hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4
Jan 30 05:00:24 PIM SENT 10.0.1.1 -> 224.0.1.39+496 AutoRP v1 announce hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4
Jan 30 05:00:24 PIM RECV 10.0.1.1+496 -> 224.0.1.39 AutoRP v1 announce hold 150 rpcount 1 len 20
    rp 10.0.1.1 version 2 groups 1 prefixes 224.0.0.0/4

At the boundaries of a domain running auto-RP, the auto-RP control messages should be administratively scoped, which prevents them from leaking into other domains. Use the following configuration to prevent auto-RP leaking:

routing-options {
    multicast {
        scope auto-rp-announce {
            prefix 224.0.1.39/32;
            interface t3-5/2/0.0;
        }
        scope auto-rp-discovery {
            prefix 224.0.1.40/32;
            interface t3-5/2/0.0;
        }
    }
}

The following command shows that the 224.0.1.39 group is administratively scoped on interface t3-5/2/0.0:

user@m40-a> show pim join extensive

Group            Source          RP              Flags
224.0.1.39       10.0.2.1                        dense
Upstream interface: local
Downstream interfaces:
    local
    t3-5/2/0.0 (Administratively scoped)
    ge-4/2/0.0

9.1.7 Configuring Anycast RP

Anycast RP can use any RP-set mechanism (that is, static, bootstrap, or auto-RP) to distribute the RP-to-group mapping. Anycast RP with static group-to-RP mapping is the most common strategy used by ISPs because it provides load balancing and redundancy in the simplest and most intuitive manner. Anycast RP requires additional configuration only on the routers that are serving as the RPs. All other routers are configured with the standard static RP configuration pointing to the shared anycast address.

To configure anycast RP, perform the following steps on each RP:

1. Configure two addresses on the lo0.0 interface. One is unique to the router, and the other is the shared anycast address. Make the unique address the primary address for the interface to ensure that protocols such as OSPF and BGP select it as the router ID. (Here 10.0.1.1 is the unique address and 10.0.1.100 is a sample shared anycast address.)

interfaces {
    lo0 {
        unit 0 {
            family inet {
                address 10.0.1.1/32 {
                    primary;
                }
                address 10.0.1.100/32;
            }
        }
    }
}

2. Use the shared anycast address (10.0.1.100 in this example) as the local RP address:

protocols {
    pim {
        rp {
            local {
                address 10.0.1.100;
            }
        }
    }
}

3. Use the unique address as the local address for the MSDP sessions to the other RPs in the domain (here 10.0.1.2 stands for the unique address of a peer RP):

protocols {
    msdp {
        local-address 10.0.1.1;
        peer 10.0.1.2;
    }
}

9.1.8 Monitoring PIM Join State and Multicast Forwarding

Once PIM is enabled on all the routers and the RP has been discovered through one of the aforementioned mechanisms, the network is ready to carry multicast traffic. The command used to show PIM Join state is show pim join extensive. The two scenarios described in the following sections examine the output of this command during different phases of PIM operation.

9.1.8.1 Scenario 1: RPT is Set Up, No Active Sources

The following is the output for a router sitting along the RPT between the RP and a group member. At this point, no active sources are sending to the group.

user@m20-b> show pim join extensive 226.1.1.1

Group            Source          RP              Flags
226.1.1.1        0.0.0.0         10.0.2.1        sparse,rptree,wildcard
Upstream interface: ge-0/0/0.0
Upstream State: Join to RP
Downstream Neighbors:
    Interface: ge-0/1/0.0
        10.0.4.1 State: Join Flags: SRW Timeout: 196

The source is displayed as 0.0.0.0, which indicates that this state is (*,G). The address 10.0.4.1 is the address of the PIM neighbor that sent the (*,G) Join message. The upstream interface is the RPF interface for the RP address, which is confirmed by the following command:

user@m20-b> show multicast rpf 10.0.2.1
Multicast RPF table: inet.2

Source prefix    Protocol  RPF interface  RPF neighbor
10.0.2.1/32      IS-IS     ge-0/0/0.0     10.0.5.1

Here is the output showing the Join state on the RP. Notice that the upstream interface is local.

user@m40-a> show pim join extensive 226.1.1.1

Group            Source          RP              Flags
226.1.1.1        0.0.0.0         10.0.2.1        sparse,rptree,wildcard
Upstream interface: local
Upstream State: Local RP
Downstream Neighbors:
    Interface: ge-4/2/0.0
        10.0.5.2 State: Join Flags: SRW Timeout: 162

9.1.8.2 Scenario 2: A Source Is Active, a Host Joins the Group Later

The following is the output on the PIM-SM DR connected to a source for a group that has no members:

user@m20-b> show pim join extensive 227.1.1.1

Group            Source          RP              Flags
227.1.1.1        10.0.4.1                        sparse
Upstream interface: ge-0/1/0.0
Upstream State: Local Source
Downstream Neighbors:

The upstream interface is the RPF interface for the source as shown with the following commands:

user@m20-b> show multicast rpf 10.0.4.1
Multicast RPF table: inet.2

Source prefix    Protocol  RPF interface  RPF neighbor
10.0.4.0/30      Direct    ge-0/1/0.0

user@m20-b> show pim source 10.0.4.1

RPF Address      Prefix/length    Upstream interface    Neighbor address
10.0.4.1         10.0.4.0/30      ge-0/1/0.0            Direct

The PIM statistics show that Register messages were sent and Register-Stop messages were received:

user@m20-b> show pim statistics

PIM Message type     Received   Sent   Rx errors
V2 Hello             5488       5483   0
V2 Register          0          32     0
V2 Register Stop     32         0      0

Without any members, the RP has no Join state for the group.

user@m40-a> show pim join extensive 227.1.1.1

Group Source RP Flags

The fact that the DR has registered this source to the RP is kept in the RP's Register state. To view the RP's Register state, use the following command:
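In JUNOS releases of this era, the Register state is reported by the extensive form of the show pim rps command (assumed here; the exact command form may vary by release):

```
user@m40-a> show pim rps extensive
```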

If a host joins the group, the RP receives a (*,G) Join message for 227.1.1.1 and sets up (*,G) Join state for the group. The RP then scans its PIM Register state and MSDP Source-Active cache looking for any active sources for the group. For each source it finds, it sets up (S,G) state, copying the downstream neighbors from the (*,G) entry, and forwards an (S,G) Join message toward the source.

Here is the output on the PIM-SM DR connected to a source after a host has joined the group. This router sits along the SPT, so it has no (*,G) state.

user@m20-b> show pim join extensive 227.1.1.1

Group            Source          RP              Flags
227.1.1.1        10.0.4.1                        sparse
Upstream interface: ge-0/1/0.0
Upstream State: Local Source
Downstream Neighbors:
    Interface: ge-0/0/0.0
        10.0.5.1 State: Join Flags: S Timeout: 198

This command shows traffic statistics based on group and source:

user@m20-b> show multicast usage

The following sequence of commands illustrates how PIM Join state is transferred to the PFE, so packets entering the router are forwarded out the correct interfaces. The first command shows the multicast route that is installed based on the PIM (S,G) Join state:

user@m20-b> show multicast route extensive group 227.1.1.1

Group            Source prefix      Act Pru NHid  Packets  IfMismatch  Timeout
227.1.1.1        10.0.4.1   /32     A   F   30    1734     0           319
Upstream interface: ge-0/1/0.0
Session name: Unknown

The following command shows the entry in the multicast routing table (inet.1):

user@m20-b> show route table inet.1 detail 227.1.1.1/32

inet.1: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden) + = Active Route, - = Last Active, * = Both

The following command shows the entry in the RE's copy of the IPv4 forwarding table:

user@m20-b> show route forwarding-table matching 227.1.1.1/32

The following command shows the entry in the PFE's copy of the IPv4 forwarding table:

user@m20-b> show pfe route ip

Destination                NH IP Addr       Type     NH ID   Interface
-------------------------- ---------------- -------- ------- -----------
227.1.1.1.10.0.4.1/64                       MultiRT  30      ge-0/1/0.0

The following command shows the entry in the next-hop database stored in the PFE:

user@m20-b> show pfe next-hop
