
Label Distribution Protocol (LDP)


So how are the desired label sequences laid down in the LSRs to effect the LSPs that we want? One possibility is to precompute and centrally download a set of label swapping tables that actually constitute a full logical mesh on the set of nodes in the MPLS domain.

Even for a 100-node network this requires only 9,900 (unidirectional) LSPs, and the label-swapping tables at each LSR have no more than 100 entries per port. More generally, however, we want a more scalable solution and some means of requesting and releasing LSPs on demand. This is the role of LDP. For MPLS to function correctly, each connected pair of LSRs must have the same interpretation of the labels used to forward traffic. LDP achieves this by having LSRs inform one another of the "label bindings" they have made.
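As a quick check of that figure, here is a minimal sketch of the full-mesh count (the function name is just for illustration):

# Number of unidirectional LSPs needed for a full logical mesh of n nodes:
# one LSP from every node to every other node.
def full_mesh_lsp_count(n: int) -> int:
    return n * (n - 1)

print(full_mesh_lsp_count(100))  # 9900 unidirectional LSPs for a 100-node domain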

Although LDP establishes LSPs, it is itself a protocol that operates via normally routed TCP/IP packets, and it has more than one implementation, including variants in which its messages piggyback on those of other protocols. There are also two basic approaches to LDP.

In basic or generalized LDP, LDP sessions are established between connected LSRs to exchange their label spaces. In this approach there is no explicit mechanism to direct the creation of end-to-end LSPs. Rather, this happens indirectly as a result of the local actions of each LSR in finding a label mapping with which to replace normal routing decisions for packets that arrive on a given label for which no forwarding label is yet established. To illustrate, assume that an LSR X knows the existing label spaces of its neighbors A, B, C, D. The label space indicates the outgoing port to which that node will forward any packet arriving on a certain input port and label at its site.

(Such label spaces are built by the node first observing the outcome of a normal routing decision for such packets using the OSPF-built routing tables.) Now consider two possible packet arrivals at node X, from node A, as an example. The first case conveys the idea of how a local label space at the node is developed. The second case will convey how LSPs are implicitly created as needed to move packet flows from normally routed handling to LSP-expedited handling.
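As a concrete, purely illustrative picture of this state (the port names, prefixes, and label values below are invented for the sketch, not drawn from any particular LDP implementation):

# Illustrative picture of the state node X holds about itself and its neighbors.
# (Field names are assumptions for this sketch, not from any particular LDP stack.)

# X's own label-swapping table: (input_port, incoming_label) -> (output_port, outgoing_label)
label_swapping_table = {
    ("port_A", 17): ("port_B", 42),
}

# FEC-to-label bindings advertised by neighbor B over an LDP session:
# destination prefix -> label that B expects on traffic for that prefix.
neighbor_bindings = {
    "B": {"10.1.0.0/16": 42, "10.2.0.0/16": 43},
}

print(label_swapping_table[("port_A", 17)])  # ('port_B', 42)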

In the first case, assume that a normal IP packet arrives at the LSR from node A. That is, it arrives on the port from node A without an MPLS encapsulation and label. One option the LSR has is to make a normal IP routing decision and forward the packet in the conventional way. Since this packet has arrived without MPLS encapsulation, the normal routing decision has to be made in any case.

The result of the normal routing decision is which output port to send the packet to. Let us say this is to send it to node B. Having made this routing decision, however, the LSR is in a position to add this destination IP address to the list of all IP destinations associated with output port B (i.e., the forwarding equivalence class (FEC) for port B).[2]

Once this is done, subsequent packets on that IP address are automatically sent to port B by virtue of appearing in the FEC for port B. But more importantly, on the next LDP session with adjacent node A, node A will learn of this forwarding policy established at X and can thereafter MPLS-encapsulate such traffic with a label that is in effect a shortcut to the routing decision at node X. In effect the label says: "node X, when you get this packet, switch it right to your port to node B." This shows how label space is generated at a node and then, when disseminated to neighbors, effectively shifts flows from normal routing to faster label-switching treatment. Another way to interpret this is that each new FEC-to-label binding that node A learns its neighbor X is maintaining is like an additional virtual output port at node A. In fact, functionally, the label assigned at node A is the entry to a virtual circuit directly to the destination.

[2] FEC as an acronym for forwarding equivalence class in the context of IP routing should not be confused with FEC as an acronym for forward error correction in digital transmission or data storage applications. Both are formally established acronyms in use within their respective fields.
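A minimal sketch of this first case, under the same illustrative assumptions (the function names and label numbering are invented, and the LDP advertisement itself is only indicated in a comment):

# Case 1 at node X: an unlabeled IP packet arrives from A, is routed normally,
# and the destination is added to the FEC for the chosen output port. The
# resulting FEC-to-label binding is what neighbor A later learns over LDP.

fec_for_port = {"port_B": set(), "port_C": set()}   # FECs per output port at X
local_bindings = {}                                  # prefix -> label X assigns

def route_lookup(dest_prefix: str) -> str:
    # Stand-in for the normal OSPF-derived routing decision.
    return "port_B"

def handle_unlabeled_packet(dest_prefix: str) -> str:
    out_port = route_lookup(dest_prefix)          # normal routing decision
    fec_for_port[out_port].add(dest_prefix)       # grow the FEC for that port
    label = local_bindings.setdefault(dest_prefix, 100 + len(local_bindings))
    # On the next LDP session, X advertises (dest_prefix -> label) to A, so A
    # can thereafter label such traffic as a shortcut to X's routing decision.
    return out_port

handle_unlabeled_packet("10.1.0.0/16")
print(fec_for_port, local_bindings)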

In the second case, we imagine an MPLS-encapsulated packet arriving at node X on the port from node A, and node X finds that it does not yet have an established label-switching entry with which to forward this packet via label switching. It does, however, have the label spaces of its neighbors via LDP sessions. All node X has to do, therefore, is forward the packet to the output port indicated by the MPLS label (say this is the port toward node T), look up the label that neighbor T associates with the packet's destination address in its FEC-to-label bindings (let us say this is K), and create a new label-switching table entry recording "if from A on this label, switch to port T with label K." The same process applied at all nodes inherently builds label-switching tables that indirectly define a whole fabric of LSPs. The key to making this work is LDP for reliable and timely exchange of the FEC-to-label mappings. FEC-to-label bindings can also be made to dissolve with time so that the current set of implicit LSPs tracks actual traffic flows over some time scale. This is referred to in general as a "soft state" mode of operation.
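The second case can be sketched in the same illustrative style (again, names and numbers are assumptions, not a real LDP implementation):

# Case 2 at node X: a labeled packet arrives from A for which no complete
# label-switching entry exists yet. X already knows, via LDP, the label that
# the downstream neighbor T binds to the packet's destination, so it installs
# a new entry and thereafter switches such packets without routing lookups.

neighbor_bindings = {"T": {"10.1.0.0/16": 55}}        # learned over LDP
label_swapping_table = {}                              # (in_port, in_label) -> (out_port, out_label)

def install_entry(in_port: str, in_label: int, dest_prefix: str, out_port: str, neighbor: str):
    out_label = neighbor_bindings[neighbor][dest_prefix]
    label_swapping_table[(in_port, in_label)] = (out_port, out_label)

# Packet from A on label 17, destined into 10.1.0.0/16, output port toward T:
install_entry("port_A", 17, "10.1.0.0/16", "port_T", "T")
print(label_swapping_table)   # {('port_A', 17): ('port_T', 55)}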

2.9 Extensions for IP-Centric Control of Optical Networks

We can now look at a number of enhancements to adapt existing Internet protocols for direct application to the task of dynamic path setup and tear-down on a "call by call" basis.

2.9.1 OSPF-TE

OSPF-TE [KoRe00] refers to the extension of OSPF to have awareness of link capacities and other technical attributes of the links in the topology seen by the routers. Generally, in the Internet engineering community the term "traffic engineering" is used to refer to a context where the actual capacity of a link or a path is taken into account. Basic OSPF views the network purely as a simple graph; either a link is present or it is not. Adding the concept of a link having a specific capacity that must be respected in routing is thus the reason for OSPF-TE. The key change is a new type of LSA that carries much more information about links and the available ways of accessing bandwidth on each link through its end-nodes. The result is that each node obtains a complete, capacitated, global view of the network.

With an OSPF-TE database, any node acting as the originator of a new path can thus compute routes using various criteria, often different from the shortest path, for example to route around a failed or congested link or to establish a disjoint backup route.
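To make one such criterion concrete, here is a minimal sketch of a capacity-aware (CSPF-style) computation that simply prunes links with insufficient advertised bandwidth before a shortest-path search; the topology and figures are invented for illustration:

import heapq

# Each entry: (neighbor, cost, available_bandwidth) as it might appear in an
# OSPF-TE database. Topology and numbers are illustrative only.
te_db = {
    "A": [("B", 1, 2.5), ("C", 1, 10.0)],
    "B": [("D", 1, 2.5)],
    "C": [("D", 1, 10.0)],
    "D": [],
}

def constrained_shortest_path(src, dst, bw_needed):
    """Dijkstra over only those links with enough available bandwidth."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost, avail_bw in te_db[node]:
            if avail_bw >= bw_needed and nbr not in seen:
                heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
    return None

print(constrained_shortest_path("A", "D", bw_needed=5.0))   # routes via C, not B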

OSPF-TE can also support the setup of LSPs that are capacity-aware (and hence performance-managed) to support service level agreements (SLAs). With TE information it will also be possible to set up backup LSPs as a protection arrangement, with knowledge of the number of other primary LSPs also making backup arrangements over such links, and even with consideration of shared-risk link group (SRLG) information, which indicates the mapping of common physical structures into logical links that have a correlated failure risk.
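For the SRLG aspect specifically, a small sketch of the disjointness test a backup-path computation might apply (the SRLG assignments are invented for illustration):

# SRLGs of each logical link, as they might be flooded in TE-LSAs.
link_srlgs = {
    ("A", "B"): {7},          # e.g., these two links ride the same conduit (SRLG 7)
    ("A", "C"): {7},
    ("A", "D"): {9},
}

def srlg_disjoint(primary_links, backup_links):
    """True if no SRLG is shared between the primary and backup paths."""
    primary = set().union(*(link_srlgs[l] for l in primary_links))
    backup = set().union(*(link_srlgs[l] for l in backup_links))
    return primary.isdisjoint(backup)

print(srlg_disjoint([("A", "B")], [("A", "C")]))   # False: shared conduit
print(srlg_disjoint([("A", "B")], [("A", "D")]))   # True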

The TE-LSA describes the link's transmission resources and current usage in terms of:

A list of Shared Risk Link Groups to which the link belongs.

The maximum bandwidth and available bandwidth on the link.

An assigned link usage cost.

The basic unit of bandwidth management on the link.

An assigned LSP setup cost.

A current measure of the total oversubscription of bandwidth reservations on the link.

A TE-LSA also includes information about the switching level capabilities of the end node of the link issuing the LSA. This includes indications of whether the node can provide:

LSR capability

LER (label edge router) capability

Basic packet switch capability

STS cross-connection

Wavelength channel cross-connection

Fiber-level cross-connection

One issue with TE extensions to OSPF is the frequency of LSA updating to disseminate changes in capacity usage on links. Advertising the entire link state in response to a unit change on any single link could be excessive in terms of flooding the network with TE-router LSAs. The TE incremental link update LSA is therefore a smaller packet that advertises only incremental link updates and may be issued only when link capacity changes by a preset threshold amount.
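A rough sketch of the per-link record these attributes imply, together with a threshold test of the kind that could gate an incremental update (the field names and the 10% threshold are assumptions for illustration):

from dataclasses import dataclass, field

@dataclass
class TELink:
    """Per-link TE attributes of the kind carried in a TE-LSA (illustrative)."""
    srlgs: set = field(default_factory=set)
    max_bandwidth: float = 0.0
    available_bandwidth: float = 0.0
    link_cost: int = 1
    bandwidth_unit: float = 0.0        # basic unit of bandwidth management
    lsp_setup_cost: int = 0
    oversubscription: float = 0.0
    last_advertised_bw: float = 0.0    # what the rest of the network last saw

def should_flood_update(link: TELink, threshold: float = 0.10) -> bool:
    """Issue an incremental TE update only when available bandwidth has moved
    by more than `threshold` of the link's maximum since the last advertisement."""
    change = abs(link.available_bandwidth - link.last_advertised_bw)
    return change > threshold * link.max_bandwidth

link = TELink(max_bandwidth=40.0, available_bandwidth=31.0, last_advertised_bw=36.0)
print(should_flood_update(link))   # True: a 5.0 change exceeds 4.0 (10% of 40)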

2.9.2 Link Management Protocol (LMP)

LMP [LaMi01] is an IP-based protocol to handle issues that arise when the logical link between nodes is actually a channelized optical line transmission system, as opposed to the single bit-pipe link model of the original IP networking paradigm. LMP runs as an ongoing session established between each pair of directly connected OXC nodes over a mutually agreed choice of one of the available overhead channels on the optical span. This may be either a predefined OSC or one of the general-purpose overhead channels provided by digital wrappers on the lightwave channels. LMP is primarily concerned with making sure that the OXCs at each end of the span have coherent channel numbering schemes and with summarizing the current status, in terms of available capacity and other properties of the span, for efficient dissemination in TE-LSAs to support capacity-aware provisioning processes. LMP also detects and isolates failures and generates alarm information from the span to the host OXC. More specifically, LMP functions are:

Link Verification: Procedures for the two ends to determine the logical mapping of link ends between themselves. (This is analogous to using a tone sender to "buzz out" the mapping of wire pairs at each end of a twisted pair cable.)

Link Summarization: Procedures to synchronize the span-information view for each associated OXC and produce summary LSAs that describe the span as a single overall TE-link from a network-wide view.

Link Group ID: To reduce control traffic during failures, the alarm status of all channels is organized as a single link group ID.

Shared Risk Link Group Identifier (SRLG): Identifies which SRLGs this span is affected by. This information is manually configured by the user to support diverse route path computation under backup path type protection schemes to be discussed later.

Bit Error Rate (BER) Estimate.

Optical Protection Span-Switching: This is an indication of whether the optical line includes a same-span protection switching arrangement to protect against single channel failures, or dedicated diverse-routed 1+1 protection switching arrangements. This information can be used in service provisioning to influence the restoration or protection arrangements that might otherwise be put in place.

Total Span Length: This is manually entered configuration data that may be used as a routing metric or to estimate delay.

Fault Detection, Localization and Notification functions: LMP includes a very frequent, lightweight "Hello" protocol that serves as a constant monitor of the span's continuity and provides an assured failure notification to the attached OXCs in the event of a failure. If the OXC is an o-e-o type it may immediately detect the failure on its own as well, but in a purely transparent OXC the LMP notification may be the only source of alarm activation to initiate OXC-based restoration or protection. LMP also provides support for interaction with optical line terminating systems for section-level fault location. (A minimal sketch of the hello-timeout idea appears after this list.)

Alarm Management: To suppress cascading and/or spurious alarms during normal connection procedures.

Trace Monitoring: To allow an OXC to request that a unique marking code be applied to the overhead bytes of a certain channel, to support applications such as physical path trace audits and other tests of the optical line system's continuity and integrity.
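As noted under the fault detection item above, the hello-timeout idea can be sketched as follows (the interval, dead factor, and notification hook are assumptions, not LMP's actual message formats or timers):

import time

# Illustrative hello-timeout monitor: the span is declared down if no Hello
# has been seen within a small multiple of the hello interval.
HELLO_INTERVAL = 0.150      # seconds between Hellos (assumed value)
DEAD_FACTOR = 3.5           # missed-hello multiple before declaring failure

last_hello_seen = time.monotonic()

def on_hello_received():
    """Called whenever a Hello arrives from the far-end node."""
    global last_hello_seen
    last_hello_seen = time.monotonic()

def span_failed(now=None) -> bool:
    """True once Hellos have been absent long enough to declare the span down,
    at which point the host OXC would be notified to trigger restoration."""
    now = time.monotonic() if now is None else now
    return (now - last_hello_seen) > DEAD_FACTOR * HELLO_INTERVAL

print(span_failed())                                  # False immediately after start
print(span_failed(now=last_hello_seen + 1.0))         # True: many hello intervals missed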

2.9.3 MPλS and Generalized MPLS (GMPLS)

What has been called "MPλS" is based on the analogy between an LSP and a lightpath. Both are generically "switched" path constructs in that once the switching relationships are set up, the result is an inescapable sequence of relaying actions that direct any input to the predetermined output in a completely circuit-like way. Thus MPλS is just MPLS where the label space of each span is simply the set of wavelength channels on the span. The analogy is that the completely predetermined sequence of relaying actions that will route a packet over an LSP established on a set of LSRs is just as circuit-like and hard-wired as is the route of a bit stream applied to a wavelength path cross-connected through a set of OXCs. The only difference is a technical one having to do with the switching fabric involved. Instead of label swapping relating each input of an LSR to an output, the physics of optical cross-connect switching relate each input wavelength channel to the corresponding output once the path is set up. In MPλS the available wavelengths of each fiber are thus called the implicit label set that is available for allocation at the adjacent OXCs when those OXCs are viewed (from a control-plane standpoint) as if they were LSRs.

But it is just one more conceptual step to realize that any transmission channel resource can be viewed as an implicit label. Then the same processes that set up an LSP in an MPLS network can be used to provision entirely general paths through a transport network. If fast enough, certain forms of fast reprovisioning may even be considered for restoration purposes [DoYa01]. This is referred to as generalized MPLS (GMPLS) [GMP01]. In GMPLS any unit of allocation, of any type of physical transport resource, is thought of as a generalized label. For instance, in GMPLS a router port ID, an ATM VPI/VCI, an STS-1 or VT1.5 number, a fiber, a waveband, or a wavelength on a fiber can all be thought of as labels through which label-switched paths can be established. For cases that we would otherwise just recognize as circuit switching or cross-connection, the actual "label swapping" mechanism is the physical switching process for the respective medium, and the labels are the identity numbers of the respective channels, timeslots, or fibers being switched together. The physics of light reflection in a MEMS array is thus re-conceptualized as a "generalized label" swapping mechanism.

This is much more than just an interesting analogy, however. The practical advantage is that by identifying and treating the ID numbers of all different transmission resources as labels, we allow all forms of path setup to follow fairly simple extensions of the same logical process and protocol implementations, and to use the same form of databases, as MPLS. The extensions include mechanisms to support hierarchical and type-consistent label association at each node. Type consistency is obvious: a label of type {lightwave channel} cannot be connected to a label of type {STS-1}, but could be connected to a label of the same type or of type {fiber} or {waveband}. The sense of hierarchical label distribution is illustrated, for example, by the establishment of a new DS-1 layer path, which may need to trigger creation of a new STS-1 layer path and, if further needed, also trigger a new lightpath. But in all cases the process will first try to "pack" the new path into any already commissioned paths at each level.
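A small sketch of the type-consistency rule just stated; the compatibility table extrapolates the single example in the text to the other types and is illustrative only:

# Which generalized label types a label of a given type may be connected into:
# the same type always, otherwise only a coarser container along the
# {STS-1} -> {lightwave channel} -> {waveband} -> {fiber} chain (extrapolated
# from the one example given in the text).
COMPATIBLE = {
    "STS-1": {"STS-1", "lightwave channel"},
    "lightwave channel": {"lightwave channel", "waveband", "fiber"},
    "waveband": {"waveband", "fiber"},
    "fiber": {"fiber"},
}

def can_connect(label_type_a: str, label_type_b: str) -> bool:
    """Type-consistency check applied before binding two generalized labels."""
    return label_type_b in COMPATIBLE.get(label_type_a, set())

print(can_connect("lightwave channel", "STS-1"))   # False, per the rule in the text
print(can_connect("lightwave channel", "fiber"))   # True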
