
Synchronous Optical Networks

In the document Network Processors (Page 94-148)

The data applications that drove most of the traffic in the core networks forced the SDH/SONET architecture to be upgraded to better serve this type of traffic. The enhancements offered, collectively called next-generation SDH/SONET (NG-SDH/SONET) [247], include the Generic Framing Procedure (GFP) [206], the Link Capacity Adjustment Scheme (LCAS) [207], and Virtual Concatenation (VCAT) [208]. These were standardized by the ITU-T as G.7041, G.7042, and G.7043, respectively, and are integrated into recent versions of G.707 [195] and G.803 [197] (SDH standards), as well as in G.709 [196] (the Optical Transport Network—OTN—standard). These solutions are implemented only at the source and destination equipment, leaving the core network equipment intact, and therefore offer advantages in terms of cost and implementation, but some disadvantages in terms of network utilization.

Returning to the discussion of SONET/SDH that we began in the previous chapter, contiguous concatenation of containers was used to carry higher bandwidths of data. For a TDM environment, this contiguous concatenation is quite efficient. However, continuous, fixed-rate data streams, mapped directly to SDH/SONET, might waste the space of SDH/SONET containers (tributaries). For example, putting Gigabit Ethernet (GbE) into SDH containers requires the use of C-4-16c, which carries 2.4 Gbps, as C-4-4c is not enough; therefore, 1.4 Gbps out of the 2.4 Gbps is wasted (58% of the bandwidth). If the data stream is not continuous, as is often the case with bursty Ethernet or IP traffic, the waste can be even higher.4
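The arithmetic of that example generalizes to any client rate. The following is a minimal sketch; the container rates are the rounded values used in the text, and the function name and table are illustrative only:

```python
# Contiguous-concatenation payload rates (Gbps), rounded as in the text.
C4_RATES = {"C-4": 0.15, "C-4-4c": 0.6, "C-4-16c": 2.4, "C-4-64c": 9.6}

def contiguous_waste(client_gbps):
    """Pick the smallest contiguous container that fits the client,
    and return (container, wasted fraction of its bandwidth)."""
    for name, rate in sorted(C4_RATES.items(), key=lambda kv: kv[1]):
        if rate >= client_gbps:
            return name, (rate - client_gbps) / rate
    raise ValueError("client rate exceeds largest container")

container, waste = contiguous_waste(1.0)   # Gigabit Ethernet
print(container, f"{waste:.0%} wasted")    # C-4-16c, 58% wasted
```

Running the same function for a 200 Mbps client selects C-4-4c and reports roughly 67% waste, which is exactly the inefficiency that flexible concatenation addresses.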

In order to solve the problem of wasted space, a flexible concatenation of SONET/SDH payload is used, called VCAT (in contrast to contiguous concatenation).5

4Further analysis of data usage efficiency over SDH, with and without VCAT, is provided in RFC 3255, "Extending PPP over SONET/SDH" [241].

5Concatenation can be regarded as “inverse multiplexing,” since it uses many channels for one session between a pair of entities. This is the inverse operation of multiplexing, where one channel is used for many sessions between many pairs of entities.

First, the data streams are segmented into groups of virtual containers (VCs) that are smaller than the data stream rate, and then sent in "VC paths." For example, GbE is mapped into seven C-4 containers (each of 150 Mbps). These groups of VCs, called Virtual Concatenated Groups (VCGs), are then transmitted in the SDH network, each VC in its own path. In other words, each member of the VCG (that is, each VC) may be transmitted over some STM-n/OC-n link, not necessarily contiguously in the same link, and maybe even in different links. The source node arranges these VCGs, and after their arrangement, only the destination node is aware of the VCGs and is responsible for rebuilding the original data streams. Intermediate nodes are not required to be aware of any of this. However, due to the potential use of multiple paths by these VCGs, the destination equipment may encounter unordered VCs, as well as differences in delays, and must be prepared to handle this.

VCGs are notated in the SDH terminology as VC-m-nv, where m is the VC container number, n is the number of VCs used in the VCG, and v signifies the use of VCAT rather than contiguous concatenation. In the case of GbE, for example, the VCG is called VC-4-7v (which means 7 VC-4's).

VCGs can also be arranged with PDH channels; for example, E1-nv, where n stands for the number of E1 channels (each of 1.98 Mbps) concatenated together (up to 16). VCAT can also be implemented over OTN, and Optical Payload Unit (OPU) frames can be concatenated; for example, OPU3-nv, where n stands for the number of OPU3 channels (each of about 40 Gbps) concatenated together (up to 256), yielding a link of roughly 10 Tbps.
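The sizing rule behind all of these examples is a simple ceiling division of the client rate by the member rate. A sketch, using approximate nominal member rates (the function name, table, and the ~40 Gbps OPU3 payload figure are assumptions for illustration):

```python
import math

# Approximate nominal member payload rates in Mbps.
MEMBER_RATES = {"VC-4": 150.0, "E1": 1.98, "OPU3": 40000.0}

def vcg_size(member, client_mbps, max_members):
    """Smallest n such that n members of the given type carry the client."""
    n = math.ceil(client_mbps / MEMBER_RATES[member])
    if n > max_members:
        raise ValueError(f"{member} VCAT is limited to {max_members} members")
    return n

print(f"GbE -> VC-4-{vcg_size('VC-4', 1000.0, 256)}v")   # GbE -> VC-4-7v
```

With seven VC-4 members, the GbE example achieves roughly 95% utilization, versus about 42% for contiguous C-4-16c.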

The LCAS is a protocol based on VCAT. Its purpose is to add flexibility to the VCG-created links by dynamically changing the SONET/SDH pipes used for VCGs (i.e., increasing or decreasing VCG capacity) [51]. This is done without affecting the existing VCG services. The source and destination nodes exchange LCAS messages; for example, to cope with bandwidth-on-demand requirements (according to time of day, special events, service granularities, etc.), the source can request the addition of more VCs to the VCG in one direction (capacity control is unidirectional). It is also possible to save unused bandwidth (e.g., resulting from a failure of some member of a VCG, or from "right-sizing" bandwidth for some application) by using LCAS, which automatically reduces the required capacity. LCAS can also be used to enhance load-sharing functionality, as well as QoS differentiation or traffic engineering (TE), described in Chapter 6.
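The hitless add/remove behavior can be pictured with a toy model. This is a sketch only; real LCAS runs a per-member state machine driven by control packets in the SDH path overhead, and the class below is purely illustrative:

```python
class Vcg:
    """Toy model of LCAS-style hitless resizing: members are added or
    removed one at a time while the group keeps carrying traffic.
    (Real LCAS negotiates each change end to end before it takes effect.)"""

    def __init__(self, member_mbps, members):
        self.member_mbps = member_mbps
        self.members = members              # active member count

    @property
    def capacity(self):
        return self.member_mbps * self.members

    def add_member(self):                   # bandwidth on demand: grow
        self.members += 1

    def drop_member(self):                  # member failure / right-sizing: shrink
        if self.members > 1:
            self.members -= 1

vcg = Vcg(member_mbps=150.0, members=7)     # VC-4-7v carrying GbE
vcg.add_member()                            # e.g., a time-of-day demand spike
print(vcg.capacity)                         # 1200.0
```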

The GFP is a traffic adaptation protocol [172] that provides a mechanism for mapping packet and circuit-switched data traffic onto SONET/SDH frames, while accommodating diverse data transmission requirements. Traffic mapping is done at the source node into general-purpose GFP frames, and then the GFP frames are mapped into VCs or VCGs and onto the SDH network. The intermediate nodes are not aware of the GFP mapping, and only the destination node de-maps the original traffic. GFP uses two modes, referred to as Frame-Mapped GFP (GFP-F) and Transparent-Mapped GFP (GFP-T). GFP-F is optimized for a packet-switching environment and variable-length packets, whereas GFP-T is intended for delay-sensitive traffic, bandwidth efficiency, and transparency to the line-coded data.

3.2 From Telecom Networks to Data Networks 83

Typical applications of GFP-F are Point-to-Point Protocol (PPP), IP, Ethernet, MPLS, RPR, or Digital Video Broadcast (DVB) (according to ITU-T G.7041 [206]).

The entire data frame is mapped into one GFP-F frame, and the resulting GFP-F frames are variable in size.

GFP-T is used for Storage Area Networks such as Fibre Channel, Enterprise Systems Connection (ESCON), and Fibre Connection (FICON), for video applications such as Digital Visual Interface (DVI) and DVB, or even for GbE (according to ITU-T G.7041 [206]). The data may span many fixed-length GFP frames. It is mapped into the GFP-T frames byte by byte and organized in N 536-bit superblocks, each of which is constructed from eight 65-bit blocks and a 16-bit CRC.6 For optimal usage of SDH link capacities (i.e., minimum GFP-T overhead), the ITU-T recommends using a minimum number of superblocks in a GFP-T frame. For example, for a 3.4 Gbps Fibre Channel on a VC-4-24v VCG path, at least 13 superblocks per GFP-T frame are required, and for GbE on VC-4-7v, at least 95 superblocks per GFP-T frame are required.
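These minima can be approximated from the superblock geometry: each superblock carries 64 client bytes in 536 line bits, and each GFP-T frame adds a fixed header overhead. The sketch below is a first-order estimate that assumes 8 bytes of core-plus-payload header per frame; G.7041's exact tables account for additional details, so published minima (e.g., the 95 superblocks for GbE) can differ somewhat from this estimate:

```python
import math

SUPERBLOCK_PAYLOAD_BYTES = 64   # eight 65-bit blocks carry 8 x 8 client bytes
SUPERBLOCK_LINE_BYTES = 67      # (8*65 + 16) bits = 536 bits = 67 bytes
FRAME_OVERHEAD_BYTES = 8        # core header + payload header (assumed here)

def min_superblocks(client_gbps, server_gbps):
    """Smallest N such that a frame of N superblocks plus overhead,
    sent at the server rate, keeps up with the client byte stream."""
    # Spare line bytes gained per superblock, relative to the client:
    slack = SUPERBLOCK_PAYLOAD_BYTES * (server_gbps / client_gbps) - SUPERBLOCK_LINE_BYTES
    if slack <= 0:
        raise ValueError("server channel too slow for this client stream")
    return math.ceil(FRAME_OVERHEAD_BYTES / slack)

# 3.4 Gbps Fibre Channel over VC-4-24v (24 x 149.76 Mbps):
print(min_superblocks(3.4, 24 * 0.14976))   # 13
```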

The structure of the GFP frame is shown in Figure 3.3. The two transport modes differ in the information field of the payload area, and they can coexist in a transmission channel. The Generic Framing Procedure is a very useful adaptation layer, and one that is not necessarily restricted to SONET/SDH or OTN.

6The organization of the superblock and its coding is beyond the scope of this book, and can be reviewed in ITU-T G.7041 [206] and [172].

FIGURE 3.3

GFP frame structure (from left to right):
  Core Header (4 bytes): Length (2 bytes), cHEC (2 bytes)
  Payload Area: Payload Header (4-64 bytes), consisting of Type (2 bytes), tHEC (2 bytes), Optional Extension Headers (0-58 bytes), and eHEC (0/2 bytes); Information field (0-65,531 bytes for GFP-F, N*(8*65 + 16) bits for GFP-T); optional FCS (4 bytes)

HEC—Header Error Check (CRC-16)
cHEC—Core HEC
tHEC—Type HEC
eHEC—Extension headers HEC
FCS—Frame Check Sequence (CRC-32)

3.2.3 Resilient Packet Ring

Resilient Packet Ring (RPR) was standardized by the IEEE (IEEE 802.17) [190] for enabling packet-oriented data networks to function efficiently on optical rings.7 RPR offers a new Medium Access Control (MAC) scheme that provides efficient bandwidth utilization, a fast protection mechanism, distributed fairness algorithms, differential QoS, and network survivability similar to SONET/SDH (on which RPR resides).

RPR can work above SONET/SDH or Ethernet links; it can support bandwidths of up to 10 Gbps and up to 255 station attachments on rings that can span up to 2000 km, and it is scalable and interoperable. Scalability is achieved by using bridges attached to the ring, as well as specific support of RPR in bridging (and remote stations).

RPR's MAC is based on spatial reuse; in other words, at every segment of the ring topology, data can be transmitted simultaneously with other data over other segments, and is stripped at its destination. Data is transmitted on dual counter-rotating ringlets; that is, two closed unidirectional rings, each forming a path over an ordered set of stations and interconnecting links.

The physical layer underneath RPR's MAC layer, as well as the upper layers above RPR's MAC, are agnostic to RPR, since well-defined interfaces are used. RPR addressing is 48 bits wide (the usual IEEE addressing scheme), of which 24 bits make up an Organizationally Unique Identifier. RPR addressing supports unicasts, multicasts, and broadcasts. An additional mode of addressing is to flood the frames to a set of bridges that are possibly associated with an individual MAC address. This is like multicasting to all bridges, but where the destination address is a unique station's address rather than a multicast address.
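The 48-bit addressing scheme can be illustrated with a short helper that splits an address into its OUI and station parts and tests the group (multicast/broadcast) bit. This is generic IEEE 802 address anatomy, not RPR-specific framing, and the function is illustrative:

```python
def parse_ieee_address(addr):
    """Split a 48-bit IEEE address 'aa:bb:cc:dd:ee:ff' into its
    24-bit OUI and 24-bit station part, and flag group addresses."""
    octets = [int(x, 16) for x in addr.split(":")]
    assert len(octets) == 6
    is_group = bool(octets[0] & 0x01)   # I/G bit: 1 = multicast/broadcast
    return {
        "oui": ":".join(f"{o:02x}" for o in octets[:3]),
        "station": ":".join(f"{o:02x}" for o in octets[3:]),
        "multicast": is_group,
    }

print(parse_ieee_address("01:80:c2:00:00:10")["multicast"])   # True
print(parse_ieee_address("ff:ff:ff:ff:ff:ff")["multicast"])   # True (broadcast)
```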

Each station controls two ringlets (called west and east) in its MAC layer, which are both used for data transmission, protection, and congestion avoidance. The station's decision about which ringlet to send the information on is subject to the ring condition and known state, and obviously the destination address. Each station can add information to the ring (in either direction), copy the incoming information (pass it to its upper layers and retransmit it further on), strip it (i.e., take it off the ring and stop transmitting it further), or simply pass it along (retransmit it to the next station). In case of a ring failure, the information is quickly wrapped or steered according to the failure location on the ring (see Figure 3.4).

Some traffic management modules exist in each station's MAC. One of these mechanisms is a fairness control that is aware of the ring usage and congestion (through periodically broadcast fairness control frames). These fairness modules shape some of the stations' traffic so as to inhibit any station from disproportionate usage of the ring, which would otherwise prevent "downstream" stations from using it. Each station maintains two such fairness modules, one for each ringlet. Each fairness module sends its control frames to its upstream neighbor via the opposing ringlet. In other words, each RPR station processes the incoming fairness control frames and adjusts the rate at which its active queue is drained onto the ring. It also monitors its sending queues, and when they start to become congested, the station notifies the "upstream" stations, via the other ringlet, to slow down.

7Although we categorize RPR as a technology that goes from telecom to data networks on the road to convergence, many categorize RPR as a data network technology moving toward telecom for the same purpose (and remember its origin, the IEEE 802). We position it here because the telecom industry, which traditionally manufactures, operates, and is accustomed to optical rings, is the one pushing RPR.
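The back-pressure idea can be sketched as a simple rate-adjustment step. This is a toy model; the function and the step factor are illustrative, and IEEE 802.17 defines the actual (aggressive and conservative) fairness algorithms:

```python
def adjust_add_rate(current_rate, advertised_fair_rate, step=0.5):
    """Toy RPR-like fairness step: a congested downstream neighbor
    advertises a fair rate on the opposing ringlet, and the station
    moves its own add-rate toward that value."""
    if current_rate > advertised_fair_rate:
        # Over the fair share: back off toward the advertised rate.
        return max(advertised_fair_rate,
                   current_rate - step * (current_rate - advertised_fair_rate))
    # Under the fair share: ramp up gently toward it.
    return current_rate + step * (advertised_fair_rate - current_rate)

rate = 800.0                        # Mbps currently added to the ring
for _ in range(6):                  # successive fairness frames advertise 500 Mbps
    rate = adjust_add_rate(rate, 500.0)
print(round(rate, 1))               # approaches 500.0
```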

There are three service classes defined for RPR:8 Class A is for real-time applications; this class receives guaranteed bandwidth, has low jitter, and is not subject to the fairness control mechanism. Class B is for near-real-time applications and has two subclasses: a Committed Information Rate (CIR) and best-effort transfer of the Excess Information Rate (EIR) beyond the CIR. The CIR subclass receives guaranteed bandwidth, has bounded jitter, and is not "fairness eligible" (subject to the fairness control), while the EIR subclass receives the same treatment as Class C. Class C is a best-effort class, which has no guaranteed bandwidth and is subject to fairness control.
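The class properties above can be restated as a small lookup table (the table and function names are illustrative; the Class B split follows the text):

```python
# RPR service classes as described above:
# guaranteed bandwidth, jitter bound, and fairness eligibility.
RPR_CLASSES = {
    "A":     {"guaranteed": True,  "jitter": "low",     "fairness_eligible": False},
    "B-CIR": {"guaranteed": True,  "jitter": "bounded", "fairness_eligible": False},
    "B-EIR": {"guaranteed": False, "jitter": None,      "fairness_eligible": True},
    "C":     {"guaranteed": False, "jitter": None,      "fairness_eligible": True},
}

def is_fairness_eligible(service_class):
    """True if traffic of this class is subject to the fairness control."""
    return RPR_CLASSES[service_class]["fairness_eligible"]

print(is_fairness_eligible("A"))       # False
print(is_fairness_eligible("B-EIR"))   # True
```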

There are five RPR frame formats, as depicted in Figure 3.5:

- Data frame, which has two formats:
  – Basic data frame (used for local-source frames)
  – Data-extended frame (used for remote-source frames)
- Control frame (used for various control tasks)
- Fairness frame (used for broadcasting fairness information)
- Idle frame (used to adjust rate synchronization between neighbors)

As noted previously, RPR can reside on top of Ethernet, on SONET/SDH directly, or over the GFP adaptation layer. RPR's architectural positioning, layers, and interfaces are depicted in Figure 3.6.

To conclude, RPR has the potential of being an important and efficient transport mechanism, residing on top of the very common SDH rings or on future OTN rings.

8Service classes are described in more detail in Chapter 6.

FIGURE 3.4

RPR failure protection: a four-station ring (A, B, C, D) shown in normal operation, with steering, and with wrapping around a failure.

However, as of today, it is not widely adopted, and it might either end up like the Fiber Distributed Data Interface (FDDI) or be adopted extensively for packet data networks.

NG-SONET/SDH technologies are certainly not helping RPR's acceptance, since they are not only more familiar to the carriers, but can also potentially provide an answer to data-centric traffic on their already existing SONET/SDH rings, with less investment.

FIGURE 3.5

RPR frame formats (from right to left): the data frames (regular/jumbo), the control frame (with a control data unit of 0-1592 bytes, control version, and control type fields), the fairness frame (fairness header and fair rate), and the idle frame (4-byte idle payload); each frame carries header fields such as TTL, BaseCTL, and ExtendCTL, an HEC, and a Frame Check Sequence.

TTL—Time to live (hop count to destination)
BaseCTL—Frame type, service class, and baseline controls
ttlBASE—Snapshot of TTL, for computing hop count to source
ExtendCTL—Extended flooding and consistency checks
HEC—Header Error Check (CRC-16)
FCS—Frame Check Sequence (CRC-32)
Control Ver—Control version

FIGURE 3.6

RPR architectural positioning: the MAC datapath and MAC control (fairness, topology and protection, OAM) lie beneath the Logical Link Control (LLC) and above the physical-layer reconciliation sublayers and interfaces.

PRS-1—1 Gbps Packet PHY reconciliation sublayer
PRS-10—10 Gbps Packet PHY reconciliation sublayer
SPI—System Packet Interface
XAUI—10 Gbps Attachment Unit Interface
XGMII—10 Gbps GMII

3.3 From Datacom to Telecom 87

FIGURE 3.7

Networking technologies over transport networks, relating IP and IP/MPLS to Ethernet, PPP (POS), ATM, PDH, RPR, and ESCON, and (through WIS, GFP-F, and GFP-T) to SONET/SDH, OTN, and the underlying DWDM optical infrastructure.

3.2.4 Multiservice Platforms

Although Multiservice Platforms (MSPs) do not represent any specific technology, the term MSP is widely adopted by carriers and vendors, and is often mentioned in relation to data services in telecommunications environments. MSPs are network elements (NEs)—Multiservice Provisioning Platforms (MSPPs), Multiservice Transport Platforms (MSTPs), or Multiservice Switching Platforms (MSSPs)—which are actually composite, SONET/SDH-based equipment, geared for packet-oriented as well as TDM-oriented services. MSPP is associated with NG-SONET/SDH and can be regarded as its implementation; MSTP is basically an MSPP plus DWDM capabilities, while MSSP is an MSPP that integrates cross-connect and other high-end switching capabilities. MSPs function as an important interim phase in the migration process from telecommunications infrastructures toward data (and TDM) service infrastructures.

In general, MSPs are NEs that essentially implement NG-SONET/SDH, support CWDM and DWDM, and offer diverse optical and electrical interfaces, including GbE, IP, OTN, DS-n, and E-n PDH signals.

The shift from telecommunications networks to data networks through NG-SONET/SDH technologies and MSPs is summarized in Figure 3.7, where all the relations between technologies are shown. The abbreviations used in this figure are explained throughout this chapter.

3.3 FROM DATACOM TO TELECOM

The second convergence approach between data networks and telecommunication networks is to use data networks (i.e., frame- or packet-oriented networks) as core global and national carrier networks, as metro carrier networks, and for metro access networks, carrying both TDM and data applications.

This subject is quite new and, as of this writing, only a couple of years old. The field is still quite unstable, with many changes, new approaches, and many standards being offered by many standardization bodies, not all of which will be adopted. This part tries to provide the most relevant and stable information.

3.3.1 Internet Engineering Task Force Approach

The Internet Engineering Task Force (IETF) is obviously Internet-centric, and it is mainly concerned with IP for data networks that operate in layer 3, strengthened by higher-layer protocols and applications (e.g., TCP, HTTP). IP networks and backbone IP networks are deployed extensively today, providing remarkable answers for most data sharing and usage requirements. IP networks are also adapted for plenty of other types of networks, channels, and technologies, and support many protocols above and under the IP layer. IP is principally used above Ethernet networks by enterprises, as well as above many other channels, links, and transport networks (e.g., Frame Relay, ATM, leased lines, dial lines, SONET/SDH).

The industry (and the IETF) have extended pure IP-based networks to other technologies in response to requirements for bridging multiple networking technologies and protocols, interoperability, and integrated support of multiple protocols, including "telecom" types of communications (i.e., streaming, as well as the real-time, scalability, reliability, and manageability demands on these networks, and the associated QoS). One major result is the introduction of a shim layer between all kinds of layer 2 networks and other layer 2 and layer 3 networks (hence the "multiprotocol" in its name). This shim layer uses switching instead of routing (resulting in speed and scalability) in preassigned "tunnels" in the network (hence the ability to engineer the traffic and assure QoS). This shim network, MPLS [370], became a fundamental concept, a network, and a tremendously successful IETF solution for carriers and service providers, mainly in their core and long-haul networks.

Carriers and service providers enable virtual separation of their Packet Switched Networks (PSNs)—their IP or MPLS networks—by providing seemingly private networks for their customers. These Provider Provisioned Virtual Private Networks (PPVPNs) are provided according to several techniques suggested by the industry and standardized by the IETF.

This part focuses on two main technologies: MPLS and PPVPN. Just before diving into these two technologies, however, it should be noted that current IETF-based networks are actually interoperable combinations of technologies and architectures: PSNs (such as IP or MPLS) and other networks (such as ITU-T's SONET/SDH, and IEEE 802's Provider Backbone Bridged Networks [PBBN] and Provider Backbone Bridges-Traffic Engineering [PBB-TE], both of which are described in the next part). IETF-based carrier networks mostly originated from a routing standpoint rather than bridging and switching,9 as in the case of IEEE 802-driven networks.

3.3.1.1 Multiprotocol Label Switching

MPLS [370] is a forwarding paradigm for label switching, which came about as an effort to speed up IP datagrams in backbone networks. Instead of handling, routing, and forwarding each IP datagram separately at each node (router) along the datagram's path through the network, MPLS suggests the implementation of label-switched paths, or tunnels, in the backbone networks, over the various existing link-level technologies, through which similar IP packets can flow.

9Considering where IEEE 802 and the IETF came from (i.e., layer 2 LANs, bridges, and switched Virtual LANs, and the layer 3 Internet Protocol, respectively), their networking approaches are quite obvious.

MPLS is a transmission mechanism that incorporates IP and ATM ideas into a new sublayer, beneath the third layer and above the second one. In other words, it is a layer-2.5 protocol, or shim layer, which can carry IP, Ethernet, ATM, or TDM sessions (hence the "Multiprotocol"), and it can be implemented on top of L2 SONET/SDH, Ethernet, Frame Relay, ATM, or point-to-point links. MPLS allows precedence handling (Class of Service, CoS), QoS assignments, TE, and Virtual Private Networking (VPN) services, as described in the following.
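The shim itself is concrete: each MPLS label stack entry is a 32-bit word (RFC 3032) holding a 20-bit label, a 3-bit traffic class field (originally EXP) used for CoS, a bottom-of-stack bit, and an 8-bit TTL. A minimal encoder/decoder:

```python
def pack_label_entry(label, tc, bottom_of_stack, ttl):
    """Encode one 32-bit MPLS label stack entry (RFC 3032 layout)."""
    assert label < (1 << 20) and tc < (1 << 3) and ttl < (1 << 8)
    return (label << 12) | (tc << 9) | (int(bottom_of_stack) << 8) | ttl

def unpack_label_entry(word):
    """Decode a 32-bit label stack entry back into its fields."""
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,                 # traffic class (CoS), 3 bits
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

entry = pack_label_entry(label=16, tc=5, bottom_of_stack=True, ttl=64)
print(unpack_label_entry(entry))
# {'label': 16, 'tc': 5, 'bottom_of_stack': True, 'ttl': 64}
```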

MPLS merges connectionless traffic with traditional connection-oriented transmission, thus allowing streaming and multimedia applications to coexist with legacy IP applications.

3.3.1.1.1 Multi-Protocol Label Switching Overview

The basic idea in MPLS is that packets can be classified into Forwarding Equivalence Classes (FECs), where each FEC contains packets that can be forwarded in the same manner (for example, they have similar destinations, similar paths, or similar service requirements from the network). When packets enter the MPLS network, they are classified by the edge, ingress router (called Label Edge Router,

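The FEC idea can be sketched as a classification function at the ingress router; the prefixes, service names, and label values below are purely illustrative:

```python
import ipaddress

# Illustrative FEC table for an ingress router: (prefix, service) -> label.
FEC_TABLE = {
    (ipaddress.ip_network("10.1.0.0/16"), "best-effort"): 100,
    (ipaddress.ip_network("10.1.0.0/16"), "low-latency"): 101,
    (ipaddress.ip_network("10.2.0.0/16"), "best-effort"): 200,
}

def classify(dst_ip, service):
    """Map a packet to its FEC's label: longest-prefix match on the
    destination plus the requested service class."""
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(net.prefixlen, label)
                  for (net, svc), label in FEC_TABLE.items()
                  if svc == service and dst in net]
    if not candidates:
        return None                     # no FEC: fall back to plain IP forwarding
    return max(candidates)[1]           # longest prefix wins

print(classify("10.1.42.7", "low-latency"))   # 101
```

All packets that classify to the same label then follow the same label-switched path, which is the essence of an FEC.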

