
Private-Line Networking

In the document DAVID MCDYSAN DAVE PAW (Page 124-127)

Figure 6-3 depicts a network of three users’ DTEs connected via private lines. User A has a dedicated 56 Kbps circuit to user B, as well as a dedicated T1 (1.544 Mbps) circuit to user C. Users B and C have a dedicated 1.544 Mbps circuit between them. Users generally lease a private line when they require continuous access to the entire bandwidth between two sites. The user devices are voice private branch exchanges (PBXs), T1 multiplexers, routers, or other data communications networking equipment. The key advantage of private lines is that a customer has complete control over the allocation of bandwidth on the private-line circuits interconnecting these devices. This is also the primary disadvantage, in that the customer must purchase, maintain, and operate these devices in order to make efficient use of the private-line bandwidth. Up until the 1980s, most voice networks were private line based, primarily made up of dedicated trunks interconnecting PBXs. The situation changed once carriers introduced cost-effective intelligent voice network services.
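The mesh just described can be modeled as a small table of leased circuits; this sketch (with hypothetical names, not from the book) illustrates the point that every dedicated circuit is bandwidth the customer alone must manage end to end.

```python
# Illustrative model of the Figure 6-3 private-line mesh: each entry is one
# dedicated circuit the customer leases, keyed by its two endpoint sites.
PRIVATE_LINES_KBPS = {
    ("A", "B"): 56,      # dedicated 56 Kbps circuit
    ("A", "C"): 1544,    # dedicated T1 (1.544 Mbps)
    ("B", "C"): 1544,    # dedicated T1 (1.544 Mbps)
}

def leased_bandwidth_kbps(site):
    """Total dedicated bandwidth terminating at one site.

    This is capacity the customer pays for whether or not traffic fills it,
    which is the trade-off the text describes.
    """
    return sum(rate for ends, rate in PRIVATE_LINES_KBPS.items() if site in ends)
```

For example, site C terminates two T1s, so it must provision equipment for roughly 3 Mbps of leased capacity regardless of actual utilization.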

Now, most corporate voice networks use these carrier services. Data networking appears to be moving along a similar trend toward shared, carrier-provided public data services.

In the early 1990s, virtually every corporate data network was private line based. Now, many corporate users have moved from private lines to embrace frame relay, the Internet, and ATM for their data networking needs.

While private lines provide dedicated bandwidth, carriers don’t guarantee 100 percent availability, and sometimes a service provider’s statement of availability applies to the network as a whole rather than to an individual connection. Sometimes, a carrier provides for recovery of private-line failures via digital cross-connects, transmission protection switching, or SONET/SDH rings. However, in many cases, a private line comprises several segments across multiple carriers. For example, a long-haul private line typically has access circuits provided by local exchange carriers on each end and a long-distance segment in the middle provided by an interexchange carrier. If the private line or any of its associated transmission equipment fails (because of, e.g., a fiber cut), the end users cannot communicate unless the user DTEs have some method of routing, reconnecting, or dialing around the failure. Thus, the user must decide what level of availability is needed for communications between sites. Service providers usually provide a mean time to repair (MTTR) guarantee for a private-line user connection. This promises a user diligence in repairing a failed individual connection, usually within a time frame of two to five hours.

There are two generic categories of restoration in TDM networks: linear and ring based. Linear restoration, commonly called protection switching, was implemented before SONET and SDH [Goralski 00]. It uses the concept of working and protect channels. Normally, a single protect channel protected n working channels, often indicated by the notation 1:n protection, pronounced as “1 for n,” “1 by n,” or “1 to n” protection. When a working channel fails, the equipment at each end of a linear system quickly switches over the working channel to the protect channel.

Figure 6-3. Example private-line network

ATM & MPLS Theory & Application: Foundations of Multi-Service Networking

If the protect channel is already in use (or has already failed), the working channel cannot be restored. For this reason, a commonly encountered deployment configuration for short transmission spans is 1:1 protection over diverse facilities; for longer spans, this 100 percent redundancy becomes expensive and is therefore often avoided. Also, if all n + 1 channels traverse the same physical route, then a single failure could disrupt all n working channels. Therefore, when protecting longer-distance systems, the working and protect channels should be on diverse physical facilities, although in reality only small values of n are practical because of a limited amount of diversity in the physical fiber plant. Before the advent of wavelength division multiplexing (WDM) systems, a 1+1 transmission system required four fibers for operation (working and protect channels with respective transmit/receive pairs). The first step toward better utilization of the fiber plant was to multiplex transmit and receive signals onto one fiber, making it possible to double the capacity of the fiber. WDM systems now provide hundreds of wavelengths over a single fiber. However, WDM systems introduce another point of failure for SONET/SDH systems, and the ability to switch to redundant WDM systems becomes necessary. Providing 100 percent connection redundancy in such an environment would require 1+1 types of SONET/SDH systems together with redundant WDM systems, a very expensive solution. More common deployments use diverse redundant WDM systems together with 1:n SONET/SDH systems (sometimes with diverse protection channels) or multinode ring systems.
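The 1:n switch-over behavior described above can be sketched as a small state machine. This is a minimal illustration of the logic only, with hypothetical class and method names, not a model of any real SONET/SDH automatic protection switching implementation.

```python
class LinearProtectionGroup:
    """Toy model of 1:n linear protection: one protect channel backs up
    n working channels. Names and structure are illustrative only."""

    def __init__(self, n):
        self.working_up = [True] * n   # state of the n working channels
        self.protect_in_use_by = None  # index of the channel on protect, or None

    def fail_working(self, i):
        """A working channel fails; grab the protect channel if it is free.

        Returns True if traffic was restored, False if the protect channel
        was already carrying another failed channel.
        """
        self.working_up[i] = False
        if self.protect_in_use_by is None:
            self.protect_in_use_by = i
            return True   # restored on the protect channel
        return False      # protect busy: this channel stays down

    def repair_working(self, i):
        """Repair returns traffic to the working channel, freeing protect."""
        self.working_up[i] = True
        if self.protect_in_use_by == i:
            self.protect_in_use_by = None
```

A group with n = 4 restores the first failure, but a second simultaneous failure finds the protect channel occupied and stays down, which is exactly why diverse routing of the n + 1 channels matters: a single cut on a shared route can produce many simultaneous failures.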

Protection switching was the precursor to ring switching, a subject we cover later in this chapter after introducing the SONET/SDH architecture and inverse multiplexers (I-Muxes).

Early digital private-line networks offered service at only 56/64 Kbps or DS1/E1 (i.e., 1.5 and 2.0 Mbps) rates. The gap in speed and price between these rates created a market for inverse multiplexers, commonly called I-Muxes, that provided intermediate-speed connectivity by combining multiple lower-speed circuits. As illustrated in Figure 6-4, an I-Mux provides a single high-speed DTE-DCE interface by combining n lower-speed circuits, typically private lines. Inverse multiplexers come in two major categories: nx56/64 Kbps and nxDS1/E1. The inverse multiplexer provides a DCE interface to the DTE operating at a rate of approximately 56/64 Kbps or DS1/E1 times n, the number of circuits connecting the I-Muxes. These devices automatically change the DTE-DCE bit rate in response to circuit activations or deactivations. The I-Muxes also account for the differences in delay between the interconnecting circuits. The actual serial bit rate provided to the DTE is slightly less than the nx56/64 Kbps or nxDS1/E1 rate because of overhead used to synchronize the lower-speed circuits. Some I-Muxes also support circuit-switched data interconnections in addition to private-line connections. The BONDING standard defines how nxDS0 I-Muxes interoperate. Higher-speed nxDS1/E1 I-Muxes utilize a proprietary protocol and are hence incompatible among vendors. Chapters 7 and 11 describe frame relay and ATM standards for inverse multiplexing.
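Two quantitative points in the paragraph above, the overhead-reduced serial rate and the compensation for differential circuit delay, can be illustrated numerically. The overhead fraction used here is an arbitrary illustrative figure, not a value from the BONDING standard or any vendor's product.

```python
def imux_payload_rate_kbps(n, circuit_kbps=64, overhead_fraction=0.01):
    """Approximate serial rate an I-Mux presents to the DTE.

    The aggregate of n member circuits is reduced by the framing overhead
    the I-Mux uses to synchronize and align them; the 1% default is an
    assumption for illustration only.
    """
    return n * circuit_kbps * (1 - overhead_fraction)

def alignment_buffers_ms(circuit_delays_ms):
    """Per-circuit buffering needed to equalize differential delay.

    Each circuit is delayed up to the slowest member, so the recombined
    stream sees one uniform latency.
    """
    worst = max(circuit_delays_ms)
    return [worst - d for d in circuit_delays_ms]
```

So an nx64 Kbps I-Mux over six circuits delivers slightly under 384 Kbps to the DTE, and a member circuit arriving 2 ms ahead of the slowest one is buffered by 2 ms before the streams are recombined.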
