
The slow downward slope of the measurements is due to clock skew between the source and the destination: one machine's clock is running slightly faster than the other's, leading to a gradual change in the perceived transit time.

Several step changes in the average transit time, perhaps due to route changes in the network, can be observed.

The transit time is not constant; rather it shows significant variation throughout the session.

Figure 2.10. Variation in Network Transit Time: 400-Second Sample

Figure 2.11. Variation in Network Transit Time: 10-Second Sample

These are all problems that an application, or higher-layer protocol, must be prepared to handle and, if necessary, correct.
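One such correction is removing the clock-skew trend before analyzing the delay measurements. The following is a minimal sketch, assuming we have pairs of (send time, measured one-way delay); all names and figures are illustrative, not taken from the measurements in the text:

```python
def estimate_skew(samples):
    """Least-squares slope of one-way delay against send time.

    A nonzero slope indicates relative clock skew between sender and
    receiver; subtracting the fitted trend de-skews the measurements.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_d = sum(d for _, d in samples) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den  # seconds of apparent drift per second

# Illustrative data: delay drifts upward by 10 microseconds per second.
samples = [(t, 0.050 + 1e-5 * t) for t in range(0, 400, 10)]
skew = estimate_skew(samples)
```

Once the slope is known, each measurement can be corrected by subtracting `skew * send_time`, leaving only the genuine transit-time variation.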

It is also possible for packets to be reordered within the network—for example, if the route changes and the new route is shorter. Paxson95 observed that a total of 2% of TCP data packets were delivered out of order, but that the fraction of out-of-order packets varied significantly between traces, with one trace showing 15% of packets being delivered out of order.
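Reordering is straightforward to detect when packets carry sequence numbers. A sketch, using a hypothetical arrival trace rather than real packet headers:

```python
def count_reordered(seqs):
    """Count packets that arrive after a higher-numbered packet.

    A packet is treated as out of order if its sequence number is
    smaller than the highest number already seen (a late arrival,
    for example after a route change).
    """
    highest = None
    reordered = 0
    for s in seqs:
        if highest is not None and s < highest:
            reordered += 1
        else:
            highest = s
    return reordered

# Packets 3 and 6 arrive late in this illustrative trace.
arrivals = [1, 2, 4, 3, 5, 7, 6, 8]
out_of_order = count_reordered(arrivals)
```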

Another characteristic that may be observed is "spikes" in the network transit time, such as those shown in Figure 2.12. It is unclear whether these spikes are due to buffering within the network or in the sending system, but they present an interesting challenge if one is attempting to smooth the arrival time of a sequence of packets.

Figure 2.12. Spikes in Network Transit Time (Adapted from Moon et al., 1998).91

Finally, the network transit time can show periodicity (as noted, for example, in Moon et al. 199889), although this appears to be a secondary effect. We expect this periodicity to have causes similar to those of the loss periodicity noted earlier, except that the events are less severe and cause queue buildup in the routers, rather than queue overflow and hence packet loss.

Acceptable Packet Sizes

The IP layer can accept variably sized packets, up to 65,535 octets in length, or up to a limit determined by the maximum transmission unit (MTU) of the links traversed. The MTU is the size of the largest packet that can be accommodated by a link. A common value is 1,500 octets, which is the largest packet an Ethernet can deliver. Many applications assume that they can send packets up to this size, but some links have a lower MTU. For example, it is not unusual for dial-up modem links to have an MTU of 576 octets.

In most cases the bottleneck link is adjacent to either sender or receiver. Virtually all backbone network links have an MTU of 1,500 octets or more.

IPv4 supports fragmentation, in which a large packet is split into smaller fragments when it exceeds the MTU of a link. It is generally a bad idea to rely on this, though, because the loss of any one fragment will make it impossible for the receiver to reconstruct the packet. The resulting loss multiplier effect is something we want to avoid.
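The loss multiplier effect is easy to quantify: if fragments are lost independently with probability p, the whole packet survives only when all n fragments arrive. A quick illustration (the numbers are examples, not measurements from the text):

```python
def effective_loss(p_fragment_loss, n_fragments):
    """Probability that at least one of n independently lost
    fragments is missing, making the whole packet unusable."""
    return 1.0 - (1.0 - p_fragment_loss) ** n_fragments

# With 1% fragment loss, a packet split into 4 fragments sees
# roughly 3.9% effective packet loss.
quadrupled = effective_loss(0.01, 4)
```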

In virtually all cases, audio packets fall within the network MTU. Video frames are larger, and applications should split them into multiple packets for transport so that each packet fits within the MTU of the network.
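A sketch of such application-level packetization, assuming an illustrative 1,500-octet MTU with 100 octets reserved for lower-layer and application headers (the exact budget depends on the protocol stack in use):

```python
MTU = 1500                     # illustrative Ethernet-sized MTU
HEADER_BUDGET = 100            # assumed room for IP, UDP, and app headers
MAX_PAYLOAD = MTU - HEADER_BUDGET

def packetize(frame: bytes, max_payload: int = MAX_PAYLOAD):
    """Split a video frame into payload-sized chunks, each of which
    fits within the network MTU after headers are added."""
    return [frame[i:i + max_payload]
            for i in range(0, len(frame), max_payload)]

chunks = packetize(bytes(30000))   # a 30,000-octet frame
```

Splitting at the application layer, rather than relying on IP fragmentation, means the loss of one packet costs only one chunk of the frame rather than the whole frame.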

Effects of Multicast

IP multicast enables a sender to transmit data to many receivers at the same time. It has the useful property that the network creates copies of the packet as needed, such that only one copy of the packet traverses each link. IP multicast provides for extremely efficient group communication, provided it is supported by the network, making the cost of sending data to a group of receivers independent of the size of the group.
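A receiver subscribes to a multicast group through the standard socket API. A minimal sketch, using an illustrative group address from the administratively scoped 239/8 range:

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004   # illustrative group and port

def join_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket and subscribe it to a multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # struct ip_mreq: the group address plus the local interface
    # (0.0.0.0 lets the kernel choose the interface).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

The sender needs no special membership: it simply sends datagrams to the group address, and the network replicates them to every member.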

Support for multicast is an optional, and relatively new, feature of IP networks. At the time of this writing, it is relatively widely deployed in research and educational environments and in the network backbone, but it is uncommon in many commercial settings and service providers.

Sending to a group means that more things can go wrong: Reception quality is no longer affected by a single path through the network, but by the path from the source to each individual receiver. The defining factor in the measurement of loss and delay characteristics for a multicast session is heterogeneity. Figure 2.13 illustrates this concept, showing the average loss rate seen at each receiver in a multicast session measured by the author.

Figure 2.13. Loss Rates in a Multicast Session

Multicast does not change the fundamental causes of loss or delay in the network. Rather it enables each receiver to experience those effects while the source transmits only a single copy of each packet. The heterogeneity of the network makes it difficult for the source to satisfy all receivers: Sending too fast for some and too slow for others is common. We will discuss these issues further in later chapters. For now it's sufficient to note that multicast adds yet more heterogeneity to the system.

Effects of Network Technologies

The measurements that have been presented so far have been of the public, wide area, Internet. Many applications will operate in this environment, but a significant number will be used in private intranets, on wireless networks, or on networks that support enhanced quality of service. How do such considerations affect the performance of the IP layer?

Many private IP networks (often referred to as intranets) have characteristics very similar to those of the public Internet: The traffic mix is often very similar, and many intranets cover wide areas with varying link speeds and congestion levels. In such cases it is highly likely that the performance will be similar to that measured on the public Internet. If, however, the network is engineered specifically for real-time multimedia traffic, it is possible to avoid many of the problems that have been discussed and to build an IP network that has no packet loss and minimal jitter.

Some networks support enhanced quality of service (QoS) using, for example, integrated services/RSVP11 or differentiated services.24 The use of enhanced QoS may reduce the need for packet loss and/or jitter resilience in an application because it provides a strong guarantee that certain performance bounds are met. Note, however, that in many cases the guarantee provided by the QoS scheme is statistical in nature, and often it does not completely eliminate packet loss, or perhaps more commonly, variations in transit time.

Significant performance differences can be observed in wireless networks. For example, cellular networks can exhibit significant variability in their performance over short time frames, including noncongestive loss, bursts of loss, and high bit error rates. In addition, some cellular systems have high latency because they use interleaving at the data link layer to hide bursts of loss or corruption.

The primary effect of different network technologies is to increase the heterogeneity of the network. If you are designing an application to work over a restricted subset of these technologies, you may be able to leverage the facilities of the underlying network to improve the quality of the connection that your application sees. In other cases the underlying network may impose additional challenges on the designer of a robust application.

A wise application developer will choose a robust design so that when an application moves from the network where it was originally envisioned to a new network, it will still operate correctly. The challenge in designing audiovisual applications to operate over IP is making them reliable in the face of network problems and unexpected conditions.

Conclusions about Measured Characteristics

Measuring, predicting, and modeling network behavior are complex problems with many subtleties. This discussion has only touched on the issues, but some important conclusions are evident.

The first point is that the network can and frequently does behave badly. An engineer who designs an application that expects all packets to arrive, and to do so in a timely manner, will have a surprise when that application is deployed on the Internet. Higher-layer protocols—such as TCP—can hide some of this badness, but there are always some aspects visible to the application.

Another important point to recognize is the heterogeneity in the network. Measurements taken at one point in the network will not represent what's happening at a different point, and even "unusual" events happen all the time. There were approximately 100 million systems on the Net at the end of 2000,118 so even an event that occurs to less than 1% of hosts will affect many hundreds of thousands of machines. As an application designer, you need to be aware of this heterogeneity and its possible effects.

Despite this heterogeneity, an attempt to summarize the discussion of loss and loss patterns reveals several "typical" characteristics:

Although some network paths may not exhibit loss, these paths are not usual in the public Internet. An application should be designed to cope with a small amount of packet loss—say, up to 5%.

Isolated losses of single packets make up most observed loss events.

The probability that a packet is lost is not uniform: Even though most losses are of isolated packets, bursts of consecutively lost packets are more common than can be explained by chance alone. Bursts of loss are typically short; an application that copes with two to three consecutively lost packets will suffice for most burst losses.

Rarely, long bursts of loss occur. Outages of a significant fraction of a second, or perhaps even longer, are not unknown.

Packet duplication is rare, but it can occur.

Similarly, in rare cases packets can be corrupted. The overwhelming majority of these are detected by the TCP or UDP checksum (if enabled), and the packet is discarded before it reaches the application.
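These loss characteristics can be extracted from a trace of received sequence numbers. A sketch that computes the length of each loss burst (the trace is hypothetical, and reordering is ignored for simplicity):

```python
def loss_runs(received, first, last):
    """Return the length of each run of consecutively lost packets
    between sequence numbers first and last, inclusive."""
    got = set(received)
    runs, current = [], 0
    for seq in range(first, last + 1):
        if seq in got:
            if current:
                runs.append(current)
            current = 0
        else:
            current += 1
    if current:
        runs.append(current)
    return runs

# Packets 3, 7, and 8 were lost: one isolated loss, one burst of two.
runs = loss_runs([1, 2, 4, 5, 6, 9, 10], first=1, last=10)
```

A histogram of these run lengths is exactly the kind of evidence behind the observation that most losses are isolated but short bursts occur more often than chance would predict.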

The characteristics of transit time variation can be summarized as follows:

Transit time across the network is not uniform, and jitter is observed.

The vast majority of jitter is reasonably bounded, but the tail of the distribution can be long.

Although reordering is comparatively rare, packets can be reordered in transit. An application should not assume that the order in which packets are received corresponds to the order in which they were sent.
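Jitter is commonly tracked with a running estimator. The sketch below uses the exponentially smoothed interarrival jitter calculation that RTP itself specifies (RFC 3550), applied to illustrative one-way delay figures:

```python
def update_jitter(jitter, transit_prev, transit_now):
    """RFC 3550 interarrival jitter: smooth the absolute difference
    in transit time between successive packets with gain 1/16."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Illustrative one-way transit times in seconds, including one "spike".
transits = [0.050, 0.052, 0.049, 0.060, 0.051]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
```

The 1/16 gain keeps the estimate stable: a single delay spike nudges the estimate upward but does not dominate it, which matches the observation that most jitter is bounded while the tail of the distribution is long.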

These are not universal rules, and for every one there is a network path that provides a counterexample.71 They do, however, provide some idea of what we need to be aware of when designing higher-layer protocols and applications.

Effects of Transport Protocols

Thus far, our consideration of network characteristics has focused on IP. Of course, programmers almost never use the raw IP service. Instead, they build their applications on top of one of the higher-layer transport protocols, typically either UDP or TCP. These protocols provide additional features beyond those provided by IP. How do these added features affect the behavior of the network as seen by the application?

UDP/IP

The User Datagram Protocol (UDP) provides a minimal set of extensions to IP. The UDP header is shown in Figure 2.14. It comprises 64 bits of additional header representing source and destination port identifiers, a length field, and a checksum.

Figure 2.14. Format of a UDP Header

The source and destination ports identify the endpoints within the communicating hosts, allowing for multiplexing of different services onto different ports. Some services run on well-known ports; others use a port that is dynamically negotiated during call setup. The length field is redundant with that in the IP header. The checksum is used to detect corruption of the payload and is optional (it is set to zero for applications that have no use for a checksum).
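The 8-octet header layout can be expressed directly with Python's struct module; the port numbers and payload size below are illustrative:

```python
import struct

def make_udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit UDP header fields in network byte order:
    source port, destination port, length, checksum. The length field
    covers the 8-octet header plus the payload; a zero checksum means
    "checksum not used"."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = make_udp_header(5004, 5005, payload_len=160)
src, dst, length, csum = struct.unpack("!HHHH", hdr)
```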

Apart from the addition of ports and a checksum, UDP provides the raw IP service. It does not provide any enhanced reliability to the transport (although the checksum does allow for detection of payload errors that IP does not detect), nor does it affect the timing of packet delivery. An application using UDP provides data packets to the transport layer, which delivers them to a port on the destination machine (or to a group of machines if multicast is used). Those packets may be lost, delayed, or misordered in transit, exactly as observed for the raw IP service.

TCP/IP

The most common transport protocol on the Internet is TCP. Although UDP provides only a small set of additions to the IP service, TCP adds a significant amount of additional functionality: It abstracts the unreliable packet delivery service of IP to provide reliable, sequential delivery of a byte stream between ports on the source and a single destination host.

An application using TCP provides a stream of data to the transport layer, which fragments it for transmission in appropriately sized packets, and at a rate suitable for the network. Packets are acknowledged by the receiver, and those that are lost in transit are retransmitted by the source. When data arrives, it is buffered at the receiver so that it can be delivered in order. This process is transparent to the application, which simply sees a "pipe" of data flowing across the network.

As long as the application provides sufficient data, the TCP transport layer will increase its sending rate until the network exhibits packet loss. Packet loss is treated as a signal that the bandwidth of the bottleneck link has been exceeded and the connection should reduce its sending rate to match. Accordingly, TCP reduces its sending rate when loss occurs. This process continues, with TCP continually probing the sustainable transmission rate across the network; the result is a sending rate such as that illustrated in Figure 2.15.
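This probing behavior is an additive-increase, multiplicative-decrease control law. A simplified sketch reproduces the characteristic sawtooth of Figure 2.15; all numbers are illustrative, and real TCP operates on a window of packets rather than an explicit rate:

```python
def aimd(capacity, rounds, increase=1.0, decrease=0.5, rate=1.0):
    """Additive-increase, multiplicative-decrease rate probing.

    Each round the rate grows by a constant; when it exceeds the
    bottleneck capacity (which the network signals with packet loss),
    it is cut multiplicatively.
    """
    history = []
    for _ in range(rounds):
        history.append(rate)
        if rate > capacity:
            rate *= decrease        # loss observed: back off
        else:
            rate += increase        # no loss: keep probing upward
    return history

rates = aimd(capacity=10.0, rounds=50)
```

Plotting `rates` yields the familiar sawtooth: a steady climb to just past the bottleneck capacity, a sharp drop, and the cycle repeating.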

Figure 2.15. Sample TCP Sending Rate

This combination of retransmission, buffering, and probing of available bandwidth has several effects:

TCP transport is reliable, and if the connection remains open, all data will eventually be delivered. If the connection fails, the endpoints are made aware of the failure. This is in contrast to UDP, which offers the sender no information about the delivery status of the data.

An application has little control over the timing of packet delivery because there is no fixed relation between the time at which the source sends data and the time at which that data is received. This variation differs from the variation in transit time exhibited by the raw IP service because the TCP layer must also account for retransmission and variation in the sending rate. The sender can tell if all outstanding data has been sent, which may enable it to estimate the average transmission rate.

Bandwidth probing may cause short-term overloading of the bottleneck link, resulting in packet loss. When this overloading causes loss to the TCP stream, that stream will reduce its rate; however, it may also cause loss to other streams in the process.

Of course, there are subtleties to the behavior of TCP, and much has been written on this subject. There are also some features that this discussion has not touched upon, such as push mode and urgent delivery, but these do not affect the basic behavior. What is important for our purpose is to note the fundamental difference between TCP and UDP: the trade-off between reliability (TCP) and timeliness (UDP).
