
The Traditional Packet Data Network

There is a general consensus in the industry that the structures and protocols associated with the traditional packet network are obsolete in the new technological (and economic) environment. For the purpose of this discussion the Wide Area Networking parts of "Subarea" SNA and of APPN can be included among the plethora of "X.25" based packet switching networks.

1.1.4.1 Objectives

It is important to consider first the aims underlying the architecture of the traditional packet network.

1. The most obvious objective is to save cost on expensive, low speed communication lines by statistically multiplexing many connections onto the same line. This is really to say that money is saved on the lines by spending money on networking equipment (nodes).

For example, in SNA there are extensive flow and congestion controls which when combined with the use of priority mechanisms enable the operation of links at utilisations above 90%. These controls have a significant cost in hardware and software which is incurred in order to save the very high cost of links.

As the cost of long line bandwidth decreases there is less and less to be gained from optimisation of this resource.

2. Provide a multiplexed interface to end user equipment so that an end user can have simultaneous connections with many different destinations over a single physical interface to the network.

Chapter 1. Introduction 3

3. Provide multiple paths through the network to enable recovery should a single link or node become unavailable.

1.1.4.2 Internal Network Operation

There seem to be as many different ways of constructing a packet network as there are suppliers of such networks. The only feature that the commodity "X.25 Networks" have in common is their interface to the end user - internally they differ radically from one another. Even in SNA, the internal operations of subarea networks and of APPN are very different from one another. That said, there are many common features:

Hop-by-Hop Error Recovery

Links between network nodes use protocols such as SDLC or LAPB which detect errors and cause retransmission of frames received in error.

Implicit Rate Control

Because the data link is far slower than any computer device (both the end user devices and the network nodes could handle all of the data that any link was capable of transmitting), the link provides implicit control over the rate at which data can be delivered to the network. (This is separate from the explicit controls contained in the link control.)

Software Routing

Software is used for handling the logic of link control, for making routing decisions on arriving data packets and for manipulating queues.

Connection Orientation

Most (but not all) networks are based on the concept of an end-to-end connection passing through a set of network nodes. In X.25 these are called virtual circuits; in SNA there are routes and sessions. Within "intermediate nodes" a record is typically kept of each connection, and this record is used by the software to determine the destination to which each individual packet must be directed.
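The per-connection record keeping described above can be sketched as a small switching table. This is only an illustrative model, not any particular product's implementation; the port and circuit identifiers are hypothetical:

```python
# Sketch of connection-oriented (virtual circuit) switching in an
# intermediate node. A record is kept per connection: an incoming
# (port, circuit id) pair maps to an outgoing (port, circuit id) pair,
# so each packet need only carry a short circuit id, not a full address.

class VirtualCircuitNode:
    def __init__(self):
        self.table = {}  # (in_port, in_vc) -> (out_port, out_vc)

    def setup(self, in_port, in_vc, out_port, out_vc):
        # Record created once, at connection setup time.
        self.table[(in_port, in_vc)] = (out_port, out_vc)

    def switch(self, in_port, in_vc, payload):
        # Per-packet decision: a software table lookup.
        out_port, out_vc = self.table[(in_port, in_vc)]
        return out_port, out_vc, payload

node = VirtualCircuitNode()
node.setup(in_port=1, in_vc=7, out_port=3, out_vc=12)
print(node.switch(1, 7, b"user data"))
```

The point of the sketch is that the per-packet work is a lookup performed in software, which is why (as noted under Throughput below) such nodes top out at a few thousand packets per second.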

Throughput

Based on the link speeds available in the 1970s and 1980s, the fastest available packet switching nodes have maximum throughput rates of a few thousand packets per second.

Network Stability

In SNA and APPN (though not by any means in all packet networks) there is an objective to make the network as stable internally as possible thereby removing the need for attaching devices to operate stabilising protocols.

In SNA there is no end-to-end error recovery protocol across the network.1 In the environment of the 1970s and early 1980s this would have involved crippling extra cost (storage, instruction cycles and messages) in every attaching device. Instead, extra cost was incurred within the network nodes because there are very few network nodes compared to the number of attaching devices and the total system cost to the end user was minimised.

1 That is, there is no ISO "layer 4 class 4" protocol.

With recent advances in microprocessors and reductions in the cost of slow speed memories (DRAMs), the cost of operating a stabilising protocol within attaching equipment (or at the endpoints of the network) has reduced considerably.

Packet Size

Blocks of user data offered for transmission vary widely in size. If these blocks are broken up into many short "packets" then the transit delay for the whole block across the network will be considerably shorter. This is because when a block is broken up into many short packets, each packet can be processed by the network separately and the first few packets of a block may be received at the destination before the last packet is transmitted by the source.
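The pipelining effect described above can be put into rough numbers. The block size, packet size, hop count and line speed below are purely illustrative, and propagation delay and header overhead are ignored:

```python
# Rough transit-delay arithmetic for store-and-forward pipelining.
# All figures (block size, packet size, hops, line speed) are illustrative.

def transit_delay(block_bytes, packet_bytes, hops, line_bps):
    """Seconds to move a block across `hops` store-and-forward links,
    ignoring propagation delay and per-packet header overhead."""
    n_packets = -(-block_bytes // packet_bytes)  # ceiling division
    packet_time = packet_bytes * 8 / line_bps    # serialisation time per hop
    # The first packet crosses every hop; the rest pipeline behind it.
    return hops * packet_time + (n_packets - 1) * packet_time

# A 4000-byte block over 3 hops of 64 kbps:
print(transit_delay(4000, 4000, hops=3, line_bps=64_000))  # unsegmented
print(transit_delay(4000, 500, hops=3, line_bps=64_000))   # segmented
```

With these figures the unsegmented block takes 1.5 s end to end, while segmenting it into 500-byte packets reduces the transit delay to 0.625 s, because later hops start forwarding before the source has finished sending.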

Limiting all data traffic to a small maximum length also has the effect of smoothing out queueing delays in intermediate nodes and thus providing a much more even transit delay characteristic than is possible if blocks are allowed to be any length. There are other benefits to short packets: on error-prone links a short packet size often gave the best data throughput, because when a block was found to be in error there was less data that had to be retransmitted.
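The retransmission trade-off can be sketched with a little arithmetic. The bit error rate and header overhead figures here are hypothetical, chosen only to show that an optimum packet size exists:

```python
# Illustrative link efficiency vs packet size on an errored link.
# With a random bit error rate `ber`, a packet of `data_bytes` plus
# `overhead_bytes` of header survives with probability (1 - ber)**bits.
# A corrupted packet is retransmitted in full, so the expected useful
# throughput per packet sent is p_ok * data / (data + overhead).

def efficiency(data_bytes, overhead_bytes, ber):
    bits = 8 * (data_bytes + overhead_bytes)
    p_ok = (1 - ber) ** bits
    return p_ok * data_bytes / (data_bytes + overhead_bytes)

for size in (128, 512, 2048, 8192):
    print(size, round(efficiency(size, 6, 1e-5), 3))
```

Very small packets waste capacity on headers; very large packets are lost (and retransmitted) too often. The optimum sits in between and moves upward as link error rates fall.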

However, there is a big problem with short packet sizes. It is a characteristic of the architecture of traditional packet switching nodes that switching a packet takes a certain amount of time (or number of instructions) regardless of the packet's length. Because the processors in these nodes are (relatively) slow, this per-packet cost is the most significant limitation on network throughput.

It should be pointed out that SNA networks do not break data up into short blocks for internal transport. At the boundary of the network there is a function (segmentation) which breaks long data blocks up into shorter segments.

Every professional involved in data communication knows (or should know) the mathematics of the single server queue. Whenever a resource is used for a variable length of time by many requesters arriving (more or less) at random, the service any particular requester receives is determined by a highly predictable but, to many people, unexpected result. (This applies to people queueing for a supermarket checkout just as much as to messages queueing for transmission on a link.)


As the utilisation of the server gets higher the length of the queue increases. If requests arrive at random, then at 70% utilisation the average queue length will be about 3. As utilisation of the resource approaches 100% the length of the queue tends towards infinity!

Nodes, links and buffer pools within communication networks are servers and messages within the network are requesters.
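For random (Poisson) arrivals and exponential service times, the single server queue result appealed to above is the M/M/1 formula: the average number of requesters in the system is L = ρ/(1 − ρ), where ρ is the utilisation. A minimal sketch (the formula gives roughly 2.3 at 70% utilisation, consistent with the text's round figure of "about 3", and diverges as ρ approaches 1):

```python
# M/M/1 single-server queue: with utilisation rho, the average number
# of requesters in the system (queueing plus in service) is
# L = rho / (1 - rho), which tends to infinity as rho approaches 1.

def mm1_number_in_system(rho):
    if not 0 <= rho < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return rho / (1 - rho)

for rho in (0.5, 0.7, 0.9, 0.99):
    print(f"utilisation {rho:.0%}: average in system {mm1_number_in_system(rho):.1f}")
```

The steep growth near full utilisation is exactly why strict flow and congestion control is needed before links can be run at the 90%-plus utilisations mentioned earlier.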

The short result of the above is that unless there is strict control of data flow and congestion within the network, the network will not operate reliably. (Since a network with an average utilisation of 10% may still have peaks where utilisation of some resources exceeds 90%, this applies to all traditional networks.) In SNA there are extensive flow and congestion control mechanisms. In an environment of very high bandwidth cost this is justified because these control mechanisms enable the use of much higher resource utilisations.

When bandwidth cost becomes very low then some people argue that there will be no need for congestion control at all (for example, if no resource is ever utilised at above 30% then it is hard to see the need for expensive control mechanisms). It is the view of this author that congestion and flow control mechanisms will still be needed in the very fast network environment, but that these protocols will be very different from those in operation in today's networks.