5.3 SWITCHING MECHANISMS IN MESSAGE PASSING

A switching mechanism determines how data is removed from an input channel and placed on an output channel. Network latency is highly dependent on the switching mechanism used. A number of switching mechanisms are in use; these include store-and-forward, circuit switching, virtual cut-through, wormhole, and pipelined circuit switching. In this section, we study some of these techniques.

In circuit-switching networks, the path between the source and destination is first determined, all links along that path are reserved, and no buffers are needed in each node. After data transfer, the reserved links are released for use by other messages. An important characteristic of the circuit-switching technique is that the source and destination are guaranteed a certain bandwidth and maximum latency once communication is established between them. This static bandwidth allocation, regardless of actual use, is the main drawback of the circuit-switching approach. However, static bandwidth allocation leads to a simple buffering strategy. In addition, circuit-switching networks are characterized by having the smallest amount of delay. This is because message routing overhead is incurred only when the circuit is set up; subsequent messages suffer no, or minimal, additional delay. Therefore, circuit-switching networks can be used to advantage when a large number of messages are transferred.
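The following is a minimal sketch of the reserve-transfer-release cycle just described; all function and link names are hypothetical and chosen only for illustration.

```python
# Hypothetical sketch of circuit switching: reserve every link on the path, stream the
# data with no per-hop routing decisions, then release the links for other messages.

def circuit_switched_send(path_links, reserved_links, message_chunks):
    """path_links: link ids from source to destination; reserved_links: shared set."""
    # Set-up phase: the circuit can only be established if the whole path is free.
    if any(link in reserved_links for link in path_links):
        return False                              # another circuit holds part of the path
    reserved_links.update(path_links)             # reserve the full path (static allocation)
    for _chunk in message_chunks:                 # data phase: chunks stream over the circuit
        pass                                      # (no routing overhead after set-up)
    reserved_links.difference_update(path_links)  # tear-down: release the links
    return True

reserved = {"A-B"}                                # link A-B already held by another circuit
print(circuit_switched_send(["X-A", "A-B", "B-Z"], reserved, [b"payload"]))  # False
```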

The store-and-forward switching mechanism provides an alternative data transfer scheme. The main idea is to offer dynamic bandwidth allocation to messages as they flow through the network, thus avoiding the main drawback of the circuit-switching mechanism. Two main types of store-and-forward networks are common: packet-switched and virtual cut-through networks.

In packet-switched networks, each message is divided into smaller fixed-size parts, called packets, before being transmitted. Each node must contain enough buffers to hold received packets before transmitting them. A complete path from source to destination may not be available at the start of transmission. As links become available, packets are moved from node to node until they reach the destination node. Since packets are routed separately through the network, they may follow different paths to the destination node. This may lead to packets arriving out of order at the destination. Therefore, an end-to-end message assembly scheme is needed, incurring additional overhead. Packet-switched networks also suffer from the need for routing overhead for each packet, rather than each message, sent into the network. In addition to dynamically allocating bandwidth, packet-switched networks have the advantage of reduced buffer requirements in each node.
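As an illustrative sketch only (the helper names are assumptions, not from the text), the fragment below shows why per-packet sequence numbers and end-to-end reassembly are needed when packets may follow different paths.

```python
# A message is split into fixed-size packets, each tagged with a sequence number so the
# destination can reassemble the message even if packets arrive out of order.

def packetize(message: bytes, packet_size: int):
    return [(seq, message[i:i + packet_size])
            for seq, i in enumerate(range(0, len(message), packet_size))]

def reassemble(packets):
    # End-to-end assembly: sort by sequence number, then concatenate the payloads.
    return b"".join(payload for _seq, payload in sorted(packets))

packets = packetize(b"store-and-forward example message", packet_size=8)
packets.reverse()                                 # simulate out-of-order arrival
assert reassemble(packets) == b"store-and-forward example message"
```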

In virtual cut-through, a packet is stored at an intermediate node only if the next required channel is busy. Virtual cut-through is similar to the packet-switching technique, with the following difference: in contrast to packet switching, when a packet arrives at an intermediate node and its selected outgoing channel is free, the packet is sent out to the adjacent node toward its destination before it is completely received. Therefore, the delay due to unnecessary buffering in front of an idle channel is avoided.
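A hedged sketch of that per-node decision is given below; the function and argument names are hypothetical.

```python
# Virtual cut-through at an intermediate node: if the selected outgoing channel is free,
# forwarding starts while the packet is still arriving; only a blocked packet is stored
# in full, as in store-and-forward.

def cut_through_decision(outgoing_channel_busy: bool, node_buffer: list, packet) -> str:
    if not outgoing_channel_busy:
        return "forward immediately"              # header leaves before the tail has arrived
    node_buffer.append(packet)                    # buffer the complete packet at this node
    return "buffered"

print(cut_through_decision(False, [], "pkt-1"))   # forward immediately
print(cut_through_decision(True, [], "pkt-2"))    # buffered
```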

In order to reduce the size of the required buffers and decrease the incurred network latency, a technique called wormhole routing has been introduced. Here, a packet is divided into smaller units called flits (flow control bits). These flits move in a pipelined fashion, with a header flit leading the way to the destination node. When the header flit is blocked due to network congestion, the remaining flits are blocked as well. Only a buffer that can store a single flit is required at each node for successful operation of the wormhole routing technique. The technique is known to produce a latency that is nearly independent of the path length, and it requires less storage at all nodes compared to the store-and-forward packet-switching technique.
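The following is a rough, heavily simplified sketch of this behavior; the helper names and the one-flit-per-hop buffer model are assumptions made for illustration.

```python
# Wormhole flow control: a packet is split into flits, each hop needs only a one-flit
# buffer, and when the header flit is blocked every flit behind it stalls in place.

def split_into_flits(packet: bytes, flit_size: int):
    header = b"HDR"                               # header flit carries the routing information
    body = [packet[i:i + flit_size] for i in range(0, len(packet), flit_size)]
    return [header] + body

def advance_one_cycle(path_buffers, header_blocked: bool):
    """path_buffers[i] is the one-flit buffer at hop i (last entry is next to the
    destination). Returns the new buffer state and the flit delivered this cycle."""
    if header_blocked:
        return path_buffers, None                 # the whole "worm" stalls behind its header
    delivered = path_buffers[-1]                  # flit at the final hop reaches the destination
    return [None] + path_buffers[:-1], delivered  # every other flit moves one hop forward
```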

Figures 5.5 and 5.6 illustrate the difference in performance between the store-and-forward (SF) and wormhole (WH) routing in terms of communication latency.

Figure 5.5 Communication latency in the store-and-forward (SF) technique.

In these figures, L represents the packet length in bits, W represents the channel bandwidth in bits/cycle, D is the number of channels (hops) traversed between source and destination, and Tc is the cycle time.

As can be seen from the figures, the latency of the SF and that of the WH are given respectively by

TSF = Tc (L/W) D    and    TWH = Tc (L/W + D)
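As a purely hypothetical worked example, take L = 1024 bits, W = 16 bits/cycle, D = 8 channels along the path, and Tc = 1 cycle. Then TSF = 1 × (1024/16) × 8 = 512 cycles, whereas TWH = 1 × (1024/16 + 8) = 72 cycles; this illustrates why wormhole latency is nearly insensitive to path length whenever L/W dominates D.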

Table 5.1 shows an overall comparison of a number of switching mechanisms.

Figure 5.6 Communication latency in the wormhole (WH) technique.

TABLE 5.1 Comparison Among a Number of Switching Techniques

Switching mechanism    Advantages                               Disadvantages
Circuit switching      1. Suitable for long messages            1. Wasting of bandwidth
                       2. Deadlock-free
Store-and-forward      1. Simple                                1. Buffer for every packet
                       2. Suitable for interactive traffic      2. Potential long latency
                       3. Bandwidth on demand                   3. Potential deadlock
Virtual cut-through    1. Good for long messages                1. Need for multiple message buffers
                       2. Possible deadlock avoidance           2. Wasting of bandwidth
                       3. Elimination of data-link protocol     3. Mainly used with profitable routing
Wormhole               1. Good for long messages                1. Possibility for deadlock
                       2. Reduced need for buffering            2. Inability to support backtracking
                       3. Reduced effect of path length

5.3.1 Wormhole Routing in Mesh Networks

An n-dimensional mesh is defined as the interconnection structure that has K0 × K1 × ... × Kn-1 nodes, where n is the number of dimensions of the network and Ki is the radix of dimension i. Each node is identified by an n-coordinate vector (x0, x1, ..., xn-1), where 0 ≤ xi ≤ Ki - 1. A number of routing techniques have been used for mesh networks. These include dimension-ordered, dimension reversal, turn model, and message flow model routing. In the following, we introduce dimension-ordered, or X-Y, routing.
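The short sketch below restates this addressing scheme in code; the helper names are hypothetical and not part of the text.

```python
# K[i] is the radix of dimension i; a node is the vector (x_0, ..., x_{n-1})
# with 0 <= x_i <= K[i] - 1.

from math import prod

def node_count(K):
    return prod(K)                                # K_0 * K_1 * ... * K_{n-1} nodes in total

def is_valid_node(x, K):
    return len(x) == len(K) and all(0 <= xi < Ki for xi, Ki in zip(x, K))

K = [8, 8]                                        # the 8 x 8 two-dimensional mesh of Figure 5.7
print(node_count(K))                              # 64
print(is_valid_node((7, 3), K))                   # True
print(is_valid_node((8, 3), K))                   # False: coordinate exceeds the radix
```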

Dimension-Ordered (X-Y) Routing   A channel numbering scheme often used in n-dimensional meshes is based on the dimension of channels. In dimension-ordered routing, each packet is routed in one dimension at a time, arriving at the proper coordinate in each dimension before proceeding to the next dimension. By enforcing a strictly monotonic order on the dimensions traversed, deadlock-free routing is guaranteed. In a two-dimensional mesh, each node is represented by its position (x, y); packets are first sent along the x-dimension and then along the y-dimension, hence the name X-Y routing.

In X-Y routing, messages are first sent along the X-dimension and then along the Y-dimension. In other words, at most one turn is allowed, and that turn must be from the X-dimension to the Y-dimension. Let (sx, sy) and (dx, dy) denote the addresses of a source and destination node, respectively. Assume also that (gx, gy) = (dx - sx, dy - sy). X-Y routing can be implemented by placing gx and gy in the first two flits, respectively, of the message. When the first flit arrives at a node, it is decremented or incremented, depending on whether it is greater than 0 or less than 0. If the result is not equal to 0, the message is forwarded in the same direction in which it arrived. If the result equals 0 and the message arrived on the Y-dimension, the message is delivered to the local node. If the result equals 0 and the message arrived on the X-dimension, the flit is discarded and the next flit is examined on arrival. If that flit is 0, the packet is delivered to the local node; otherwise, the packet is forwarded in the Y-dimension. Figure 5.7 shows an example of X-Y routing between a source node and a destination node in an 8 × 8 mesh network.
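The fragment below is a hedged, coordinate-based restatement of the same routing decision (equivalent in effect to the offset-in-header scheme described above); the port names and helper functions are assumptions made for the example.

```python
# X-Y routing: fully correct the X coordinate first, then the Y coordinate, so at most
# one X-to-Y turn is ever taken.

def xy_route(current, destination):
    (x, y), (dx, dy) = current, destination
    if x != dx:
        return "+X" if dx > x else "-X"           # still correcting the X dimension
    if y != dy:
        return "+Y" if dy > y else "-Y"           # X done; now correct the Y dimension
    return "DELIVER"                              # both coordinates match: local delivery

# Trace a packet across an 8 x 8 mesh from (1, 2) to (5, 6).
STEP = {"+X": (1, 0), "-X": (-1, 0), "+Y": (0, 1), "-Y": (0, -1)}
node, hops = (1, 2), []
while (port := xy_route(node, (5, 6))) != "DELIVER":
    hops.append(port)
    node = (node[0] + STEP[port][0], node[1] + STEP[port][1])
print(hops)   # ['+X', '+X', '+X', '+X', '+Y', '+Y', '+Y', '+Y']: four X hops, then four Y hops
```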

5.3.2 Virtual Channels

The principle of virtual channels was introduced in order to allow the design of deadlock-free routing algorithms. Virtual channels provide an inexpensive way to increase the number of logical channels without adding more wires. A number of adaptive routing algorithms are based on the use of virtual channels.

A network without virtual channels is composed of single-lane streets. Adding virtual channels to an interconnection network is analogous to adding lanes to a street network, thus allowing blocked messages to be passed. In addition to increasing throughput, virtual channels provide an additional degree of freedom in allocating resources to messages in a network. Consider the simple network shown in Figure 5.8.

In this case, two paths X-A-B-Z and Y-A-B-W share the common link AB. It is, therefore, required to multiplex link AB between the two paths (two lanes). A provision is also needed such that data sent over the first path (lane) is sent from X to Z and not to W, and similarly data sent over the second path (lane) is sent from Y to W and not to Z. This can be achieved if we assume that each physical link is actually divided into a number of unidirectional virtual channels. Each channel can carry data for one virtual circuit (one path). A circuit (path) from one node to another consists of a sequence of channels on the links along the path between the two nodes.

Figure 5.7 Dimension-ordered (X-Y) routing in an 8 × 8 mesh network.

Figure 5.8 Path multiplexing through the same link.

When data is sent from node A to node B, node B has to determine the circuit associated with the data so that it can decide whether it should route the data to node Z or to node W. One way to provide such information is to divide the AB link into a fixed number of time slots and statically assign each time slot to a channel. This way, the time slot on which the data arrives identifies the sending channel and can therefore be used to direct the data to the appropriate destination.
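The following fragment is a small, hedged illustration of this time-slot scheme for the network of Figure 5.8; the slot assignments and helper names are assumptions made for the example, not part of the text.

```python
# Link A-B is divided into a fixed number of time slots, each statically bound to one
# virtual channel, so node B can demultiplex arriving data to Z or W purely from the
# slot it arrived on.

SLOT_TO_CIRCUIT = {0: "X-A-B-Z", 1: "Y-A-B-W"}        # static slot -> virtual channel map
CIRCUIT_TO_OUTPUT = {"X-A-B-Z": "Z", "Y-A-B-W": "W"}  # where node B forwards each circuit

def route_at_b(cycle: int, data, num_slots: int = 2):
    slot = cycle % num_slots                           # the time slot this cycle falls in
    circuit = SLOT_TO_CIRCUIT[slot]                    # the slot identifies the sending channel
    return CIRCUIT_TO_OUTPUT[circuit], data

print(route_at_b(cycle=4, data="from X"))              # ('Z', 'from X')  -- slot 0
print(route_at_b(cycle=7, data="from Y"))              # ('W', 'from Y')  -- slot 1
```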

One of the advantages of the virtual channel concept is deadlock avoidance. This can be done by assigning a few flits of buffering per node. When a packet arrives on a virtual channel, it is placed in the buffer and sent along during the appropriate time slot.