
N. Benzaoui, Y. Pointurier, T. Bonald, J.-C. Antona. Impact of the Electronic Architecture of Optical Slot Switching Nodes on Latency in Ring Networks. Journal of Optical Communications and Networking, IEEE / Optical Society of America, 2014. doi:10.1364/JOCN.6.000718. HAL Id: hal-01132320, https://hal.inria.fr/hal-01132320.


Impact of the electronic architecture of optical slot switching nodes on latency in ring networks

N. Benzaoui, Y. Pointurier, T. Bonald, J.-C. Antona

Abstract—Optical slot switching has been proposed as a flexible solution for metropolitan ring networks to carry data traffic with a sub-wavelength switching granularity and with good energy efficiency, enabled by optical transparency. In this paper, we propose for the first time several architectures for the electronic side of optical slot switching nodes to increase flexibility through the addition of electronic switches, working either at client packet granularity or at slot granularity; such electronic switches can be located at the transmitter side, the receiver side, or both sides of a node, thereby decreasing traffic latency at the expense of increased node cost and/or energy consumption. This paper focuses on the latency aspect. We investigate the impact of a timer that can be used to upper-bound the slot insertion time on the medium. We also propose a novel client packet queuing model for an optical slot switching ring, and assess and compare the latency of these node architectures analytically and by simulation.

Index Terms—Optical packet switching; network performance; queuing model.

I. INTRODUCTION

The emergence and rapid growth of data traffic sources, such as CDNs (Content Delivery Networks), nomadic users and mobile applications, make Internet traffic unpredictable and bursty [1], especially in the access and metro segments. In this context, optical slot switching technologies were proposed for metro ring networks, to provide sub-wavelength switching flexibility in the optical domain [2]–[5]. For instance, "POADM" (packet optical add-drop multiplexers) [2] is a Wavelength-Division Multiplexed (WDM) time-slotted ring. It is composed of several (e.g., 40) data channels and one "control channel" which carries all slot headers and is electronically processed at each node. Each node is connected to one or several clients (e.g., an Optical Line Terminal of a Passive Optical Network, an Ethernet switch, etc.). Data coming from/going to the clients is encapsulated/decapsulated within fixed-size packets called slots, with a duration of a few µs, that can transit transparently, i.e., they are not converted to the electronic domain at the intermediate nodes [2]. Each node decides whether to drop or to let transit each slot on each wavelength according to the information carried by the control channel. This transparency reduces energy consumption [6]. Each POADM node includes one or several burst-mode fixed-wavelength receivers (Rx), making the nodes able to receive data on predefined wavelengths, and one or several fast wavelength-tunable transmitters (Tx), which enable each node to communicate with every other node by tuning to the right wavelength on a per-slot basis [7].

N. Benzaoui, Y. Pointurier and J.-C. Antona are with Alcatel-Lucent Bell Labs, Nozay, France.

T. Bonald is with Telecom ParisTech and LINCS, Paris, France.

The POADM technology has received much attention in the past regarding network dimensioning [8], [9], energy consumption [6], technological issues [7], and MAC, performance and QoS management [10]–[17]. However, mapping issues between clients and nodes were never considered. In a plain, basic POADM node, clients are statically mapped to transponders (TRX; a transponder is a Tx and an Rx) or line cards. However, allowing a client to use any Tx or Rx at a given node provides more flexibility to send/receive data, and hence has a significant impact on network dimensioning and performance, measured for instance in terms of latency. Previous performance studies for optical slot switching networks [10]–[17] considered client packet aggregation into slots and slot insertion on the medium as separate processes, making assumptions on the latter that are uncorrelated with the output of the aggregation mechanism. We propose for the first time an end-to-end performance analysis that includes both client packet aggregation and slot insertion.

The contribution of this paper is two-fold. First, we propose several optical slot switching node architectures enabling dynamic client/WDM TRX mapping, at several granularities (client packet or optical slot). Second, we assess the performance of each node architecture, essentially in terms of queuing delay, by analysis and simulation. The analytical performance evaluation is done using a new model based on queuing theory. Note that this paper applies to any WDM slotted ring where nodes are equipped with tunable transmitters and fixed-wavelength receivers; POADM is one possible implementation of this concept. This paper is also applicable to optical slot switching variations of the POADM concept such as [3]–[5]. Also, fairness issues, which could be handled by a specific MAC, are out of the scope of this paper. Note that, as shown by [18], there is no fairness problem as long as the network is correctly dimensioned, i.e., to carry a load close to 1 without loss.

This paper is organized as follows. In Section II, we present the basic and alternate architectures of our optical slot switching node. In Section III we describe the optical slot filling and insertion process. Latencies for those architectures are derived analytically in Section IV and validated using simulations in Section V. We give in Section VI the most important conclusions of the performance studies.

II. NODE ARCHITECTURES

A POADM node is able to encapsulate and decapsulate client traffic into fixed-duration optical packets or "slots". In addition, a POADM node can add, drop, and let optical slots transit on an optical ring.

Fig. 1. Optical slot switching ring, 4 nodes and 2 clients per node.

Fig. 2. POADM node architectures: (a) basic node architecture; (b) node architecture with client-granularity switch (c-switch) at transmitter; (c) node architecture with c-switch at receiver; (d) node architecture with slot-granularity switch (s-switch) at receiver.

Hence, a POADM node consists of: layer 2 (L2) client cards, for client traffic handling (essentially, en/decapsulation and, optionally, QoS management); an optical packet blocking fabric with the relevant controller; and line cards (TRX) directly connected to the metro ring to transmit and receive optical slots. We focus here solely on the client and line cards of POADM nodes, and we propose several solutions to electrically interconnect them within a node. Fig. 1 depicts the reference network for this study and Fig. 2 the investigated node architectures. The network described in Fig. 1 is a ring with 4 POADM nodes ($N_n$, with node index n = 1, ..., 4) and 2 clients per node ($C_{n,i}$, with node index n and client index i = 1 or 2). For the sake of clarity, packet blockers located between the transmitters and the receivers are not depicted; see [2] for details. This ring supports two wavelengths, $w_1$ and $w_2$.

A. Basic node architecture

In the first architecture, each client is mapped via a (L2) client card to a TRX (line card), as illustrated in Fig. 2a. Recall that, with the tunable Tx on the line side, a client can send its traffic on any wavelength, but the client is always mapped to a predefined wavelength for reception. In Fig. 1, for example, if client $C_{2,2}$ on node $N_2$ is mapped to wavelength $w_2$, then $C_{1,2}$ must use $w_2$ to reach client $C_{2,2}$ on $N_2$. Since each pair (client card, line card) is independent, an optical slot can be formed only with packets from the same source client to the same destination client; thus in the previous example an optical slot carrying traffic from $C_{1,2}$ to $C_{2,2}$ only contains packets from $C_{1,2}$ to $C_{2,2}$.

B. Client packet electronic switch (c-switch)

To enhance the flexibility of the previous architecture and remove the fixed mapping between client and wavelength of the basic architecture, we propose to include, at the transmitter side, the receiver side, or both, an electronic switch working at client packet granularity, further denoted by "c-switch". For instance, if the metro ring carries Ethernet traffic, those switches could be implemented with Ethernet switching fabrics. Note that this enhanced flexibility (and improved performance, as will be seen in Section IV) comes at the cost of adding, at each POADM node, one or two electronic switches working at the client packet granularity¹. When a client packet switch is used at the transmitter side (Fig. 2b), the same line card is shared between several clients (of the same node), such that packets coming from different clients can be encapsulated in the same optical slot, under the condition that these packets have the same (client and node) destination. In the other configuration, the electronic client packet switch is placed at the receiver side (Fig. 2c), such that several clients can receive traffic encapsulated in the same optical slot. In this case, the optical slots are filled with packets coming from a single client on a source node, going to several clients on the destination node.

C. Slot electronic switch (s-switch)

We also propose to consider a less expensive (due to a simpler switching fabric) and less energy-consuming² electronic switch operating at slot granularity rather than client packet granularity. This switch is further denoted by "s-switch" and is placed between the client and the line cards, as illustrated in Fig. 2d, to ensure a flexible mapping between clients and wavelengths. Note that fast (slot-granularity) wavelength tunability, already present at the Tx side, provides the same functionality as an electronic slot switch; therefore the scenario with an electronic slot switch at Tx is not considered here, and only scenarios where this switch is located at the Rx side are considered. This switch enables the mapping of client cards to line cards at the slot (rather than client packet) granularity: it can be seen as an intermediate scenario between the no-switch case (Section II-A) and the client packet switch case (Section II-B).

D. Combinations of c-switches and s-switches

In addition to the aforementioned scenarios, we can also consider a combination of electronic switches at client packet granularity (at Tx, Fig. 2b; and/or Rx side, Fig. 2c) and at slot granularity (at Rx side, Fig. 2d), resulting in 8 possible node architectures overall.

¹ The introduction of electronic switches increases the energy consumption and the cost of the POADM equipment. Although today, for low data rates, the cost of the electronic technology can be neglected compared to the optical technology, it is nevertheless an additional cost to take into account.

² The s-switch demands less computational power; the power consumption of electronic switches typically depends on the number of decisions per second, and by design a c-switch performs K decisions when an s-switch performs only one, where K is the average number of client packets per slot.

III. OPTICAL SLOT FORMATION AND INSERTION PROCESS

In this section we describe the operation of a POADM ring at the client packet level and take it as a basis for the analytical model in Section IV. We assume that all client packets have the same size (in bits), taking as a reference the benchmarking methodology of [19]. After being sent by the clients, the packets are placed in a temporary packet queue waiting for the completion of the optical slot formation process. These client queues are emptied after the arrival of K packets, K being the number of packets necessary to fill an optical slot:
$$ K = \left\lfloor \frac{\text{slot size [in bits]}}{\text{packet size [in bits]}} \right\rfloor. \qquad (1) $$
This implies that the first K − 1 packets must wait for the arrival of the K-th packet before being sent. One of the purposes of this paper is to show the impact of this optical slot filling time, denoted by OSFT, on the overall optical slot switching network latency.
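As an illustration of (1), the short Python sketch below computes K for parameter values close to those used later in Section V (10 Gb/s line rate, a slot of roughly 8 µs, 558-byte client packets); these numbers are assumptions borrowed from that section, not additional results.

```python
# Minimal sketch of eq. (1): number of fixed-size client packets per slot.
# Channel rate, slot duration and packet size are assumptions taken from the
# simulation setup of Section V (footnotes 5 and 6).

CHANNEL_RATE_BPS = 10e9          # line-card rate (assumed 10 Gb/s)
SLOT_DURATION_S = 8.04e-6        # "roughly 8 us" slot duration (assumed)
PACKET_SIZE_BITS = 558 * 8       # 558-byte client packets (assumed)

slot_size_bits = CHANNEL_RATE_BPS * SLOT_DURATION_S
K = int(slot_size_bits // PACKET_SIZE_BITS)      # floor, as in eq. (1)
print(f"slot size = {slot_size_bits:.0f} bits -> K = {K} packets per slot")
```

With these values K = 18, which matches the slot capacity quoted in footnote 6.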

After their formation, the optical slots are queued in an optical slot queue. If there is no switch at the reception side, the optical slot queues are identified by the reception wavelength of the destination client, i.e., by the pair (source node, destination client). Otherwise, if the node encompasses a switch at the reception side (either c- or s-switch), the optical slot queues are identified by the pair (source node, destination node). Indeed, as explained in Section II, the presence of a switch at the Rx side, in combination with the tunable Tx, allows the wavelengths to be shared among the client sources of a same node, and the constraint of having one optical slot queue per wavelength is relaxed. For simplicity, in the following we use the term queue to designate the optical slot queue.

One way to reduce the impact of the optical slot filling time on the latency is to upper-bound it using a timer. The timer is started at the arrival of the first packet in the temporary packet queue. Its expiration immediately ends the optical slot formation process: the slot is queued in order to be sent on the channel. Hence the timer triggers the sending of the optical slot even if it is not fully filled. In this case the queue extraction is done after the arrival of X packets, with X depending on the packet arrival process and 1 ≤ X ≤ K. The timer has the advantage of limiting the latency but, as we will see in Sections IV-B2 and V-B2, it also has the drawback of wasting part of the offered channel capacity due to unfilled slots.

When several flows are feeding the same optical slot, their superposition is equivalent to a single flow with a mean client packet arrival rate equal to the sum of the packet arrival rates of the individual flows. The optical slot filling time decreases as the client packet arrival rate increases, helping to better (or even fully) fill the optical slot before the timer expires. Therefore, the optical slot filling time and filling ratio depend on the number and sizes of the flows participating in the filling. This process is called flow aggregation. The flow aggregation capability, and hence the network performance (queuing delay), strongly depends on the node architecture.
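To make the slot formation process of this section concrete, the following Monte Carlo sketch (an illustration, not the authors' NS3 simulator) draws Poisson packet arrivals and closes a slot either when K packets have accumulated or when a timer T, started by the first packet, expires; it then estimates the mean number of packets per slot and the mean filling delay per packet. The arrival rate, K and T values are assumptions.

```python
# Illustrative Monte Carlo sketch of the slot formation process with a timer
# (assumed parameters; not the authors' NS3 code).
import random

def simulate_slot_formation(lam, K, T, n_slots=50_000, seed=1):
    """lam: packet arrival rate [pkt/s]; K: packets per full slot; T: timer [s].
    Returns (mean packets per slot, mean per-packet filling delay [s])."""
    rng = random.Random(seed)
    tot_pkts, tot_wait = 0, 0.0
    for _ in range(n_slots):
        t, arrivals = 0.0, [0.0]           # first packet starts the timer
        while len(arrivals) < K:
            t += rng.expovariate(lam)
            if t > T:                      # timer expires before the slot is full
                break
            arrivals.append(t)
        close = T if len(arrivals) < K else arrivals[-1]   # slot departure epoch
        tot_pkts += len(arrivals)
        tot_wait += sum(close - a for a in arrivals)       # filling delay of each packet
    return tot_pkts / n_slots, tot_wait / tot_pkts

x_bar, mosft = simulate_slot_formation(lam=2e5, K=18, T=120e-6)
print(f"X_bar ~= {x_bar:.2f} packets/slot, mean filling delay ~= {mosft*1e6:.1f} us")
```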

IV. QUEUING MODEL AND ANALYSIS

We define the packet queuing delay as the waiting time elapsed between the sending of a packet by the source client and its insertion on the channel. Thanks to transparency (no optoelectronic conversion at intermediate nodes), the end-to-end latency can easily be calculated from the queuing delay by adding to it the propagation delay, which is deterministic for a given physical topology. The queuing delay results from the optical slot formation process and the waiting time spent in node buffers. The buffering time is directly related to the congestion of the channels, while the optical slot formation time depends on the source client mean arrival rate λ and thus strongly depends on the node architecture. In this section we describe the queuing model and present the analytical model; we also quantify and discuss the performance gain brought by each node architecture, with and without a timer.

A. Queuing model and notation

From the above definition, the mean packet queuing delay MPQD is the sum of the mean optical slot filling time MOSFT, which is the mean time spent in the client queue waiting for the completion of the optical slot formation process, and the mean queuing time MQT, which is the mean time spent in the optical slot queue waiting for the insertion of the optical slot on the channel, and hence:
$$ MPQD = MQT + MOSFT. \qquad (2) $$
The model is based on client-to-client traffic with Poisson packet arrivals, where the mean packet arrival rate is λ, and a service rate µ equal to the inverse of the fixed optical slot duration. We define $D_{s,i,d,j}$ (in b/s) as the traffic demand of the (client-layer) data flow (s, i, d, j) from client i on node s to client j on node d, with $\lambda = D_{s,i,d,j}/\text{packet size}$.

The optical slot filling process can be simplified as a group of, on average, $\bar{X}$ packet arrivals before being served (inserted on the channel). The slot arrival process then follows a Gamma distribution with shape parameter $r = \bar{X}$ and scale parameter $1/\lambda$. This Gamma distribution corresponds to the sum of $\bar{X}$ independent exponentially distributed random variables. Since the optical slots are sent on a time-slotted channel in a deterministic way, the corresponding model is a Gamma/D/1 queue. If $\bar{X}$ is an integer, the distribution is an Erlang distribution, which is a special case of the Gamma distribution; in this case the model is an Erlang/D/1 queue.

B. Analysis

In this section we propose to calculate the Mean Packet Queuing Delay by computing each term (Mean Queuing Time and Mean Optical Slot Filling Time) of (2), without and with a timer.

1) Without timer: Without a timer, the arrivals are grouped into sets of K packets before being served. This description corresponds to an Erlang/D/1 model. In order to obtain explicit expressions, it is necessary to approximate this queue by a more common one.

Fig. 3. A simple 4-node scenario.

We propose to approximate the Erlang/D/1 queue by an M/D/1 queue with parameters (λ, rµ)³. The mean queuing delay⁴ for an M/D/1 queue with parameters (λ, µ) is given by [20]:
$$ MQT_{M/D/1} = \frac{1}{2} \cdot \frac{1}{\mu} \cdot \frac{\rho}{1-\rho}, \qquad (3) $$
with ρ = λ/µ. From (3), and considering that, since the channel is time slotted, an optical slot has to wait on average half a time slot before being inserted on the channel, we derive the mean queuing time for an Erlang/D/1 queue as:
$$ MQT \approx \frac{1}{2} \cdot \frac{1}{r\mu} \cdot \frac{\rho}{1-\rho} + \frac{1}{2\mu}, \qquad (4) $$
where ρ = λ/(rµ) and r = K.

Without a timer, an optical slot is formed only when it is fully filled (X = K). The optical slot filling time is then K − 1 client packet inter-arrival times:
$$ OSFT = (K-1) \cdot \frac{1}{\lambda}. \qquad (5) $$
Since the first packet waits for time OSFT, the last one for time 0, and the arrivals are uniformly distributed over the interval [0, OSFT], the mean optical slot filling time is:
$$ MOSFT = \frac{OSFT}{2} = (K-1) \cdot \frac{1}{2\lambda}, \qquad (6) $$
and hence, using (2), (4) and (6), we have:
$$ MPQD = \frac{1}{2} \cdot \frac{1}{r\mu} \cdot \frac{\rho}{1-\rho} + \frac{1}{2\mu} + \frac{K-1}{2\lambda}. \qquad (7) $$
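For readers who want to evaluate (7) numerically, the sketch below implements the no-timer model (4)-(7) under assumed parameter values (a slot of roughly 8 µs at 10 Gb/s and K = 18 packets per slot); it is an illustration of the formulas above, not a reproduction of the paper's figures.

```python
# Numerical sketch of the no-timer model, eqs. (4)-(7) (assumed parameters).

def mpqd_no_timer(lam, K, mu):
    """lam: packet arrival rate of the aggregated flow [pkt/s];
    K: packets per slot; mu: slot service rate = 1/slot duration [slot/s]."""
    r = K
    rho = lam / (r * mu)                    # load offered to the slot queue
    if rho >= 1.0:
        return float("inf")                 # unstable regime, MQT diverges
    mqt = 0.5 * (1.0 / (r * mu)) * rho / (1.0 - rho) + 0.5 / mu   # eq. (4)
    mosft = (K - 1) / (2.0 * lam)                                  # eq. (6)
    return mqt + mosft                                             # eq. (2)/(7)

mu = 1 / 8.04e-6                            # ~8 us slot duration (assumed)
for lam in (2e4, 2e5, 1e6, 2e6):            # packet arrival rates [pkt/s] (assumed)
    print(f"lam = {lam:9.0f} pkt/s -> MPQD = {mpqd_no_timer(lam, 18, mu)*1e6:8.1f} us")
```

The pattern matches the later discussion of Fig. 6: at low arrival rates the delay is dominated by the slot filling term (K − 1)/(2λ), while close to saturation the M/D/1 term takes over.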

2) Using a timer: Using a timer T, the optical slot may no longer be fully filled before it is sent, and its filling ratio is calculated based on the probability of having X arrivals during time T, with 1 ≤ X ≤ K. Considering Poisson packet arrivals, the mean number of arrivals N(T) during time T is:
$$ N(T) = \sum_{n=0}^{\infty} n \cdot P(N(T)=n) = \sum_{n=0}^{\infty} n \cdot e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!}. \qquad (8) $$

³ We validated this approximation by simulation (not shown here), but the validation of the whole analytical model implicitly validates this approximation.

⁴ The insertion time of the optical slot itself on the channel is not included in the mean queuing delay; since it is a common fixed duration for all optical slots, we consider it as a constant to be added to the propagation delay when calculating the end-to-end latency.

Since an optical slot contains at least one packet, which triggers the timer (we are then interested in the probability of having K − 1 arrivals during T), and at most K packets, we have $\bar{X} = E[\min(N(T), K-1)] + 1$, which is equal to:
$$ \bar{X} = 1 + \sum_{n=0}^{K-1} n \cdot e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!} + (K-1) \cdot \sum_{n=K}^{\infty} e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!}. \qquad (9) $$
After simplification, $\bar{X}$ is:
$$ \bar{X} = 1 + (K-1) - \sum_{n=0}^{K-2} (K-1-n) \cdot e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!}. \qquad (10) $$

If $\bar{X}$ is an integer, we are still considering an Erlang/D/1 queue, and the new parameter r′ for the slot departure process, which is the sum of independent exponentially distributed variables during an interval T, is:
$$ r' = \bar{X}. \qquad (11) $$
Otherwise, if $\bar{X}$ is not an integer, we are no longer in the special case of an Erlang/D/1 queue; the queue model is a Gamma/D/1 queue with parameters (r′, 1/λ). The above formulas are also valid for the Gamma/D/1 queue.

When $\bar{X} < K$, the load seen by the network, or emulated load, is larger than the clients' demand ρ (because some slots leave partially filled); this emulated load is:
$$ \rho' = \rho \cdot \frac{K}{\bar{X}}. \qquad (12) $$
Hence, using (4), (11) and (12), we obtain:
$$ MQT = \frac{1}{2} \cdot \frac{1}{r'\mu} \cdot \frac{\rho'}{1-\rho'} + \frac{1}{2\mu}. \qquad (13) $$

The optical slot filling time can vary when the timer is used; below we show how to calculate the mean optical slot filling time MOSFT. Since the optical slots may not be fully filled (due to early timer expiration), (5) and (6) do not always apply. The optical slot filling time is either equal to the timer value, or to the time needed to fill the slot if the slot is filled before the expiration of the timer. Given that the optical slot filling time may be determined by the timer and no longer by the arrival of the K-th packet, the last packet arriving in the optical slot before the expiration of the timer may wait for a positive time before the slot is sent. If the slot is sent upon expiration of the timer, the first packet in the slot waits for time T and the other X − 1 packets wait, on average, for time T/2; otherwise, if the slot is sent because it is full (before the expiration of the timer), all packets in the slot wait, on average, for OSFT/2. We can express the mean optical slot filling time as:

$$ MOSFT = \frac{1}{\bar{X}} \left[ \sum_{n=0}^{K-2} \left(1 + \frac{n}{2}\right) \cdot T \cdot e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!} + \frac{K(K-1)}{2\lambda} \cdot \sum_{n=K-1}^{\infty} e^{-\lambda T} \cdot \frac{(\lambda T)^n}{n!} \right]. \qquad (14) $$
MPQD is the sum of (13) and (14).

Fig. 4. Queuing delay for a simple 4-node scenario: (a) without timer; (b) with a 40 µs timer. Here $C_{x,y}$ stands for the flow sent by client x on node y.
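The with-timer quantities can be evaluated directly from (10) and (12)-(14). The sketch below does so for assumed parameters (K = 18, roughly 8 µs slots, T = 120 µs); it follows the reconstructed formulas above and is meant as an illustration of the model, not as the authors' reference implementation.

```python
# Sketch of the with-timer model: eqs. (10)-(14) (assumed parameters).
import math

def poisson_pmf(n, a):
    return math.exp(-a) * a**n / math.factorial(n)

def timer_model(lam, K, mu, T):
    a = lam * T
    # eq. (10): mean number of packets per slot
    x_bar = 1 + (K - 1) - sum((K - 1 - n) * poisson_pmf(n, a) for n in range(K - 1))
    rho = lam / (K * mu)                              # client demand
    rho_e = rho * K / x_bar                           # emulated load, eq. (12)
    mqt = 0.5 / (x_bar * mu) * rho_e / (1 - rho_e) + 0.5 / mu     # eq. (13), r' = X_bar
    # eq. (14): mean optical slot filling time
    timer_part = sum((1 + n / 2) * T * poisson_pmf(n, a) for n in range(K - 1))
    full_part = K * (K - 1) / (2 * lam) * (1 - sum(poisson_pmf(n, a) for n in range(K - 1)))
    mosft = (timer_part + full_part) / x_bar
    return x_bar, rho_e, mqt + mosft                  # MPQD = (13) + (14)

mu, K, T = 1 / 8.04e-6, 18, 120e-6
for lam in (2e4, 1e5, 1e6):
    x_bar, rho_e, mpqd = timer_model(lam, K, mu, T)
    print(f"lam = {lam:8.0f} pkt/s  X_bar = {x_bar:5.2f}  rho' = {rho_e:5.3f}  MPQD = {mpqd*1e6:7.1f} us")
```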

From (10) and (14) (which are applicable whether a timer is used or not), we can see that the mean optical slot filling time is directly related to the mean packet arrival rate of a client flow. As discussed in Section III, when the client packet arrival rate increases the optical slot filling time decreases, as seen in (14), and the optical slots are better filled since $\bar{X}/K$ approaches 1, as seen in (10). The analysis given above confirms that the queuing delay and the usable channel capacity both strongly depend on the node architecture.

C. Validation of the queuing and optical slot filling model

In this section we evaluate our analytical model on a single-wavelength POADM ring with 4 nodes and one client per node. Each client sends a single flow with intensity (offered load) β to the client of node 3, as described in Fig. 3. We assume a channel capacity of 10 Gb/s and a slot duration of roughly 8 µs.

Fig. 4a shows the mean packet queuing delay without a timer and Fig. 4b with a timer, for each node, obtained theoretically using (2) and by simulation using NS3. Recall that, due to optical transparency, transit traffic has priority over inserted traffic. Given that no MAC is used to ensure fairness, node 2 can access the channel only when β < 1/3 and node 1 only when β < 1/2, while node 0 has no constraint and can use all the offered capacity. Indeed, for β below 1/3, all nodes have equivalent access to the channel; when β approaches 1/3, the segment between node 2 and node 3 saturates because of the traffic sent by nodes 0 and 1, and node 2 can no longer access the channel. When β approaches 1/2, the segment between node 1 and node 2 saturates because of the traffic sent by node 0, causing the loss of all client packets of nodes 2 and 3. High queuing delay and client packet losses are indicators of the limits of channel exploitation. The aim of these comparisons is to validate the queuing model and to evaluate the accuracy of the MPQD formula (2), whether a timer is used or not. The analytical model fits the simulations with a mean error of 1% for loads ≤ 99%. This result confirms the coherence of the analytical model. In the next subsection, we derive analytical results for the gain brought by the c-switches in terms of slot filling time or, equivalently, client packet queuing delay, assuming the absence of a timer capping the maximum slot filling time.

D. Performance gains with c-switches

1) No timer: In the following we derive the gain in latency seen by the client layer thanks to c-switches at Tx, Rx, or both, in different network regimes. For the range of loads where congestion and losses are not significant, the mean queuing time is negligible and the mean packet queuing delay is dominated by the mean optical slot filling time. In the absence of a timer the queuing delay is given by (2), which can be simplified into:
$$ MPQD \approx \frac{1}{2\mu} + MOSFT. \qquad (15) $$
For this range of loads and in the absence of a timer, 1/(2µ) is negligible compared with MOSFT (MOSFT is considerably larger than 1/(2µ)), and we can consider:
$$ MPQD \approx MOSFT. \qquad (16) $$

Denote by $GD^A_{s,d}$ the gain in mean packet queuing delay, using a node architecture A with respect to the basic architecture with no switch, for a flow from node s to node d, and by $GT^A_{s,d}$ the gain in mean optical slot filling time; (16) yields:
$$ GD^A_{s,d} \approx GT^A_{s,d}, \qquad (17) $$
with
$$ GT^A_{s,d} = \frac{MOSFT^{\text{no switch}}_{s,d}}{MOSFT^A_{s,d}}. \qquad (18) $$

In the absence of a timer, the impact of the node architecture on the optical slot filling time is visible only through the queuing delay reduction (without a timer there is no impact on load). If we consider a client-layer flow (s, i, d, j) with no switch (neither at Tx nor Rx), using (5) the time to form an optical slot is:
$$ OSFT^{\text{no switch}}_{s,i,d,j} = (K-1) \cdot \frac{PS}{D_{s,i,d,j}}, \qquad (19) $$
where PS is the client packet size (in bits). The mean slot filling time between s and d, using (6), is:
$$ MOSFT^{\text{no switch}}_{s,d} = \frac{K-1}{2} \cdot \frac{PS}{f_{s,d}} \cdot \sum_{i,j} \frac{1}{D_{s,i,d,j}}, \qquad (20) $$
where $f_{s,d}$ is the number of flows from node s to node d. With a c-switch at Tx on node s, i.e., when all flows from all clients of s to some client j of node d are aggregated into a single flow with intensity $A_{s,d,j} = \sum_i D_{s,i,d,j}$, the time to form an optical slot is $OSFT_{s,d,j}(\text{c-switch at Tx}) = (K-1) \cdot PS / A_{s,d,j}$. Let $K_d$ be the number of aggregated flows from s to d, i.e., $K_d$ is the number of clients on node d. The mean optical slot filling time from s to d is then:
$$ MOSFT^{Tx}_{s,d} = \frac{K-1}{2} \cdot \frac{PS}{K_d} \cdot \sum_j \frac{1}{A_{s,d,j}}. \qquad (21) $$
Hence, using (17), (20) and (21), the gain in queuing delay (during the slot formation process) for flow (s, i, d, j) brought by the presence of a c-switch at the Tx side is:
$$ GD^{Tx}_{s,d} = \frac{MOSFT^{\text{no switch}}_{s,d}}{MOSFT^{Tx}_{s,d}} = \frac{K_d}{f_{s,d}} \cdot \frac{\sum_{i,j} \frac{1}{D_{s,i,d,j}}}{\sum_j \frac{1}{A_{s,d,j}}}. \qquad (22) $$

By symmetry, in the scenario with a c-switch at Rx, where flows from one source client to several destination clients are aggregated within the same stream of optical slots, the gain is:
$$ GD^{Rx}_{s,d} = \frac{MOSFT^{\text{no switch}}_{s,d}}{MOSFT^{Rx}_{s,d}} = \frac{K_d}{f_{s,d}} \cdot \frac{\sum_{i,j} \frac{1}{D_{s,i,d,j}}}{\sum_i \frac{1}{B_{s,i,d}}}, \qquad (23) $$
where $B_{s,i,d} = \sum_j D_{s,i,d,j}$.

When both the source node and the destination node are equipped with a c-switch, flows from all clients of a node s to all clients of a node d are aggregated into a single flow with intensity $E_{s,d} = \sum_{i,j} D_{s,i,d,j}$, and the mean time to form a slot from s to d is:
$$ MOSFT_{s,d}(\text{c-switch at Tx and Rx}) = \frac{K-1}{2} \cdot \frac{PS}{E_{s,d}}. \qquad (24) $$
Using (17), (20) and (24), the gain in slot filling time brought by a c-switch at both Tx and Rx is thus:
$$ GD^{Tx+Rx}_{s,d} = \frac{E_{s,d}}{f_{s,d}} \cdot \sum_{i,j} \frac{1}{D_{s,i,d,j}}. \qquad (25) $$
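A worked example may help to read (20)-(25). The sketch below evaluates the per-architecture mean of the inverse flow intensities (the only part of MOSFT that differs between architectures, since the common factor (K − 1)·PS/2 cancels in the gains) for one (s, d) pair with two clients per node; the demand values (a = c = 50 Mb/s, b = d = 1 Gb/s) are assumptions in the spirit of Section V-C.

```python
# Worked sketch of the gain formulas (22), (23) and (25) for one (s, d) pair
# with 2 clients per node. Demand values are assumed, not taken from the paper.

# D[(i, j)] = demand of the flow from client i of node s to client j of node d [b/s]
D = {(1, 1): 50e6, (2, 1): 50e6, (1, 2): 1e9, (2, 2): 1e9}

no_sw = sum(1 / v for v in D.values()) / len(D)                     # from (20)
tx    = sum(1 / sum(D[i, j] for i in (1, 2)) for j in (1, 2)) / 2   # from (21), A_{s,d,j}
rx    = sum(1 / sum(D[i, j] for j in (1, 2)) for i in (1, 2)) / 2   # Rx case, B_{s,i,d}
both  = 1 / sum(D.values())                                         # from (24), E_{s,d}

print(f"GD_Tx    ~= {no_sw / tx:.2f}")      # eq. (22)
print(f"GD_Rx    ~= {no_sw / rx:.2f}")      # eq. (23)
print(f"GD_Tx+Rx ~= {no_sw / both:.2f}")    # eq. (25)
```

With these values the script prints gains of about 2, 11 and 22, consistent with the closed forms 2, (a+b)²/(2ab) and (a+b)²/(ab) given later in Section V-C for this demand pattern.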

For the range of loads for which congestion occurs on the network and is significant, the mean queuing time MQT is no longer negligible. In this case the simplification made in the previous paragraph is no longer possible, and the gains, defined only for ρ′ < 1 such that MQT is finite, must be calculated using (7):
$$ GD^A_{s,d} = \frac{MQT + MOSFT^{\text{no switch}}_{s,d}}{MQT + MOSFT^A_{s,d}}. \qquad (26) $$

Fig. 5. Clients/nodes/wavelengths mapping and traffic demand, n, m = 1, ..., 4, n ≠ m.

2) With timer: As seen in (12), the use of a timer may cause the sending of optical slots that are not fully filled, hence the client traffic demand perceived by the network is higher than the effectively offered one. Unlike the case where no timer is used, here the node architecture affects not only the mean optical slot filling time and the mean queuing time but also the maximum supported load, by modifying the mean number of packet arrivals during an interval of time T. The use of c-switches leads to aggregating flows that fill the same optical slots, which increases λ and X; as seen in (10), when λ increases, X approaches K. The higher the intensity of the resulting aggregated flow, the lower the impact of the timer on the exploitable channel capacity, thereby keeping the queuing delay under the timer value. For low loads, the performance gain brought by the different node architectures can be negligible ($GD^A_{s,d} \approx 1$), since the use of the timer leads to the same MOSFT for all architectures; for the other load regimes, (17) and (26) are used.

E. Performance impact of s-switches

The s-switch has no impact on flow aggregation, and its impact cannot be measured in terms of latency when the network is not congested. Its usefulness becomes apparent in the cases where load sharing is needed, as seen in the next section.

V. PERFORMANCE STUDIES

A. Simulation scenario description

We evaluate the aforementioned node architectures for network performance (latency) using the analytical model and a custom-made NS3-based network simulator, used to validate the analytical model on complex cases. The POADM unidirectional ring (a second direction could be used for protection) consists of 4 nodes, each connected to 2 clients, and 2 wavelengths. As seen in Fig. 1, each node can transmit and receive on the two wavelengths via two transponders (fast tunable Tx + colored Rx); Fig. 5 shows the wavelength/node/client mapping and the client traffic demand.

Fig. 6. Mean packet queuing delay without timer, symmetric demand: (a) simulation; (b) analysis.

Each flow (s, i, d, j) is assumed to consist of fixed-size client packets of size PS = 558 bytes⁵, which corresponds to a typical average packet size in an IP network [21], and the client packet arrivals of each flow are Poisson⁶. Slots can be electronically buffered once they are formed; at each client, a buffer can contain up to 50 slots. The reported queuing delays only account for packets successfully received; client packets lost due to congestion are ignored. In the following we define the network load as the total traffic demand normalized by twice the network capacity:
$$ \text{load} = \frac{\sum_{s,i,d,j} D_{s,i,d,j}}{2 \cdot W R}, $$
where W is the number of wavelengths and R the capacity of each wavelength. The factor 2 comes from the fact that each slot crosses on average a distance of half the ring.
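As a quick numerical check of this normalization, the sketch below computes the network load for an assumed uniform demand (each of the four per-node-pair flows set to 0.5 Gb/s); the values are illustrative only.

```python
# Sketch of the load normalization of Section V-A (assumed demand values).

W, R = 2, 10e9                     # wavelengths and per-wavelength capacity [b/s]
a = b = c = d = 0.5e9              # per-flow demand, assumed uniform
n_nodes, n_dest = 4, 3             # each node sends to the 3 other nodes
demand_per_node_pair = a + b + c + d

total_demand = n_nodes * n_dest * demand_per_node_pair
load = total_demand / (2 * W * R)  # factor 2: a slot crosses half the ring on average
print(f"total demand = {total_demand/1e9:.1f} Gb/s -> load = {load:.2f}")
```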

B. Performance for uniform traffic

1) Without timer: We evaluate the 8 node architectures resulting from the combinations of a client packet switch (at transmitter and/or receiver) and/or a slot-granularity switch (s-switch) at the receiver. We assume traffic is uniform, i.e., a = b = c = d in the demand matrix (Fig. 5).

Fig. 6a and Fig. 6b show, respectively, simulation and analytical results for the queuing delay at different loads and for each node architecture. Comparing these two figures, it appears that the analytical model and the simulation results have the same shape, and actually fit with an error of less than 1% for loads below 0.95; the maximum analytically supported load (above which congestion and losses occur, and the queuing delay diverges) is in agreement with the simulation results. Due to the symmetry of the traffic matrix, at high loads (>0.96) the ring is in an unstable regime, which leads to a queuing delay slightly higher than predicted by the analytical model.

⁵ The choice of a 558-byte packet size comes from the fact that testing a network with only the smallest or the largest packet size would stress one specific capacity; a medium value, between typical Internet packet sizes, is an average case between those two extremes. Note that in the analytical model the packet size is only a parameter and can easily be modified.

⁶ 100 ms-long simulations are run; typically a simulation corresponds to the generation of several hundred thousand client packets. The client-card and line-card rate is set to 10 Gb/s. The slot duration is set to roughly 8 µs, corresponding to 18 client packets.

At low loads (e.g., load < 0.4), the inter-arrival time of packets at each node is large, leading to a large optical slot filling time and hence a relatively large queuing delay. At medium and high loads (e.g., 0.4 < load < 0.95), the optical slot filling time decreases, reducing the queuing delay. At very high loads (> 0.95), the buffering time is no longer negligible and queuing delays increase quickly due to the congestion of the network rather than to the optical slot filling time; in fact, the large number of arriving packets per second leads to fast formation of slots.

Impact of s-switch:

In the uniform traffic scenario, the load is balanced on the 2 wavelengths by construction and the addition of an s-switch at reception (which, as will be seen further, enables wavelength load balancing) brings no gain, hence in Figs. 6a and 6b the curves with and without s-switch are identical.

Impact of c-switch:

Architectures with a c-switch at either Tx or Rx aggregate, respectively, the flows from several clients (on the same source node) to the same destination client (c-switch at Tx, Fig. 2b), or the flows from one client to several clients (on the same destination node) (c-switch at Rx, Fig. 2c). Flows are uniformly distributed over all pairs of clients, hence the packet inter-arrival times of the aggregated flows are equal in both cases (c-switch at either Tx or Rx), resulting in identical queuing delays. In the load regime where the queuing delay is dominated by slot filling times (load < 0.95), when a c-switch is used at either Tx or Rx, using (22) and (23) (for any (s, d) pair) the gain in queuing delay w.r.t. the no-switch architecture is $GD^{Tx}_{s,d} = GD^{Rx}_{s,d} \approx 2$. This can be verified in Figs. 6a and 6b in the absence of congestion; for high loads where congestion occurs, the gain can be calculated using (26).

The largest reduction in queuing delay is provided by the architecture with c-switches at both Tx and Rx. This is due to the larger number of aggregated flows, leading to a higher reduction of the optical slot filling time compared with the other architectures, and to a better pooling of the network resources (Tx, Rx, wavelengths) across the clients. In this case the gain in latency w.r.t. the no-switch scenario is $GD^{Tx+Rx}_{s,d} \approx 4$ at low and medium loads; for high loads the gain can be calculated using (26).

2) With timer: Fig. 7a presents the simulation results for the same simulation scenario as before, now with a timer value T = 15 slots (i.e., 120 µs). Fig. 7b presents the analytical results. Here again we can see that the analytical model and the simulation results match very closely. We notice, compared with Fig. 6a and Fig. 6b, that the queuing delay is effectively bounded by the timer (15 slots) at the lower loads, until congestion occurs at higher loads. As discussed in Section IV, the timer can negatively impact the system by causing a waste of capacity. This impact, using a timer value of 15 slots, is minimal (reduction of the maximum supported load from 0.98 to 0.97) and visible only for node architectures without switches.

The use of a timer of less than 15 slots can result in packet losses; a timer of 15 slots is a reasonable value since there is no loss for loads lower than 0.95. Hence, in all further simulations, we use a timer value of T = 15 slots.

C. Performance for non-uniform traffic, with timer, case a = c, b = d, a << b

In order to refine the comparison of node architectures, we modify the traffic matrix; we now assume that a = c and b = d but a << b. In this case the traffic matrix is uniform at the node level (each node sends/receives the same quantity of traffic) but not at the client level, such that the two wavelengths carry different amounts of traffic (traffic sent by each node: 6a on $w_1$ and 6b on $w_2$). The behavior of the network differs depending on whether a is small or large, as seen below. Fig. 8a shows the simulated queuing delay, for each node architecture, for a = 50 Mb/s and b varying between 50 and 1570 Mb/s. Fig. 8b shows the analytical results. Analytical and simulation curves essentially fit; however, some curves in the simulation plot reach a plateau due to the limited buffer size, while the analytical model assumes an infinite buffer.

Impact of s-switch:

In the absence of a switch at the receiver side, there is no possibility to balance the load over the wavelengths, resulting in a rapid increase of the queuing delay and significant losses at loads higher than 0.5 (the plateau in Fig. 8a corresponds to the congestion of $w_2$, buffer overflows, and significant losses, while $w_1$ is lightly used), corresponding to the maximum capacity supported by a single wavelength. Using an s-switch at Rx enables load balancing and removes the plateau.

Impact of c-switch:

Since the traffic is not symmetric, the slot filling time varies depending on whether aggregation of the client flows is enabled by a c-switch at Tx or at Rx. When the queuing delay is dominated by slot filling times (0.4 < load < 0.95) and a c-switch is used at Tx only, using (18) (for any (s, d) pair) the gain in mean optical slot filling time w.r.t. the no-switch architecture is $GT^{Tx}_{s,d} = 2$. This cannot be seen directly in Figs. 8a and 8b, since even at these loads the network starts to be congested because of the use of the timer, making MQT no longer negligible and hence $GD^{Tx}_{s,d} \neq GT^{Tx}_{s,d}$. The gain in queuing delay must be calculated using (26). However, the absence of a switch at Rx results in a plateau, as explained above.

Using a c-switch at Rx removes the latency plateau (see Section V-D). When the queuing delay is dominated by slot filling times, the gain in mean optical slot filling time with a c-switch at Rx only is larger than with a c-switch at Tx only: $GT^{Rx}_{s,d} = (a+b)^2/(2ab) > 2$. This tendency is also confirmed for the gain in queuing delay obtained using (26).

When a c-switch is used at both Tx and Rx, the timer has no impact and the system is in a stable load regime; MPQD ≈ MOSFT, making $GD^{Tx+Rx}_{s,d} \approx GT^{Tx+Rx}_{s,d}$. Using a c-switch at both Tx and Rx brings an even larger gain than with a c-switch at Rx only, and $GD^{Tx+Rx}_{s,d} = (a+b)^2/(ab)$.

The maximum load supported by the network strongly depends on the node architecture. Figs. 8a and 8b show that the highest supported load is below 0.75 with the node architecture without c-switches, 0.86 with a c-switch at Tx, 0.95 with a c-switch at Rx, and 0.95 with c-switches at both Tx and Rx. For certain node architectures, enumerated below, the queuing delay diverges before load = 1 because the channel capacity is exceeded. Indeed, with a timer (of value T) the optical slots are not fully filled ($\bar{X}/K < 1$), which leads to an emulated load seen by the network that is higher than the real demand, as seen in (12). The architectures with no c-switch do not offer the possibility of client flow aggregation, and it can easily be shown that for T = 15 slots a flow consuming a capacity of 600 Mb/s is generated by the client even if its real demand is below 600 Mb/s. In the presence of c-switches at both Tx and Rx, the slot filling time is always below T and optical slots are always completely filled, ensuring that all the channel capacity can be used and leading to low queuing delays even for loads very close to 1. For the architectures with a c-switch at either Tx or Rx, the slot filling time is below T but larger than with c-switches at both Tx and Rx. This allows a higher channel use than with the basic architecture, but lower than with the architecture with both c-switches.
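The order of magnitude of the 600 Mb/s figure quoted above can be sanity-checked with a back-of-the-envelope computation: with a timer T, a client that keeps at least one packet pending emits one (possibly almost empty) slot roughly every T, so an un-aggregated flow can be forced to occupy up to about one slot payload per timer period on the channel. The slot payload and timer value below are assumptions consistent with Section V (18 packets of 558 bytes, T = 15 slots, roughly 120 µs).

```python
# Back-of-the-envelope sketch of the capacity wasted by a timer for a single
# un-aggregated flow (assumed values; order-of-magnitude check only).

SLOT_PAYLOAD_BITS = 18 * 558 * 8     # K = 18 packets of 558 B per slot (assumed)
T = 15 * 8.04e-6                     # timer of 15 slot durations, ~120 us (assumed)

max_occupied = SLOT_PAYLOAD_BITS / T # at most one slot emitted per timer period
print(f"capacity occupied by a timer-paced flow ~= {max_occupied/1e6:.0f} Mb/s")
```

This gives roughly 670 Mb/s, the same order of magnitude as the value cited in the text.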

D. Performance for non-uniform traffic, with timer, case a = b, c = d, a << c

In this case the traffic matrix is again uniform at the node level but not at the client level; however, the traffic demand is now uniformly distributed over the two wavelengths. As in Section V-B1, the queuing delays of the basic architecture and of the architecture with an s-switch at Rx are the same. Fig. 9a shows the simulation results for a = 50 Mb/s and c = d ∈ [50, 1570] Mb/s. Fig. 9b shows the analytical results. Compared with Section V-C, the delays of the median curves are reversed, i.e., the electronic switch at the Tx side now significantly reduces latency and decreases the capacity wasted through the use of the timer, w.r.t. the use of an electronic switch at the Rx side.

Fig. 7. Mean packet queuing delay, using a timer = 15 slots (120 µs), symmetric demand: (a) simulation; (b) analysis.

Fig. 8. Mean packet queuing delay, using timer = 15 slots, a = c = 50 Mb/s, b = d ∈ [50, 1570] Mb/s: (a) simulation; (b) analysis.

Fig. 9. Mean packet queuing delay, using timer = 15 slots, a = b = 50 Mb/s, c = d ∈ [50, 1570] Mb/s: (a) simulation; (b) analysis.

Fig. 10. Trade-off between flow intensity, timer value and optical slot filling ratio.

In the load regime where the queuing delay is dominated by slot filling times, when a c-switch is used at Rx only, using (18) (for any (s, d) pair) the gain in mean optical slot filling time w.r.t. the no-switch architecture is $GT^{Rx}_{s,d} = 2$. The gain in mean optical slot filling time is larger when a c-switch is used at Tx only: $GT^{Tx}_{s,d} = (a+c)^2/(2ac) > 2$; when a c-switch is used at both Tx and Rx the gain is doubled: $GT^{Tx+Rx}_{s,d} = (a+c)^2/(ac)$. It is easily verified that the values for the uniform case are obtained for a = c in the expressions above. Following the reasoning given in Section V-C, the gain in queuing delay using a c-switch at Tx or Rx can be calculated using (26), with $GD^{Tx}_{s,d} > GD^{Rx}_{s,d}$, and the gain in queuing delay with c-switches at both Tx and Rx is $GD^{Tx+Rx}_{s,d} \approx GT^{Tx+Rx}_{s,d} = (a+c)^2/(ac)$.

Hence, we have shown that the impact of the optical slot filling process depends on the absolute value of the data rate resulting from client flow aggregation, and that the optical slot filling process has an impact on network performance even at medium and high loads.

E. Trade-off between queuing delay limitation and optical slot filling ratio

Fig. 10 represents the optical slot filling ratio ($\bar{X}/K$) for different timer values as a function of the aggregated client flow intensity, i.e., the intensity of the flows after aggregation by the c-switches, if any. Note that Fig. 10 is applicable to all architectures. As expected, for the same timer value, when the aggregated flow intensity increases, the optical slot filling ratio tends to 1, meaning that the load that can be carried in the network also increases. This is because a larger aggregated flow intensity leads to a faster filling of the optical slots, as explained in Section IV.
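The kind of trade-off plotted in Fig. 10 can be reproduced from (10) alone: the sketch below computes the filling ratio $\bar{X}/K$ for a few timer values and aggregated flow intensities. The packet size and K are assumed as in Section V; the timer values are taken from the figure's legend.

```python
# Sketch of the Fig. 10 trade-off: slot filling ratio X_bar/K vs. aggregated
# flow intensity, from eq. (10). Packet size and K are assumed (Section V).
import math

def filling_ratio(intensity_bps, K, T, packet_bits=558 * 8):
    lam = intensity_bps / packet_bits        # packet arrival rate of the flow
    a = lam * T
    x_bar = 1 + (K - 1) - sum((K - 1 - n) * math.exp(-a) * a**n / math.factorial(n)
                              for n in range(K - 1))
    return x_bar / K

K = 18
for T in (16.11e-6, 120e-6, 322e-6):         # timer values appearing in Fig. 10
    ratios = [filling_ratio(f, K, T) for f in (0.1e9, 0.5e9, 1.0e9)]
    print(f"T = {T*1e6:6.2f} us -> X_bar/K at 0.1 / 0.5 / 1.0 Gb/s: "
          + "  ".join(f"{r:.2f}" for r in ratios))
```

As in Fig. 10, a larger timer or a larger aggregated intensity pushes the filling ratio toward 1.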

Fig. 10 shows that it is possible to find a trade-off between latency limitation and capacity wastage by selecting an adequate timer value and/or an adequate node architecture. Indeed, note that there is a mapping between the network load and the aggregated flow intensity (x-axis in Fig. 10), which depends on the node architecture: for a given demand, adding c-switches allows the aggregation of client flows, leading to a larger resulting flow intensity.

TABLE I
MAXIMUM ACHIEVABLE LOAD TO ENSURE A QUEUING DELAY BELOW 200 µs

| Architecture | Symmetric traffic a=b=c=d (Fig. 6a, Fig. 7a) | Asymmetric traffic a=c, b=d (Fig. 8a) | Asymmetric traffic a=b, c=d (Fig. 9a) |
|---|---|---|---|
| No switch | >0.95 | 0.5 | 0.75 |
| Tx: c-sw. | >0.95 | 0.5 | >0.95 |
| Rx: c-sw. | >0.95 | >0.95 | 0.86 |
| Tx: c-sw.; Rx: c-sw. | >0.95 | >0.95 | >0.95 |
| Rx: s-sw. | >0.95 | 0.75 | 0.75 |
| Tx: c-sw.; Rx: s-sw. | >0.95 | 0.86 | 0.75 |
| Rx: c-sw. + s-sw. | >0.95 | >0.95 | 0.86 |
| Tx: c-sw.; Rx: c-sw. + s-sw. | >0.95 | >0.95 | >0.95 |

For a given demand, the addition of c-switches thus increases the aggregated flow intensity, which in turn leads to a larger optical slot filling ratio. In other words, for a given latency constraint (a timer value), the addition of a c-switch increases the maximum supported network load or, equivalently, reduces the capacity waste.

Table I summarizes the maximum achievable network load ensuring a queuing delay below 200 µs, for each node architecture. Architectures without any switch at Rx do not support some of the traffic demands, because of their inability to balance the load, as seen above. The architectures with a c-switch at either Tx or Rx support a higher load (at least 75%), depending on the aggregated traffic. The architectures with c-switches at both sides sustain the highest load (more than 95%).

VI. CONCLUSION

In this paper we proposed and compared the performance (in terms of client packet queuing delay and maximum supported load) of different possible electronic architectures for optical slot switching nodes, embedding electronic switches at two possible granularities (client packet and/or slot) at Tx and/or Rx. The comparison was done analytically, using a queuing model for an optical slot switching ring, and by simulation, using a custom-made NS3-based network simulator. Architectures without any switch at Rx do not support all traffic scenarios (matrices of traffic demands), because of their inability to balance the load. The architectures with a c-switch at either Tx or Rx support a higher load, depending on the aggregated traffic. The architectures with c-switches at both sides sustain the highest load. Although the basic node architecture with no electronic switch presents the worst queuing delay in all simulated scenarios, this worst case never exceeds a relatively low value, on the order of 100 µs, at loads below 75%. This node architecture presents the advantage of lower cost and consumption with respect to the other architectures. However, with small flows, slots are filled slowly and a timer must be used to bound the queuing delay; with the basic node architecture the timer results in poorly filled slots and a waste of channel capacity, and in increased queuing delays and large packet losses at high loads. The lowest queuing delay is achieved with client packet switches at both Tx and Rx sides. This architecture reduces the optical slot formation time while ensuring a high optical slot filling ratio and hence a better use of all network resources (Tx/Rx capacity, channel capacity). The use of a client packet switch at Tx or Rx only presents intermediate performance, heavily depending on the number and intensities of the aggregated client flows. The use of an electronic slot switch at Rx enables load balancing; the node architecture with this switch only does not permit flow aggregation and is therefore less efficient than those with client packet switches. In this paper we focused solely on performance analysis, and the quantified trade-off in terms of CAPEX (or energy) between the various architectures was not evaluated. Future work will consider a joint CAPEX and performance analysis of optical slot switched networks.

ACKNOWLEDGMENT

This work was partly supported by the CELTIC+ SASER-SAVENET project.

REFERENCES

[1] S. Korotky, “Traffic trends: Drivers and measures of cost-effective and energy-efficient technologies and architectures for backbone optical networks,” in Proc. OFC, 2012, OM2G.1.

[2] D. Chiaroni, G. Buforn Santamaria, C. Simonneau, S. Etienne, J.-C. Antona, S. Bigo, and J. Simsarian, “Packet OADMs for the next generation of ring networks,” Bell Labs Technical Journal, vol. 14, no. 4, pp. 265–283, Winter 2010.

[3] A. Stavdas, T. G. Orphanoudakis, H. C. Leligou, K. Kanonakis, C. Matrakidis, A. Drakos, J. D. Angelopoulos, and A. Lord, “Dynamic CANON: A scalable multidomain core network,” IEEE Communications Magazine, vol. 46, no. 6, pp. 138–144, Jun. 2008.

[4] M. C. Yuang, I. Chao, B. C. Lo, P. L. Tien, J. J. Chen, C. Wei, Y. M. Lin, S. S. W. Lee, and C. Y. Chien, “HOPSMAN: An experimental testbed system for a 10-Gb/s optical packet-switched WDM metro ring network,” IEEE Communications Magazine, vol. 46, no. 7, pp. 158–166, Jul. 2008.

[5] N. Deng, S. Cao, T. Ma, X. Shi, X. Luo, S. Shen, and Q. Xiong, “A novel optical burst ring network with optical-layer aggregation and flexible bandwidth provisioning,” in Proc. OFC, 2011, O.Th.R.5.

[6] Y. Pointurier, B. Uščumlić, I. Cerutti, A. Gravey, and J.-C. Antona, "Dimensioning and energy efficiency of multi-rate metro networks," IEEE/OSA Journal of Lightwave Technology, vol. 30, no. 22, pp. 3552–3564, Nov. 2012.

[7] J. Simsarian and L. Zhang, “Wavelength locking a fast-switching tunable laser,” IEEE Photonics Technology Letters, vol. 16, no. 7, pp. 1745– 1747, Jul. 2004.

[8] B. Uščumlić, A. Gravey, P. Gravey, and I. Cerutti, "Traffic grooming in WDM optical packet rings," in Proc. ITC 21, 2009.

[9] B. Uščumlić, I. Cerutti, A. Gravey, P. Gravey, D. Barth, M. Morvan, and P. Castoldi, "Optimal dimensioning of the WDM unidirectional ECOFRAME optical packet ring," Springer Photonic Network Communications, vol. 22, no. 3, pp. 254–265, Jul. 2011.

[10] T. Bonald, S. Oueslati, J. Roberts, and C. Roger, "SWING: Traffic capacity of a simple WDM ring network," in Proc. ITC 21, 2009.

[11] M. A. Marsan, A. Bianco, E. Leonardi, A. Morabito, and F. Neri, "All-optical WDM multi-rings with differentiated QoS," IEEE Communications Magazine, vol. 37, no. 2, pp. 58–66, Feb. 1999.

[12] F. Hacimeroglu and T. Atmaca, “Impacts of packet filling in an optical burst switching architecture,” in Proc. IEEE AICT, 2005.

[13] H. C. Leligou, K. Kanonakis, J. Angelopoulos, I. Pountourakis, and T. Orphanoudakis, “Efficient burst aggregation for QoS-aware slotted OBS systems,” European Transactions on Telecommunications, vol. 17, no. 1, pp. 93–98, Jan. 2006.

[14] A. Pantaleo, M. Tornatore, A. Pattavina, C. Raffaelli, and F. Callegati, “Dynamic service differentiation in OBS networks,” in Proc. Broadnets, 2007.

[15] V. M. Vokkarane, K. Haridoss, and J. P. Jue, “Threshold-based burst assembly policies for QoS support in optical burst-switched networks,” in Proc. OptiComm, 2002.

[16] X. Yu, Y. Chen, and C. Qiao, “Performance evaluation of optical burst switching with assembled burst traffic input,” in Proc. GLOBECOM, Nov. 2002.

[17] T. Eido, D. T. Nguyen, and T. Atmaca, “Packet filling optimization in multiservice slotted optical burst switching MAN networks,” in Proc. IEEE AICT, 2008.

[18] B. Uščumlić, A. Gravey, P. Gravey, M. Morvan, and I. Cerutti, "Network planning and traffic engineering in ECOFRAME optical packet ring," in Telecommunications Forum (TELFOR), Nov. 2011, pp. 808–815.

[19] "RFC 2544: Benchmarking Methodology for Network Interconnect Devices." [Online]. Available: http://www.ietf.org/rfc/rfc2544.txt

[20] W. C. Chan, T. C. Lu, and R. J. Chen, "Pollaczek-Khinchin formula for the M/G/1 queue in discrete time with vacations," IEE Proc. Computers and Digital Techniques, vol. 144, pp. 222–226, Jul. 1997.

[21] R. Sinha, C. Papadopoulos, and J. Heidemann, "Internet Packet Size Distributions: Some Observations," USC/ISI Technical Report ISI-TR-2007-643, May 2007.

Nihel Benzaoui received the engineering degree from the Institut National des Télécommunications et des Technologies de l'Information et de la Communication, Oran, Algeria, in 2010, and a Master Spécialisé en Conception et Architecture Réseau from Telecom ParisTech, Paris, France, in 2012. She then joined Alcatel-Lucent Bell Labs, France, as a Ph.D. student working on multi-layer mechanisms for optical slot switching networks.

Yvan Pointurier (S'02–M'06–SM'12) received a Diplôme d'Ingénieur from Ecole Centrale de Lille, France in 2002, a M.S. in Computer Science in 2002, and a Ph.D. in Electrical Engineering in 2006, both from the University of Virginia, USA. He spent two years at McGill University in Montreal, Canada and then one year at Athens Information Technology, Greece, as a Postdoctoral Fellow. In 2009 he joined Alcatel-Lucent Bell Labs, France, as a research engineer. His research interests span design, optimization and monitoring of networks in general, and optical networks in particular. Dr. Pointurier is a co-recipient of the Best Paper Award at the IEEE ICC 2006 Symposium on Optical Systems and Networks.


Thomas Bonald is Professor in the department of Computer Science and Networking at Telecom ParisTech, France. He graduated as an Engineer from Ecole polytechnique in 1994 and from Telecom ParisTech in 1996. He received the Ph.D. degree in Applied Mathematics from Ecole polytechnique in 1999. From 1999 to 2009 he was a Research Engineer at Orange Labs, working on traffic modeling for wireline and wireless networks. His main research interests are in queueing theory and stochastic models of communication networks. He is especially interested in applications to the design and performance evaluation of resource allocation schemes like packet scheduling and medium access control algorithms. Thomas Bonald is the author of 70 scientific papers and 10 patents. He received with Alexandre Proutière the Best Paper Award of ACM SIGMETRICS / IFIP Performance 2004. He is with Mathieu Feuillet the author of the book Network Performance Analysis published by Wiley in 2011. He is a member of the Board of Directors of ACM SIGMETRICS and an Associate Editor of the IEEE/ACM Transactions on Networking.

Jean-Christophe Antona received the engineering degrees from the Ecole Polytechnique (1998), Palaiseau, France, then from the Ecole Nationale Supérieure des Télécommunications (2000), Paris, France. He received his Ph.D. in Electronics/Communications from Telecom ParisTech (2011), in Paris, France. He joined Alcatel Research & Innovation, at Marcoussis, France, in 2000, within the Photonic Network Unit. He studied the management of the interplay between chromatic dispersion and Kerr-induced nonlinearities in WDM Ultra-Long Haul optical systems and achieved the physical design of large-scale demonstration experiments at 10 and 40 Gb/s. He developed analytical models to simply design and compare optical transmission systems. Then, he studied the physical design of dynamic transparent optical networks. Now within Alcatel-Lucent Bell Labs, he is currently group leader of the Dynamic Optical Networks activity. He has authored or co-authored more than 70 publications (among which 4 post-deadline papers in major telecommunication conferences) and 15 patent applications.

