FLOW AND CONGESTION CONTROL

gTFRC: a QoS-aware congestion control algorithm

as there are available resources and can decrease when congestion occurs. Nevertheless, this QoS support alone is not sufficient to cope with either the application requirements (e.g., reliability, timing) or the network control requirements. Most of today's Internet applications use TCP [23] as a means to transport their data. TCP offers a reliable, in-sequence, end-to-end stream-oriented data transfer service between two interconnected systems. Moreover, TCP implements flow and congestion control mechanisms in order to avoid receiver buffer overflow and network congestion. Despite TCP's good behavior in terms of available bandwidth sharing, it is not appropriate for many applications that integrate time and bandwidth constraints. TCP-Friendly Rate Control (TFRC) [11] is an equation-based congestion control mechanism operating in the best-effort Internet environment and competing with TCP traffic. TFRC has a much lower variation of throughput over time than TCP. As a result, it is more suitable for multimedia applications such as video streaming or VoIP.
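
The equation-based rate computation that makes TFRC smoother than TCP can be sketched from the TCP throughput equation in RFC 5348. The function name and the numeric inputs below are illustrative, not from the excerpt:

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """TCP throughput equation used by TFRC (RFC 5348):
    X = s / (R*sqrt(2bp/3) + t_RTO*(3*sqrt(3bp/8))*p*(1+32p^2))
    s: packet size (bytes), rtt: round-trip time (s), p: loss event rate,
    b: packets acknowledged per ACK."""
    if t_rto is None:
        t_rto = 4 * rtt  # RFC 5348 recommends t_RTO = 4*R
    denom = rtt * sqrt(2 * b * p / 3) \
          + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return s / denom  # allowed sending rate in bytes per second
```

Because the rate is a deterministic function of a smoothed loss event rate rather than a per-loss window halving, the sending rate varies slowly, which is the property the excerpt highlights for streaming and VoIP.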

VIRAL : coupling congestion control with fair video quality metric

Keywords: Video streaming · Congestion control · Flow rate fairness · Video quality fairness

1 Introduction

Multimedia services have grown rapidly thanks to the evolution of technologies such as high-speed networks (ADSL, Wi-Fi, 3G, LTE) and video-enabled devices (laptops, smartphones, tablets, etc.). Traditionally, live multimedia traffic is recommended to be delivered over UDP. Unlike TCP, UDP does not introduce End-to-End (E2E) latency resulting from in-order and reliable delivery. This property makes UDP suitable for interactive real-time applications such as Voice over IP (VoIP) and video conferencing. On the other hand, most commercial progressive download and streaming solutions (e.g. Apple HLS, Microsoft HSS, DASH) use TCP as the underlying transport protocol because end users can tolerate a short start-up delay for buffering. A challenge

On the congestion control of Peer-to-Peer applications : the LEDBAT case

The IETF endorses TCP NewReno [43], a high-priority loss-based congestion control algorithm for TCP. Recent evolutions of loss-based protocols include new algorithms like Cubic [77] and Compound TCP [80]. Cubic has become the default TCP algorithm in Linux since kernel version 2.6.18, and Compound TCP in the Windows operating system. 4CP [63] is a window-based congestion control algorithm, implemented as a sender-side modification of standard TCP NewReno [43]. Its controller design exploits a bad-phase (congestion) detector, in order to guarantee a long-term stable throughput to a 4CP connection when competing with a single TCP flow, but to use the available bandwidth when congestion is low. Two per-flow bandwidth guarantee configurations are possible: in fixed mode, the bandwidth is fixed by the user; in automatic mode, 4CP aims to be TCP-friendly over a large timescale. The best-known example of low-priority loss-based congestion control is TCP-LP [60], which is also available as a Linux kernel module. TCP-LP enhances the loss-based behavior of TCP Reno with an early congestion detection based on the distance of the instantaneous one-way delay from a weighted moving average calculated over all observations. In case of congestion, the protocol halves the rate and enters an inference phase, during which, if further congestion is detected, the congestion window is set to one and normal TCP Reno behavior is restarted. Vegas [27] was proposed as a high-priority delay-based congestion control alternative to the traditional loss-based TCP NewReno protocol. It derives its design choice of using RTT delay measurements as a proactive congestion signal from the pioneering work of Jain on CARD [55], presented in the late 1980s. Like LEDBAT, Vegas aims at introducing a small fixed amount of additional delay in the bottleneck, yet achieving better throughput and fewer retransmissions compared to standard TCP.
Furthermore, the protocol ensures fairness between multiple flows with heterogeneous propagation delays and does not suffer from stability problems. Among the low-priority protocols, NICE [81] extends the delay-based behavior typical of Vegas with a multiplicative-decrease reaction to early congestion, which is detected when the number of packets experiencing a large delay in an RTT exceeds a given threshold. Overall, the protocols closest in spirit to LEDBAT, i.e. those which aim at implementing an LBE service for background transfers, are NICE and TCP-LP. Additional details about their similarities and differences are presented in Sec. 3.1.4.
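
The TCP-LP-style early congestion test described above (instantaneous one-way delay compared against a moving average of past observations) can be sketched as follows. The class name, the EWMA weight, and the threshold form d_min + γ(d_max − d_min) are our assumptions for illustration, not taken from the excerpt:

```python
class TcpLpDetector:
    """Sketch of early congestion detection in the style of TCP-LP:
    smooth the observed one-way delay (OWD) and flag congestion when the
    smoothed value exceeds d_min + gamma * (d_max - d_min)."""
    def __init__(self, gamma=0.15, alpha=1 / 8):
        self.gamma, self.alpha = gamma, alpha
        self.smoothed = None
        self.d_min = float("inf")
        self.d_max = 0.0

    def observe(self, owd):
        # Track extremes and an exponentially weighted moving average.
        self.d_min = min(self.d_min, owd)
        self.d_max = max(self.d_max, owd)
        if self.smoothed is None:
            self.smoothed = owd
        else:
            self.smoothed = (1 - self.alpha) * self.smoothed + self.alpha * owd
        # True => early congestion indication: halve the rate, enter inference.
        return self.smoothed > self.d_min + self.gamma * (self.d_max - self.d_min)
```

On an early congestion indication, TCP-LP halves the rate; a second indication during the inference phase drops the window to one, as the excerpt explains.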

Fairness Measurement Procedure for the Evaluation of Congestion Control Algorithms

1 The term friendliness refers to the capacity of new CCAs to coexist with others without disrupting them, notably in terms of fairness.

flow has a saturation point in its path; and (iii) each flow obtains the best performance with respect to its requirements. To the best of our knowledge, the existing definitions of fairness do not capture one element that has become an essential development driver for service providers: the notion of session. Today's connections between a client and a server are long sessions where not all data have the same importance. Thus, service providers can develop strategies where a flow (session) preempts more resources at certain times of the connection, resulting in transient unfairness. As we will discuss in Section III, fairness should neither be measured at the end of a session, nor on a small fraction of its duration. What is missing is a definition that takes into account that the competition between flows should be considered at the level of sessions.
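
One way to make the session-level argument concrete is to measure fairness over successive windows of a session rather than once at the end. The sketch below uses Jain's fairness index, a standard metric; the function names and window scheme are ours:

```python
def jain_index(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly fair."""
    n = len(rates)
    s, sq = sum(rates), sum(x * x for x in rates)
    return s * s / (n * sq) if sq else 1.0

def windowed_fairness(traces, window):
    """Fairness per time window across flows.
    traces: equal-length per-flow throughput sample lists (hypothetical)."""
    length = len(traces[0])
    return [jain_index([sum(t[i:i + window]) / window for t in traces])
            for i in range(0, length - window + 1, window)]
```

A session that is transiently unfair shows up as low indices in some windows even if the end-of-session averages are equal, which is exactly the effect a single end-of-session measurement hides.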

Friendly P2P: Application-level Congestion Control for Peer-to-Peer Applications

usage. As a result, ISPs implementing such technologies may alienate and even lose subscribers without actually improving the performance of their network. Another approach requires the contribution of Internet routers. Integrated services (IntServ), based on the RSVP protocol, can be used to prioritize some flows, while differentiated services (DiffServ) define a set of classes of service which allow traffic management based upon broad flow aggregates. Unfortunately, these mechanisms require the cooperation of all routers. In more recent work, flow-aware networking (FAN) [8] provides per-flow differentiation to active flows through implicit admission control and per-flow scheduling. FAN requires the association of the end user and its access router to solve the problem of access network congestion, but replacing and updating current access routers is costly. We focus on mechanisms based only on end users, without the support of any router.

Congestion Control for Layered Multicast Transmission

1. Introduction

Several applications, like video conferencing, television over the Internet, and remote teaching, need multicast transmission. The most important problem with this kind of application is the need for real-time transmission. Using TCP as the transfer protocol is not appropriate for two reasons: TCP is unicast, and its flow/congestion control generates bursty traffic, which is not suitable for real-time flows. Using UDP (without flow control) is, on the other hand, unfair towards TCP because UDP would be too greedy. Therefore we need to complement UDP with an effective congestion control algorithm.

An analysis of NDN Congestion Control challenges

Finally, the third kind of solution is a hybrid approach that combines the previous solutions. In [4], PCON is designed this way: the end-to-end part is based on a congestion window at the consumer side, whereas the hop-by-hop part is a forwarding strategy on each NDN node. We respectively name them PCON-CS and PCON-FS in this paper. PCON-FS uses the CoDel approach [10] to detect congestion: nodes measure the sojourn time of each packet in their queue. If its mean during a given period is higher than a threshold, the interface is considered congested. This interface marks Data packets rather than dropping them, in order to trigger an explicit adaptation from the upstream nodes. Other nodes receiving a marked Data packet shall not use this interface for this flow and prefer the other available interfaces instead. Initially, PCON-FS uses the path as defined in BR. With the marks it receives, the traffic is progressively load-balanced onto the other available paths. The PCON-CS consumer has the same behavior as ICP: a congestion avoidance phase and the RTO computation from RFC 6298 [11]. In addition, a marked Data packet is considered a congestion notification and triggers a multiplicative decrease.
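
The PCON-FS detection step (mean packet sojourn time over a measurement period, mark instead of drop) can be sketched as below. The class name and the period/threshold values are illustrative assumptions; CoDel proper tracks the minimum sojourn time over an interval rather than the mean:

```python
from collections import deque

class SojournMarker:
    """CoDel-inspired marking sketch: keep recent per-packet sojourn times
    and mark (rather than drop) Data packets when the mean sojourn time
    over the period exceeds a threshold."""
    def __init__(self, threshold=0.005, period=100):
        self.threshold = threshold           # seconds of tolerated queueing
        self.samples = deque(maxlen=period)  # sliding measurement period

    def on_dequeue(self, enqueue_ts, dequeue_ts):
        self.samples.append(dequeue_ts - enqueue_ts)
        mean = sum(self.samples) / len(self.samples)
        return mean > self.threshold  # True => mark the Data packet
```

Marking rather than dropping is what lets downstream nodes react explicitly (avoid the interface, rebalance traffic) instead of inferring congestion from loss.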

2017 — Mobility Management and Congestion Control in Wireless Mesh Networks

CHAPTER 4 ADAPTIVE TRANSMISSION PREDICTIVE CONGESTION CONTROL

4.1 Introduction

Network congestion is a situation where a network node is carrying more data than it can handle. The network resources that should be considered when loading a network include buffer memory and processing speed. Problems associated with congestion include long queueing delays for jobs and data, packet loss, and new connections being blocked (Islam et al., 2014). The main steps common to most congestion control algorithms proposed in the literature are congestion detection, congestion signalling, and flow rate adjustment. The first step is the most important and can be done in an explicit or implicit way. In implicit approaches, a node infers that there is congestion in the network whenever data transmission takes longer than usual, packets are lost, or acknowledgements are not received. This approach is not suitable for wireless networks, as delay and packet loss can result from other causes. Explicit congestion notification is the approach we follow in our proposed algorithm. Congestion status is stored when an explicit congestion notification is received. This congestion history is used in our proposed Variable Order Markov prediction model.
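
The idea of storing a history of explicit congestion notifications and predicting from it can be illustrated with a bounded-order Markov predictor. This is a simplified stand-in for the chapter's Variable Order Markov model; class and method names are ours:

```python
from collections import defaultdict, Counter

class CongestionPredictor:
    """Bounded-order Markov predictor over a history of explicit congestion
    notifications (1 = congested, 0 = clear)."""
    def __init__(self, order=3):
        self.order = order
        self.counts = defaultdict(Counter)  # context tuple -> next-bit counts
        self.history = []

    def record(self, ecn_bit):
        # Update counts for every context length up to `order`.
        for k in range(1, self.order + 1):
            if len(self.history) >= k:
                self.counts[tuple(self.history[-k:])][ecn_bit] += 1
        self.history.append(ecn_bit)

    def predict(self):
        # Back off from the longest matching context to shorter ones.
        for k in range(self.order, 0, -1):
            if len(self.history) < k:
                continue
            ctx = tuple(self.history[-k:])
            if self.counts[ctx]:
                return self.counts[ctx].most_common(1)[0][0]
        return 0  # default: no congestion expected
```

A sender can use the prediction to reduce its transmission rate before the next notification arrives, which is the "adaptive transmission, predictive" part of the chapter title.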

Mobile TFRC: a congestion control for WLANs

and download flows. For illustration purposes, we consider that two mobile nodes upload TFRC flows (whose transmission rates are respectively 11 Mb/s and 2 Mb/s) and three mobile nodes download TFRC flows with a transmission rate of 5.5 Mb/s. Fig. 6 shows that by default, each upload flow occupies much more bandwidth (an average ratio of three) than each download flow. Conversely, when using MTFRC improved with the proposed fairness mechanism, we observe a fair share of the bandwidth between the upload and download flows (Fig. 7). Indeed, in this case, when applying equations (7), (10) and (11), since the AP is considered an upload node we have N = 3, N_1 = N_3 = 1, N_2 = 1 (which corresponds to the 3 aggregated download nodes), and we get X_up = X_m = 968 Kb/s and X_fair = X_up × 3/5 = 581 Kb/s. The sending rate of each upload mobile node is then limited to X_fair to allow the download nodes to share the same bandwidth.
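
The numerical step X_fair = X_up × 3/5 can be checked directly. The helper below is hypothetical (the excerpt does not give the general form of equations (7), (10) and (11)); it only reproduces the example's scaling of X_up by the node counts:

```python
def mtfrc_fair_rate(x_up, n_up_nodes, n_total_flows):
    """Hypothetical helper reproducing the paper's numerical example:
    cap each upload flow at X_fair = X_up * n_up_nodes / n_total_flows."""
    return x_up * n_up_nodes / n_total_flows

# Example from the excerpt: X_up = 968 Kb/s, 3 upload-side nodes
# (2 uploaders + the AP), 5 flows in total.
x_fair = mtfrc_fair_rate(968, 3, 5)  # 580.8 Kb/s, reported as 581 Kb/s
```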

A survey of optimal network congestion control for unicast and multicast transmission

6.4 Multicast utility functions Our discussion of utility functions so far concerns unicast utility functions. Any utility function can of course be used in a multicast context, but fairness between unicast and multicast flows can be impacted by the utility functions used, as pointed out in [22]. Consider the single-rate multicast case, for which the formulation (1) applies. A typical multicast session will use many more links than would a unicast flow between the (unique) source and any given receiver. The aggregated link price for the multicast session will thus typically be higher and the resulting session rate lower at the optimum. Unicast flows will tend to be unfair to multicast sessions inside the optimization framework, so that it is reasonable to contemplate the use of a bias in the utility function in order to compensate for this.
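
The price-aggregation effect can be made concrete with the standard network-utility-maximization setup. Assuming a weighted logarithmic utility U(x) = w·log(x) (a common choice, not stated in the excerpt), the optimal session rate is w divided by the aggregated path price:

```python
def optimal_rate(weight, link_prices):
    """For U(x) = w*log(x), the utility-maximizing rate is x* = w / sum(prices).
    A multicast session traversing more links sees a larger aggregated price
    and thus a lower optimal rate; increasing w is the compensating 'bias'."""
    return weight / sum(link_prices)
```

With equal per-link prices, a session crossing five links gets one fifth of the rate of a one-link unicast flow, which is the unfairness the survey suggests correcting through a biased utility function.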

DiffServ-Aware Flow Admission Control and Resource Allocation Modeling

possible extension of the SLAs and highlights the links to consider in priority for network evolutions.

1. INTRODUCTION

Packet-switched networks traditionally operate in best-effort mode, without providing any guarantee or prediction regarding the bandwidth allocated to flows, the losses, or the delays. The emergence of multimedia applications like video streaming, games, IPTV… underlines the need for quality of service (QoS). Several approaches have been explored to guarantee a required amount of resources for some given flows. One possible solution is over-provisioning, ensuring that the network is always far from congestion. The under-utilization of links makes certain that packets will arrive at intermediate nodes with reduced competition from other flows and will be classified into nearly empty queues. Over-provisioning protects packets from losses and delays, but is not always possible to implement. For example, access network bandwidth is not indefinitely extensible. The best solution is the DiffServ architecture, standardized by the IETF [2][3]. When QoS is required, the application or the first QoS-aware equipment marks packets to indicate the required QoS, and each node applies the associated treatment. DiffServ defines several classes of service (CoS): Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). Each CoS is associated with a Per-Hop Behavior (PHB), as explained in Section 2. The idea of this architecture is to offer better treatment to given flows according to their needs.
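
The marking step ("the application ... marks packets to indicate the required QoS") is directly available to applications through the IP TOS byte, whose upper six bits carry the DSCP. A minimal sketch, assuming a Linux-like platform (other platforms may ignore or reject `IP_TOS`):

```python
import socket

# Expedited Forwarding is DSCP 46; the DSCP occupies the upper six bits
# of the TOS byte, so the value to set is 46 << 2 = 184.
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)  # 184 on Linux
sock.close()
```

Each DiffServ-capable router then applies the PHB associated with the marked class, as described above.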

Congestion control for coded transport layers

Fig. 1: Schematic of the experimental testbed.

sharing a common throughput bottleneck), fairness is affected through E[β̃_i]. Friendliness: Eq. (2) is sufficiently general to include AIMD with a fixed backoff factor, as used by standard TCP. We consider two cases. First, consider a loss-free link where the only losses are due to queue overflow and all flows back off when the queue fills. In this case, E[β̃_i] = β_i(k). For flow i with a fixed backoff of 0.5 and flow j with adaptive backoff β_j, the ratio of the mean flow throughputs is E[s_i]/E[s_j] = 2(1 − β_j) (by Eq. (4)), assuming both flows have the same RTT. Note that the throughputs are equal when β_j = T_j/RTT_j = 0.5. Since RTT_j = T_j + q_max/B, where q_max is the link buffer size, β_j = 0.5 when q_max = B·T_j (i.e., the buffer is half the bandwidth-delay product). Second, consider the case where the link has i.i.d. packet losses with probability p. If p is sufficiently large, the queue rarely fills and queue overflow losses are rare. The throughput of flow i with a fixed backoff of 0.5 can then be accurately modeled using the Padhye model [10]. Specifically, the throughput is largely decoupled from the behavior of other flows sharing the link, since coupling takes place via queue overflow. This means that flows using an adaptive backoff do not penalize flows that use a fixed backoff. Section V-C presents experimental measurements confirming this behavior.
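
The friendliness relations above can be checked numerically. The function names are ours; the formulas are exactly those in the excerpt (β_j = T_j/RTT_j with RTT_j = T_j + q_max/B, and throughput ratio 2(1 − β_j)):

```python
def adaptive_beta(t_j, q_max, bandwidth):
    """beta_j = T_j / RTT_j, with RTT_j = T_j + q_max/B.
    t_j: base (queue-free) one-way delay contribution, q_max: buffer size,
    bandwidth: link rate B, all in consistent units."""
    rtt_j = t_j + q_max / bandwidth
    return t_j / rtt_j

def throughput_ratio(beta_j):
    """E[s_i]/E[s_j] = 2*(1 - beta_j): fixed-backoff (0.5) flow i vs
    adaptive-backoff flow j, equal RTTs, loss-free bottleneck."""
    return 2 * (1 - beta_j)
```

Setting q_max = B·T_j gives β_j = 0.5 and a ratio of exactly 1, confirming the "buffer equals half the bandwidth-delay product" equal-share condition.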

Demonstration of Reduced Airport Congestion Through Pushback Rate Control

Figure 3: Regression of the takeoff rate as a function of the landing rate, parameterized by the number of props in a 15-minute interval for the 22L, 27 | 22L, 22R configuration, under VMC [9].

each additional prop departure. This observation is consistent with procedures at BOS, since air traffic controllers fan out props in between jet departures, and therefore the departure of a prop does not interfere very much with jet departures. The main implication of this observation for the control strategy design at BOS was that props could be exempt from both the pushback control as well as the counts of aircraft taxiing out (N). Similar analysis also shows that heavy departures at BOS do not have a significant impact on departure throughput, in spite of the increased wake-vortex separation that is required behind heavy weight category aircraft. This can be explained by the observation that air traffic controllers at BOS use the high wake vortex separation requirement between a heavy and a subsequent departure to conduct runway crossings, thereby mitigating the adverse impact of heavy weight category departures [9].

Cooperative Congestion Control in NDN

-FS" and "PCON-CS + DRF" show the same kind of oscillations as TCP. This phenomenon is directly linked to the management of the congestion window, which drops drastically when congestion is detected. Conversely, our solution is very stable and is able to converge quickly when a new flow starts or ends. This is because CCC does not blindly increase the flows' rates and assigns the constraints such that no congestion can occur. Indeed, unlike an end-to-end solution, each node knows the rate it can reach and the capacity it has available. The only case where congestion appears is when a new flow starts, but it is detected locally using the AQMs and the constraint values are reduced to resolve the problem. When flow A ends, CCC is able to quickly reallocate the bandwidth so that flow B then uses the entire capacity of the path. For the two other

Demonstration of reduced airport congestion through pushback rate control

6. Observations and lessons learned

We learned many important lessons from the field tests of the pushback rate control strategy at BOS, and also confirmed several hypotheses through the analysis of surveillance data and qualitative observations. Firstly, as one would expect, the proposed control approach is an aggregate one, and requires a minimum level of traffic to be effective. This hypothesis is further borne out by the observation that there was very little control of pushback rates in the most efficient configuration (4L, 4R | 4L, 4R, 9). The field tests also showed that the proposed technique is capable of handling target departure times (e.g., EDCTs), but that it is preferable to get EDCTs while still at the gate. While many factors drive airport throughput, the field tests showed that the pushback rate control approach could adapt to variability. In particular, the approach was robust to several perturbations to runway throughput, caused by heavy-weight-category landings on the departure runway, controllers' choice of runway crossing strategies, birds on the runway, etc. We also observed that, when presented with a suggested pushback rate, controllers had different strategies to implement it. For example, for a suggested rate of 2 aircraft per 3 minutes, some controllers would release a flight every 1.5 minutes, while others would release two flights in quick succession every three minutes. We also noted the need to consider factors such as ground crew constraints, gate-use conflicts, and different pre-flight procedures for

A Stable and Flexible TCP-friendly congestion control protocol for layered multicast transmission

Figure 4(a) shows the sharing in the first extreme case, i.e. a short RTT (1 ms) and TCP begins before CIFL. We see that CIFL is so aggressive in the beginning that it can take more bandwidth than TCP. After this, it decreases its throughput to get exactly what TCP gets. In the medium

Copa : practical delay-based congestion control for the internet

We ask whether it is possible to develop a congestion control algorithm that achieves the goals of high throughput, low queueing delay, and fair rate allocations […]

Multi-Agent System for Smart-Grid Control with Commitment Mismatch and Congestion

As mentioned earlier, the power consumption/production imbalance constitutes one of the issues that could be mitigated by these ancillary services. Such imbalances are managed by Balance Responsible Parties (BRPs). Their imbalances are calculated over a set of loads and power sources, called a Balance Perimeter (BP), and over an imbalance settlement period (soon to be harmonised to 15 min in Europe [7]). They correspond to the difference between their day-ahead commitment in terms of power consumption/production and their actual consumption/production. Should the actual production be greater, or the actual consumption be lower, than expected, the Transmission System Operator (TSO) pays for the energy mismatch to the BRP. In the opposite case, the BRP must pay for this energy mismatch to the TSO.
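
The settlement rule described above reduces to a signed imbalance times a price. The function and the sign convention below are an illustrative sketch (the excerpt does not give the pricing formula); here the commitment and actuals are expressed as net injection into the grid, so a surplus is positive:

```python
def settlement(committed_mwh, actual_mwh, price_per_mwh):
    """Signed settlement over one imbalance settlement period.
    Positive result: the TSO pays the BRP (actual net injection exceeded
    the day-ahead commitment). Negative: the BRP pays the TSO."""
    imbalance = actual_mwh - committed_mwh
    return imbalance * price_per_mwh
```

In practice the buy and sell prices for imbalances differ and depend on the overall system state, which this single-price sketch deliberately ignores.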

Q-AIMD: A Congestion Aware Video Quality Control Mechanism

saw-tooth behavior prevents the application from efficiently adapting its sending rate. Furthermore, the buffering at the sender side might exceed the delay constraint of the application. As a result, TCP is able to support real-time traffic (e.g., live streaming) only if the fair share is at least twice the source bit rate [4]. For all these reasons, the support of real-time applications has turned towards protocols allowing out-of-order delivery and rate-based congestion control, such as TCP-Friendly Rate Control (TFRC) [5], which does not implement a retransmission mechanism. TFRC [6] is a rate-based congestion control mechanism specifically designed to carry multimedia traffic. This protocol is widely adopted as a transport mechanism for such traffic due to its smooth sending rate. It allows applications that use a fixed packet size to compete fairly with TCP flows using the same packet size.
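
The saw-tooth problem can be seen in a toy AIMD trace: the instantaneous rate repeatedly dips far below its peak, so a live stream needs a fair share well above its bit rate to avoid starving during the dips. The parameters below are illustrative, not from the paper:

```python
def aimd_trace(rounds, w0=10.0, loss_every=20):
    """Toy AIMD sender: window grows by 1 per round and is halved every
    `loss_every` rounds (a periodic-loss approximation). The returned
    per-round window is a proxy for the instantaneous sending rate."""
    w, out = w0, []
    for i in range(1, rounds + 1):
        w += 1.0
        if i % loss_every == 0:
            w /= 2.0  # multiplicative decrease on loss
        out.append(w)
    return out
```

A rate-based scheme like TFRC avoids these halvings entirely, which is why the excerpt calls its sending behavior smooth.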

A DCCP Congestion Control Mechanism for Wired- cum-Wireless Environments

We have proposed here a new sender-based loss discrimination scheme. We use the concept of the Zig-Zag scheme for loss discrimination, but apply it at the sender side. All we need for loss discrimination with the Zig-Zag scheme is the value of ROTT, its mean value and its deviation; the latter two can be calculated once the value of ROTT is available. The DCCP/TCP-like protocol calculates the value of the Round-Trip Time (RTT). In the absence of congestion, the time for a packet to reach the receiver should be the same as the time for an acknowledgement to reach the sender. This leads to the notion that ROTT is one half of the RTT. So, instead of implementing loss discrimination at the receiver side plus an additional loss notification mechanism, we can implement loss discrimination at the sender side, taking ROTT as one half of the RTT. This considerably reduces the changes required in the sender-side protocol. In our protocol agent, we have implemented loss discrimination at the sender side and have taken ROTT as one half of the RTT. The results obtained are not as good as they would be with a proper loss notification mechanism, but this is a good compromise between simplicity and reasonably good results.
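
The sender-side scheme above can be sketched as follows. The class name, the EWMA weights, and the exact threshold form (mean + deviation) are our assumptions for illustration; only the ROTT = RTT/2 approximation and the mean/deviation inputs come from the excerpt:

```python
class ZigZagDiscriminator:
    """Sender-side loss discrimination sketch: estimate ROTT as RTT/2,
    keep an EWMA mean and mean deviation, and classify a loss as
    'congestion' only when ROTT is elevated (queueing build-up),
    otherwise as 'wireless'."""
    def __init__(self, alpha=1 / 8):
        self.alpha = alpha
        self.mean = None
        self.dev = 0.0

    def on_ack(self, rtt):
        rott = rtt / 2.0  # ROTT approximated as half the measured RTT
        if self.mean is None:
            self.mean = rott
        else:
            self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(rott - self.mean)
            self.mean = (1 - self.alpha) * self.mean + self.alpha * rott

    def classify_loss(self, rtt):
        rott = rtt / 2.0
        return "congestion" if rott > self.mean + self.dev else "wireless"
```

A congestion-classified loss triggers the normal rate reduction, while a wireless-classified loss can be ignored by the congestion controller, which is the point of discriminating in the first place.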