
2.5 Buffer Management for Traffic Classes with QoS Requirements

The algorithms proposed thus far assume all end-to-end sessions (and, thus, messages) to be of equal importance. In many envisioned scenarios, network nodes might be running multiple applications in parallel. In this context, ensuring successful data delivery and/or minimizing the delivery delay may be more important for one DTN application than for another. Consider the example of a military operation where two applications are launched concurrently at the DTN nodes: one reporting position information of friendly forces periodically and another generating mission debriefings less frequently. The acceptable delivery delay for the first is lower than for the second, since, after some time, a reported position may be stale. On the contrary, ensuring that a single mission debriefing message is delivered successfully may be more important than losing some (out of many) position updates. It is thus reasonable to assume that different messages might have different QoS requirements, and resource allocation decisions should take these into account.

To model this setup we make some additional assumptions. We assume there are $C$ different traffic classes with different QoS requirements. For example, focusing on the case of delivery ratio, class $k$ is assumed to have a minimum acceptable delivery probability $P_{QoS}(k)$ (a class $k$ can also be "best effort", i.e. $P_{QoS}(k) = 0$). Similarly, we use the superscript $(k)$ to refer to a specific quantity for class $k$ (e.g. $n_i^{(k)}$).
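For illustration, the per-message, per-class state described above could be kept as in the following minimal sketch; the field names and the example $P_{QoS}$ values are hypothetical, not taken from the text.

```python
from dataclasses import dataclass

# Hypothetical per-class QoS targets (class 1 = highest priority); the values
# are placeholders. Class 3 is "best effort" (target 0).
P_QOS = {1: 0.9, 2: 0.7, 3: 0.0}

@dataclass
class Message:
    """State a node keeps for a live message i of class k."""
    msg_id: int
    traffic_class: int    # k, with priority decreasing in k
    elapsed_time: float   # T_i: time since the message was created
    remaining_ttl: float  # R_i: remaining time until expiry
    n_copies: int         # n_i^{(k)}: copies currently allocated to message i
    m_seen: int           # m_i^{(k)}: nodes that have "seen" message i so far

    @property
    def qos_target(self) -> float:
        return P_QOS[self.traffic_class]
```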

Within this context, our goal is to modify the previous optimization framework with the following objectives:

Feasible Region: If there are enough resources in the network,

Obj. 1 To ensure that the achieved delivery probability for every class $k$ is at least as high as $P_{QoS}(k)$.

Obj. 2 Provided that the minimum delivery ratios are achieved, any remaining resources are allocated towards improving the total delivery rate (across classes).

Infeasible Region: If there are not enough resources to fulfill Obj. 1, then

Obj. 3 Allocate the resources so as to satisfy the requirements of higher priority classes (without loss of generality, we assume priority is decreasing with $k$).

Objective (1) above suggests that satisfying the QoS requirement of each class is a hard constraint. Objective (2) implies that, beyond satisfying the QoS, extra resources should be allocated to the messages that can benefit most from them. Given the decreasing nature of message marginal utilities (with respect to the number of copies), shown earlier, this results, as we shall see, in a max-min allocation policy of the remaining resources. Finally, Objective (3) says that, if there are not enough resources, then class

priorities are absolute and all resources should be greedily allocated to the highest priority classes. The following formulates the above objectives into an optimization problem.

Definition 2.5.1 (Buffer Management with QoS Guarantees). Consider a network snapshot at time $t$ with $K$ live messages, and let $R_i$, $T_i$, $m_i^{(k)}$ be the remaining time, the elapsed time, and the number of nodes that have seen message $i$ after elapsed time $T_i$, respectively (the superscript $(k)$ denotes that a quantity refers to class $k$). If the following optimization problem has a feasible solution $n_i^{(k)}$, then any optimal solution satisfies Obj. 1 and Obj. 2. The key additional constraint, compared to the original best-effort problem, is Eq. (2.6).

It says that the delivery probability of message $i$ equals 1 if the message has been delivered already (which happens with a probability that depends on $m_i^{(k)}$ and the number of nodes $L$, as in the best-effort formulation), or $P_i^{(k)}(t)$ if it has not (this probability depends on the number of allocated copies $n_i^{(k)}$), and the sum of both terms should be at least as large as the desired QoS for this message's class. This constraint is convex in $n_i^{(k)}$, so the centralized problem remains convex.

Eq. (2.7) is not a constraint per se (otherwise the problem would not be convex, since it is not affine), but just defines the notation $P_i^{(k)}(t)$ for brevity. If one could centrally and instantaneously choose all values of $n_i^{(k)}$, it is easy to see that Eq. (2.6) captures Obj. 1 and the objective captures Obj. 2. Any interior-point method could solve this problem centrally. Nevertheless, as explained earlier, such a centralized policy cannot be implemented in our DTN context. Our goal instead is to ensure that every node takes drop or scheduling decisions independently, in a distributed manner. Additionally, the above problem does not give any guarantees when it is infeasible, and thus does not satisfy Obj. 3.
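For concreteness, a rough sketch of the kind of centralized formulation described above is given below. The exact Eqs. (2.5)-(2.7) are not reproduced here; the already-delivered probability $m_i^{(k)}/(L-1)$, the exponential form of $P_i^{(k)}(t)$ (with pairwise meeting rate $\lambda$ and $L$ nodes), and the total buffer budget $B$ are assumptions carried over from the best-effort model.

```latex
% Sketch only: the delivery-probability model below is an assumption,
% not the chapter's exact equations.
\begin{align*}
\max_{\{n_i^{(k)}\ge 0\}}\quad
  & \sum_{k=1}^{C}\sum_{i}\Big[\tfrac{m_i^{(k)}}{L-1}
      +\Big(1-\tfrac{m_i^{(k)}}{L-1}\Big)P_i^{(k)}(t)\Big] \\
\text{s.t.}\quad
  & \tfrac{m_i^{(k)}}{L-1}
      +\Big(1-\tfrac{m_i^{(k)}}{L-1}\Big)P_i^{(k)}(t)\;\ge\;P_{QoS}(k)
      \quad \forall\, i,k \quad\text{(cf. Eq. (2.6))}\\
  & P_i^{(k)}(t) = 1-e^{-\lambda\, n_i^{(k)} R_i}
      \quad\text{(cf. Eq. (2.7))}\\
  & \sum_{k}\sum_{i} n_i^{(k)} \;\le\; B .
\end{align*}
```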

Based on the above observations, we propose the following modified utilities, which nodes can use for both scheduling and dropping messages (as in the case of best-effort traffic, considered earlier), towards achieving these objectives.

$$U_i^{(k)}(DR) \;=\; U_i(DR)\cdot\Big[\max\Big(1,\; c_k \cdot \mathbb{1}\big\{\text{Eq. (2.6) is violated for message } i\big\}\Big)\Big] \qquad (2.10)$$

where $c_k$ is a large constant, and the constants are ordered so that higher priority classes have much larger constants ($c_k \gg c_l$ whenever class $k$ has higher priority than class $l$).
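As an illustration of how a node might evaluate such a utility, the sketch below reuses the Message dataclass and P_QOS map from the earlier snippet and assumes the exponential inter-contact delivery model; the concrete best-effort utility $U_i(DR)$, the meeting rate $\lambda$, the number of nodes $L$, and the class constants are placeholders, not the chapter's exact expressions.

```python
import math

# Placeholder model parameters (assumptions): pairwise meeting rate, number of
# nodes, and one large penalty constant per class, larger for higher priority.
LAMBDA = 0.01
L = 100
C_K = {1: 1e6, 2: 1e3, 3: 1.0}

def delivery_prob(msg: Message) -> float:
    """Estimated delivery probability of msg: probability it was already
    delivered plus the probability of delivery within the remaining TTL
    (assumed exponential inter-contact model, as in the best-effort case)."""
    p_already = msg.m_seen / (L - 1)
    p_future = 1.0 - math.exp(-LAMBDA * msg.n_copies * msg.remaining_ttl)
    return p_already + (1.0 - p_already) * p_future

def best_effort_utility(msg: Message) -> float:
    """Sketch of U_i(DR): marginal gain in delivery probability per extra copy."""
    p_already = msg.m_seen / (L - 1)
    return (1.0 - p_already) * LAMBDA * msg.remaining_ttl * \
        math.exp(-LAMBDA * msg.n_copies * msg.remaining_ttl)

def qos_utility(msg: Message) -> float:
    """Modified utility in the spirit of Eq. (2.10): the best-effort utility is
    inflated by the class constant c_k while the class-k QoS constraint
    (cf. Eq. (2.6)) is violated, and left unchanged otherwise."""
    violated = delivery_prob(msg) < P_QOS[msg.traffic_class]
    penalty = max(1.0, C_K[msg.traffic_class] * (1.0 if violated else 0.0))
    return best_effort_utility(msg) * penalty
```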

The above utility function achieves the following: (i) It first pushes the solution back into the feasible domain for each QoS constraint (because the term inside the max is much higher than 1 when the constraint is violated). (ii) If there is a feasible solution (i.e., Eq. (2.6) can be satisfied for all messages of all classes), then it performs gradient ascent among feasible solutions (as in the problem without QoS constraints);

this ensures that Obj. 1-2 above are eventually satisfied. (iii) If there is no feasible solution to the problem, the requirement on the constants $c_k$ ensures that the available resources are allocated to the highest priority class until its constraint is satisfied, then the remaining resources to the second highest priority class, and so forth.

This can be seen as a distributed implementation of the centralized problem of Def. 2.5.1, where the constraints of Eq. (2.6) are introduced in the objective as penalty functions. The penalty functions corresponding to the chosen utility are hard barrier functions, as they are 0 when the constraint is not violated but take a very high value when the constraint is violated even a little. During a contact, the nodes involved then update only a subset of the control variables (independently and possibly in parallel with other pairs), corresponding only to the messages inside their two buffers.

Note that it is often more common to assume "soft" penalty functions in distributed implementations (gradually tightening the constraint), in order to ensure that the algorithm does not get stuck on the border of the feasible region. However, due to the convexity of the problem together with the randomized nature of the coordinate ascent here (the control variables updated at each step are random, and depend on the nodes that meet each other), a hard constraint like the above does not pose a problem. An implementation with soft penalty functions would perhaps be interesting when the quantities $n_i$ and $m_i$ are in fact noisy estimates obtained as explained in the next section. We defer this to future work.
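To illustrate the distributed, per-contact nature of these updates, the following sketch (reusing qos_utility and the Message dataclass from the earlier snippets, with a hypothetical node.buffer dict API) shows one possible replication/drop rule driven by the modified utilities; it is a simplified reading of the mechanism, not the chapter's exact algorithm.

```python
from dataclasses import replace

def replicate(msg: Message) -> Message:
    """Copy a message, incrementing its copy count n_i (network-wide
    bookkeeping of n_i / m_i is abstracted away; in practice these are
    noisy estimates, as discussed in the next section)."""
    return replace(msg, n_copies=msg.n_copies + 1)

def on_contact(node_a, node_b, buffer_capacity: int) -> None:
    """Sketch of the pairwise update during a contact (a randomized
    coordinate-ascent-like step over the messages in the two buffers):
    each node considers the peer's messages it does not hold, in decreasing
    order of modified utility, and, when its buffer is full, drops its
    lowest-utility message if the incoming one scores higher.
    node_x.buffer is assumed to be a dict {msg_id: Message}."""
    for receiver, sender in ((node_a, node_b), (node_b, node_a)):
        candidates = [m for mid, m in sender.buffer.items()
                      if mid not in receiver.buffer]
        for msg in sorted(candidates, key=qos_utility, reverse=True):
            if len(receiver.buffer) < buffer_capacity:
                receiver.buffer[msg.msg_id] = replicate(msg)
            else:
                worst = min(receiver.buffer.values(), key=qos_utility)
                if qos_utility(msg) > qos_utility(worst):
                    del receiver.buffer[worst.msg_id]   # drop lowest utility
                    receiver.buffer[msg.msg_id] = replicate(msg)
```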

2.5.1 Performance Evaluation

To evaluate our policy we considered three priority classes, namely Expedited (highest), Normal, and Bulk (lowest), based on the terminology of the bundle protocol specification [114] regarding different QoS classes. The BDR results are presented for various values of the total available buffer space in the network. We consider a setup where the nodes create bundles every $1/r$ seconds, meet each other with exponentially distributed inter-contact times with a common rate, and exchange their non-common bundle copies (and drop copies, if the buffer is full) according to the utilities of Eq. (2.10). We present some sample results, comparing the performance of our policy with two existing QoS-based buffer management policies, namely ORWAR [115] and CoSSD [116]. More details about the simulation setup and additional results (e.g. with real traces) can be found in [23].
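As a rough illustration of this setup, the toy event-driven loop below generates bundle creations every $1/r$ seconds and pairwise contacts with exponential inter-contact times; the node API (create_bundle), parameter values, and delivery accounting are assumptions, and it reuses on_contact from the previous sketch.

```python
import random

def simulate(nodes, rate_r: float, meet_rate: float, buffer_capacity: int,
             horizon: float, seed: int = 0) -> None:
    """Toy event-driven loop matching the described setup: each node creates a
    bundle every 1/r seconds, and each node pair meets after exponentially
    distributed inter-contact times with a common rate (assumed structure)."""
    rng = random.Random(seed)
    t = 0.0
    next_creation = 1.0 / rate_r
    # One exponential clock per (unordered) node pair.
    pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
    next_meeting = {p: rng.expovariate(meet_rate) for p in pairs}
    while t < horizon:
        pair, t_meet = min(next_meeting.items(), key=lambda kv: kv[1])
        t = min(t_meet, next_creation)
        if t == next_creation:
            for node in nodes:
                node.create_bundle(t)        # hypothetical node API
            next_creation += 1.0 / rate_r
        else:
            on_contact(pair[0], pair[1], buffer_capacity)
            next_meeting[pair] = t + rng.expovariate(meet_rate)
```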

Based on Fig. 2.2, it is clear that our scheme outperforms ORWAR. For low buffer values (i.e., < 500 buffer spaces), all three classes achieve a higher BDR with our scheme. ORWAR fails to reach even the required performance of the Expedited class, even when the resources are adequate to do so. For higher buffer availabilities, ORWAR's Expedited class reaches a higher BDR than the required threshold. However, this is not desirable based on the previous discussion, as it comes at the cost of the other two classes, whose performance is much lower than it could be. The superiority of our scheme is also captured by the overall network performance (Fig. 2.4, considering all classes), which is up to 20% higher with our policy compared to ORWAR.

Figure 2.2: QoS Policy vs ORWAR

Figure 2.3: QoS Policy vs CoSSD

Figure 2.4: Overall policies comparison

In Fig. 2.3, the results of the comparison with the CoSSD policy are shown. Inside the infeasible region (< 400), the lower classes, as well as the overall performance (Fig. 2.4), do better under CoSSD than under our policy. However, this comes at the cost of a significant performance degradation for the Expedited class, which does not manage to reach its required performance threshold for the smaller buffer sizes (< 400).

This is obviously contrary to the intended behavior, which dictates that our primary goal is to reach the desired performance for the Expedited class. The relative behavior of the two compared policies changes inside the feasible region (> 400). The Expedited class's BDR for CoSSD increases beyond its desired QoS threshold, even though the lower classes have not yet reached their own thresholds. As highlighted for the comparison with ORWAR, this is opposite to the optimal behavior. The consequence is that, in this buffer availability region, our policy outperforms CoSSD both in terms of the lower classes and in terms of overall network performance.