

7.2 Low Cost Caching for Streamed Video Content

7.2.3 Generic Density Regime

We now consider a busy urban environment where contacts with different vehicles (storing the same content) might overlap, i.e., λ·N_i·E[D] is not small. If a user is downloading video i from node A and the connection is lost (e.g., the user or the cache moves away), the user can simply keep downloading from another node B storing i that is also in range. Hence, as long as there is at least one cache with a copy within range (we denote this time interval by B_i), the user will keep downloading content i at rate r_H (see footnote 8). We can then model these overlapping contacts with an extra G/G/∞ queue in front of the playout queue (as shown in Fig. 7.7). New vehicles arrive in the G/G/∞ queue with rate λN_i, each staying for a random service time (corresponding to a contact duration with mean E[D]), independently of the other cars. The number of jobs in the G/G/∞ queue is the number of cars concurrently within range of the user.
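To make this concrete, the number of such "jobs" can be read directly off a list of (contact start, duration) pairs with a simple sweep over contact events. The short Python sketch below is purely illustrative; the contact list and its values are assumptions, not taken from the text.

# Illustrative sketch (assumed inputs): count how many caches storing content i
# are concurrently within range of the user, i.e., the number of "jobs" in the
# G/G/inf queue of Fig. 7.7, given (start_time, duration) contact pairs.
def concurrent_caches(contacts):
    """contacts: list of (start_time, duration); returns a list of (time, count)."""
    events = sorted([(s, +1) for s, d in contacts] + [(s + d, -1) for s, d in contacts])
    count, timeline = 0, []
    for t, delta in events:
        count += delta
        timeline.append((t, count))
    return timeline

# Three overlapping contacts: the count stays positive over [0, 95).
print(concurrent_caches([(0, 40), (30, 35), (60, 35)]))

As long as the count stays above zero the user keeps downloading at rate r_H; the intervals where it drops to zero are the idle periods used in the analysis below.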

Hence, it is easy to see that: (a) the beginnings of busy periods of the queue on the left correspond to new bulk arrivals in the playout buffer (queue on the right), and (b) the mean duration of such busy periods, multiplied by r_H, corresponds to the (new) mean bulk size per arrival.

Lemma 10. For a generic density scenario, the bulk arrival statistics into the playout buffer are:

λ_i^P = λ·N_i·e^{-λE[D]·N_i},
E[Y] = (r_H/(λ·N_i))·(e^{λE[D]·N_i} - 1).    (7.16)

8 We ignore for now interruptions from switching between nodes. Such delays can be very small (e.g., on the order of a few ms if vehicles operate as LTE relays [247]). We consider switching and association delays in the simulation scenarios.

Proof. Let {Z_i^C(t), t > 0} be N_i identical and independent renewal processes corresponding to the inter-contact times with vehicles storing content i, drawn from distribution f_C(t) with rate λ (see A.5). Let further {Z^C(t), t > 0} be the superposition of these processes. According to the Palm-Khintchine theorem [265], {Z^C(t)} approaches a Poisson process with rate λN_i, if N_i is large and λ small. Thus, the G/G/∞ queue capturing overlapping meetings (Fig. 7.7, left) can be approximated by an M/G/∞ queue with arrival rate λN_i and mean service time E[D].

The probability that there are 0 jobs in the system (idle probability) is e^{-λE[D]·N_i} (this result is well known for the M/M/∞ queue, but it also holds for generic contact durations by the service process insensitivity of the M/G/∞ queue [233]). Furthermore, by ergodicity, it holds that (see footnote 9):

E[I_i] / (E[B_i] + E[I_i]) = e^{-λE[D]·N_i}.

Since E[I_i] = 1/(λN_i), solving for E[B_i] gives us the expected busy period of the M/G/∞ queue, and multiplying by r_H gives the expected bulk size E[Y] of Eq. (7.16).

Additionally, the beginnings of busy periods of the M/G/∞ queue correspond to (bulk) arrivals into the playout queue. The mean time between such arrivals is simply E[B_i] + E[I_i]. Hence, the arrival rate of bulks into the playout buffer is:

λ_i^P ≜ 1/(E[B_i] + E[I_i]) = λ·N_i·e^{-λE[D]·N_i}.
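The closed forms above are easy to cross-check numerically. The sketch below simulates the approximating M/G/∞ queue directly (all parameter values are assumptions chosen for illustration; uniform contact durations are used to hint at the insensitivity property) and compares the empirical idle fraction, mean busy period and bulk arrival rate against the expressions of Lemma 10.

# Simulation sketch (assumed parameters): M/G/inf queue with arrival rate lam*N
# and i.i.d. service times of mean E[D], compared against the closed forms.
import math, random

def mginf_stats(lam=0.005, N=10, ED=40.0, r_H=5e6, horizon=2e6, seed=0):
    rng = random.Random(seed)
    t, contacts = 0.0, []
    while t < horizon:
        t += rng.expovariate(lam * N)                   # Poisson superposition of contact starts
        contacts.append((t, rng.uniform(0.0, 2 * ED)))  # any duration law with mean E[D]
    events = sorted([(a, +1) for a, d in contacts] + [(a + d, -1) for a, d in contacts])
    busy_periods, n_bulks, active = [], 0, 0
    idle_time, prev, busy_start = 0.0, 0.0, None
    for time, delta in events:
        if active == 0:                                 # system idle since 'prev'
            idle_time += time - prev
            if delta == +1:                             # a busy period (= bulk arrival) starts
                busy_start, n_bulks = time, n_bulks + 1
        active += delta
        if active == 0 and busy_start is not None:      # busy period just ended
            busy_periods.append(time - busy_start)
        prev = time
    total = events[-1][0]
    return {
        "idle fraction (sim, formula)": (idle_time / total, math.exp(-lam * N * ED)),
        "E[B_i] in s   (sim, formula)": (sum(busy_periods) / len(busy_periods),
                                         (math.exp(lam * N * ED) - 1) / (lam * N)),
        "bulk rate     (sim, formula)": (n_bulks / total, lam * N * math.exp(-lam * N * ED)),
        "E[Y] in bits  (formula only)": r_H * (math.exp(lam * N * ED) - 1) / (lam * N),
    }

for name, values in mginf_stats().items():
    print(name, values)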

Lemma 11. For a generic density scenario, the following expression for the expected number of bits of content i downloaded from I is asymptotically tight as the content size s_i becomes large, when N_i < (1/(λE[D]))·ln(r_H/(r_H - r_P)):

s_i·(1 - (r_H/r_P)·(1 - e^{-λE[D]·N_i})).

Proof. The proof follows directly by replacing λ_i^P and E[Y] (from Lemma 10 above) into the proof of Lemma 7. The extra condition corresponds to the requirement ρ_i < 1 (for stationarity). Note that when this requirement is not satisfied, the mean delivery capacity of the helper system is higher than r_P, and the infrastructure is essentially not needed for large enough content.
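As a quick numerical illustration of the stationarity condition above (using the r_H and r_P values of Section 7.2.4, and assumed values for λ and E[D]):

# Stationarity threshold: rho_i < 1  <=>  N_i < ln(r_H/(r_H - r_P)) / (lam*E[D]).
# lam and ED are assumptions for illustration; r_H and r_P follow Sec. 7.2.4.
import math

lam, ED = 0.001, 30.0   # per-vehicle contact rate (1/s) and mean contact duration (s)
r_H, r_P = 5e6, 1e6     # helper download rate and playback rate (bps)
N_max = math.log(r_H / (r_H - r_P)) / (lam * ED)
print(round(N_max, 1))  # ~7.4: beyond this many copies, the helpers alone can sustain playback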

We can now formulate the optimal cache allocation problem for the generic density scenario:

9 We slightly abuse notation here: these idle and busy periods refer to the queue on the left, while in the proof of Lemma 7 they referred to the idle and busy periods of the playout buffer (queue on the right).

Theorem 2.2 (Generic Density Optimization). For a generic density scenario, the optimal content allocation is the solution to the following integer nonlinear program:

minimize_{N_i}   Σ_{i=1}^{k} ϕ_i·s_i·e^{-λE[D]·N_i}    (7.18)

subject to the constraints of Eqs. (7.10) and (7.11).

Proof. Based on Lemma 11, the total number of bits downloaded from I is equal to

Σ_{i=1}^{k} ϕ_i·s_i·(1 - (r_H/r_P)·(1 - e^{-λE[D]·N_i})),

and minimizing this quantity is equivalent to minimizing the objective function of Eq. (7.18), since the terms that do not depend on N_i are constant.
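The objective of Eq. (7.18) is cheap to evaluate for any candidate allocation, which is useful for comparing heuristics. A minimal sketch follows; the ϕ_i, s_i and N_i values are assumptions for illustration.

# Popularity-weighted infrastructure load of Eq. (7.18) for a given allocation N.
import math

def objective(phi, s, N, lam, ED):
    return sum(p * size * math.exp(-lam * ED * n) for p, size, n in zip(phi, s, N))

phi = [0.5, 0.3, 0.2]      # request probabilities (assumed)
s = [450e6] * 3            # roughly 1 h videos, as in Section 7.2.4 (bytes)
print(objective(phi, s, [3, 2, 0], lam=0.001, ED=30.0))  # lower is better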

Corollary 6. The above optimization problem is NP-hard.

The optimization problem described in Theorem 2.2 corresponds to a nonlinear bounded knapsack problem, which is known to be NP-hard. Branch-and-bound algorithms have been developed for such problems, but they do not scale well as the problem size increases. Instead, we consider here the continuous relaxation of the above problem, i.e., we assume that the N_i ∈ [0, H] are real-valued variables. It is easy to see that the continuous problem is convex, and we can solve it using standard Lagrangian methods [266]. In addition to the reduced complexity, this relaxation allows us to derive the optimal allocation in closed form, thus offering additional insight (e.g., compared to discrete approximation algorithms).

Corollary 7 (Generic Density Allocation). For a generic density scenario with continuous allocation variables N_i ∈ [0, H], the optimal cache allocation is given by

N_i = min{ H, max{ 0, ln(λE[D]·ϕ_i/m_C) / (λE[D]) } },

where m_C is an appropriate Lagrangian multiplier.
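To illustrate the Lagrangian approach, the sketch below solves the continuous relaxation by bisecting on the multiplier m_C. It assumes, as a stand-in for Eqs. (7.10) and (7.11), a single storage budget Σ_i N_i·s_i ≤ B plus the box constraints 0 ≤ N_i ≤ H; the actual constraints, and hence the exact constant inside the logarithm, may differ.

# Water-filling sketch for the continuous relaxation under assumed constraints:
# minimize sum_i phi_i*s_i*exp(-lam*ED*N_i)  s.t.  sum_i N_i*s_i <= B, 0 <= N_i <= H.
import math

def continuous_allocation(phi, s, lam, ED, H, B, iters=100):
    def alloc(m_C):
        # KKT stationarity of phi_i*s_i*exp(-lam*ED*N_i) + m_C*s_i*N_i, clipped to [0, H]
        return [min(H, max(0.0, math.log(lam * ED * p / m_C) / (lam * ED))) for p in phi]
    lo, hi = 1e-12, max(phi) * lam * ED          # at m_C = hi the allocation is all zeros
    for _ in range(iters):
        mid = math.sqrt(lo * hi)                 # bisect the multiplier on a log scale
        if sum(n * size for n, size in zip(alloc(mid), s)) > B:
            lo = mid                             # over budget: raise the "price" m_C
        else:
            hi = mid
    return alloc(hi)

phi = [0.4, 0.25, 0.15, 0.1, 0.06, 0.04]         # assumed popularity profile
s = [450e6] * 6
N = continuous_allocation(phi, s, lam=0.001, ED=30.0, H=50, B=40 * 450e6)
print([round(n, 2) for n in N])                  # more popular content gets more copies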

The proof is based on standard Lagrangian methodology and is omitted due to space limitations. It can be found in [94]. We use randomized rounding [267] on the content allocation of Corollary 7 to go back to an integer N_i allocation; this is a widely used approach for designing and analyzing such approximation algorithms. As argued earlier, the expected error is small when caches can fit several contents. To validate this, in Table 7.3 we compare the objective value achieved by our rounded allocation to the one corresponding to the continuous solution of Corollary 7 (we report the percentage of traffic offloaded). As the latter is a lower bound on the optimal solution of the discrete problem of Theorem 2.2, the actual performance gap is upper bounded by the values shown in Table 7.3.

Table 7.3: Estimated offloading gains of the rounded allocation vs. the continuous relaxation, for different cache sizes (as a percentage of the catalogue size).

Cache size (% of catalogue)   0.02      0.05      0.10      0.20
Rounded                       33.148%   45.334%   54.959%   62.751%
Continuous                    33.116%   45.323%   54.955%   62.750%
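A minimal sketch of the rounding step discussed above (one simple variant with an ad-hoc repair if the storage budget is exceeded; the exact scheme analyzed in [267] may differ):

# Randomized rounding: keep floor(N_i) copies and add one extra copy with
# probability equal to the fractional part, so that E[N_i] matches the
# continuous allocation.
import math, random

def randomized_round(N_cont, s, B, seed=0):
    rng = random.Random(seed)
    N_int = [math.floor(n) + (1 if rng.random() < n - math.floor(n) else 0) for n in N_cont]
    # ad-hoc repair: drop "bonus" copies (smallest fractional parts first) while over budget
    for i in sorted(range(len(N_int)), key=lambda j: N_cont[j] - math.floor(N_cont[j])):
        if sum(n * size for n, size in zip(N_int, s)) <= B:
            break
        if N_int[i] > math.floor(N_cont[i]):
            N_int[i] -= 1
    return N_int

print(randomized_round([3.6, 2.2, 0.7, 0.0], [450e6] * 4, B=6 * 450e6))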

The above results are all based on the underlying assumption of queue stationarity. In our simulation results we can see that, in the scenario considered, this assumption is valid for videos longer than 30 min, while the analysis is already accurate enough for 15 min videos. In any case, in [94] we have derived this stationarity error analytically as a function of the file size. One could plug it into the objective as a correction factor and solve the resulting optimization for even better results. Nevertheless, for a realistic catalogue of YouTube files, sufficient offloading gains can already be observed, as shown in the next section.

7.2.4 Performance Evaluation

We perform simulations based on real traces for vehicle mobility and content popularity to confirm the advantages of the vehicular cloud and to validate our theoretical results. To do so, we extend our previous simulator to also model User Mobility: we use synthetic traces based on the SLAW mobility model [268]. Specifically, according to this model, users move in a limited and well-defined area around popular places. The mobility is nomadic: users alternate between pauses (with heavy-tailed durations) and travel periods at constant (but random) speed.
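To make the mobility assumption concrete, the following sketch generates a simplified nomadic trace in the spirit of the description above. It is not an implementation of the full SLAW model of [268]; the hotspot positions, pause distribution and speed range are illustrative assumptions.

# Simplified nomadic mobility sketch (NOT full SLAW): heavy-tailed (Pareto)
# pauses at hotspots, alternating with straight-line trips at a constant,
# randomly chosen speed.
import random

def nomadic_trace(hotspots, duration, alpha=1.5, pause_min=30.0, speed_range=(0.5, 1.5), seed=0):
    rng = random.Random(seed)
    t, pos = 0.0, rng.choice(hotspots)
    trace = [(t, pos)]
    while t < duration:
        t += pause_min * ((1.0 - rng.random()) ** (-1.0 / alpha))   # Pareto pause
        dest = rng.choice(hotspots)
        speed = rng.uniform(*speed_range)                           # constant per trip (m/s)
        dist = ((dest[0] - pos[0]) ** 2 + (dest[1] - pos[1]) ** 2) ** 0.5
        t += dist / speed
        pos = dest
        trace.append((t, pos))
    return trace

print(nomadic_trace([(0, 0), (300, 100), (150, 400)], duration=3600)[:5])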

In line with the proposed protocols (e.g., 802.11p, LTE ProSe), we consider two maximum communication ranges between U and H nodes: 100 m (short range) or 200 m (long range). As most wireless protocols implement some rate adaptation mechanism, our simulator also varies the communication rate according to the distance between the user and the mobile helper she is downloading from (while the model uses the average download rate). Given current wireless rates in the 802.* family, and since near-future vehicles will probably carry high-speed mobile access points, we use a mean r_H = 5 Mbps. We also set r_P = 1 Mbps, which approximates the streaming of a 720p video. Additionally, we implement an association setup mechanism according to [247] that introduces a delay of 2 s to synchronize a UE with a vehicle (i.e., the download from a cache starts 2 s after the beginning of the contact). Finally, we set the cache size per node C in the range 0.02%-0.5% of the total catalogue, an assumption that has also been used in [260, 83] (we use 0.1% as the default value). Unless otherwise stated, the mean video length is 1 hour (i.e., a mean content size of 450 MB).
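For reference, the simulation parameters stated in this section can be collected as follows; the field names are ours, the values are those given above.

# Evaluation parameters of Section 7.2.4 (field names are our own choice).
SIM_PARAMS = {
    "comm_range_m": (100, 200),          # short / long range (802.11p, LTE ProSe)
    "r_H_mean_bps": 5e6,                 # mean user-to-helper rate (rate adaptation applied)
    "r_P_bps": 1e6,                      # playback rate, roughly 720p streaming
    "association_delay_s": 2.0,          # UE-to-vehicle setup delay [247]
    "cache_size_frac": (0.0002, 0.005),  # 0.02% to 0.5% of the catalogue
    "cache_size_frac_default": 0.001,    # 0.1% default
    "mean_video_length_s": 3600,         # mean content size of about 450 MB
}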