
Converse Bound for the Scenario with Given Computation and Reduce


In this section we provide the proof of Theorem 11. We start with some useful notation.

For a subset S ⊆ [Λ] and a file assignment M, we denote the number of files that are exclusively mapped by j nodes in S as

a^{(j)}_{\mathcal{S},\mathcal{M}} \triangleq \sum_{\mathcal{J} \subseteq \mathcal{S}\,:\,|\mathcal{J}| = j} \Big| \Big( \bigcap_{q \in \mathcal{J}} \mathcal{M}_q \Big) \setminus \Big( \bigcup_{i \notin \mathcal{J}} \mathcal{M}_i \Big) \Big|,

which in turn can be rewritten as

a^{(j)}_{\mathcal{S},\mathcal{M}} = \frac{1}{j} \sum_{\lambda \in \mathcal{S}} a^{(j,\lambda)}_{\mathcal{S},\mathcal{M}}    (8.49)

where a^{(j,λ)}_{S,M} is the number of files that are exclusively mapped by j nodes in S including node λ. It holds that

\sum_{j=1}^{\Lambda} a^{(j,\lambda)}_{[\Lambda],\mathcal{M}} \leq \gamma_\lambda N, \quad \lambda \in [\Lambda].    (8.50)
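As a concrete illustration, the following minimal Python sketch (illustrative only; the file assignment used is a hypothetical toy example) computes a^{(j)}_{S,M} and a^{(j,λ)}_{S,M} by brute force, and checks the identity (8.49) together with the fact that the sum on the left-hand side of (8.50) equals the number of files mapped by node λ.

```python
# Minimal sketch (illustrative only): brute-force computation of a^(j)_{S,M} and
# a^(j,lambda)_{S,M} for a hypothetical file assignment, checking identity (8.49)
# and the fact that sum_j a^(j,lambda)_{[Lambda],M} equals |M_lambda|, the quantity
# bounded by gamma_lambda * N in (8.50).
from itertools import combinations

N = 6                                            # library size (files 0..N-1)
M = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4, 5}}      # hypothetical file assignment M_lambda
nodes = sorted(M)                                # the node set [Lambda]

def exclusively_mapped_by(J):
    """Files mapped by every node in J and by no node outside J."""
    mapped_by_all = set.intersection(*(M[q] for q in J))
    mapped_outside = set().union(*(M[i] for i in nodes if i not in J))
    return mapped_by_all - mapped_outside

def a_j(S, j):
    """a^(j)_{S,M}: files exclusively mapped by some j-subset of S."""
    return sum(len(exclusively_mapped_by(J)) for J in combinations(S, j))

def a_j_lam(S, j, lam):
    """a^(j,lambda)_{S,M}: same count, restricted to j-subsets containing lam."""
    return sum(len(exclusively_mapped_by(J))
               for J in combinations(S, j) if lam in J)

S = tuple(nodes)
for j in range(1, len(S) + 1):                   # identity (8.49)
    assert j * a_j(S, j) == sum(a_j_lam(S, j, lam) for lam in S)
for lam in nodes:                                # left-hand side of (8.50)
    assert sum(a_j_lam(S, j, lam) for j in range(1, len(S) + 1)) == len(M[lam])
```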

Furthermore, for some \mathcal{Q} ⊆ [K] and \mathcal{N} ⊆ [N], we define

V_{\mathcal{Q},\mathcal{N}} \triangleq \{ V_{q,n} : q \in \mathcal{Q},\, n \in \mathcal{N} \},

where V_{q,n} was introduced at the beginning of the previous section and represents the random variable whose realization gives the intermediate value v_{q,n}.

Next, for a subset S ⊆ [Λ], we define

X_{\mathcal{S}} \triangleq (X_\lambda : \lambda \in \mathcal{S}),

as well as

Y_{\mathcal{S}} \triangleq (V_{\mathcal{L}_{\mathcal{S}},:},\, V_{:,\mathcal{M}_{\mathcal{S}}}),

where L_S = ∪_{λ∈S} L_λ, M_S = ∪_{λ∈S} M_λ, and where we use ":" to denote the set of all possible indices. Y_S contains all the intermediate values required by the nodes in S and all those known locally by the nodes in S after the map phase.
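Purely as an aid to the notation, the next short sketch assembles the index pairs behind Y_S for hypothetical toy reduce and file assignments, mirroring the definitions of L_S and M_S above (all the sets used are illustrative placeholders).

```python
# Minimal sketch (illustrative only): index pairs (q, n) of the intermediate values
# collected in Y_S = (V_{L_S,:}, V_{:,M_S}). The reduce assignment L and the file
# assignment M below are hypothetical toy values.
Q, N = 4, 6                                      # number of reduce functions and files
L = {1: {0}, 2: {1, 2}, 3: {3}}                  # reduce functions assigned to each node
M = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4, 5}}      # files mapped by each node

def Y_indices(S):
    """All (q, n) indices of intermediate values contained in Y_S."""
    L_S = set().union(*(L[lam] for lam in S))    # reduce functions needed by nodes in S
    M_S = set().union(*(M[lam] for lam in S))    # files mapped (hence locally known) by S
    needed = {(q, n) for q in L_S for n in range(N)}      # V_{L_S, :}
    known = {(q, n) for q in range(Q) for n in M_S}       # V_{:, M_S}
    return needed | known

print(len(Y_indices({1, 2})))                    # number of intermediate values in Y_{1,2}
```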

We can now start the proof by presenting an important lemma which is instrumental for the derivation of the lower bound.

Lemma 12. For any file assignment M and any reduce load vector L, it holds that

H(X_{\mathcal{S}} \mid Y_{\mathcal{S}^c}) \geq T \sum_{\lambda \in \mathcal{S}} \sum_{j=1}^{|\mathcal{S}|} a^{(j,\lambda)}_{\mathcal{S},\mathcal{M}} \, \frac{\sum_{q \in \mathcal{S}} L_q - j \cdot L_\lambda}{j^2}, \quad \mathcal{S} \subseteq [\Lambda].    (8.51)

Proof. The proof of Lemma 12 is presented in Appendix D.1.
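While the proof is deferred to the appendix, the right-hand side of (8.51) is straightforward to evaluate numerically. The following sketch does so for every subset S of a small hypothetical instance; the value size T, the reduce loads and the file assignment are all illustrative placeholders, and the counting routine re-implements a^{(j,λ)}_{S,M} as defined above.

```python
# Minimal sketch (illustrative only): numerical evaluation of the right-hand side of
# (8.51). T, the reduce loads L and the file assignment M are hypothetical values.
from itertools import combinations

T = 1.0                                          # size (in bits) of one intermediate value
L = {1: 1, 2: 2, 3: 1}                           # reduce loads L_lambda
M = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4, 5}}      # hypothetical file assignment M_lambda
nodes = sorted(M)

def a_j_lam(S, j, lam):
    """a^(j,lambda)_{S,M}: files exclusively mapped by a j-subset of S containing lam."""
    count = 0
    for J in combinations(S, j):
        if lam not in J:
            continue
        exclusive = set.intersection(*(M[q] for q in J)) \
            - set().union(*(M[i] for i in nodes if i not in J))
        count += len(exclusive)
    return count

def rhs_8_51(S):
    """Right-hand side of (8.51) for the subset S."""
    sum_L_S = sum(L[q] for q in S)
    return T * sum(a_j_lam(S, j, lam) * (sum_L_S - j * L[lam]) / j**2
                   for lam in S for j in range(1, len(S) + 1))

for r in range(1, len(nodes) + 1):
    for S in combinations(nodes, r):
        print(S, rhs_8_51(S))
```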

We now proceed to lower bound the optimal communication load τ(L,γ) as follows:

where (8.54) is due to Lemma 12.

Recalling that Σ_{λ=1}^{Λ} γ_λ = t, under the assumption of homogeneous file assignment we have that

Applying this assumption in (8.54), we obtain that the optimal communication load τ_hom(L,γ) can be lower bounded as

where (a) follows from (8.49), (b) follows from (8.55) and (8.56), and (c) from the fact that Σ_{λ∈[Λ]} L_λ(1−γ_λ)/(Kt) does not depend on the specific file assignment M. The proof of Theorem 11 is complete.

Conclusions and Future Directions

In this chapter, we briefly revisit the major contributions of this thesis and present some possible future directions.

9.1 Shared-Cache Networks with Single-Antenna Transmitter

Motivated by realistic cache-aided heterogeneous networks, in this thesis we have studied the fundamental limits of shared-cache networks with (potentially) heterogeneously populated caches. While we mostly focused on the case where each user is associated to only one cache, for a specific symmetric network configuration with as many users as caches we have also considered the setting where each user is associated to multiple helper caches.

9.1.1 Topology-Agnostic and Topology-Aware Scenarios

Brief Summary: For the topology-agnostic scenario, where the number of users connected to each cache is not known during the caching phase, we have characterized the optimal normalized delivery time under the assumption of uncoded cache placement. This result shows that, as expected, the sum-DoF of t + 1 = Λγ + 1 is achievable only when users are distributed uniformly among the caches. This is in contrast to the topology-aware scenario, where we have shown that a proper memory allocation as a function of the number of users associated to the caches allows for multicasting opportunities that, in turn, result in a sum-DoF of t + 1 regardless of the specific cache occupancy vector. At the same time, a carefully designed non-uniform memory allocation leads to higher local caching gains than the uniform case, thus surprisingly implying that the more skewed the cache occupancy vector is, the lower the NDT required to serve all the users.

For this topology-aware scenario, the optimality of our achievable performance requires, in addition to the aforementioned uncoded cache placement assumption, the assumption that all the bits of the library are stored across the caches the same number of times (homogeneous placement). Finally, we have demonstrated the benefits of memory allocation also in the case where only the average cache occupancies are known in the placement phase, and we have shown that even such partial knowledge is enough to far outperform the topology-agnostic scheme. We recall that all the above results also apply to the equivalent multiple file request problem.

Discussions for Possible Future Directions: Dealing with the heterogeneity of the setting raised interesting challenges in redesigning converse bounds as well as coded caching schemes, which generally thrive on uniformity. We hope and believe that the tools developed for the study of these problems can be useful for other heterogeneous settings. For example, we have seen how the proposed memory allocation for the topology-aware scenario is strongly related to the one used in [38] for the dedicated-cache setting where the link rates between the transmitter and the receivers are different. We hope that the techniques used in our converse bound for the topology-aware scenario could be exploited also in this unequal-link-rates setting in order to provide better information-theoretic guarantees than those presented in [38]. Furthermore, we observe that the high gains achieved by our topology-aware scheme come at the price of a high subpacketization requirement, which we recall takes the form S = Σ_{τ∈C^t_{[Λ]}} Π_{j=1}^{t} L_{τ(j)} (a small numerical sketch of this quantity follows this paragraph). In the finite file size regime with a large number of caches and users, this high subpacketization requirement might significantly limit the actual gains. Therefore, a possible interesting direction for future work could aim at designing achievable coded caching schemes with lower subpacketization requirements. Finally, we recall that our work assumes that the cost of fetching data from the caches is zero. However, while it is true that the data rates at which a cache-aided access point can serve the users are much higher than those at which a macro base station can serve them, it is also true that such rates are finite, and therefore fetching content from the caches is in general a costly operation. This raises the need to study shared-cache networks where accessing the caches has a non-zero cost.
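To give a rough feel for how quickly this subpacketization grows, the following minimal sketch evaluates S for a small hypothetical cache occupancy vector, under the assumption that C^t_{[Λ]} denotes the set of t-element subsets of [Λ].

```python
# Minimal sketch (illustrative only): the subpacketization S = sum over tau in C^t_[Lambda]
# of prod_{j=1}^{t} L_tau(j), assuming C^t_[Lambda] is the set of t-element subsets of
# [Lambda]. The cache occupancy vector below is hypothetical.
from itertools import combinations
from math import prod

def subpacketization(L_vec, t):
    """Sum over all t-subsets of caches of the product of their occupancies."""
    return sum(prod(L_vec[i] for i in tau)
               for tau in combinations(range(len(L_vec)), t))

L_vec = [4, 3, 2, 1]                 # hypothetical cache occupancy vector (Lambda = 4)
t = 2                                # cache redundancy t = Lambda * gamma
print(subpacketization(L_vec, t))    # 4*3 + 4*2 + 4*1 + 3*2 + 3*1 + 2*1 = 35
```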

9.1.2 Multi-Access Shared-Cache Network

In this thesis we have considered a special type of multi-access shared-cache network where each user is associated to z neighboring caches and where the number of caches equals the number of users. When our results were published, our work was the first to show that a global caching gain higher than the one achievable in the single-access setting (where each user is associated to only one cache) can be attained together with the full local caching gain arising from the several caches to which each user is connected.

We showed this interesting fact by proposing two achievable schemes for the special cases where Kγ = 2 and z = K − 1. For the latter case, our proposed scheme is optimal and it shows an ability to serve Kγz + 1 users simultaneously, as if there were zK caches in the network and each user had exclusive access to z of them. We acknowledge the limitations of our achievable schemes, which are designed only for these two aforementioned special cases.

Although, after the publication of our results, several other works have provided additional contributions on this topic, characterizing the fundamental limits of such networks still remains an open problem, both in terms of achievable schemes and converse bounds.

9.2 Cache-Aided MISO Broadcast Channel

Brief Summary: The first contributions of this thesis on the cache-aided MISO BC were obtained for the setting where each user is associated to a cache which, in turn, serves an arbitrary number of users. For this shared-cache network, we have assumed a topology-agnostic scenario and we have focused on the case where the number of antennas N0 at the transmitter is smaller than the number of users per cache. For this setting, we characterized the optimal normalized delivery time (DoF regime) under the assumption of uncoded cache placement, and we showed that the gain coming from having multiple antennas allows for a multiplicative reduction of the NDT by a factor of N0.

For the complementary regime where N0 can be higher than the number of users per cache, the optimal NDT remains generally unknown. However, for the special case where the users are distributed uniformly among the caches (with an integer constraint on the ratio between N0 and the number of users per cache K/Λ), we have proposed a scheme achieving the optimal one-shot linear DoF of Kγ + N0 regardless of the value of Λ, as long as Λ ≥ K/N0. This result suggests that in a dedicated-cache setting where, motivated by subpacketization constraints, we use user grouping such that the users in the same group store the same content, arranging users in fewer than K/N0 groups does not allow to achieve the optimal one-shot linear DoF of the network. Furthermore, for the dedicated-cache setting where the number of antennas N0 at the transmitter exceeds the total cache redundancy Kγ, we proposed a scheme that achieves a DoF of Kγ + α (α ≤ N0) with a low subpacketization requirement. We analysed the performance of this scheme also in the finite-SNR regime, where we used optimized MMSE-type beamformers that require low computational complexity. Interestingly, the parameter α can be tuned both to control subpacketization and to trade multiplexing gains for beamforming gains, which we know to be very important in the low-SNR regime. Finally, the low computational complexity of the beamforming design, combined with the low subpacketization, makes our scheme suitable for large networks with many users and many antennas at the transmitter.

Discussions for Possible Future Directions: With minor modifications, our topology-agnostic multi-round scheme presented in Section 3.3 could be applied to the regime where N0 ≥ Lλ, ∀λ ∈ [Λ]. Also, we recall that our converse bound developed in Section 3.4 holds for this regime. However, we know that, for this regime, the converse bound is loose, and we believe that the aforementioned multi-round scheme fails to achieve the optimal performance. Therefore, the optimal DoF of the topology-agnostic setting in the regime where N0 ≥ Lλ, ∀λ ∈ [Λ] remains an open problem. In Chapter 4, we have briefly mentioned that the topology-aware scheme developed for the shared-link shared-cache network can be extended to the case where the transmitter is equipped with multiple antennas as long as N0 ≤ Lλ, ∀λ ∈ [Λ]. The details were not provided in this thesis. For the complementary regime where N0 ≥ Lλ, ∀λ ∈ [Λ], we believe that if the optimal achievable scheme for the topology-agnostic scenario were known, then, in light of the results of this thesis, the extension of such a scheme to the topology-aware scenario might be straightforward. Finally, for the dedicated-cache setting, we believe that an interesting future direction would be to work towards a decentralized variation of the scheme proposed in Chapter 6, similarly to [53], where each user picks at random one of a set of pre-defined cache states. We believe that such a practical ramification of the proposed scheme could still achieve good finite-SNR performance.
