
From the document Fundamental limits of shared-cache networks (pages 32–35)

2.3 Thesis Outline and Summary of the Main Contributions

2.3.1 Shared-Cache Networks

The first type of network that we will study in this thesis is a cache-aided network where each cache serves an arbitrary number of users. In particular, similarly to the cache-aided shared-link setting discussed in Section 1.2.1, a shared-link shared-cache network consists of a server with a library of N files that serves K users through an error-free shared link that can deliver 1 file per unit of time. Each user is assisted at zero cost by one of the Λ (Λ ≤ K) caches in the system such that, during the delivery phase, each cache λ = 1, 2, . . . , Λ serves an arbitrary set of users U_λ. We will denote by L_λ the number of users associated with the λ-th most populated cache, and we will refer to the vector L = (L_1, L_2, . . . , L_Λ) as the cache occupancy vector. Notice that in general L_λ ≠ |U_λ|. We also assume that the cumulative size of the Λ caches is t times the size of the library.

This setting can be separated into two major scenarios: the first is the topology-agnostic scenario, where the number of users associated with each cache is not known during the cache placement phase, and the second is the topology-aware scenario, where, on the contrary, the caching phase is aware of the exact number of users assisted by each cache. Furthermore, within the context of shared-cache networks, we will study the shared-cache setting where each user has access to more than one cache. Within the framework of coded caching, we will refer to this problem as the multi-access coded caching problem. For this setting, we will focus on the symmetric scenario with K users and Λ = K caches, where each user has access to z different helper caches. We now proceed to present our main contributions for these settings.

Topology-Agnostic Coded Caching with Shared Caches: In this scenario, we assume that, during the cache placement phase, the exact number of users |U_λ| associated with cache λ is not available while, instead, the cache occupancy vector L is known. Under the assumption of uncoded cache placement, the optimal worst-case (among all possible users' requests) normalized delivery time takes the form

T(t, L) = \frac{\sum_{r=1}^{\Lambda - t} L_r \binom{\Lambda - r}{t}}{\binom{\Lambda}{t}}. \qquad (2.1)

We will see how this optimal performance is achieved with a uniform memory allocation, i.e., each cache is of size γ = t/Λ. This result allows us to conclude that, if the number of users associated with each cache is not known during the cache placement phase, any non-uniform memory allocation cannot be helpful. What we will also see is that the NDT in (2.1) corresponds to a sum DoF no larger than t + 1, which is achieved only if the users are distributed uniformly among the caches, i.e., when L_λ = K/Λ, ∀λ ∈ [Λ]. The result in (2.1) will be derived in Chapter 3, where we will present the achievable scheme and the corresponding matching converse. The latter is based on the index coding technique used in [4], which proved the optimality of the MAN normalized delivery time previously stated in equation (1.1). However, our heterogeneous setting required an interesting twist of the original converse technique that allowed us to push the lower bound up to the exact optimal performance. This result, which extends to the scenario where the server is equipped with a number of antennas N_0 no larger than the smallest number of users in a cache (i.e., N_0 ≤ L_Λ), can also be found in the following publications:
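To make the behavior of (2.1) concrete, the following short sketch (a numerical sanity check with hypothetical helper names, not code from the thesis) evaluates the NDT for a uniform and a skewed cache occupancy vector, and recovers the sum-DoF behavior described above:

```python
from math import comb

def ndt_agnostic(t, L):
    """Worst-case NDT of eq. (2.1) for the topology-agnostic setting.
    L is the cache occupancy vector, sorted in non-increasing order."""
    Lam = len(L)
    L = sorted(L, reverse=True)
    return sum(L[r - 1] * comb(Lam - r, t)
               for r in range(1, Lam - t + 1)) / comb(Lam, t)

# Lambda = 4 caches, cache redundancy t = 2, K = 8 users in total.
T_uniform = ndt_agnostic(2, [2, 2, 2, 2])   # L_lam = K / Lambda for all lam
T_skewed = ndt_agnostic(2, [5, 1, 1, 1])

# Sum DoF = total demanded data K(1 - gamma) over the delivery time,
# under the uniform memory allocation gamma = t / Lambda.
gamma = 2 / 4
dof_uniform = 8 * (1 - gamma) / T_uniform   # attains t + 1 = 3
dof_skewed = 8 * (1 - gamma) / T_skewed     # strictly below t + 1
```

The uniform occupancy attains the full sum DoF t + 1 = 3, while the skewed vector (5, 1, 1, 1) yields a strictly longer delivery time and hence a lower sum DoF, matching the discussion above.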

[41] E. Parrinello, A. Ünsal, and P. Elia, “Optimal coded caching in heterogeneous networks with uncoded prefetching,” in IEEE Information Theory Workshop (ITW), 2018.

[42] E. Parrinello, A. Ünsal, and P. Elia, “Fundamental limits of coded caching with multiple antennas, shared caches and uncoded prefetching,” IEEE Transactions on Information Theory, vol. 66, no. 4, pp. 2252–2268, 2020.

Topology-Aware Coded Caching with Shared Caches: In the considered topology-aware scenario, the exact number of users |U_λ| associated with each cache λ is known during the cache placement phase. Without loss of generality, we here assume that cache λ is the λ-th most populated cache¹, i.e., L_λ = |U_λ|, ∀λ ∈ [Λ]. This knowledge of the network topology opens up the opportunity for an optimized memory allocation as a function of L. For each λ ∈ [Λ], we will allocate to cache λ a (normalized) cache size

\gamma_\lambda \triangleq \frac{L_\lambda \sum_{q \in C_{t-1}^{[\Lambda] \setminus \{\lambda\}}} \prod_{j=1}^{t-1} L_{q(j)}}{\sum_{q \in C_{t}^{[\Lambda]}} \prod_{j=1}^{t} L_{q(j)}}, \qquad t \in \{0, 1, \dots, \Lambda\}, \qquad (2.2)

where we have used the notation C_k^T ≜ {τ : τ ⊆ T, |τ| = k}. For a total cache-size budget t, we will show that all users can be served within an NDT that is no larger than

T(t, L) = \frac{\sum_{\lambda=1}^{\Lambda} L_\lambda (1 - \gamma_\lambda)}{t + 1}, \qquad (2.3)

where {γ_λ}_{λ=1}^{Λ} (defined above) adheres to the global cache-size constraint \sum_{\lambda=1}^{\Lambda} \gamma_\lambda = t.

Equation (2.3) directly tells us that, since L_λ is the number of files requested by the users connected to cache λ, and since the quantity (1 − γ_λ) corresponds to the amount of data that each user connected to cache λ has to receive, the denominator t + 1 corresponds to the sum DoF of the network and reflects the number of users that can be served simultaneously. Interestingly, this memory allocation achieves the sum DoF of t + 1 regardless of the cache occupancy vector L. This nicely deviates from the topology-agnostic scenario, where this maximal DoF is achievable only when the users are uniformly distributed among the caches. Furthermore, in this thesis we will show that this achievable NDT is information-theoretically optimal under the assumption of uncoded and homogeneous cache placement. Here, the expression homogeneous cache placement refers to those placement schemes where all the bits of the library are cached the same number of times across the caches. The achievable scheme highlights the importance of memory allocation in heterogeneous settings, and the converse provides combinatorial tools that we believe can be used to develop bounds for other heterogeneous cache-aided settings.
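As a numerical sanity check on (2.2) and (2.3), the sketch below (hypothetical function names, not the thesis implementation) computes the allocation {γ_λ} for 1 ≤ t ≤ Λ, verifies that it meets the global budget, and confirms that the resulting sum DoF equals t + 1 even for a skewed occupancy vector:

```python
from itertools import combinations
from math import prod

def gammas(t, L):
    """Topology-aware memory allocation of eq. (2.2), for 1 <= t <= len(L).
    gamma_lam is L_lam times the sum over (t-1)-subsets of the other
    occupancies, normalized by the sum over all t-subsets."""
    Lam = len(L)
    denom = sum(prod(L[j] for j in q) for q in combinations(range(Lam), t))
    alloc = []
    for lam in range(Lam):
        others = [j for j in range(Lam) if j != lam]
        num = L[lam] * sum(prod(L[j] for j in q)
                           for q in combinations(others, t - 1))
        alloc.append(num / denom)
    return alloc

def ndt_aware(t, L):
    """NDT of eq. (2.3) under the allocation of eq. (2.2)."""
    return sum(Ll * (1 - gl) for Ll, gl in zip(L, gammas(t, L))) / (t + 1)

L, t = [5, 1, 1, 1], 2
g = gammas(t, L)
budget = sum(g)                 # meets the constraint: equals t
T = ndt_aware(t, L)
# Sum DoF = total demanded data over delivery time; equals t + 1 for any L.
dof = sum(Ll * (1 - gl) for Ll, gl in zip(L, g)) / T
```

For this skewed vector the topology-aware NDT evaluates to 8/9, against 8/3 from (2.1) with the same occupancy, illustrating how the optimized allocation restores the full sum DoF t + 1 that the agnostic scheme loses under non-uniform occupancies.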

These results will be presented in Chapter 4, where we will also discuss the scenario where the information on the topology (in particular on the cache occupancy vector) available during the cache placement phase is erroneous or imperfect. In this context, we will demonstrate the benefits of a memory allocation that exploits only the average cache occupancy, and show how even such partial knowledge far outperforms the topology-agnostic treatment. Part of our work on topology-aware shared-cache networks resulted in the following publication:

[43] E. Parrinello and P. Elia, “Coded caching with optimized shared-cache sizes,” in 2019 IEEE Information Theory Workshop (ITW), 2019, pp. 1–5,

and all of these results will soon be submitted as the following publication:

¹Because of the assumption that L_λ = |U_λ|, in this topology-aware scenario, saying that the exact number of users |U_λ| associated with each cache λ is known during the cache placement phase is equivalent to saying that the cache occupancy vector L is known during that phase.

[44] E. Parrinello, A. Bazco-Nogueras, and P. Elia, “Fundamental limits of topology-aware shared-cache networks,” to be submitted to IEEE Transactions on Information Theory, 2021.

Multi-Access Coded Caching: Chapter 5 addresses the multi-access coded caching problem where each of the K users in the system is connected to z consecutive caches. We assume that there are in total Λ = K caches, each with a normalized memory size γ = t/K, and that the topology of the network follows a cyclic-shift pattern (see Figure 5.1), where, for example, user K is connected to caches {K, 1, 2, . . . , z − 1}. For this setting, the work in [26] showed that the worst-case NDT

T(t, z) = \frac{K(1 - z\gamma)}{t + 1} \qquad (2.4)

is achievable, thus showing an increased local caching gain of (1 − zγ) compared to the MAN setting where each user has access to only one cache (z = 1). What the above was not able to show, though, was an increase in the global caching gain, which remained at t + 1 = Λγ + 1 = Kγ + 1. In our work we will show that, for the special case where z = (K − 1)/t, the NDT

T\!\left(t, \frac{K - 1}{t}\right) = \frac{K - zt}{zt + 1} = \frac{1}{K} \qquad (2.5)

is achievable, and it is optimal under the assumption of uncoded cache placement. This shows the ability of the scheme to serve zt + 1 users at the same time, as if there were zK caches in the system and each user had exclusive access to z of them. Furthermore, when t = 2 we show that a global caching gain greater than t + 1 = 3 is achievable while maintaining the full local caching gain (1 − zγ). In other words, for any z ∈ [K − 1] and t = 2, our achievable NDT satisfies

\frac{K - 2z}{4} < T(2, z) \leq \frac{K - 2z}{3}. \qquad (2.6)
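The gains in (2.4) and (2.5) are easy to check numerically; the sketch below (illustrative helper names, with t assumed to divide K − 1 in the special case) evaluates both expressions for a small example:

```python
def ndt_multi_access(K, t, z):
    """Worst-case NDT of eq. (2.4): Lambda = K caches of normalized
    size gamma = t / K, each user reading z consecutive caches."""
    gamma = t / K
    return K * (1 - z * gamma) / (t + 1)

def ndt_special_case(K, t):
    """Eq. (2.5) for z = (K - 1) / t, which must be an integer."""
    z, rem = divmod(K - 1, t)
    assert rem == 0, "special case requires t to divide K - 1"
    return (K - z * t) / (z * t + 1)

K, t = 7, 3                              # z = (K - 1) / t = 2
T_special = ndt_special_case(K, t)       # equals 1 / K
T_prior = ndt_multi_access(K, t, 2)      # eq. (2.4) with the same z
```

For K = 7 and t = 3, the special-case NDT is 1/K = 1/7, against 1/4 from (2.4) with the same z, reflecting the enlarged global caching gain zt + 1 = 7 versus t + 1 = 4.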

These results appeared in the following publication:

[45] B. Serbetci, E. Parrinello, and P. Elia, “Multi-access coded caching: gains beyond cache-redundancy,” in 2019 IEEE Information Theory Workshop (ITW), 2019, pp. 1–5.
