
1.2 Coded Caching

1.2.1 Cache-Aided Shared-Link Broadcast Channel

Coded Caching was proposed for the first time in [2], for a network where a server with access to a library of N files serves — through a noiseless shared-link channel of capacity 1 file per unit of time — a set of K users, each equipped with a storage unit of size M (in units of files), so that each user can store a fraction γ ≜ M/N of the library. The system

is assumed to operate in two different and sequential phases. The first, referred to as the cache placement phase, consists of filling the caches — during off-peak hours — with a fraction of the content of the library, without knowledge of the users' future demands.

The second phase, called the delivery phase, occurs when each user in the system requests a (potentially different) file from the library. The server then, depending on the requested files and the content cached at the users, transmits a codeword of duration T, which each user combines with its cached content to recover its requested file. For this setting, Maddah-Ali and Niesen proposed a cache placement phase that consists of splitting each file of the library into very small chunks and carefully placing them in the users' caches according to a specific combinatorial pattern. This novel cache placement induces coded multicasting opportunities in the delivery phase, which are exploited by a coding scheme that delivers XORs serving Kγ + 1 users simultaneously. The work in [2] has shown that, irrespective of the user demands, the normalized delivery time (NDT) is never larger than

T_{MAN} = \frac{K(1-\gamma)}{K\gamma + 1}, \qquad \gamma \in \left\{ 0, \frac{1}{K}, \frac{2}{K}, \dots, 1 \right\}.   (1.1)
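As a quick numerical illustration (this sketch is ours and not part of [2]; the function name and the choice K = 10 are arbitrary), the following Python snippet evaluates (1.1) at the admissible memory points γ = t/K and shows the delay shrinking as the cached fraction grows.

from fractions import Fraction

def t_man(K: int, t: int) -> Fraction:
    """Normalized delivery time (1.1) at the memory point gamma = t/K."""
    gamma = Fraction(t, K)
    return K * (1 - gamma) / (K * gamma + 1)

# Example: K = 10 users; t = K*gamma ranges over 0, 1, ..., K.
for t in range(11):
    print(f"gamma = {t}/10  ->  T_MAN = {t_man(10, t)}")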

If we take a closer look at the NDT in (1.1), we can identify two important quantities.

First of all, we identify the numerator term (1−γ), which is commonly referred to as the local caching gain and which tells us how much of each file must be transmitted to each user, in addition to the fraction that the user already holds in its own cache. Then, we identify the denominator term Kγ + 1, which is usually referred to as the global caching gain and which represents the number of users that can be served simultaneously in the delivery phase. We will also refer to this quantity as the sum degrees of freedom (DoF) of the network, defined here as

\mathrm{DoF} \triangleq \frac{K(1-\gamma)}{T},   (1.2)

reflecting the rate of delivery of the non-cached desired information. This achievable performance was proved to be information-theoretically optimal within a multiplicative factor of 2 in [3], and exactly optimal under the assumption of uncoded cache placement in [4] (see also [5]). Here, the term uncoded cache placement refers to any cache placement strategy that stores the bits of the library in the caches without applying any coding.

It is important to point out that T_{MAN} is optimal in terms of the worst-case performance, while a smaller delivery time can be achieved when a subset of users have identical requests [5]. Furthermore, the MAN scheme [2] requires that, during the placement phase, the server already be aware of the number and identity of the users that will use the system in the subsequent delivery phase. For this reason, this scheme is classified as a centralized scheme. In some practical scenarios, the identity of the users might not be available to the server, which is then forced to employ a so-called decentralized scheme [6]. Before presenting the general scheme achieving the performance in (1.1), we present a simple example that helps convey the idea of the general algorithm in [2].

A Toy Example

Consider a caching network where a server with a library of N = 4 equally-sized files W^{(1)}, W^{(2)}, W^{(3)}, W^{(4)} serves K = 4 users, each equipped with a cache of size M = 2.

We assume that the channel between the server and the users is noiseless, with capacity equal to one file per unit of time. In the cache placement phase, the MAN scheme in [2] splits each file of the library into 6 equally-sized subfiles as follows:

W^{(n)} = \{ W^{(n)}_{12}, W^{(n)}_{13}, W^{(n)}_{14}, W^{(n)}_{23}, W^{(n)}_{24}, W^{(n)}_{34} \},

where each subfile W^{(n)}_{\tau}, with n ∈ [N], τ ⊂ [K], |τ| = 2, has size/duration |W^{(n)}_{\tau}| = 1/6. Then, each user k ∈ [4] fills its cache Z_k in the following way:

Z_1 = \{ W^{(n)}_{12}, W^{(n)}_{13}, W^{(n)}_{14} \;\; \forall n \in [4] \},
Z_2 = \{ W^{(n)}_{12}, W^{(n)}_{23}, W^{(n)}_{24} \;\; \forall n \in [4] \},
Z_3 = \{ W^{(n)}_{13}, W^{(n)}_{23}, W^{(n)}_{34} \;\; \forall n \in [4] \},
Z_4 = \{ W^{(n)}_{14}, W^{(n)}_{24}, W^{(n)}_{34} \;\; \forall n \in [4] \}.

We observe that, because of the above cache placement, when a user places a request to the server, it needs to retrieve from the library only 3 of the 6 subfiles that compose the requested file, regardless of the specific request. Let us now assume that in the delivery phase each user requests a different file, so that users 1, 2, 3, 4 request files W^{(1)}, W^{(2)}, W^{(3)}, W^{(4)}, respectively. Then, the MAN strategy creates and sequentially transmits the following bit-wise XORs:

x_{123} = W^{(1)}_{23} \oplus W^{(2)}_{13} \oplus W^{(3)}_{12},
x_{124} = W^{(1)}_{24} \oplus W^{(2)}_{14} \oplus W^{(4)}_{12},
x_{134} = W^{(1)}_{34} \oplus W^{(3)}_{14} \oplus W^{(4)}_{13},
x_{234} = W^{(2)}_{34} \oplus W^{(3)}_{24} \oplus W^{(4)}_{23}.   (1.3)

Let us now focus on the transmitted message x_{123}. We observe that user 1 has in its own cache the subfiles W^{(2)}_{13} and W^{(3)}_{12}, which can therefore be removed from x_{123} to obtain the desired subfile W^{(1)}_{23} free of interference from the subfiles intended for users 2 and 3.

Due to its cached content, user 1 can similarly recover its other missing subfiles W^{(1)}_{24} and W^{(1)}_{34} from the transmitted messages x_{124} and x_{134}, respectively. Following similar arguments, we can conclude that all the other users can also successfully recover their desired subfiles from the transmitted messages in (1.3). The total normalized delivery time T needed to successfully deliver all the subfiles to the 4 users is given by the sum of the durations of the four messages above, i.e., T = 4 × 1/6 = 2/3. Notice that if we transmitted the 12 missing subfiles one by one, the total delivery time would be as high as T = 12 × 1/6 = 2. Thus, coded caching allows for a multiplicative reduction in the delay by a factor of DoF = 3.
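For concreteness, the toy example can be checked mechanically. The following Python sketch (an illustration of ours, with random byte strings standing in for subfiles and arbitrary variable names) reproduces the placement above, forms the four XORs of (1.3), and verifies that every user recovers its three missing subfiles from the transmissions and its cache.

import os
from itertools import combinations

K, N, SUBFILE_BYTES = 4, 4, 8
users = range(1, K + 1)

# Each file n is split into 6 subfiles W[(n, tau)], one per pair tau of users.
W = {(n, tau): os.urandom(SUBFILE_BYTES)
     for n in range(1, N + 1) for tau in combinations(users, 2)}

# Placement: user k caches every subfile whose label tau contains k.
Z = {k: {key: val for key, val in W.items() if k in key[1]} for k in users}

def xor(*blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# Demands: user k requests file k. One XOR per triple Q of users, as in (1.3).
d = {k: k for k in users}
x = {Q: xor(*(W[(d[k], tuple(j for j in Q if j != k))] for k in Q))
     for Q in combinations(users, 3)}

# Decoding: user k removes its two cached subfiles from every XOR involving it.
for k in users:
    for Q, msg in x.items():
        if k in Q:
            cached = [Z[k][(d[j], tuple(i for i in Q if i != j))]
                      for j in Q if j != k]
            assert xor(msg, *cached) == W[(d[k], tuple(j for j in Q if j != k))]
print("every user recovers its 3 missing subfiles; T = 4 x 1/6 = 2/3")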

The MAN Coded Caching Scheme

Having illustrated the main idea with the above example, we now provide the MAN placement and delivery schemes in their general form.

MAN Cache Placement Each file W^{(n)} of the library is split into S = \binom{K}{K\gamma} disjoint equally-sized subfiles as follows:

W^{(n)} = \left( W^{(n)}_{\tau} : \tau \subseteq [K], \, |\tau| = K\gamma \right).   (1.4)

For each n ∈ [N], subfile W^{(n)}_{\tau} is stored in the cache of user k if and only if k ∈ τ. It follows that the cache of user k ∈ [K] consists of the following content

Z_k = \{ W^{(n)}_{\tau} : \tau \ni k, \; \forall n \in [N] \}.   (1.5)

Hence, each user stores in its cache a total of N\binom{K-1}{K\gamma-1} subfiles, each of size 1/\binom{K}{K\gamma}, which accounts for a total used memory of size

N \binom{K-1}{K\gamma-1} \frac{1}{\binom{K}{K\gamma}} = M,   (1.6)

thus satisfying the per-user cache size constraint. We now proceed with the definition of the so-called subpacketization requirement of a coded caching scheme.

Definition 1. We use the term subpacketization requirement to refer to the number of subfiles into which a coded caching scheme has to split each file of the library.

The MAN coded caching scheme has a subpacketization requirement of S_{MAN} = \binom{K}{K\gamma}, which is exponential in K.
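To make the general placement concrete, the following Python sketch (our own rendering; the function name man_placement is hypothetical) enumerates the subfile labels τ cached by each user for an integer t = Kγ, and checks the subpacketization together with the memory accounting of (1.6).

from itertools import combinations
from math import comb

def man_placement(K, t):
    """MAN placement for t = K*gamma: return, for each user, the labels tau of
    the subfiles it caches (the same labels apply to every file of the library)."""
    labels = list(combinations(range(1, K + 1), t))   # all t-subsets of [K]
    Z = {k: [tau for tau in labels if k in tau] for k in range(1, K + 1)}
    return Z, len(labels)

K, t, N = 6, 2, 6
Z, S_man = man_placement(K, t)
print("subpacketization S_MAN =", S_man)              # comb(6, 2) = 15
# Memory check of (1.6): N * comb(K-1, t-1) subfiles of size 1/comb(K, t) each.
M = N * comb(K - 1, t - 1) / comb(K, t)
print("memory used per user:", M, "files, which equals N*gamma =", N * t / K)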

MAN Delivery Scheme Consider all the \binom{K}{K\gamma+1} sets Q ⊆ [K] of Kγ + 1 users. For each such set Q, the server creates a message denoted by x_Q and transmits it to the Kγ + 1 users in the set. The message x_Q takes the form

x_Q = \bigoplus_{k \in Q} W^{(d_k)}_{Q \setminus \{k\}}.   (1.7)

By construction, it can be easily verified that each subfile in the XOR is desired by one of the Kγ + 1 users in Q and is available in the local caches of the other Kγ users in Q.
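Continuing the illustrative sketch above (again with hypothetical names), the delivery phase of (1.7) can be enumerated as follows; each message is represented only by the pairs (file index, subfile label) that are XORed together, the bitwise combination itself being as in the toy example.

from itertools import combinations

def man_delivery(K, t, demands):
    """Multicast messages of (1.7): for every (t+1)-subset Q of users, XOR the
    subfiles labeled Q\\{k} of the files demanded by each user k in Q."""
    messages = []
    for Q in combinations(range(1, K + 1), t + 1):
        terms = [(demands[k], tuple(j for j in Q if j != k)) for k in Q]
        messages.append((Q, terms))    # each term: (file index, label tau)
    return messages

# Example: K = 6, t = 2, user k demands file k; comb(6, 3) = 20 transmissions.
msgs = man_delivery(6, 2, {k: k for k in range(1, 7)})
print(len(msgs), "transmissions; the first one XORs:",
      " + ".join(f"W[{n}]_{''.join(map(str, tau))}" for n, tau in msgs[0][1]))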

Achievable performance Observing that the total number of transmissions is \binom{K}{K\gamma+1} and that each such transmission takes 1/\binom{K}{K\gamma} units of time immediately tells us that the total delivery time can be expressed as in equation (1.1).
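For completeness, the ratio of binomial coefficients indeed simplifies to the expression in (1.1):

T = \frac{\binom{K}{K\gamma+1}}{\binom{K}{K\gamma}} = \frac{K - K\gamma}{K\gamma + 1} = \frac{K(1-\gamma)}{K\gamma + 1} = T_{MAN}.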

1.2.2 Extensions of the MAN Coded Caching Scheme to Other Settings
