
A very different approach in caching was introduced in [1] by Maddah-Ali and Niesen. Compared to the previously discussed caching efforts, where the focus was concentrated into predicting the content popularity and exploiting this knowledge by storing the most popular files first, in the Coded Caching approach each cached content is used to reduce the received interference.


Figure 1.2: The bottleneck channel, with K users and N files.

Specifically, the model in the work of Maddah-Ali and Niesen [1] considered a server with access to a library of N files, connected via a wired, noise-less channel of link capacity 1 file per unit of time to K users, each equipped with a cache able to store the equivalent of M files, corresponding to a normalized cache size γ ≜ M/N (cf.

Figure 1.2). The system works in two distinct phases, where during the first (placement phase) content is pushed to the users in a manner oblivious to future requests. Then, during the second phase (delivery phase) each user requests one file from the server and the server transmits a sequence of messages in order to satisfy these demands.

1.1.1 Performance of Coded Caching

For the above described setting, the algorithm of [1] showed that it can achieve the normalized performance (delivery time) of

T1(K, γ) = K(1−γ) / (1 + Kγ) (1.1)

which also translates to a cache-aided Degrees-of-Freedom (DoF) performance of

D1(K, γ) ≜ K(1−γ) / T1(K, γ) = 1 + Kγ (1.2)

implying an ability to serve 1 + Kγ users at a time (all with a different requested content), over the same link that would, in the absence of Coded Caching, have allowed the serving of only one user at a time.

Toy Example

Before formally describing the placement and delivery algorithms of [1], we will first illustrate them through a toy example. Let us assume the single-transmitter, wired bottleneck channel with K = 2 users (see Figure 1.3), where the server has access to a library of N = 2 files, namely {W1, W2}. Further, let us assume that each user has the capacity to store content of size equal to M = 1 file.

Placement Phase The algorithm starts by splitting each of the two files into S = 2 parts (subfiles) as follows

W1 → {W^1_1, W^1_2} (1.3)

W2 → {W^2_1, W^2_2} (1.4)

and caching at the two users as

Z1 = {W^1_1, W^2_1} (1.5)

Z2 = {W^1_2, W^2_2} (1.6)

where Zk denotes the contents of user k's cache and W^n_k denotes the subfile of file Wn that is stored at user k.

Delivery Phase Denoting the two requested files by the letters A, B, i.e., A representing the file requested by user 1 and B the file requested by user 2, the algorithm transmits the message

x12 = A2 ⊕ B1 (1.7)

where ⊕ is the bit-wise XOR operator. We can see that this naming abstraction can represent any possible combination of the files W1, W2, thus for


Figure 1.3: The bottleneck channel, with K = 2 users and N = 2 files.

any possible request pattern the above message will satisfy users’ demands, as we will show in the following paragraph. Moreover, we can see that the above transmitted message contains both remaining desired subfiles, where the users can decode their desired subfile by “XORing” out the interfering message.

Specifically, user 1 will XOR out subfile B1, found in its cache, to recover A2, while user 2 will XOR out subfile A2 to recover B1.

As evident from Eq. (1.7), only one transmission is needed to satisfy both users' demands. Had the Coded Caching algorithm not been performed, the subfiles would need to be sent in separate transmissions, and the achievable reduction would amount only to the size of the already cached parts, i.e. the local caching gain. Taking these comments into consideration, we can see that the time required to complete all user requests is reduced by half.
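The toy example can be checked end-to-end with a short sketch; the byte-string file contents below are arbitrary placeholders, not part of the original example.

```python
# Toy example of Figure 1.3: K = 2 users, N = 2 files, M = 1.
# A is the file requested by user 1, B the file requested by user 2;
# the contents are arbitrary placeholder bytes.
A = b"aaaabbbb"
B = b"ccccdddd"
A1, A2 = A[:4], A[4:]   # subfile A1 is cached at user 1, A2 at user 2
B1, B2 = B[:4], B[4:]   # subfile B1 is cached at user 1, B2 at user 2

def xor(x: bytes, y: bytes) -> bytes:
    """Bit-wise XOR of two equal-length byte strings."""
    return bytes(u ^ v for u, v in zip(x, y))

x12 = xor(A2, B1)           # the single coded transmission of Eq. (1.7)

assert xor(x12, B1) == A2   # user 1 XORs out the cached B1, recovers A2
assert xor(x12, A2) == B1   # user 2 XORs out the cached A2, recovers B1
```

One coded message of subfile size thus replaces two uncoded subfile transmissions, matching the halved delivery time discussed above.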

Note Throughout this document we use the convention, unless otherwise stated, that each user requests a different file, thus the calculated delivery time corresponds to the worst-case delivery time. As a result, the number of files must always be at least as high as the number of users. For the scenario where the number of files is smaller than the number of users, as well as the scenario where more than one user requests the same file, the reader is referred to the works in [8, 9].

The Coded Caching algorithm

We proceed with a detailed description of the coded caching algorithm of [1].

It is important to note that some of the elements presented in this section will form the basis for the algorithms presented in the following chapters.

As discussed above, the server has access to a library of N files, {Wn}_{n=1}^{N}, where each file has size f bits. The server is connected to a set of K users, and each user can store in its cache M·f bits, corresponding to a normalized cache size γ ≜ M/N. The process starts with the placement phase.

Placement Phase During the placement phase the caches of the users are populated with content, where this phase takes place before any demand has been made.

Initially, each file is subpacketized into

S1 = C(K, Kγ) (1.8)

subfiles, where C(·, ·) denotes the binomial coefficient, as

Wn → {W^n_τ : τ ⊂ [K], |τ| = Kγ} (1.9)

where the index τ tells us that subfile W^n_τ is stored at the |τ| = Kγ caches indicated by the set τ.

Then, the cache of user k ∈ [K], i.e. Zk, is populated with content as follows

Zk = {W^n_τ : k ∈ τ, ∀n ∈ [N]} (1.10)

where we can see that this selection of content respects the cache-size constraint, since each user stores C(K−1, Kγ−1) of the S1 subfiles of every file, i.e. a fraction

C(K−1, Kγ−1) / C(K, Kγ) = Kγ / K = γ. (1.11)
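The placement rule of Eqs. (1.8)–(1.10), together with the cache-size check of Eq. (1.11), can be sketched as follows; the parameters K = 4, N = 4, Kγ = 2 are toy values chosen for illustration, and subfiles are represented only by their (n, τ) labels rather than actual bits.

```python
from itertools import combinations
from math import comb

def mn_placement(K: int, N: int, Kg: int):
    """MN placement sketch: user k caches every subfile label (n, tau)
    with k in tau, for each file n and each Kg-subset tau of [K]."""
    taus = list(combinations(range(1, K + 1), Kg))   # the S1 = C(K, Kg) indices
    caches = {k: [(n, tau)
                  for n in range(1, N + 1)
                  for tau in taus if k in tau]
              for k in range(1, K + 1)}
    return taus, caches

K, N, Kg = 4, 4, 2                      # gamma = Kg / K = 1/2
taus, caches = mn_placement(K, N, Kg)

assert len(taus) == comb(K, Kg)         # S1 = C(4, 2) = 6 subfiles per file
# Each user caches C(K-1, Kg-1) of the S1 subfiles of every file, i.e. a
# fraction C(K-1, Kg-1) / C(K, Kg) = Kg / K = gamma of the library, Eq. (1.11):
assert len(caches[1]) == N * comb(K - 1, Kg - 1)
assert len(caches[1]) / (N * len(taus)) == Kg / K
```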

From the placement algorithm (cf. Eq. (1.10)) we can make the following observations:

• First, as demonstrated, each user stores a fraction γ ∈ (0, 1) of each file, which departs from traditional caching policies that would either store a file in its entirety in a user's cache or not store it at all, and

• further, any file from the library can be found in its entirety in the collective cache of the users, which also constitutes a fundamentally different approach compared with traditional caching algorithms, where unpopular files may not be stored at any user,

• thus, we can deduce that the difference between traditional caching algorithms and the Coded Caching algorithm is that, in the first case, users are treated individually and content is stored with the intention of optimizing each user's delivery time regardless of the other users, while the Coded Caching approach is designed to exploit the collective cache of the users.

• Moreover, each subfile of every file is cached by a subset of exactly Kγ users and

• finally, the subfile index τ identifies which users have stored this subfile.

Delivery Phase The delivery phase commences with the request of one file by each of the K users, where we denote the demand of user k ∈ [K] by W_{d_k}.

1 for all σ ⊆ [K], |σ| = Kγ + 1 (select Kγ + 1 users) do

2 Transmit:

x_σ = ⊕_{k ∈ σ} W^{d_k}_{σ\{k}} (1.12)

3 end

Algorithm 1.1: Delivery Phase of the MN algorithm [1]

The delivery process is described in Algorithm 1.1, where we can see that in each iteration a new subset of Kγ + 1 users is selected. For this user subset, a XOR is formed in such a way as to contain, for each of the users in σ, one unknown and desired subfile, while all remaining subfiles can be found in that user's cache.

Decoding From Eq. (1.12) we have that, if user k belongs to the set σ of selected users, then the received message x_σ will contain a subfile that is desired by this user, i.e., W^{d_k}_{σ\{k}}. In order for user k to decode this subfile, it is required to remove the interfering subfiles. It is easy to see that each unwanted subfile (from user k's perspective) is indexed by a set that contains k, and is thus found in user k's cache, allowing this user to decode its desired subfile.
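Algorithm 1.1 and the decoding argument can be sketched at the index level as follows, returning for each subset σ the labels of the subfiles entering the XOR rather than actual bits; the demand map below is a hypothetical example with K = 3 and Kγ = 1.

```python
from itertools import combinations
from math import comb

def mn_delivery(K: int, Kg: int, demands):
    """For each (Kg+1)-subset sigma of users, list the subfile labels
    (d_k, sigma minus {k}) that are XORed together in x_sigma, Eq. (1.12)."""
    transmissions = []
    for sigma in combinations(range(1, K + 1), Kg + 1):
        xored = [(demands[k], tuple(u for u in sigma if u != k)) for k in sigma]
        transmissions.append((sigma, xored))
    return transmissions

demands = {1: "A", 2: "B", 3: "C"}   # hypothetical distinct demands, K = 3
msgs = mn_delivery(K=3, Kg=1, demands=demands)
assert len(msgs) == comb(3, 2)       # C(K, Kg+1) = 3 transmissions in total

# The first transmission serves sigma = (1, 2): user 1 obtains A_{2} (the part
# of A cached at user 2) and user 2 obtains B_{1}, as in the toy example.
assert msgs[0] == ((1, 2), [("A", (2,)), ("B", (1,))])

# Decodability: within each XOR, every subfile NOT intended for user k is
# indexed by a set tau containing k, hence found in user k's cache.
for sigma, xored in msgs:
    for k in sigma:
        assert all(k in tau for (d, tau) in xored if d != demands[k])
```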