Worst-case, information and all-blocks locality in distributed storage systems: An explicit comparison

Roberta Barbi, Pascal Felber, Hugues Mercier and Valerio Schiavoni

University of Neuchâtel, Switzerland

first.last@unine.ch

Abstract—Distributed storage systems often use erasure coding techniques to provide reliability while decreasing the storage overhead required by replication. Due to the drawbacks of standard MDS erasure-correcting codes, numerous coding schemes recently proposed for distributed storage systems target other metrics such as repair locality and repair bandwidth. Unfortunately, these schemes are not always practical, and for most of them locality covers information data only. In this article, we compare three explicit linear codes for three types of locality: a Reed-Solomon code for worst-case locality, a recently proposed pyramid code for information locality, and the Hamming code HAM, an optimal locally repairable code directly built from its generator matrix, for all-blocks locality. We also provide an efficient way for repairing HAM and show that for the same level of storage overhead HAM provides faster encoding, faster repair and lower repair bandwidth than the other two solutions while requiring less than fifty lines of code.

I. INTRODUCTION

The easiest way to provide reliability in distributed storage systems is with data replication. Replication has interesting properties, including its relative simplicity and the fact that a system can repair a failed data object by fetching and copying any of its replicas, which is optimal. Unfortunately, replication entails a significant storage overhead, which can be decreased with erasure-correcting codes. With a systematic (n, k) linear code, each codeword consists of n blocks: k source blocks for the original data, and n − k parity blocks. The storage overhead is (n − k)/k, and if the code is Maximum Distance Separable (MDS) [1], any k of the n blocks are necessary and sufficient to decode the codeword and recover the original data. In distributed storage systems the n blocks are typically distributed on different physical disks, thus an MDS code can tolerate up to n − k disk failures, which is optimal. MDS codes, especially Reed-Solomon codes, are commonly used in practice, for instance by Facebook [2], [3].

Distributed storage systems execute periodic repair procedures, during which one of the most frequent error scenarios is a single failed block per codeword. Unfortunately, MDS erasure-correcting codes were not designed for this purpose and have the prohibitive drawback that k blocks must be fetched to repair a single failure. A lot of research has therefore been dedicated recently to improving the repair locality and repair bandwidth of state-of-the-art codes, and to designing new codes tailored for storage. These Locally Repairable Codes (LRC) can repair a small number of block erasures with fewer than k blocks at the price of an increased storage overhead. LRCs thus offer tradeoffs between MDS codes and replication [4]–[10]. Codes with good locality are now used by major cloud operators such as Microsoft in their Azure storage service [11] and virtualization platform [12].

A. Our contributions

Despite the recent advances in codes with good locality properties, there is a significant gap between theory and practice, in the sense that most LRCs cannot be realistically implemented in practical systems due to performance shortcomings such as high encoding/decoding/repair complexity and latency. Moreover, information locality and locality are sometimes used interchangeably, while it should be clear that whenever a code only offers information locality, there are stored blocks that do not benefit from the property.

We select the Hamming code (HAM) as a representative of codes with all-blocks locality: HAM can repair every block with three other blocks, which is optimal for (7, 4) codes. Furthermore, HAM is highly efficient on modern architectures because it encodes, decodes and repairs using only hardware-implemented bitwise-xor operations.

We emphasize that our long-term objective is to develop and deploy codes with good locality in large-scale distributed storage systems. In this article, we present our first steps towards this goal and provide:

• An efficient way to repair HAM above its theoretical repair capability (under maximum likelihood decoding the repair capability of a code with minimum distance d is d − 1 [1]),

• A comparison of HAM against two systematic linear coding schemes with the same storage overhead and different locality properties.

We describe encoding, decoding and repair algorithms for HAM, and formally study its locality and fault tolerance. We then evaluate our prototype with both synthetic failures and real-world traces against two codes with the same storage overhead: a (7, 4) Reed-Solomon code and a (7, 4) pyramid code [6] with information locality 2 built from a (6, 4) Reed-Solomon code. Comparing at the same level of storage overhead implies that the two non-MDS codes have lower fault tolerance. Nevertheless, the three schemes can withstand as many losses as triplication does while entailing a storage overhead of 75%. We show that HAM provides faster encoding, faster repair and lower repair bandwidth than the other two codes, and we discuss how this improves the fault tolerance by estimating the mean time to data loss. HAM decreases the repair bandwidth by up to 26% and repairs 5 times faster.

The remainder of this paper is organized as follows. Sections II and III present coding preliminaries and the existing codes used for comparison. The fault tolerance analysis and the repair procedures are presented in Sections IV and V. Finally, we evaluate the performance of our implementation in Section VI and conclude in Section VII.

II. PRELIMINARIES

A. Linear codes over finite fields

A linear code C of length n and dimension k ≤ n over GF(p) is a k-dimensional subspace of the vector space (GF(p))^n. We write C(n, k) or (n, k) to indicate such a code. The elements of the code are codewords, and the n components of a codeword are blocks. In this work we operate over GF(2^8): addition over GF(2^8) is simply the bitwise xor ⊕, and we perform the modular reduction [13] using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1, whose binary representation is 100011101 [14].
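As an illustration of this arithmetic, a minimal Python sketch (our own, not part of the paper's prototype; function names are hypothetical):

```python
# GF(2^8) arithmetic: addition is bitwise xor, multiplication is a carry-less
# multiplication reduced modulo x^8 + x^4 + x^3 + x^2 + 1 (binary 100011101 = 0x11D).

def gf256_add(a: int, b: int) -> int:
    return a ^ b  # addition and subtraction coincide in characteristic 2

def gf256_mul(a: int, b: int) -> int:
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        overflow = a & 0x80          # would x^8 appear after shifting?
        a = (a << 1) & 0xFF
        if overflow:
            a ^= 0x1D                # reduce: x^8 = x^4 + x^3 + x^2 + 1
    return product

# sanity checks: x + x = 0 and x * x^7 = x^8 = x^4 + x^3 + x^2 + 1
assert gf256_add(0x02, 0x02) == 0
assert gf256_mul(0x02, 0x80) == 0x1D
```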

B. Locality

The block locality of a block b is the smallest integer r such that b is a linear combination of exactly r other blocks. It ranges from 1, if the block is replicated, to k, the number of source blocks; for repair purposes, the lower the better. The code locality of a code C is the maximum over all block localities [10]. A code with locality r means that each block is a linear combination of at most r other blocks, and at least one block has locality exactly r. In particular, (n, k) MDS codes have locality k, which is the highest possible. We write (n, k, r) to denote an (n, k) code with locality r ≤ k. Locality can be weakened to protect only source blocks: the information locality of C is the maximum over all source block localities. We write (n, k, ⟨r⟩) to denote an (n, k) code with information locality r. Moreover, we refer to a code offering the same block locality r for every block as a code with all-blocks locality r.

III. LINEAR CODES FOR COMPARISON

We introduce three explicit linear codes for the three types of locality: a Reed-Solomon code [1] for worst-case locality, a recently proposed pyramid code [6] for information locality and HAM, an optimal locally repairable code for all-blocks locality.

A. Worst-case locality: RS(7,4)

We consider a (7,4) systematic Reed-Solomon code RS(7,4) over GF(2^8) based on a Hankel matrix [15]. To build a Hankel matrix we define βi ≜ 1/(1 − α^i), where α is a primitive element of GF(2^8). We extract a rectangular submatrix H of size k × (n − k) from the triangle

    β1    β2    β3    β4   · · ·  β255
    β2    β3    β4   · · ·  β255
     ⋮
    β254  β255
    β255

and build the Hankel-form generator matrix G = (I_k | H), where I_k is the k × k identity matrix. Our RS(7,4) code is thus generated by

    G_RS(7,4) = | 1 0 0 0 244 167 157 |
                | 0 1 0 0 167 157 114 |
                | 0 0 1 0 157 114 237 |     (1)
                | 0 0 0 1 114 237  95 |
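For illustration, a minimal Python sketch of systematic RS(7,4) encoding with this generator matrix, applied bytewise to the source blocks (it reuses gf256_mul from the sketch in Section II-A; the matrix literal and function names are ours, not the prototype's):

```python
# Systematic encoding: codeword = message · G_RS over GF(2^8), one byte per block.
G_RS = [
    [1, 0, 0, 0, 244, 167, 157],
    [0, 1, 0, 0, 167, 157, 114],
    [0, 0, 1, 0, 157, 114, 237],
    [0, 0, 0, 1, 114, 237,  95],
]

def rs_encode(message):
    """message: 4 source bytes -> 7 codeword bytes (4 source + 3 parity)."""
    codeword = []
    for c in range(7):                       # one codeword block per column of G_RS
        acc = 0
        for r in range(4):
            acc ^= gf256_mul(message[r], G_RS[r][c])
        codeword.append(acc)
    return codeword
```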

B. Information locality: PYR(7, 4, ⟨2⟩)

Pyramid codes [6] are built on top of an embedded MDS code. In particular, we build a (7, 4, ⟨2⟩) pyramid code PYR(7, 4, ⟨2⟩) on top of a (6, 4) Reed-Solomon code over GF(2^8) whose generator matrix in Hankel form consists of the first six columns of G_RS(7,4) in (1); the pyramid code is obtained by partitioning the sixth column [6], [10]. The generator matrix of our PYR(7, 4, ⟨2⟩) pyramid code is therefore given by the first five columns of (1), with columns 6 and 7 being (167, 157, 0, 0) and (0, 0, 114, 237), respectively.

The pyramid code retains the minimum distance of the embedded RS code and hence it is not MDS. However, the extra redundancy improves locality: for a codeword (b0, b1, . . . , b6), we have b5 = 167·b0 + 157·b1 and b6 = 114·b2 + 237·b3. Note that b4 has locality 4; indeed, the pyramid code offers only information locality.
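To illustrate what information locality buys, a hedged per-byte sketch of the local repair of b0 (it reuses gf256_mul from the sketch in Section II-A; the inversion helper and names are ours): a lost b0 is rebuilt from b1 and b5 only, whereas a lost b4 still requires four blocks.

```python
# Local repair in PYR(7,4,<2>): from b5 = 167*b0 + 157*b1 over GF(2^8) we get
# b0 = 167^{-1} * (b5 + 157*b1), touching only 2 blocks.

def gf256_inv(a: int) -> int:
    # a^254 = a^(-1) in GF(2^8)*, computed by square-and-multiply
    result, power = 1, a
    for bit in range(8):
        if (254 >> bit) & 1:
            result = gf256_mul(result, power)
        power = gf256_mul(power, power)
    return result

def pyr_repair_b0(b1: int, b5: int) -> int:
    return gf256_mul(gf256_inv(167), b5 ^ gf256_mul(157, b1))
```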

C. All-blocks locality: HAM

We now present HAM and describe its properties. The Hamming code HAM is an optimal systematic (7, 4, 3) locally repairable code. We chose it for two main reasons:

• The supports of the codewords of weight 4 of the Extended Hamming code form a 3-(8, 4, 1) design [1], and such a design can be exploited for repairing HAM, as seen in Section V.

• Non-trivial binary MDS codes (that is, codes with k ≠ 1, n reaching the Singleton bound d ≤ n − k + 1) over GF(2) do not exist [16]. Hence, codes that are optimal with respect to the generalized bound d ≤ n − k − ⌈k/r⌉ + 2 [10] should be targeted.

We arrive at the Hamming code by starting from a 4 × 4 identity matrix (so that the code is systematic) and then adding three distinct parity columns with three 1s each to ensure locality 3, placing the 1s so that every row contains at least two of them among the parity columns, which maximizes the minimum distance.

    G_HAM = | 1 0 0 0 0 1 1 |
            | 0 1 0 0 1 0 1 |
            | 0 0 1 0 1 1 0 |
            | 0 0 0 1 1 1 1 |

Lemma. HAM has minimum distance d = 3 and locality r = 3, which is optimal.

Proof. The minimum distance of the Hamming code is well known, see [1]. For locality, we can easily see that each parity block is the sum of exactly three source blocks, and that each source block is the sum of exactly one parity block and two other source blocks. These parameters are optimal because they achieve the bound d ≤ n − k − ⌈k/r⌉ + 2 with equality [10].

Fig. 1: Relationships between the blocks of a HAM codeword. Each column corresponds to a parity equation involving the blocks shown in gray.

Fig. 2: Markov chain. States correspond to the number of erasures and DL represents the data loss state. The blue dotted arrow applies to HAM and PYR(7, 4, ⟨2⟩) only, i.e., δ2 = 0 for RS(7,4).

HAM has the additional property that it does not require multiplications over finite fields, since the elements of G_HAM are 0s and 1s. This makes HAM easily amenable to fast implementations in hardware using only bitwise-xor operations.
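To make the xor-only structure concrete, a minimal block-level sketch of HAM encoding (names are ours, not the prototype's API); each parity is the xor of the source blocks selected by the corresponding column of G_HAM:

```python
# HAM encoding: b4 = b1+b2+b3, b5 = b0+b2+b3, b6 = b0+b1+b3, all over GF(2),
# i.e. plain bitwise xor of equally sized blocks.

def xor_blocks(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def ham_encode(b0: bytes, b1: bytes, b2: bytes, b3: bytes) -> list:
    b4 = xor_blocks(xor_blocks(b1, b2), b3)   # column 5 of G_HAM
    b5 = xor_blocks(xor_blocks(b0, b2), b3)   # column 6
    b6 = xor_blocks(xor_blocks(b0, b1), b3)   # column 7
    return [b0, b1, b2, b3, b4, b5, b6]
```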

IV. FAULT TOLERANCE ANALYSIS

We use mean time to data loss (MTTDL) to compare the fault-tolerance offered by the three schemes. We estimate MTTDL using a Markov model [2], [17].

Before presenting the actual model, we compute the fraction r(j) of j-erasures that can be corrected by each of the three codes; r(j) = 0 for all j greater than the number of parity blocks n − k = 3. RS(7,4) is MDS, thus its j-erasure properties are r(0) = r(1) = r(2) = r(3) = 1.

PYR(7, 4, ⟨2⟩) is built on top of RS(6,4), thus it can correct every 2-erasure, i.e., r(0) = r(1) = r(2) = 1. By exhaustive search, we also found that the pyramid code can correct 26 out of the (7 choose 3) = 35 possible 3-erasures, thus r(3) = 26/35 ≈ 74.29%.

The minimum distance of HAM is 3, thus r(0) = r(1) = r(2) = 1, and like the pyramid code it can correct some 3-erasures. To compute the number of repairable 3-erasures, consider a codeword with blocks (b0, b1, b2, b3, b4 = b1 ⊕ b2 ⊕ b3, b5 = b0 ⊕ b2 ⊕ b3, b6 = b0 ⊕ b1 ⊕ b3). Block b0 appears only in b0, b5 and b6, thus the 3-erasure in positions {0, 5, 6} is irrecoverable. The same is true for {1, 4, 6} and {2, 4, 5}. Then notice that b0 ⊕ b4 = b1 ⊕ b5 = b2 ⊕ b6 and they all contain b3, hence the erasure patterns {0, 3, 4}, {1, 3, 5}, {2, 3, 6} are also irrecoverable. Finally, {0, 1, 2} is irrecoverable as well: each parity block contains b3 and exactly two of b0, b1, b2, so the surviving blocks only reveal the pairwise sums b0 ⊕ b1, b0 ⊕ b2 and b1 ⊕ b2, which do not determine the three erased blocks. All the other 3-erasures are repairable, thus r(3) = (35 − 7)/35 = 80%.
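This count can be double-checked by brute force: a 3-erasure is repairable exactly when the four surviving columns of G_HAM still have rank 4 over GF(2). A minimal sketch (the matrix literal and the rank routine are ours):

```python
from itertools import combinations

G_HAM = [              # rows of the generator matrix of HAM
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def rank_gf2(rows):
    """Rank over GF(2): keep one basis vector per leading-bit position."""
    basis, rank = {}, 0
    for row in rows:
        v = int("".join(map(str, row)), 2)
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                rank += 1
                break
            v ^= basis[lead]          # reduce by the basis vector with the same lead
    return rank

irrecoverable = [
    erased for erased in combinations(range(7), 3)
    if rank_gf2([[row[c] for c in range(7) if c not in erased] for row in G_HAM]) < 4
]
print(len(irrecoverable))   # 7 irrecoverable patterns, hence r(3) = 28/35 = 80%
```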

Knowing the j-erasure properties, we can define pj, the probability that the coding scheme can tolerate one more failure given that j failures have already occurred: pj = r(j+1)/r(j), where r(j) is the unconditional probability that the code can tolerate j failures as defined above. We compute p ≜ (p0, p1, p2) for the three codes. We get p = (1, 1, 1) for RS(7,4), p = (1, 1, 0.74286) for PYR(7, 4, ⟨2⟩) and p = (1, 1, 0.8) for HAM.

MTTDL takes into account the maintenance offered by the repair process by means of the drive rebuild rate µ. We also denote by λ the drive failure rate and set λ = 1/500 000 per hour [17].

We consider a Markov chain with states corresponding to the number of erasures, with transition rates σj = (n − j)λpj for j ∈ {0, 1, 2} [17] and σ3 = (n − 3)λ, where n is the length of the code. For the two non-MDS codes δ2 = (n − 2)λ(1 − p2), whereas δ2 = 0 for RS(7,4).

The Markov chain is shown in Figure 2.

The transition matrix for the Markov model is given by

    P = | −σ0      σ0          0              0        0  |
        |  µ    −(µ+σ1)        σ1             0        0  |
        |  0       µ       −(µ+σ2+δ2)         σ2       δ2 |
        |  0       0           µ           −(µ+σ3)     σ3 |

where the last column represents the absorbing state corresponding to data loss and each row represents a non-absorbing state. We use P̂, the negative of P deprived of its last column, to compute the MTTDL per stripe as the first component y1 of the solution of the linear system P̂ y = 1. We postpone explicit results to Section VI, as we will use our simulation to get estimates for the rebuild rate µ.
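A hedged sketch of this computation (the helper, its parameter names and the example rebuild rate are ours; numpy is used only for the linear solve):

```python
# Build \hat{P} = -(P without its absorbing column) and solve \hat{P} y = 1;
# the MTTDL per stripe is y[0], the expected absorption time from the zero-erasure state.
import numpy as np

def mttdl_stripe(n: int, lam: float, mu: float, p) -> float:
    sigma = [(n - j) * lam * p[j] for j in range(3)] + [(n - 3) * lam]
    delta2 = (n - 2) * lam * (1 - p[2])      # 0 for an MDS code such as RS(7,4)
    P_hat = -np.array([
        [-sigma[0],  sigma[0],         0.0,                       0.0],
        [ mu,       -(mu + sigma[1]),  sigma[1],                  0.0],
        [ 0.0,       mu,              -(mu + sigma[2] + delta2),  sigma[2]],
        [ 0.0,       0.0,              mu,                       -(mu + sigma[3])],
    ])
    y = np.linalg.solve(P_hat, np.ones(4))
    return y[0]

# example call: HAM's erasure profile p = (1, 1, 0.8) with an assumed
# rebuild rate of one repair per hour
mttdl_ham = mttdl_stripe(n=7, lam=1 / 500_000, mu=1.0, p=(1.0, 1.0, 0.8))
```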

V. REPAIR ALGORITHMS

This section describes the repair mechanisms implemented by the three codes. The repair method takes as input the alive blocks of a codeword and outputs the complete codeword. Given enough alive blocks, RS(7,4) and PYR(7, 4, ⟨2⟩) compute missing blocks in two steps: the original message is first retrieved and then re-encoded to obtain all blocks of the codeword. HAM needs only one step, as it repairs erased blocks directly.

a) RS(7,4): Whenever the number of erasures is at most 3, this code fetches four valid blocks and applies Gaussian elimination to a 4 × 4 system.
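A hedged per-byte sketch of this repair path (it reuses gf256_mul, gf256_inv and G_RS from the earlier sketches; names are ours): the message is recovered from any four alive blocks by Gauss-Jordan elimination over GF(2^8) and can then be re-encoded with rs_encode to rebuild the lost blocks.

```python
def rs_recover_message(alive: dict) -> list:
    """alive maps block index -> byte (at least 4 entries); returns the 4 source bytes."""
    idx = sorted(alive)[:4]
    # one equation per alive block c: sum_r m[r] * G_RS[r][c] = alive[c]
    A = [[G_RS[r][c] for r in range(4)] for c in idx]
    y = [alive[c] for c in idx]
    for col in range(4):                      # Gauss-Jordan elimination over GF(2^8)
        piv = next(r for r in range(col, 4) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        inv = gf256_inv(A[col][col])
        A[col] = [gf256_mul(inv, v) for v in A[col]]
        y[col] = gf256_mul(inv, y[col])
        for r in range(4):
            if r != col and A[r][col]:
                factor = A[r][col]
                A[r] = [a ^ gf256_mul(factor, b) for a, b in zip(A[r], A[col])]
                y[r] ^= gf256_mul(factor, y[col])
    return y   # re-encode with rs_encode to rebuild the lost blocks
```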

b) PYR(7, 4, ⟨2⟩): The repair procedure for the pyramid code involves two steps. First, it attempts to recover data blocks by local linear combinations. Second, if the required redundant blocks are not available, the code falls back on an RS-like repair procedure, in which case there is no performance gain compared to the embedded RS code.

c) HAM: Figure 1 depicts the relationships between the blocks of a HAM codeword. For each column of the figure, the xor of the gray blocks is 0, thus any three of them are sufficient to infer the fourth. For instance, the first column means that b0 ⊕ b2 ⊕ b3 ⊕ b5 = 0.

To repair an erased block, we look for a column from Figure 1 where the erased block is gray, fetch the other three gray blocks and xor them to recover the erased one.
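A hedged block-level sketch of this rule (the sets below are the weight-4 parity relations of HAM, which is how we read the columns of Figure 1; the example relation from the text, {0, 2, 3, 5}, is among them; names are ours):

```python
# Each set is a parity relation: the xor of its four blocks is zero, so a missing
# block is the xor of the other three, provided they are all alive.
PARITY_SETS = [
    {1, 2, 3, 4}, {0, 2, 3, 5}, {0, 1, 3, 6},                  # defining parities
    {0, 1, 4, 5}, {0, 2, 4, 6}, {1, 2, 5, 6}, {3, 4, 5, 6},    # their xor combinations
]

def ham_repair_block(alive: dict, lost: int) -> bytes:
    """alive maps block index -> bytes; repairs block `lost` from three alive blocks."""
    for eq in PARITY_SETS:
        if lost in eq and (eq - {lost}).issubset(alive):
            x, y, z = (alive[i] for i in eq - {lost})
            return bytes(a ^ b ^ c for a, b, c in zip(x, y, z))
    raise ValueError("erasure pattern is not locally repairable")
```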


Fig. 3: Latency of encoding, decoding and repair for input sizes of 32 kB, 64 kB and 128 kB.

Fig. 4: Repair bandwidth: number of blocks used by repair (128 nodes, 4 kB blocks).
(a) 1.5% of nodes failing per second: when few erasures occur, both all-blocks locality and information locality provide benefits in repair.
(b) 5% of nodes failing per second: when the number of erasures is close to the repair capability, information locality is not enough to provide an advantage over the MDS code.

VI. EVALUATION

In this section we experimentally evaluate the performance of HAM. First we provide a set of micro-benchmarks to evaluate the encoding, decoding and repair latency. Then, we evaluate the bandwidth costs under synthetic faults. Finally, we inject a 12-hour trace from a large-scale deployment. We compare the performance of HAM against the RS(7,4) and PYR(7, 4, ⟨2⟩) codes presented in Section III. We use a complete, yet not optimized, prototype implemented in Python.

A. Micro-benchmark: latency

We begin our evaluation by measuring the latency of the schemes to encode, decode and repair input data of increasing size (from 32 kB to 128 kB). Figure 3 presents our results. HAM outperforms RS(7,4) and PYR(7, 4, ⟨2⟩) in encoding, by up to 1.5× and 1.2× respectively. This directly derives from the absence of multiplication operations in HAM, as opposed to RS(7,4) and PYR(7, 4, ⟨2⟩). The decoding latency results are essentially equivalent across the three codes: since they are systematic (with dimension k = 4), decoding trivially consists in reading the first 4 blocks of the codeword. The repair micro-benchmark consists in erasing up to 2 random blocks (or 25% of the codeword) and repairing them as fast as possible. We explain the gap in repair performance between the codes as follows. RS(7,4) systematically solves a 4 × 4 linear system by Gaussian elimination for every erased block, which is costly (O(k^3) in the dimension of the matrix). PYR(7, 4, ⟨2⟩) exploits its information locality: when there is only one erasure per codeword, PYR(7, 4, ⟨2⟩) fetches 2 blocks for repair (unless the failed block is b4, in which case it fetches 4 blocks). However, if more than one erasure occurs, PYR(7, 4, ⟨2⟩) needs to process at least as many blocks as Reed-Solomon; in this case, PYR(7, 4, ⟨2⟩) never outperforms RS(7,4) in terms of latency.

On the other hand, HAM never uses Gaussian elimination, leveraging its xor-based structure instead. Repairing one failed block requires 2 bitwise-xors; if more than one failure occurs, HAM fetches 5 blocks and recovers the whole codeword with 6 bitwise-xors. This leads to a 5× latency speed-up with respect to RS(7,4) and PYR(7, 4, ⟨2⟩).

We can now complete our fault-tolerance analysis. We set µ = 5 for HAM and µ = 1 for the other two codes, in line with the repair speed-up measured above, and get:

                       RS(7,4)          PYR(7, 4, ⟨2⟩)    HAM
MTTDL_stripe (hrs)     4.7645 · 10^15   3.7031 · 10^12    1.1904 · 10^14

The fault tolerance of HAM is lower than that of RS(7,4) because of its lower minimum distance, but higher than that of PYR(7, 4, ⟨2⟩) thanks to its higher repair rate.

We do not show throughput results due to lack of space, but they are in line with the latency results.

B. Micro-benchmark: repair bandwidth

To evaluate the network bandwidth costs of the three codes, we simulate a cluster of 128 remotely distributed nodes. We make the nodes crash at very high rates during a period of 1 minute. Figures 4a and 4b present our results for two challenging scenarios, where 1.5% and 5% of the nodes fail every second. We report the mean and standard deviation of the number of blocks fetched during the repair process. HAM consistently fetches fewer blocks, demonstrating the advantage of codes with good locality.


Fig. 5: Real-world fault trace: number of online nodes (top) and network bandwidth required to repair the system (bottom) during a 12-hour trace.

C. Benchmark: real-world fault trace

We conclude our evaluation by means of a real-world trace [18]. We replay 12 hours of the trace, during which between 112 and 120 simulated nodes are available. The fault dynamics injected into the system are depicted at the top of Figure 5. In this experiment, the nodes are used to store 2 kB of data. After the 12 hours of the trace, HAM required 11.35 MB to repair the system, as opposed to 14.33 MB for RS(7,4) (+26%) and 13.97 MB for PYR(7, 4, ⟨2⟩) (+23%).

VII. CONCLUSION

In this article we compared three linear coding schemes offering different locality properties: HAM, a locally repairable code with all-blocks locality r = 3; RS(7,4), a Reed-Solomon code representing the worst-case locality r = k = 4; and PYR(7, 4, ⟨2⟩), a pyramid code with information locality ⟨r⟩ = 2. We also presented an efficient method for repairing HAM based on combinatorial designs.

We have experimentally evaluated HAM using simulations and real-world fault traces. Our results show the benefits of relying on efficient bitwise-xor operations for encoding, as well as the increased performance when repairing faulty systems.

When compared to RS(7,4) and PYR(7, 4, ⟨2⟩), HAM showed lower latency thanks to its repair procedure. Similarly, on a real-world trace, HAM showed lower bandwidth consumption thanks to its all-blocks locality property.

Our results also highlighted how information locality and locality should not be confused: we showed that a code offering information locality can outperform standard MDS codes in terms of repair bandwidth only when few erasures occur, and that such a restriction does not apply to the code with all-blocks locality.

Motivated by the evidence provided by this work, we plan to extend the repair procedure to binary linear codes and to evaluate the performance gain in a large-scale distributed storage system.

VIII. ACKNOWLEDGEMENTS

This research was partially supported by the European Union’s Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 653884.

REFERENCES

[1] F. J. MacWilliams and N. J. A. Sloane, The theory of error correcting codes. Elsevier, 1977.

[2] M. Sathiamoorthy, M. Asteris, D. Papailiopoulos, A. G. Dimakis, R. Vadali, S. Chen, and D. Borthakur, “XORing elephants: novel erasure codes for big data,” in Proceedings of the VLDB Endowment, vol. 6, no. 5. VLDB Endowment, 2013, pp. 325–336.

[3] K. V. Rashmi, N. B. Shah, D. Gu, H. Kuang, D. Borthakur, and K. Ramchandran, “A Solution to the Network Challenges of Data Recovery in Erasure-coded Distributed Storage Systems: A Study on the Facebook Warehouse Cluster,” in HotStorage, 2013.

[4] A. G. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright, and K. Ramchandran, "Network coding for distributed storage systems," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4539–4551, 2010.

[5] F. Oggier and A. Datta, "Self-repairing homomorphic codes for distributed storage systems," in INFOCOM. IEEE, 2011, pp. 1215–1223.

[6] C. Huang, M. Chen, and J. Li, "Pyramid Codes: Flexible Schemes to Trade Space for Access Efficiency in Reliable Data Storage Systems," ACM Transactions on Storage, vol. 9, no. 1, pp. 3:1–3:28, 2013.

[7] K. Shanmugam, D. S. Papailiopoulos, A. G. Dimakis, and G. Caire, "A repair framework for scalar MDS codes," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 998–1007, 2014.

[8] D. S. Papailiopoulos and A. G. Dimakis, "Locally Repairable Codes," IEEE Transactions on Information Theory, vol. 60, no. 10, pp. 5843–5855, 2014.

[9] P. Gopalan, C. Huang, B. Jenkins, and S. Yekhanin, “Explicit maximally recoverable codes with locality,” IEEE Transactions on Information Theory, vol. 60, no. 9, pp. 5245–5256, 2014.

[10] P. Gopalan, C. Huang, H. Simitci, and S. Yekhanin, “On the locality of codeword symbols,” IEEE Transactions on Information Theory, vol. 58, no. 11, pp. 6925–6934, 2012.

[11] C. Huang, H. Simitci, Y. Xu, A. Ogus, B. Calder, P. Gopalan, J. Li, and S. Yekhanin, “Erasure Coding in Windows Azure Storage,” in USENIX, 2012, pp. 15–26.

[12] LRC Erasure Coding in Windows Storage Spaces. [Online]. Available: http://research.microsoft.com/en-us/um/people/chengh/slides

[13] R. Lidl and H. Niederreiter, Finite fields. Cambridge University Press, 1997.

[14] Reed-Solomon open-source implementation. [Online]. Available: https://github.com/tomerfiliba/reedsolomon

[15] F. Mattoussi, V. Roca, and B. Sayadi, "Complexity comparison of the use of Vandermonde versus Hankel matrices to build systematic MDS Reed-Solomon codes," in SPAWC. IEEE, 2012, pp. 344–348.

[16] F. Oggier, A. Datta et al., "Coding techniques for repairability in networked distributed storage systems," Foundations and Trends in Communications and Information Theory, vol. 9, no. 4, pp. 383–466, 2013.

[17] J. L. Hafner and K. Rao, “Notes on reliability models for non-MDS erasure codes,” IBM Res. rep. RJ10391, 2006.

[18] M. Bakkaloglu, J. J. Wylie, C. Wang, and G. R. Ganger, "On correlated failures in survivable storage systems," Carnegie Mellon University, Tech. Rep. CMU-CS-02-129, May 2002. [Online]. Available: http://repository.cmu.edu/pdl/107/

