Strong Consistency

Top PDF Strong Consistency:

Support of Strong Consistency on Fog Applications

Another key aspect of Ingress is its need for strong consistency. Consider the most common transaction in gameplay: interacting with a portal. Since portals are usually located at crowded public landmarks, it is likely that multiple players interact with the same portal concurrently (either to defend it or to attack it). In that case, it is fundamental that the results of the players' actions are seen by all participants in a consistent state. Otherwise, game actions performed by concurrent players could result in state inconsistencies where one player might observe the portal as belonging to one team, while another concurrent player would observe the same portal as belonging to the other team. This would inevitably violate the application logic, trigger additional mechanisms such as rollbacks to bring the game back to a consistent state, and penalise the application's QoE. This challenge is trivially solved when concentrating all application state in the Cloud. However, when leveraging the Fog Computing architecture, additional mechanisms are needed to guarantee that strong consistency is equally maintained in our system model.
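One standard way to obtain this guarantee is to serialize every update to a given portal through one authoritative replica and to reject stale writes with a compare-and-set on a version number. The minimal sketch below only illustrates that idea; the class, field and team names are hypothetical and are not taken from the paper.

import threading

class PortalReplica:
    """Authoritative copy of one portal: all writes are serialized here.
    The version number acts as a compare-and-set guard, so two concurrent
    captures based on the same observed state cannot both succeed."""

    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0
        self.owner = None          # e.g. "Enlightened" or "Resistance"

    def read(self):
        with self._lock:
            return self.version, self.owner

    def capture(self, expected_version, new_team):
        """Apply a capture observed at expected_version, or reject it if
        another player's action was serialized first (the caller re-reads and retries)."""
        with self._lock:
            if self.version != expected_version:
                return False
            self.version += 1
            self.owner = new_team
            return True

portal = PortalReplica()
version, _ = portal.read()
assert portal.capture(version, "Resistance") is True
assert portal.capture(version, "Enlightened") is False   # concurrent attempt loses

Every participant that re-reads the portal afterwards observes the same owner, which is exactly the property the excerpt asks for.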

Strong Consistency for Shared Objects in Pervasive Grids

Because tasks under CONFIIT are independent, almost no synchronization is required among the processes. Indeed, the only element that nodes need to synchronize is the list of completed tasks, which is ensured by token passing. This computational model clearly impacts the fault-tolerance aspects of CONFIIT, as processes that disconnect cause almost no harm (the worst case being the regeneration of the token). In the scenario that we propose, however, processes not only share more complex objects but also require data consistency in order to respect a task-dependency graph. Such an application would clearly suffer if deployed in a pervasive environment, as the disconnection of a process may block progress along a path of the graph. Indeed, we need to ensure non-blocking data consistency for the shared objects even when mobile devices disconnect temporarily.
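As an illustration of the token-based synchronization mentioned above, the circulating token can simply carry the set of completed task identifiers, and each node merges it with its local knowledge before forwarding it along a logical ring. This is a minimal sketch of that idea under assumed names, not CONFIIT's actual implementation.

class Node:
    """One peer of the logical ring (illustrative sketch)."""

    def __init__(self, name):
        self.name = name
        self.completed = set()       # task ids this node knows to be finished

    def finish(self, task_id):
        self.completed.add(task_id)

    def on_token(self, token):
        """Merge the circulating completed-task list with local knowledge,
        then hand the merged token to the next node."""
        self.completed |= token
        return set(self.completed)

ring = [Node("a"), Node("b"), Node("c")]
ring[0].finish(1)
ring[2].finish(7)

token = set()
for node in ring:                    # one full round of the ring
    token = node.on_token(token)
token = ring[0].on_token(token)      # next round starts: node "a" learns task 7
assert ring[0].completed == {1, 7}

Losing a node at worst loses the token, which must then be regenerated; this is the fault-tolerance behaviour the excerpt alludes to.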

Causal Consistency: Beyond Memory

Abstract. In distributed systems where strong consistency is costly, if not impossible, causal consistency provides a valuable abstraction to represent program executions as partial orders. In addition to the sequential program order of each computing entity, the causal order also contains the semantic links between the events that affect the shared objects: message emission and reception on a communication channel, reads and writes on a shared register. Usual approaches based on semantic links are very difficult to adapt to other data types such as queues or counters because they require a specific analysis of causal dependencies for each data type. This paper presents a new approach to define causal consistency for any abstract data type based on sequential specifications. It explores, formalizes and studies the differences between three variations of causal consistency and highlights them in the light of PRAM, eventual consistency and sequential consistency: weak causal consistency, which captures the notion of causality preservation when focusing on convergence; causal convergence, which mixes weak causal consistency and convergence; and causal consistency, which coincides with causal memory when applied to shared memory.
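The causal (happened-before) order this line of work builds on is commonly tracked with vector clocks: an event precedes another iff its clock is component-wise less than or equal and strictly smaller in at least one entry. The snippet below is a generic illustration of that partial order, not the construction used in the paper.

def happened_before(vc1, vc2):
    """True iff the event stamped vc1 causally precedes the event stamped vc2.
    vc1 and vc2 are per-process counters (vector clocks) of equal length."""
    return all(a <= b for a, b in zip(vc1, vc2)) and vc1 != vc2

def concurrent(vc1, vc2):
    """Events are concurrent when neither causally precedes the other."""
    return not happened_before(vc1, vc2) and not happened_before(vc2, vc1)

# p0 writes (clock [1, 0]); p1 reads that write and then writes (clock [1, 1]).
assert happened_before([1, 0], [1, 1])
# Two writes issued without reading each other are incomparable, i.e. concurrent.
assert concurrent([2, 0], [1, 1])

Causal consistency requires every process to observe operations in an order compatible with this partial order, while concurrent operations may be observed in different orders.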

Ensuring referential integrity under causal consistency

• Safety: an object can be deleted only if it is unreachable.
• Liveness: unreachability of an object will eventually be detected.
In a storage system where the application can delete objects explicitly, the programmer must be careful to preserve the RI invariant. This problem has been studied in the context of (concurrent) garbage collection for decades. It is folklore that enforcing RI requires synchronisation and strong consistency; in fact, previous work has stated otherwise [2, 4, 12]. The main …

Database Consistency Models

… correct replicas is tight. In particular, if any majority of the replicas may fail, the emulation does not work (Delporte-Gallet et al. 2004). The above result implies that, even for a very basic distributed service such as a register, it is not possible to be consistent, available and partition-tolerant at the same time. This result is known as the CAP Theorem (Gilbert and Lynch 2002), which proves that it is not possible to provide all of the following desirable features at the same time: (C) strong Consistency, even for a register, (A) Availability, responding to every client request, and (P) tolerance to network Partitions or arbitrary message loss.
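The register emulation mentioned here is typically built from majority quorums: a write completes once a majority of replicas acknowledge it, and a read queries a majority and returns the value with the highest timestamp; since any two majorities intersect, every read sees the latest completed write. The sketch below illustrates this classical construction under assumed names and a fixed replica count; it is not the emulation of the cited paper.

class Replica:
    """One copy of the register; keeps the freshest (timestamp, value) pair."""

    def __init__(self):
        self.ts, self.value = 0, None

    def store(self, ts, value):
        if ts >= self.ts:
            self.ts, self.value = ts, value
            return True
        return False

REPLICAS = [Replica() for _ in range(5)]
MAJORITY = len(REPLICAS) // 2 + 1          # 3 of 5: any two majorities intersect

def write(ts, value, reachable):
    """The write succeeds only once a majority has acknowledged it."""
    acks = sum(1 for r in reachable if r.store(ts, value))
    return acks >= MAJORITY

def read(reachable):
    """Query a majority and return the value carrying the highest timestamp."""
    if len(reachable) < MAJORITY:
        raise RuntimeError("minority partition: cannot answer consistently")
    return max(reachable[:MAJORITY], key=lambda r: r.ts).value

assert write(1, "x=42", REPLICAS)          # acknowledged by all five replicas
assert read(REPLICAS[2:]) == "x=42"        # any majority still observes the write

The RuntimeError branch is where the CAP trade-off shows up: a replica group cut off from a majority must either refuse to answer (giving up availability) or answer with possibly stale data (giving up strong consistency).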

Speed for the elite, consistency for the masses: differentiating eventual consistency in large-scale distributed systems

I. INTRODUCTION. Modern distributed computer systems have reached sizes and extensions not envisaged even a decade ago: modern datacenters routinely comprise tens of thousands of machines [1], and on-line applications are typically distributed over several of these datacenters into complex geo-distributed infrastructures [2], [3]. Such enormous scales come with a host of challenges that distributed systems researchers have focused on over the last four decades. One such key challenge arises from the inherent tension between fault-tolerance, performance, and consistency, elegantly captured by the CAP impossibility theorem [4]. As systems grow in size, the data they hold must be replicated for reasons of both performance (to mitigate the inherent latency of widely distributed systems) and fault-tolerance (to avoid service interruption in the presence of faults). Replicated data is unfortunately difficult to keep consistent: strong consistency, such as linearizability or sequential consistency, is particularly expensive to implement in large-scale systems, and cannot be guaranteed simultaneously with availability when using a failure-prone network [4].

Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency

Real-time collaboration is a special case of collaboration where users work on the same artefact simultaneously and are aware of each other's changes in real time. Shared data should remain available and consistent while dealing with its physically distributed aspect. Strong Consistency is one approach, which enforces a total order of operations using mechanisms such as locking; this, however, introduces a bottleneck. In the last decade, algorithms for concurrency control have been studied to ensure convergence of all replicas without locking or synchronization. Operational Transformation and Conflict-free Replicated Data Types (CRDTs) are widely used to achieve this purpose. However, the complexity of these strategies makes them hard to integrate into large software, such as modeling editors, especially for complex data types like graphs; current implementations only integrate linear data, such as text. In this thesis, we present CollabServer, a framework to build collaborative environments. It features a CRDT implementation for complex data types such as graphs and makes it possible to build other data structures.
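For readers unfamiliar with CRDTs, the standard introductory example is the state-based grow-only counter: each replica increments only its own slot, and merging takes the entry-wise maximum, so all replicas converge whatever the delivery order. The toy sketch below shows that pattern; it is not CollabServer's graph CRDT.

class GCounter:
    """State-based grow-only counter (textbook CvRDT example)."""

    def __init__(self, n_replicas, replica_id):
        self.id = replica_id
        self.slots = [0] * n_replicas

    def increment(self, amount=1):
        self.slots[self.id] += amount      # a replica only writes its own slot

    def value(self):
        return sum(self.slots)

    def merge(self, other):
        """Entry-wise max: commutative, associative and idempotent,
        which is what guarantees convergence without locking."""
        self.slots = [max(a, b) for a, b in zip(self.slots, other.slots)]

a, b = GCounter(2, 0), GCounter(2, 1)
a.increment(3)                             # concurrent updates on two replicas
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5         # replicas converge

Graphs and other composite types are harder precisely because their operations interact (e.g. adding an edge whose endpoint is concurrently removed), which is the gap the thesis addresses.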

Theoretical Analysis of Singleton Arc Consistency

6. An extension of SAC. In the previous section, we gave two observations that prevent us from using the classical constructions that lead to efficient local consistency algorithms. But observation 2 does not only have negative consequences for the implementation of efficient SAC algorithms; it also reveals a weakness in the pruning capability of SAC. If (j, b) does not belong to AC(P|i=a), the SAC test on (j, b) could be done on a network from which (i, a) has been removed, since we are guaranteed that (i, a) and (j, b) cannot belong to the same solution. There would then be more chances of a wipe-out when testing (j, b). This leads to a slightly modified definition of SAC, and we obtain a stronger consistency level that we will compare to existing ones.
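For context, the baseline singleton arc consistency test assigns a single value to a variable, enforces arc consistency on the restricted problem, and deletes the value if a domain wipe-out occurs. The sketch below implements that baseline SAC-1 style loop on a binary CSP; it does not include the strengthened definition proposed in the excerpt.

from collections import deque
import copy

def revise(domains, constraints, xi, xj):
    """Remove values of xi that have no support on xj; report any change."""
    allowed = constraints.get((xi, xj))
    if allowed is None:
        return False
    removed = False
    for a in list(domains[xi]):
        if not any((a, b) in allowed for b in domains[xj]):
            domains[xi].discard(a)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; return False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False
            queue.extend((xk, xi) for (xk, xl) in constraints
                         if xl == xi and xk != xj)
    return True

def sac1(domains, constraints):
    """Delete every value (i, a) whose singleton assignment wipes out AC(P|i=a)."""
    changed = True
    while changed:
        changed = False
        for var in domains:
            for val in list(domains[var]):
                test = copy.deepcopy(domains)
                test[var] = {val}
                if not ac3(test, constraints):
                    domains[var].discard(val)
                    changed = True
    return all(domains.values())

# A triangle of "not equal" constraints over {1, 2}: arc consistent,
# yet SAC detects that no solution exists.
doms = {"x": {1, 2}, "y": {1, 2}, "z": {1, 2}}
neq = {(a, b) for a in (1, 2) for b in (1, 2) if a != b}
cons = {(u, v): neq for u in doms for v in doms if u != v}
assert ac3(copy.deepcopy(doms), cons)
assert not sac1(doms, cons)

The modification discussed in the excerpt would additionally remove (i, a) from the network used for the SAC test on (j, b) whenever (j, b) is absent from AC(P|i=a), giving the test more chances to wipe out.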

On Composition and Implementation of Sequential Consistency

To illustrate the contributions of the paper, we also address a higher-level operation: a snapshot operation [5], which allows a whole set of registers to be read in a single operation. A sequentially consistent snapshot is such that the set of values it returns may be returned by a sequential execution. This operation is very useful, as it has been proved [5] that linearizable snapshots can be implemented wait-free from single-writer/multi-reader registers. Indeed, assuming a snapshot operation does not bring any additional power with respect to shared registers. Of course, this induces an additional cost: the best known simulation needs O(n log n) basic read/write operations to implement each snapshot operation and the associated update operation [6]. Such an operation brings programming comfort as it reduces the "noise" introduced by asynchrony and failures [7], and it is particularly used in the round-based computations [8] we consider for the study of the composability of sequential consistency.
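The intuition behind register-based snapshots is usually explained with the double-collect technique: read all the registers twice, and if the two collects are identical then no update was interleaved, so the values form a consistent snapshot. The retry loop below is only obstruction-free; it illustrates the idea and is not the O(n log n) wait-free simulation cited above.

registers = [0] * 4               # models single-writer/multi-reader atomic registers

def collect():
    """Read the registers one by one; each read is atomic, the collect is not."""
    return [registers[i] for i in range(len(registers))]

def snapshot():
    """Double collect: two identical successive collects mean no write was
    interleaved, so the returned values all coexisted at some point."""
    while True:
        first, second = collect(), collect()
        if first == second:
            return second

def update(index, value):
    registers[index] = value      # performed by the single writer of that register

update(2, 7)
assert snapshot()[2] == 7

Wait-free constructions avoid the unbounded retry by having writers help concurrent snapshots, which is where the extra read/write cost mentioned in the excerpt comes from.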

On the Consistency Conditions of Transactional Memories

This paper is on consistency conditions for transactional memories. It first presents a framework that allows defining a space of consistency conditions whose extreme endpoints are serializability and opacity. It then extracts from this framework a new consistency condition that we call virtual world consistency. This condition ensures that (1) each transaction (committed or aborted) reads values from a consistent global state, (2) the consistent global states read by committed transactions are mutually consistent, but (3) the consistent global states read by aborted transactions are not required to be mutually consistent. Interestingly enough, this consistency condition can benefit many STM applications since, from its local point of view, a transaction cannot differentiate it from opacity. Finally, the paper presents and proves correct an STM algorithm that implements the virtual world consistency condition. Interestingly, this algorithm distinguishes the serialization date of a transaction from its commit date (thereby allowing more transactions to commit).

Checking the internal pedagogical consistency of a game learning situation: the Leclercq’s triple consistency triangle

Figure 7.2: the two remaining Leclercq triangles within ELEKTRA's game.
8 Conclusion
We see Tyler's theory and Leclercq's model and representation as a foundation for reaching pedagogical integrity and legitimating design interventions according to learning objectives. The principle of Leclercq's triple consistency and its operationalization through the ELEKTRA project provided both the identification of a lack of consistency between objectives, methods and evaluations, and an efficient communication tool to express the pedagogical point of view to non-pedagogues. This is not trivial: the diagnosis conducted with this conceptual tool reveals a dramatic threat to the possibility of learning and to the possibility of controlling what is learnt in the game. Improvement decisions on learning content, learning methods and learning evaluation could be taken based on this effort of pedagogical "triangulation".

Heterogeneous models matching for consistency management

C. Change processing. To maintain the consistency of the system with regard to established correspondences, model evolution must be performed. Figure 19 describes the process followed for change processing. In this phase, models are updated to take into account the identified changes and the modifications deemed by the experts to be realized. On the one hand, the evolutions classified in the "Automatic Evolution Category" are handled automatically through two strategies that correspond to the addition and the removal of model elements. For the first one, when a new model element is added to a source model, the matching phase is relaunched incrementally. The second strategy aims to delete a correspondence: in fact, if we delete a model element, the correspondence becomes orphan. We define an orphan correspondence as a correspondence for …
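The two strategies can be pictured as simple bookkeeping over a set of correspondences (pairs of element identifiers): an addition triggers an incremental re-match for the new element, and a deletion turns every correspondence referencing the deleted element into an orphan, which is then dropped. The names below (Correspondence, rematch) are illustrative only, not the paper's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    source_elem: str              # element id in the source model
    target_elem: str              # element id in the target model

correspondences = {Correspondence("Class:Order", "Table:ORDER")}

def on_element_added(elem_id, rematch):
    """Addition strategy: incrementally re-run matching for the new element only."""
    correspondences.update(rematch(elem_id))

def on_element_deleted(elem_id):
    """Deletion strategy: correspondences referencing the deleted element
    become orphans and are removed to keep the system consistent."""
    orphans = {c for c in correspondences
               if elem_id in (c.source_elem, c.target_elem)}
    correspondences.difference_update(orphans)
    return orphans

on_element_added("Class:Invoice", lambda e: {Correspondence(e, "Table:INVOICE")})
on_element_deleted("Class:Order")
assert correspondences == {Correspondence("Class:Invoice", "Table:INVOICE")}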

Consistency in choice and credence

… what you do, either you will have performed some particular action you ought not have performed, or you will have performed some sequence of actions that you ought not …

The trouble with SMT consistency

We will see that SMT consistency issues are quite different from consistency issues in human translations. In fact, while inconsistency errors in SMT output might be particularly obvious to the human eye, SMT is globally about as consistent as human translation. Furthermore, high translation consistency does not guarantee quality: weaker SMT systems trained on less data translate more consistently than stronger, larger systems. Yet inconsistent translations often indicate translation errors, possibly because words and phrases that translate inconsistently are the hardest to translate.

Towards Consistency-Based Reliability Assessment

Merging information provided by several sources is an important issue and merging techniques have been extensively studied. When the reliability of the sources is not known, one can apply merging techniques such as majority or arbitration merging, or distance-based merging, for solving conflicts between pieces of information. Conversely, if the reliability of the sources is known, whether represented in a quantitative or in a qualitative way, it can be used to manage contradictions: information provided by a source is generally weakened or ignored if it contradicts information provided by a more reliable source [1, 4, 6]. Assessing the reliability of information sources is thus crucial, and the present paper addresses this key question. We adopt a qualitative point of view for reliability representation by assuming that the relative reliability of information sources is represented by a total preorder. This work considers that we have no information about the sources and, in particular, we do not know whether they are correct (i.e., whether they provide true information) or not. We focus on a preliminary stage of observation and assessment of sources. We claim that during this stage the key issue is a consistency analysis of the information provided by sources, whether it is the consistency of single reports, consistency w.r.t. trusted knowledge, or the consistency of different reports taken together. We adopt an axiomatic approach: first we give some postulates which characterize what this reliability preorder should be, then we define a generic operator for building this preorder in agreement with the postulates.

Product Filters, Acyclicity and Suzumura Consistency

Two observations are worth pointing out at this stage. First, if social preferences are not required to be reflexive and complete, acyclicity and Suzumura consistency cannot be distinguished in terms of the pairwise decisive coalition structures they correspond to. This parallels the observations of Bossert and Suzumura (2010a, Chapter 10) regarding quasi-transitive and transitive social relations in the absence of reflexivity and completeness. The second (and, to us, more striking) observation is that a product filter structure allows us to generate collective choice rules that satisfy not only weak Pareto but also neutrality. Note that this property is not required (and neither is the weaker axiom of independence of irrelevant alternatives) when establishing that a product filter structure results from assuming weak Pareto along with the acyclicity of the social preferences.

On the Consistency of Ordinal Regression Methods

Fisher consistency of arbitrary loss functions (a setting that subsumes ordinal regression) has been studied for some surrogates. Lee et al. (2004) proposed a surrogate that can take into account generic loss functions and for which Fisher consistency was proven by Zhang (2004b). In a more general setting, Ramaswamy and Agarwal (2012, 2016) provide necessary and sufficient conditions for a surrogate to be Fisher consistent with respect to an arbitrary loss function. Among other results, they prove consistency of least absolute deviation (LAD) and of an ε-insensitive loss with respect to the absolute error for the case of three classes (k = 3). In this paper, we extend the proof of consistency for LAD to an arbitrary number of classes. Unlike previous work, we consider the so-called threshold-based surrogates (AT, IT and CL), which rank among the most popular ordinal regression loss functions and whose Fisher consistency has not been studied previously.
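As a reminder of the property under study, in its standard textbook form (the notation is illustrative, not the paper's): a surrogate loss \psi is Fisher consistent with respect to a target loss \ell when, for every conditional label distribution, any minimizer of the expected surrogate risk also minimizes the expected target risk.

% Fisher consistency, stated pointwise for labels y in {1, ..., k} and prediction f:
\[
  f^\ast \in \arg\min_{f} \; \mathbb{E}_{y \sim p(\cdot \mid x)}\bigl[\psi(y, f)\bigr]
  \quad\Longrightarrow\quad
  f^\ast \in \arg\min_{f} \; \mathbb{E}_{y \sim p(\cdot \mid x)}\bigl[\ell(y, f)\bigr].
\]
% For the absolute error \ell(y, f) = |y - f|, the pointwise minimizer is any
% conditional median of p(. | x), which is the target a consistent LAD surrogate must recover.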

Weak and strong triggers

Weak and strong triggers. Jacques Jayez, Valeria Mongelli, Anne Reboul, and Jean-Baptiste van der Henst. Abstract: The idea that presupposition triggers have different intrinsic properties has gradually made its way into the literature on presuppositions and has become a current assumption in most approaches. The distinctions mentioned in the different works have been based on introspective data, which seem, indeed, very suggestive. In this paper, we take a different look at some of these distinctions by using a simple experimental approach based on judgments of naturalness about sentences in various contexts. We show that the alleged difference between weak (or soft) and strong (or hard) triggers is not as clear as one may wish, and that the claim that they belong to different lexical classes of triggers is probably much too strong.

Strong Collapse for Persistence

… K_i^c is called the core of the complex K_i, and we call the sequence Z^c the core sequence of Z. We show that one can compute the PH of the sequence Z by computing the PH of the core sequence Z^c, which is of much smaller size. Our method has some similarity with the work of Wilkerson et al. [5], who also use strong collapses to reduce PH computation, but it differs in three essential aspects: it is not limited to filtrations (i.e., sequences of nested simplicial subcomplexes) but works for other types of sequences, like towers and zigzags; it also differs in the way the strong collapses are computed and in the manner in which PH is computed.
