Proof and refinement

Correct Instantiation of a System Reconfiguration Pattern: A Proof and Refinement-Based Approach

C. Correct-by-construction formal methods. The proposed approach is generic. The context C0 describes the manipulated system concepts (systems, variables, HorizontalInvs, etc.). These concepts are manipulated as first-order objects in the machines M0 and M1 in order to encode the behavior pattern described by the events Initialization, progress, fail, repair and complete_failure. Note that transitions are not manipulated as first-order objects and are therefore not defined within the context C0. One may wonder why the transitions between states are not defined there. There are two main reasons: first, transitions are not explicitly manipulated by the substitution mechanism introduced in this paper; second, the Event-B method provides a powerful built-in inductive proof technique based on invariant preservation by the events (see Table I). The only proof effort relates to correct event refinement. In traditional correct-by-construction techniques such as Coq [5] or Isabelle [6], classical inductive proof schemes are offered instead: one must first describe the inductive structure associated with the formalized systems, then give a specific inductive proof scheme for this structure, and finally prove the correct instantiation. In the kernel definitions of these techniques, the inductive process associated with transition systems corresponding to the pattern of Figure 1 and the refinement capability are not available as a built-in inductive proof process: one has to encode the notion of transition together with the corresponding inductive proof principles and the instantiation of transitions, because event refinement is not available. Unlike Event-B, another meta-level is needed.
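The built-in inductive technique referred to here is, in standard Event-B, the invariant-preservation proof obligation generated for each event. Stated generically (this is the textbook obligation, not a formula taken from the paper):

```latex
% Event-B invariant preservation (INV), textbook form:
% A    = axioms of the seen context (here C0),
% I    = invariants, G_e = guard of event e,
% BA_e = before-after predicate of event e.
\[
  A(s,c) \;\wedge\; I(v) \;\wedge\; G_e(v) \;\wedge\; \mathit{BA}_e(v,v')
  \;\Rightarrow\; I(v')
\]
```

Discharging this obligation for Initialization, progress, fail, repair and complete_failure is exactly the inductive argument for which Coq or Isabelle would require an explicit induction scheme.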

Handling Refinement of Continuous Behaviors: A Refinement and Proof Based Approach with Event-B

The generic model yields 13 proof obligations that are easily discharged. The abstract tank (WaterTank0) generates 134 proof obligations, most of which are quite similar since the events are alike. Those proofs are not really complicated and often rely on very generic theorems. The ODE-based tank (WaterTank1) produces 132 proof obligations, mainly refinement-related (i.e. simulation, witness and guard strengthening). Those POs lead to longer proofs, often studded with well-definedness subgoals, not very complicated but very redundant. Last, the concrete model (WaterTank_DualTank_Cylinder) yields 94 proof obligations, again most of them refinement-related. They are substantially harder to discharge, especially because of the form of the witnesses. Moreover, because not all of Rodin's automatic provers are interfaced with the theory plug-in, our extensive use of it heavily hinders proof automation. Thus, a lot of proofs have to be done completely manually.
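For reference, the guard-strengthening obligation mentioned among the refinement POs has the following textbook Event-B shape (I is the abstract invariant, J the gluing invariant, G and H the abstract and concrete guards; this is the generic form, not a formula from the models):

```latex
% Event-B guard strengthening (GRD): under the abstract invariant I
% and the gluing invariant J, the concrete guard H of a refining
% event must imply the abstract guard G.
\[
  I(v) \;\wedge\; J(v,w) \;\wedge\; H(w) \;\Rightarrow\; G(v)
\]
```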

Mixed Integer-Real Mathematical Analysis, and Lattice Refinement Approximation and Computation Paradigm

has an expected value and a standard deviation which are O(h^[(ω+1)/α]). The proof is similar to that of Theorem 5.2, but using Lemma 5.3 instead of Lemma 5.2.

6. Locally Analytical Functions. Throughout this section, we again use the notations R, A_a, M and M′ defined in Section 4. Moreover, I denotes a sub-algebra of M containing Z^d (typically I = Z^d or I = M). Finally, ∆ is a differentiation operator on functions from I to M′.


Inductive representation, proofs and refinement of pointer structures

As for mechanization of the proofs, there is the usual divide between interactive theorem proving (which we follow) and fully automated methods that are usually incomplete or cover only very specific correctness properties. Along this line, Garcia and Möller [59] carry out a verification by translating the algorithm to PlusCal and model checking it for graphs of bounded size. Loginov et al. [99] use the tool TVLA, based on shape analysis. The procedure is not entirely "automatic", as it requires feeding TVLA with appropriate state relations. Even then, the analysis runs for several hours. (By way of comparison, our Isabelle/HOL proof script is processed in on the order of two minutes.) An advantage of TVLA is that it works directly on C code. It is not quite clear which limitations are effectively imposed on the kind of graph structure (acyclic?) that has been verified.

Semantics and implementation of a refinement relation for behavioural models

The extension relation can simply be interpreted as a simulation between two transformed graphs (A(Q) simulates A(P)), where at each pair of simulating states the acceptance sets are included (Π = {⟨t, u⟩ | u.acc ⊂⊂ t.acc}). The proof for the reduction relation red is similar, except that the simulation is expressed in the opposite direction, i.e. A(P) ⊂≈⟨∅,Ψ⟩_Π A(Q) means that A(P) simulates A(Q). This theorem allows the extension and reduction relations to be computed as simulation relations on transformed graphs, whereas a direct implementation of their initial definitions, based on trace-set inclusion, would have been PSPACE-complete.
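Since the theorem reduces both relations to simulation checks, the usual greatest-fixpoint iteration applies. Below is a minimal sketch in Python for finite labelled transition systems, with a toy acceptance-set inclusion standing in for the Π condition; all names are illustrative, not the paper's implementation.

```python
# Minimal sketch: greatest-fixpoint computation of a simulation relation
# between two finite labelled transition systems. `acc_ok` is a toy
# stand-in for the paper's acceptance-set condition Pi.

def simulates(states_q, states_p, trans_q, trans_p, acc, labels):
    """Return the largest relation R with (q, p) in R iff q simulates p."""
    # trans_x maps (state, label) -> set of successor states;
    # acc maps a state to its acceptance information (here a frozenset).
    def acc_ok(q, p):
        return acc[p] <= acc[q]   # toy inclusion of acceptance sets

    rel = {(q, p) for q in states_q for p in states_p if acc_ok(q, p)}
    changed = True
    while changed:
        changed = False
        for (q, p) in list(rel):
            for a in labels:
                # every a-move of p must be matched by some a-move of q
                for p2 in trans_p.get((p, a), set()):
                    if not any((q2, p2) in rel
                               for q2 in trans_q.get((q, a), set())):
                        rel.discard((q, p))
                        changed = True
                        break
                else:
                    continue
                break
    return rel
```

Each pass removes pairs whose moves cannot be matched, so the computation is polynomial in the sizes of the graphs, which is precisely the gain over direct trace-set inclusion.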

On Tate’s refinement for a conjecture of Gross and its generalization

in K/k (see Section 3 for the definition) and another place in S splits in K/k. He then proposed a refined conjecture in that case. The purpose of this paper is to study Tate's refined conjecture from a cohomological viewpoint and to generalize it to arbitrary cyclic extensions. (In a forthcoming paper [2], a further generalization of the conjecture will be given.) We will prove that a weak congruence holds for any cyclic l-extension (Theorem 3.3), which implies Tate's refined conjecture when k is a number field (Theorem 3.4). Piecing the congruences together for all primes l, we will also obtain a weak congruence for arbitrary cyclic extensions (Theorem 4.2), which is a partial result in the direction of our conjecture. In particular, it shows that the generalized conjecture (and hence the Gross conjecture) is true for arbitrary cyclic extensions of number fields (Theorem 4.3 and Corollary 4.4). In the last section, using the results above, we will give a new proof of the Gross conjecture for arbitrary abelian extensions K/Q (Theorem 10.1), which simplifies our previous proof in [1]. The main idea of the proof consists of two ingredients: one is an interpretation of the Gross regulator map in terms of Galois cohomology, and the other is genus theory for cyclic extensions K/k. Here, by genus theory we mean a formula (Theorem 7.1) for the (S, T)-ambiguous class number of K/k; it will play an important role when we relate the Stickelberger element to the Gross regulator in the proof of Theorem 4.2. The idea of using genus theory can already be found in the paper of Gross [9], where he implicitly used it to prove a weak congruence in the case of cyclic extensions of prime degree. Thus our proof may be regarded as a natural generalization of his.

Proof certificates in PVS

These atomic rules are defined as a refinement of an intermediate decomposition of proof steps which is already present in PVS. This intermediate decomposition is based on a specific subset of proof steps, the primitive rules. In PVS, every proof step, including defined rules and strategies, can be decomposed into a sequence of primitive rules. As any primitive step is itself a proof step, this intermediate level of decomposition can be formalized in the original format of .prf proof traces. In fact, such a decomposition can be performed using the PVS package Manip [2], whose instruction expand-strategy-steps allows one to decompose every proof step into a succession of primitive rules.

Translating Between Implicit and Explicit Versions of Proof

proofs (via possibly untrusted and complex theorem provers) from the checking of proofs (via smaller and trusted checkers). In such a setting, the provenance of a proof should not be critical for checking it. Separating theorem provers from proof checkers using a simple, declarative specification of proof certificates is not new: see [27] for a historical account. For example, the LF dependently typed λ-calculus [25] was originally proposed as a framework for specifying (natural deduction) proofs, and the Elf system [41] provided both type checking and inference for LF: the proof-carrying code project of [40] used LF as a target proof language. The LFSC system is an extension of the dependently typed λ-calculus with side conditions, and an implementation of it has successfully been used to check proofs coming from the SMT solvers CLSAT and CVC4 [48]. Deduction modulo [18] is another extension of dependently typed λ-terms in which rewriting is available: the Dedukti checker, based on that extension, has been successfully used to check proofs from systems such as Coq [9] and HOL [4]. In the domain of higher-order classical logic, the GAPT system [22] can check proofs given as sequent calculus, resolution, and expansion trees, and allows for checking and transforming among proofs in those formats.

Software architectures: multi-scale refinement

II. CONTRIBUTION. This work refines previous work by the authors on a multi-scale modeling approach for software architectures. In [6], the multi-scale modeling solution considered only a fixed number of scales and described a progressive refinement process from a generic model describing a given point of view at a given scale to a specific model describing this point of view at another scale. We proposed a multi-scale modeling perspective for the description of Systems of Systems (SoS) architectures, focusing on SysML (System Modeling Language) notations and using block diagrams. To generalize the approach, this paper considers an arbitrary number of scales and proposes notations and common generic rules at all scales. We extend the approach by considering the refinement process as a vertical and horizontal model transformation, and by adding more detail on scales. Reaching a fine-grained description that contains all the details characterizing the architectural style triggers the stop condition and determines the last scale. In [7], we proposed a hybrid approach: the top-down part was the refinement process, which transforms the architecture both vertically and horizontally; the bottom-up part was the abstraction process, which likewise consists of vertical and horizontal transformations. This paper details only the top-down approach, in order to study the multi-scale nature of complex software systems. We develop a specific solution for modeling multi-scale software architectures that focuses on reference architectures, in particular the Publish-Subscribe style.

A constructive and elementary proof of Reny's theorem

does not impose. Imposing this very general condition allows us to obtain a constructive proof. In any case, one could easily adapt the proof above in order to obtain a simple (non-constructive) proof of Reny's Theorem, without imposing that the strategy sets are Lindelöf spaces. Indeed, suppose that there does not exist a Nash equilibrium under the assumptions of Reny's Theorem. Then the set Γ_nequ = Γ is compact. Thus, without using Lindelöf's Theorem, one


Proof Trick: Small Inversions

In informal reasoning, we generally consider such consequences obvious, because "no rule yields mul3 1" and "mul3 13 can only come from the second constructor T3, applied to 10". But formally, the justification has to be given using regular case analysis. Finding the right pattern is often tricky. In Coq, this can be done automatically thanks to a very useful tactic called inversion [3]. However, both the proof term and the underlying reasoning are quite large in the current implementation, which makes the corresponding explanation somewhat hard to follow. From a practical perspective, inversion is useful when writing programs with dependent types [6, 2]; such programs cannot be written directly – we have to switch to interactive mode – and running them entails heavy computations.
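The two "obvious" consequences quoted above are exactly what inversion establishes. A minimal sketch in Lean 4 of the same inversion steps (the paper works in Coq; the predicate Mul3 and its constructor names are illustrative reconstructions):

```lean
-- Illustrative reconstruction: Mul3 n holds when n is a multiple of 3.
inductive Mul3 : Nat → Prop
  | T0 : Mul3 0                        -- base case
  | T3 : ∀ n, Mul3 n → Mul3 (n + 3)    -- the "second constructor"

-- "No rule yields mul3 1": case analysis (inversion) closes every branch.
example : ¬ Mul3 1 := by
  intro h
  cases h   -- neither constructor can produce Mul3 1

-- "mul3 13 can only come from T3 applied to 10".
example (h : Mul3 13) : Mul3 10 := by
  cases h with
  | T3 n hn => exact hn   -- unification forces n = 10; T0 is impossible
```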

Formal Security Proof of CMAC and Its Variants

relying on their ability to compute the probability of sampling a particular value for Y_{i−1}. However, this sampling occurs in a previous iteration of the loop, and may in fact be overwritten, losing its "randomness" for the next iteration, where the events are tested. One may argue that every value it may be overwritten with in fact follows the same distribution. However, EasyCrypt's logics, and indeed its entire proof methodology, rely on reasoning about values rather than distributions, and its logics cannot express the fact that some intermediate value follows a particular distribution. On the other hand, a standard way of dealing with similar issues would be to delay the random sampling until the value is used, allowing a precise probability computation. In this case, however, the value could in fact be overwritten between the point where it is initially sampled and the point where it is used. This introduces dependencies between random values and the adversary's view of the system that make it impossible to delay sampling operations as desired. Thus, while we cannot precisely formalize their argument, we can formalize a simpler, less precise bound that does not discount internal collisions when they are caused by a common prefix.
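The delayed-sampling technique alluded to here is the standard lazy-sampling game transformation. A toy model (plain Python, not EasyCrypt) of why eager and lazy sampling coincide only when nothing visible depends on the value before its point of use:

```python
# Toy model of eager vs. lazy sampling; illustrative only.
import random
from collections import Counter

def eager(overwrite):
    y = random.randrange(4)        # sample up front
    if overwrite:
        y = 0                      # a later write clobbers the sample
    return y                       # point of use

def lazy(overwrite):
    y = None                       # delay the sampling ...
    if overwrite:
        y = 0
    if y is None:
        y = random.randrange(4)    # ... until the point of use
    return y

for ow in (False, True):
    e = Counter(eager(ow) for _ in range(100_000))
    l = Counter(lazy(ow) for _ in range(100_000))
    print(ow, sorted(e.items()), sorted(l.items()))
# The two versions agree here because the overwrite ignores y. If the
# overwrite (or anything the adversary sees) depended on y, the sampling
# could not be delayed -- the dependency problem described above.
```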

Cell refinement of CsPbBr3 perovskite nanoparticles and thin films

parameters. The peak full width at half maximum (FWHM, or Hw) is then on average half the value (for instance, Hw = 0.13 at 2θ = 15.1°) of the one measured before heat treatment (Hw = 0.28), indicating a larger crystallite size. A bigger change in the 'c' unit cell parameter is observed for the NPs compared to the thin film between heating and cooling, which indicates that there might be a preferred orientation or a stronger distortion along the c-axis during the synthesis process. The distorted octahedral environments of Pb2+ and the unit cell differences are attenuated owing to structural rearrangement after heating. The minority phase CsPb2Br5, left at RT after thermal treatment, presents a = b = 8.484(1) Å and c = 17.362(2) Å cell parameters with the I4/mcm space group. Temperature and excess Pb2+ are two key factors for the formation of CsPbBr3/CsPb2Br5 mixtures. During the synthesis processes, the evolution between CsPb2Br5

Nearwell local space and time refinement in reservoir simulation

While Local Grid Refinement (LGR) is commonly used in the nearwell regions of reservoir simulations, current commercial simulators still use a single time stepping over the whole reservoir domain. As a result, the time step is globally constrained both by the small refined nearwell cells and by the high Darcy velocities and strong nonlinearities in the nearwell region. Local Time Stepping (LTS), with a small time step in the nearwell regions and a larger time step in the reservoir region, is clearly a promising field of investigation for saving CPU time. It is a difficult topic in the context of reservoir simulation, due to the implicit time integration and to the coupling between a mainly elliptic or parabolic unknown, the pressure, and mainly hyperbolic unknowns, the saturations and compositions.
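A minimal sketch of the subcycling idea behind LTS, on a toy explicit 1-D diffusion solve (illustrative only: real reservoir simulation is implicit and coupled, which is exactly the stated difficulty; the domain split and step ratio are arbitrary choices):

```python
# Toy local time stepping (subcycling): small steps near the "well",
# one large step elsewhere. Explicit diffusion, illustrative only.
import numpy as np

def step(u, dt, dx, left, right):
    """One explicit diffusion step on interior cells left..right-1."""
    un = u.copy()
    for i in range(max(left, 1), min(right, len(u) - 1)):
        un[i] = u[i] + dt / dx**2 * (u[i - 1] - 2 * u[i] + u[i + 1])
    return un

nx, dx = 100, 1.0
u = np.zeros(nx)
u[0] = 1.0                        # "well" boundary condition on the left
well_zone, ratio = 10, 4          # refined nearwell cells, time-step ratio
dt_res = 0.2                      # coarse (reservoir) time step

for _ in range(50):               # coarse steps
    u = step(u, dt_res, dx, well_zone, nx)        # reservoir: one big step
    for _ in range(ratio):                        # nearwell: subcycled steps
        u = step(u, dt_res / ratio, dx, 0, well_zone)
```

The saving comes from not forcing the reservoir region down to the nearwell time step; the hard part, as the excerpt notes, is doing this with implicit integration and the pressure/saturation coupling.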

Conceptual (and hence mathematical) explanation, conceptual grounding and proof

First of all, as mentioned above, laws or generalizations are necessary in explanations since they allow the occurrence of the phenomenon explained to be expected, and in this sense they contribute to making us understand why the phenomenon occurred. This is one of Hempel's main ideas and what Salmon (1984) calls nomic expectability, namely expectability on the basis of laws or generalizations. Secondly, the requirement of laws or generalizations for explanations is supported and justified by a Humean conception of the nature of causality. According to this conception, causality cannot be taken as primitive, and the holding of regularities that support causal claims is a non-circular way of preventing such causal claims from remaining without analysis and elucidation. But precisely because generalizations or laws provide an elucidation for causal claims, it is reasonable to argue that they contribute to explanations in which causal claims occur. Thirdly and finally, the presence of laws or generalizations in scientific explanations fits very well with scientific practice: in many areas of science, e.g. physics and chemistry, but also economics and evolutionary biology, scientists use laws or generalizations to explain phenomena. "Explaining various macroscopic electromagnetic phenomena typically involves writing down and solving one or more of Maxwell's equations" (Woodward, 2003, p. 185), in the same way that "explaining some elementary quantum-mechanical phenomenon will involve modeling some physical systems in such a way that we can apply the Schrödinger equation to it" (Woodward, 2003, p. 185). Since the use of laws or generalizations is a pervasive characteristic of explanatory practice, a theory of explanation must acknowledge it.

Proof Normalization Modulo


Proof of Behavior

We change the paradigm of Proof of Work and introduce the concept of Proof of Behavior.


Denotational proof languages

What we wanted was a small but powerful formal language in which to write down proofs in a lucid and structured style; it should have an exceptionally clean and simple


Strategy-proof preference aggregation

Unlike choice rules, aggregation rules have not been much studied from the viewpoint of their robustness to preference misrepresentation. Researchers are clearly aware of the incentive issue and seem to agree that some aggregation rules (such as the Borda rule, for instance) are somehow "more vulnerable to misrepresentation" than others. What prevents a systematic analysis, however, is the lack of a formal notion of robustness of aggregation rules to preference misrepresentation. The classic notion of strategy-proofness, which concerns choice rules, needs to be adapted. The only attempt to formulate a definition applicable to aggregation rules that we are aware of is due to Bossert and Storcken (1992). An aggregation rule is strategy-proof in their sense if misrepresenting one's preference never induces a social ordering that is closer to one's own preference according to the Kemeny distance. The results in Bossert and Storcken (1992) are mainly impossibilities.
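Bossert and Storcken's notion can be made concrete with a small brute-force test. A sketch in Python on a toy profile, using plain Borda aggregation with lexicographic tie-breaking as the rule under test (the excerpt names Borda as a presumably vulnerable rule; the profile and tie-breaking are arbitrary choices):

```python
# Sketch: brute-force test of Bossert-Storcken strategy-proofness.
from itertools import combinations, permutations

def kemeny(r1, r2):
    """Kemeny distance: number of pairwise disagreements of two rankings."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
               for a, b in combinations(r1, 2))

def borda(profile):
    """Aggregate a profile of rankings via Borda scores, ties by name."""
    n = len(profile[0])
    score = {x: 0 for x in profile[0]}
    for ranking in profile:
        for i, x in enumerate(ranking):
            score[x] += n - 1 - i
    return tuple(sorted(score, key=lambda x: (-score[x], x)))

profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
honest = borda(profile)

# The rule is manipulable iff some voter, by lying, moves the social
# ordering strictly closer (in Kemeny distance) to their true ranking.
for voter, truth in enumerate(profile):
    for lie in permutations(profile[0]):
        trial = profile[:voter] + [tuple(lie)] + profile[voter + 1:]
        if kemeny(truth, borda(trial)) < kemeny(truth, honest):
            print(f"voter {voter} can manipulate with {lie}")
```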

Generalized Proof Number Search

We present Generalized Proof Number Search (GPNS), an algorithm based on proof numbers that computes the values of positions in multi-outcome games. GPNS is a direct generalization of Proof Number Search (PNS): on two-outcome games, the two algorithms behave exactly the same. However, GPNS directly handles a wider class of games. When a game has more than two outcomes, PNS can be called several times with different objectives to eventually obtain the value of a position, whereas a single call to GPNS suffices to obtain the same information. We present experimental results on solving Connect Four and Woodpush for various board sizes. These results show that the number of descents required to solve a given position is much smaller for GPNS than for PNS.
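For context, the proof/disproof-number bookkeeping that PNS maintains at internal nodes (the standard two-outcome rule that GPNS generalizes) looks as follows; a minimal sketch in Python:

```python
# Standard PNS update rule (the two-outcome case generalized by GPNS).
from dataclasses import dataclass, field
from typing import List

INF = float("inf")

@dataclass
class Node:
    is_or: bool                  # True: OR node (the prover to move)
    pn: float = 1.0              # proof number (unknown leaf: 1)
    dn: float = 1.0              # disproof number (unknown leaf: 1)
    children: List["Node"] = field(default_factory=list)

def update(node: Node) -> None:
    """Recompute a node's proof/disproof numbers from its children."""
    if node.is_or:
        node.pn = min(c.pn for c in node.children)  # one proved child suffices
        node.dn = sum(c.dn for c in node.children)  # all must be disproved
    else:
        node.pn = sum(c.pn for c in node.children)
        node.dn = min(c.dn for c in node.children)

# Leaves: proved -> (pn, dn) = (0, INF); disproved -> (INF, 0).
# PNS repeatedly descends to a most-proving node, expands it, and re-runs
# `update` on its ancestors; GPNS lifts this scheme to several outcomes.
```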
