The B set theory is provided with a specific type system, expressed using set constructs, so that a B formula does not separate typing from set reasoning. To embed B typing constraints into PFOL, we define a procedure that annotates B variables with their types, using the type-checking algorithm of the B Method. The interpretation of these types is then given by the translation function into PFOL. Axioms and hypotheses are generalized by translating B types to (universally quantified) type variables in PFOL; in contrast, types coming from the formula to be proved are interpreted as type constants in PFOL. In addition, we define the reverse translation from PFOL to B, which lets us reword the initial B formula. Thanks to this reverse translation and to derivations of the Zenon inference rules expressed in the B proof system, we can translate Zenon proofs into B proofs, guaranteeing the soundness of our translation.

In this paper we investigate some proof-theoretical properties of indexed nested sequents. The first and foremost one is the cut-elimination theorem. As Fitting's original system does not use a cut rule, this result is actually entailed by his (semantical) completeness theorem. Using the translation mentioned above, one could also use the cut-elimination result for labelled tree sequents with equality, yielding an indirect proof [23]. However, only an internal cut-elimination proof makes a proof formalism a first-class citizen for structural proof theory. For this reason we give in this paper a syntactic proof of cut-elimination carried out within indexed nested sequents. We achieve this by making some subtle but crucial adjustments to the standard cut-elimination proof for pure nested sequents. One of the main advantages is that this proof can be exported to the intuitionistic framework with basically no effort. We achieve this by using the techniques that had already been successfully used for ordinary nested sequents [8, 26, 15]. This allows us to present the cut-free indexed nested sequent systems in a uniform manner for classical and intuitionistic modal logic. The deductive systems are almost identical, the main difference being that an intuitionistic sequent has only one "output" formula, in the same way as in ordinary sequent calculus an intuitionistic sequent has only one formula on the right.

the propositional fragment of [4]) to infinitary derivations.
Definition 41 (µMALL sequent calculus). The sequent calculus for the propositional fragment of µMALL is a finitary sequent calculus whose rules are the same as those of µMALL∞, except that the ν rule is as follows:

non-primitive-recursive CoreDataXPath(↑+, ↓+) [13].
On both accounts, the use of sequents enriched with histories is a promising starting point. From a proof theory perspective, two lines of inquiry seem interesting. The first would be to develop a cut elimination procedure for our sequent calculus; by completeness of the calculus, the (cut) rule is admissible, but this is a semantic proof rather than a syntactic one. The second is to consider an extension of DataGL where histories are integrated as logical connectives in the syntax instead of being a mere extra-logical mechanism; this might help in designing Hilbert-style axiomatisations for data logics.

1 Introduction
It is a fundamental observation, made independently by several researchers, that a formal proof can be subdivided into its abstract deductive structure, often called skeleton, and a way of instantiating it with formulas which renders it a valid proof. For proof-search, the separation of these two layers is a principle whose importance can hardly be overemphasised. It is already visible in the original resolution rule [24] but even more apparent in the extension [18] of resolution to type theory. It is central for matings [1] and has applications in logic programming, where proof-search provides an operational semantics for Prolog-like languages [21]. From a proof-theoretic point of view, the relation between these two levels has been investigated in [22]. Such questions give rise naturally to unification problems [20, 10]: filling up a skeleton for a cut-free first-order proof can be done by solving a first-order unification problem, while the case with cuts corresponds to second-order unification, which is undecidable [15].
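As an illustration of the cut-free case, filling a first-order skeleton amounts to solving equations between terms by syntactic unification. The following is a minimal sketch of Robinson-style unification; the representation (variables as strings, compound terms as tuples) is purely illustrative and not taken from any of the cited systems.

```python
def unify(t1, t2, subst=None):
    """Unify two terms; variables are strings, compound terms are tuples
    (functor, arg1, ..., argN), constants are 1-tuples. Returns a
    substitution dict, or None when the terms are not unifiable."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                    # t1 is a variable
        return extend(t1, t2, subst)
    if isinstance(t2, str):                    # t2 is a variable
        return extend(t2, t1, subst)
    if t1[0] == t2[0] and len(t1) == len(t2):  # same functor and arity
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def extend(var, t, subst):
    """Bind var to t, with an occurs-check to keep terms finite."""
    if occurs(var, t, subst):
        return None
    return {**subst, var: t}

def occurs(var, t, subst):
    t = walk(t, subst)
    if t == var:
        return True
    return isinstance(t, tuple) and any(occurs(var, a, subst) for a in t[1:])

# Unifying f(X, g(Y)) with f(a, g(b)) yields {X: a, Y: b}.
assert unify(("f", "X", ("g", "Y")),
             ("f", ("a",), ("g", ("b",)))) == {"X": ("a",), "Y": ("b",)}
```

With the occurs-check the algorithm always terminates and returns a most general unifier when one exists; the second-order problems arising for proofs with cuts admit no such decision procedure.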

Kleene algebra
In our sequent system, called LKA, proofs are finitely branching, but possibly infinitely deep (i.e. not wellfounded). To prevent fallacious reasoning, we give a simple validity criterion for proofs with cut, and prove that the corresponding system admits cut-elimination. The difficulty in the presence of infinitely deep proofs consists in proving that cut-elimination is productive; we do so by using the natural interpretation of regular expressions as data types for parse-trees [15], and by giving an interpretation of proofs as parse-tree transformers. Such an idea already appears in [18] but in a simpler setting, for a finitary natural deduction system rather than for a non-wellfounded sequent calculus.
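To illustrate the data-type reading of regular expressions, a parse-tree for an expression can be represented directly, together with a function flattening it back to the word it parses: unit gives no tree content, alternation gives injections, concatenation gives pairs, and Kleene star gives lists of subtrees. The tagged-tuple encoding below is only our own sketch of this standard idea, not the representation used in the cited papers.

```python
def flatten(tree):
    """Flatten a parse tree back to the word it parses.
    Trees are tagged tuples: ("leaf", c) for a single letter c,
    ("inl", t) / ("inr", t) for e + f, ("pair", t1, t2) for e . f,
    and ("list", t1, ..., tn) for e* (a list of parse trees for e)."""
    tag = tree[0]
    if tag == "leaf":
        return tree[1]
    if tag in ("inl", "inr"):
        return flatten(tree[1])
    if tag == "pair":
        return flatten(tree[1]) + flatten(tree[2])
    if tag == "list":
        return "".join(flatten(t) for t in tree[1:])
    raise ValueError(f"unknown tag: {tag}")

# A parse tree for the word "abab" against the expression (a.b)*:
t = ("list", ("pair", ("leaf", "a"), ("leaf", "b")),
             ("pair", ("leaf", "a"), ("leaf", "b")))
assert flatten(t) == "abab"
```

A proof interpreted as a parse-tree transformer then maps trees of one expression to trees of another while preserving the flattened word, which is what makes productivity of cut-elimination meaningful.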

in Chapter 3.
Chapter 7
In this chapter we apply the methodology of Chapter 2 and Chapter 6 (for G3ii) to the depth-bounded intuitionistic sequent calculus G4ii of [Hud89, Hud92, Dyc92]. We first show how G4ii is obtained from LJQ. We then present a higher-order calculus for it, decorating proofs with proof-terms that use constructors corresponding to admissible rules such as the cut rule. While existing inductive arguments for admissibility suggest weakly normalising proof transformations, we strengthen these approaches by introducing various term-reduction systems, all strongly normalising on typed terms, representing proof transformations. The variations correspond to different optimisations, some of them orthogonal, such as CBN and CBV sub-systems similar to those of G3ii. We note, however, that the CBV sub-system seems more natural than the CBN one, which is related to the fact that G4ii is based on LJQ.


IRIF, CNRS, Université de Paris, F-75013 Paris, France {ade,saurin}@irif.fr
Abstract. Logics based on the µ-calculus are used to model inductive and coinductive reasoning and to verify reactive systems. A well-structured proof theory is needed in order to apply such logics to the study of programming languages with (co)inductive data types and automated (co)inductive theorem proving. While traditional proof systems suffer from some defects, non-wellfounded (or infinitary) and circular proofs have been recognized as a valuable alternative, and significant progress has been made in this direction in recent years. Such proofs are non-wellfounded sequent derivations together with a global validity condition expressed in terms of progressing threads.

1 INTRODUCTION
1.1 Control operators and dependent types
Originally created to deepen the connection between programming and logic, dependent types are now a key feature of numerous functional programming languages. From the point of view of programming, dependent types provide more precise types, and thus more precise specifications, to existing programs. From a logical perspective, they permit definitions of proof terms for statements like the full axiom of choice. Dependent types are provided by Coq and Agda, two of the most actively developed proof assistants. They both rely on constructive type theories: the calculus of inductive constructions for Coq [6], and Martin-Löf's type theory for Agda [24]. Yet, both systems lack support for classical logic and, more generally, for side effects, which makes them impractical as programming languages.

considering proofs as preexisting objects that can be seen as winning strategies in the interaction, we see each step of the interaction as a step in two simultaneous exhaustive searches for the proofs of two dual formulae. We will work in the multiplicative and additive fragment of linear logic (MALL), which has two important advantages. Firstly, it is symmetric enough to allow a single process to perform two orthogonal proof searches. Secondly, it is not complete, i.e. the fact that a formula is not provable does not imply that its negation is. In other words, refuting a statement is necessarily different from proving its negation, highlighting the relevance of our approach. The formalism of choice for representing proofs will be sequent calculus, as it is the perfect basis for proof search and enjoys the symmetries we require. Moreover, the property of focalisation of linear logic will be thoroughly used in this thesis, and we will demonstrate its significance in interaction.


The memoisation table is filled in by clause-learning: our plugin adds an entry whenever it builds a complete proof of some sequent ∆ ⊢ and no previous entry ∆′ ⊢ exists with ∆′ ⊆ ∆, or whenever it concludes that some sequent ∆ ⊢ is not provable and no previous entry ∆′ ⊢ exists with ∆ ⊆ ∆′. For the table to cut computation as often as possible, a pre-processing step is applied to a proof-tree before it enters the table: it is pruned of every formula that is not used in the proof, which is easy to do for complete proofs (eager weakening is applied a posteriori by inspection of the inductive structure). PSYCHE's kernel instead performs pruning on-the-fly, whenever an inference is added to complete proofs. Since proof-completion can be seen as finding a conflict, pruning by eager weakening is a conflict analysis process naturally provided by structural proof theory. Of course, the efficiency of pruning relies on the efficiency of the decision procedure in providing a small inconsistent subset whenever it decides that a set of literals is inconsistent.
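The two subsumption checks can be sketched as follows, representing a sequent ∆ ⊢ by its context ∆ as a frozenset of formulas. The class and method names are illustrative, not PSYCHE's actual interface; the point is only the direction of each inclusion test, which mirrors weakening.

```python
class MemoTable:
    """Memoisation of sequents ∆ ⊢, with subsumption in both directions."""

    def __init__(self):
        self.provable = []    # learned provable contexts (kept minimal)
        self.unprovable = []  # learned unprovable contexts (kept maximal)

    def learn_provable(self, delta):
        delta = frozenset(delta)
        if any(d <= delta for d in self.provable):
            return  # already subsumed by a smaller provable context
        # drop stored entries that the new entry subsumes
        self.provable = [d for d in self.provable if not delta <= d]
        self.provable.append(delta)

    def learn_unprovable(self, delta):
        delta = frozenset(delta)
        if any(delta <= d for d in self.unprovable):
            return  # subsumed by a larger unprovable context
        self.unprovable = [d for d in self.unprovable if not d <= delta]
        self.unprovable.append(delta)

    def lookup(self, delta):
        """Return True/False if the table decides ∆ ⊢, else None."""
        delta = frozenset(delta)
        if any(d <= delta for d in self.provable):
            return True   # ∆ extends a provable context: weakening
        if any(delta <= d for d in self.unprovable):
            return False  # ∆ is contained in an unprovable context
        return None

table = MemoTable()
table.learn_provable({"p", "¬p"})
table.learn_unprovable({"p"})
assert table.lookup({"p", "q", "¬p"}) is True   # weakening of {p, ¬p}
assert table.lookup({"p"}) is False
assert table.lookup({"q"}) is None
```

This is why pruning matters: the smaller the stored provable context, the more sequents it subsumes in later lookups.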

Thanks to several works on classical logic in proof theory, it is now well established that continuation-passing style (CPS) translations in call by name and call by value correspond to different polarisations of formulae (Girard, 1991; Danos, Joinet, and Schellinx, 1997; Laurent, 2002). Extending this observation and building on Curien and Herbelin's abstract-machine-like calculi (2000), the last author proposed a term assignment for a polarised sequent calculus (where the polarities of formulae determine the evaluation order) in which various calculi from the literature can be obtained with macros responsible for the choices of polarities (Munch-Maccagnoni, 2013). It aims to explain several CPS translations from the literature by decomposing them through a single CPS for sequent calculus. It has later proved to be a fruitful setting to study the addition of effects and resource modalities (Curien, Fiore, and Munch-Maccagnoni, 2016), providing a categorical proof theory of Call-By-Push-Value semantics (Levy, 2004). We propose to bring together a dependently typed theory (ECC) and polarised sequent calculus by presenting a calculus L_dep suitable as a vehicle for compilation and representation of effectful computations. As a first step in that direction, we show that L_dep advantageously factorizes a dependently typed continuation-passing style translation for ECC + call/cc. To avoid the inconsistency of type theory with control operators, we restrict their interaction. Nonetheless, in the pure case, we obtain an unrestricted translation from ECC to itself, thus opening the door to the definition of dependently typed compilation transformations.

However, Gentzen's original sequent calculus has major defects for proof-search: the freedom with which rules can be applied leads to a very broad and redundant search space. It needs to be refined, or controlled, to produce reasonable proof-search procedures.
Structural proof theory and its semantics have evolved the concepts of polarities and focusing, which greatly impact the way we understand proof-search mechanisms: Miller et al. introduce a notion of uniform proofs [MNPS91] which is used to extend the concepts of logic programming beyond the logical fragment of Horn clauses, and uniform proofs themselves can be generalised as the concept of focusing, which gives sequent calculus proofs enough structure to specify reasonable proof-search procedures in linear [And92], intuitionistic, and classical logic [LM09].


To address this problem, we will in this paper use a pure deep inference system [13,15,6,14] to deal with weakening and contraction. This leads to different degrees of freedom for the weakening and contraction rules than in the sequent calculus. Whereas in the sequent calculus we can allow or forbid the rules on the left and/or on the right of the turnstile, in deep inference we have other choices. Besides allowing or forbidding rules, we can restrict the rules to atomic formulas or not, and to shallow contexts or not. Furthermore, deep inference systems with contraction and weakening also admit the medial rule, which implements the classical implication
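For reference, the medial rule is standardly stated in the deep inference literature (e.g. in system SKS) as:

```latex
\frac{(A \wedge B) \vee (C \wedge D)}{(A \vee C) \wedge (B \vee D)}\;\mathsf{m}
```

Read top-down, it internalises the classically valid implication $(A \wedge B) \vee (C \wedge D) \supset (A \vee C) \wedge (B \vee D)$; together with atomic contraction it suffices to derive general contraction.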


SMT solvers are nowadays pervasive in verification tools. When the verification concerns a critical system, the result of the SMT solver is also critical and cannot be trusted. SMT-LIB 2.0 is a standard interface for SMT solvers but does not specify the output of the get-proof command. We present a proof system that is geared towards SMT solvers and follows their conceptually modular architecture. Our proof system makes a clear distinction between propositional and theory reasoning. Moreover, individual theories provide specific proof systems that are combined using the Nelson-Oppen proof scheme. We propose specific proof systems for linear real arithmetic (LRA) and uninterpreted functions (EUF) and discuss proof generation and proof checking. We have evaluated the cost of generating proofs in our proof system. Our experiments on benchmarks taken from the SMT-LIB library show that the simple mechanisms used in our approach suffice for a large majority of the selected benchmarks.


Another approach consists in twisting the definition of group to make it fit the previous model: defining "groups" — or at least what will figure in the statement of a property at the position where a group can be expected — as an equivalence class of groups. This is what we would call the "deep" embedding of proof by isomorphism. The difficulty with this technique consists in identifying properties which are conserved by isomorphism and modifying their statement without making them too alien compared to those that do not enjoy this invariance. Moreover, it implies maintaining a complex distinction between those polymorphic classes of groups and the sets they contain, all transparently to the user — who is potentially a mathematician and should not see any apparent difference between the two. Nonetheless this approach has been attempted, once, by Santen (1999), albeit in a library much smaller than ours.

