
Backpropagation in the Simply Typed Lambda-Calculus with Linear Negation. ALOÏS BRUNEL, Deepomatic, France; DAMIANO MAZZA, CNRS, UMR 7030, LIPN, Université Sorbonne Paris Nord, France; MICHELE PAGANI, IRIF, Université de Paris, France


assumed. However, it would be interesting to know if this restriction can be dropped.
Problems arising from non-left-linear rewriting are directly transposed to left-linear conditional rewriting. The semi-closure condition is sufficient to avoid this, and it seems to provide the counterpart of left-linearity for unconditional rewriting. However, two remarks have to be made about this restriction. First, it would be interesting to know whether it is a necessary condition and, besides, to characterize a class of non-semi-closed systems that can be translated into equivalent semi-closed ones. Second, semi-closed terminating join systems behave like normal systems. But normal systems can easily be translated into equivalent unconditional systems, and such a translation preserves good properties such as left-linearity and non-ambiguity. As many practical uses of rewriting rely on terminating systems, semi-closed join systems may in practice be essentially an intuitive way to design rewrite systems that can then be efficiently implemented by unconditional rewriting.
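The translation of conditional rules into unconditional ones mentioned above can be sketched concretely. The standard encoding introduces a fresh auxiliary symbol U that carries the value of the condition. The system below (a toy max on Peano numerals with a minimal innermost rewriter) is our own illustration, not taken from the paper:

```python
# Terms are ground tuples ('f', t1, ..., tn); bare strings in patterns are variables.

def match(pat, term, env):
    """Match a linear pattern against a ground term, extending env."""
    if isinstance(pat, str):                      # pattern variable: bind it
        env[pat] = term
        return env
    if not isinstance(term, tuple) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        if match(p, t, env) is None:
            return None
    return env

def subst(rhs, env):
    """Instantiate a right-hand side with the bindings collected by match."""
    if isinstance(rhs, str):
        return env[rhs]
    return (rhs[0],) + tuple(subst(r, env) for r in rhs[1:])

def normalize(term, rules):
    """Innermost normalization; assumes the system is terminating."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(normalize(t, rules) for t in term[1:])
    for lhs, rhs in rules:
        env = match(lhs, term, {})
        if env is not None:
            return normalize(subst(rhs, env), rules)
    return term

# Unconditional translation of the conditional rules on Peano numerals
#   max(x, y) -> y  <=  leq(x, y) ->* true      (and symmetrically for false)
# using the fresh auxiliary symbol U:
rules = [
    (('leq', ('0',), 'y'), ('true',)),
    (('leq', ('s', 'x'), ('0',)), ('false',)),
    (('leq', ('s', 'x'), ('s', 'y')), ('leq', 'x', 'y')),
    (('max', 'x', 'y'), ('U', ('leq', 'x', 'y'), 'x', 'y')),  # evaluate the condition
    (('U', ('true',), 'x', 'y'), 'y'),                        # condition succeeded
    (('U', ('false',), 'x', 'y'), 'x'),
]
```

Note that the translated system is left-linear and unambiguous, matching the properties said to be preserved above.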

by adding or removing several type constructions, like tensor product, polymorphism and type fixpoints. Our work follows in spirit the same approach, focusing on the encoding of (co)algebras.
Gaboardi et al. [17] have studied the expressivity of the different light logics by designing embeddings from the light logics into Linear Logic by Levels [4], another logic providing a characterization of polynomial time but based on more general principles. Interestingly, in Linear Logic by Levels the § modality commutes with all the other type constructions. It would be interesting to study the expressivity of this logic with respect to the encoding of algebras and coalgebras. Baillot et al. [6] have approached the problem of improving the expressivity of LAL by designing a programming language with recursion and pattern matching around it. We take inspiration from their work, but instead of adding extra constructions we focus on the constructions that can be defined in LAL itself.

Remarks: 1) With these rules, the first proof above is not directly a proof in the restriction: the axiom rule of LJ has to be encoded in the restriction by an axiom rule followed by a contraction rule.
2) This calculus also appears in Danos et al. [2], with a slight difference in the treatment of structural rules. Like its classical version LKT, it has been considered by Danos et al. for its good behaviour w.r.t. embedding into linear logic. The calculus LJT also appears as a fragment of ILU, the intuitionistic neutral fragment of unified logic described by Girard in [6]. The calculus ILU is itself a form of LJ constrained with a stoup, for which Girard pointed out that "the formula [in the stoup] (if there is one) is the analogue of the familiar

ii) with ⟨ν − 1, i⟩ if ν is greater than the depth of this occurrence; and after that, we replace this occurrence of ⟨ν − 1, i⟩ with the closed term given by the environment e.
Now, we obtain (λ^n u)[e] (which is closed) by the substitution (ii) on the free occurrences of ⟨ν, i⟩ in u such that ν is greater than the depth of this occurrence in u. Indeed, these are exactly the free occurrences in λ^n u. Then, one step of weak head reduction on (λ^n u)[e] φ̄_1 … φ̄_n performs
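The environment-based substitution and weak head reduction described here can be pictured with a standard Krivine-style machine. The sketch below uses plain de Bruijn indices instead of the ⟨ν, i⟩ pairs of the text, and all names are ours:

```python
# Terms in de Bruijn notation: ('var', n) | ('lam', body) | ('app', fun, arg).
# A closure pairs a term with its environment, a tuple of closures.

def whnf(term, env=(), stack=()):
    """Weak head reduction: push arguments, enter abstractions,
    look variables up in the environment, stop at a weak head normal form."""
    while True:
        tag = term[0]
        if tag == 'var':
            term, env = env[term[1]]               # fetch the bound closure
        elif tag == 'app':
            stack = ((term[2], env),) + stack      # push the argument closure
            term = term[1]
        elif tag == 'lam' and stack:
            env = (stack[0],) + env                # one beta step: bind the top
            term, stack = term[1], stack[1:]
        else:                                      # abstraction, empty stack
            return term, env, stack

I = ('lam', ('var', 0))
K = ('lam', ('lam', ('var', 1)))
delta = ('lam', ('app', ('var', 0), ('var', 0)))
head, _, _ = whnf(('app', ('app', K, I), delta))   # K I delta reduces (weakly) to I
```

Note how the argument delta is stored as a closure but never evaluated: weak head reduction only substitutes closures for variable occurrences, just as the substitution (ii) above only touches the free occurrences of the environment's variables.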

19th April 2007
Abstract
LJQ is a focused sequent calculus for intuitionistic logic, with a simple restriction on the first premiss of the usual left introduction rule for implication. In a previous paper we discussed its history (going back to about 1950, or beyond) and presented its basic theory and some applications; here we discuss in detail its relation to call-by-value reduction in lambda calculus, establishing a connection between LJQ and the CBV calculus λC of Moggi. In particular,

Expression also needs an interpreted attribute. For DTD-technical reasons, only the two most important values are specified for the val attribute (similarly, only two ord values are given). The DTD also does not enforce context-dependent attribute values, such as <Equal oriented="no"> normally being used in conditions. Moreover, while the DTD does not prevent Lambda formulas from occurring on the lhs of (both kinds of) equations, a static analyzer should confine them to the rhs of oriented equations. A more precise XSD is part of the emerging Functional RuleML 0.9 [http://www.ruleml.org/fun].

2.2 τ∂λ-calculus
In differential proof nets, the 0-ary tensor and the 0-ary par can be added freely, in the sense that we still have a natural interpretation in MRel and M∞. These operations can be translated into our calculus as an exception mechanism: on one side, a τ(Q) "raises" the exception (or test) Q by burning its applicative context (whenever these applications have no linear component; otherwise it diverges), and on the other side, a τ̄(M) "catches" the exceptions in M by burning the abstraction context of M (whenever this abstraction is dummy). The main difference with a usual exception system is the divergence of the catch if no exception is raised.
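As a loose operational analogy only (our sketch, not the paper's semantics), the τ/τ̄ pair can be compared to a raise/catch discipline in which the catch diverges when nothing is raised:

```python
class Test(Exception):
    """The 'test' Q carried by a raised tau(Q)."""
    def __init__(self, payload):
        self.payload = payload

def tau(q):
    raise Test(q)            # discard ("burn") the surrounding applicative context

def tau_bar(m):
    try:
        while True:          # keep forcing thunks: diverges if nothing is raised
            m = m()
    except Test as t:
        return t.payload     # the catch returns the raised test
```

Here tau_bar(lambda: tau(42)) returns 42, while tau_bar applied to a thunk that never raises loops forever, mirroring the divergence of the catch described above.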

However, one of the shortcomings of Ltac is that the tactics used to construct proofs are unsafe⁷. This language can be improved by means of types for tactics, as is suggested by Delahaye.
Efficiency in theorem provers can also be gained by improving the machinery to guide and verify proofs, by means of improvements in the interaction with the kernel and the tactic engine. Improvements for system development have many contributions: libraries for specialised theories (see the list of users' contributions⁸), tactic languages to enhance the mechanisation of proofs [36, 118], improvements to the user back-end framework, for example the asynchronous edition of proofs [17], interaction with other theorem provers to cooperate in large developments, etc. All of this turns Coq into a sophisticated programming language where program development arises naturally, for instance the Russell extension [101] to develop programs with dependent types.


4.1 Introduction
In this chapter, we are interested in the confluence of rewriting systems. In particular, we study a problem arising from the combination of rewrite rules with β-reduction. Remember that confluence is a highly desirable property of the λΠ-Calculus Modulo for several reasons. First, confluence is the most direct way to prove the product compatibility property (Theorem 2.6.11). Second, as soon as the rewrite relation is also strongly normalizing, confluence entails the decidability of the congruence: two terms are convertible if and only if they have the same normal form. Third, confluence has also been used in the previous chapter for proving that weakly well-formed rewrite rules are permanently well-typed. More generally, any property based on unification will require confluence. Lastly, confluence is used to prove strong normalization when there are type-level rewrite rules [Bla05b].
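The second point, decidability of the congruence, can be sketched directly: given confluence and strong normalization, convertibility is decided by normalizing both sides and comparing syntactically. The rewrite system below (Peano addition) is a toy illustration of ours, not the λΠ-Calculus Modulo itself:

```python
# Peano terms: '0' | ('s', t) | ('plus', t, u); rules: plus(0,y) -> y,
# plus(s(x),y) -> s(plus(x,y)).  The system is confluent and terminating.

def step(t):
    """Perform rewrite steps somewhere in the term (identity at normal forms)."""
    if isinstance(t, tuple):
        if t[0] == 'plus' and t[1] == '0':
            return t[2]
        if t[0] == 'plus' and isinstance(t[1], tuple) and t[1][0] == 's':
            return ('s', ('plus', t[1][1], t[2]))
        return tuple(step(a) if isinstance(a, tuple) else a for a in t)
    return t

def nf(t):
    """Iterate to the normal form; termination is guaranteed by SN."""
    while (t2 := step(t)) != t:
        t = t2
    return t

def convertible(a, b):
    # Sound and complete precisely because the system is confluent and SN:
    # every term has a unique normal form, so equality of normal forms
    # coincides with the congruence generated by the rules.
    return nf(a) == nf(b)
```

For instance, plus(s(0), s(0)) and s(plus(0, s(0))) are convertible because both normalize to s(s(0)).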


Although the original λµµ̃-calculus of [2] has a system of simple types based on the sequent calculus, the untyped version is a Turing-complete language for computation with explicit representation of control, as well as code. In this work we try to give a meaning to the untyped λµµ̃-calculus and understand its behaviour. We interpret its variant closed under call-by-name reduction in the category of negated domains, and the variant closed under call-by-value reduction in the Kleisli category. As far as we know, this is the first interpretation of the untyped λµµ̃-calculus. We also prove the confluence of both versions.

5.2 Tailoring Lambda Calculus for an ASM
Let F be the family of interpretations of all static symbols in the initial state. The adequate lambda calculus to encode the ASM is Λ_F.
Let us argue that this is not an unfair trick. An algorithm does decompose a task into elementary ones. But "elementary" means neither "trivial" nor "atomic"; it just means that we do not detail how they are performed: they are like oracles. There is no absolute notion of elementary task; it depends on what big task is under investigation. For an algorithm about matrix products, multiplication of integers can be seen as elementary. Thus, algorithms go with oracles.
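The idea of static symbols as oracles can be sketched as follows: a λ-evaluator that, on meeting an oracle constant, simply delegates to its interpretation in the family F without detailing how it is computed. All names here are our own illustration:

```python
# F: interpretations of the static symbols, treated as elementary oracles.
F = {'mul': lambda x: lambda y: x * y}

def eval_term(t, env):
    """Evaluate a term of the oracle-extended lambda calculus."""
    tag = t[0]
    if tag == 'var':
        return env[t[1]]
    if tag == 'const':
        return F[t[1]]                 # oracle call: no detail on how it works
    if tag == 'lam':
        return lambda v, t=t, env=env: eval_term(t[2], {**env, t[1]: v})
    if tag == 'app':
        return eval_term(t[1], env)(eval_term(t[2], env))

# square = lam x. mul x x, delegating multiplication to the oracle
square = ('lam', 'x',
          ('app', ('app', ('const', 'mul'), ('var', 'x')), ('var', 'x')))
```

Swapping F for a family of slower or faster multiplications changes nothing in the algorithm itself, which is exactly the point: "elementary" is relative to the task under investigation.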

Kozen (1979) gives a semantics of imperative probabilistic programs as partial measurable functions from infinite random traces to final states, which serves as the model for our trace semantics. Kozen also proves this semantics equivalent to a domain-theoretic one. Park et al. (2008) give an operational version of Kozen's trace-based semantics for a λ-calculus with recursion, but "do not investigate measure-theoretic properties". Cousot and Monerau (2012) generalise Kozen's trace-based semantics to consider probabilistic programs as measurable functions from a probability space into a semantic domain, and study abstract interpretation in this setting.
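Kozen's view can be mimicked in a few lines: a probabilistic program is a deterministic (partial) function of an infinite trace of random draws, and sampling means feeding it a freshly generated trace. This is only a schematic illustration of the trace-based viewpoint; all names are ours:

```python
import random

def geometric(trace):
    """Deterministic meaning of 'flip until heads': a partial function
    from an infinite trace of uniform draws to a final state (the count)."""
    n = 0
    for u in trace:
        if u < 0.5:
            return n        # heads: terminate with final state n
        n += 1
    # never reached on a genuinely infinite trace; the program is partial
    # only on the measure-zero set of all-tails traces

def run(program, seed):
    """Sampling = applying the deterministic program to a random trace."""
    rng = random.Random(seed)
    return program(iter(rng.random, None))   # infinite stream of draws
```

Because the randomness lives entirely in the trace, running the same program on the same trace is reproducible, which is what makes the function-from-traces reading well defined.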

5 Achievements and Perspectives
Using various proof techniques, we have established that the λx-calculus is strongly normalizing. For that purpose, we have formalized a proof technique of SN via PSN. Let us mention that we have successfully applied this technique, with some adjustments, to prove SN of the …-calculus (introduced in [3]) for the first time, as far as we know. We also used it to establish that PSN implies SN for the …-calculus [1], for which PSN is known to fail [10], showing that, for this calculus, the only problem of SN is in PSN.

We illustrate these notions in Figure 1, for the example λx.λy.((λz.z)y)x. The scope of the abstraction λx is the entire subterm λy.((λz.z)y)x (which may or may not be taken to include λx itself). Note that with explicit substitution, the scope may grow or shrink by lifting explicit substitutions in or out. The skeleton is the term λx.λy.(wy)x, where the subterm λz.z is lifted out as an (explicit) substitution [λz.z/w]. The spine of a term, indicated in the second image, cannot naturally be expressed with explicit substitution, though one can get an impression with capturing substitutions: it would be λx.λy.wx, with the subterm (λz.z)y extracted by a capturing substitution [(λz.z)y/w]. Observe that the skeleton can be described as the iterated spine: it is the smallest subgraph of the syntax tree closed under taking the spine of each abstraction, i.e. one that contains the spine of every abstraction it contains.

These collapse results require innocence – or at least a substitute ensuring that composition is deadlock-free. But beyond the sequential deterministic case, there was for a long time no adequate notion of innocence [Harmer and McCusker 1999]. This changed only a few years ago, with two notions of non-deterministic innocent strategies (using concurrent games [Castellan et al. 2014] and sheaves [Tsukada and Ong 2015]). These two models depart from traditional game semantics in ways that are technically very different, but conceptually similar: they both record more intensional behavioural information. This change of perspective recently allowed a quantitative extension of the relational collapse [Castellan et al. 2018] for a probabilistic language, using concurrent games. Concurrent games are a family of game semantics initiated in [Abramsky and Melliès 1999], with intense activity in the past decade prompted by a new non-deterministic generalization based on event structures [Rideau and Winskel 2011]. Building on notions from concurrency theory, they are a natural fit for the semantics of concurrent programs [Castellan and Clairambault 2016; Castellan and Yoshida 2019]. It is perhaps more surprising that their adoption has a strong impact even when studying sequential programs such as the quantum λ-calculus: they offer a fine-grained causal presentation of the behaviour of programs that contrasts with the temporal presentation of traditional games models. This has far-reaching consequences. For the present paper, both our collapse theorem and the congruence of the observational quotient required for full abstraction rely on a visibility condition, a substitute for innocence ensuring deadlock-free composition – visibility bans certain impure causal patterns, leveraging the expressiveness of concurrent games.
Thus, our constructions rely heavily on the fact that the model of [Clairambault et al. 2019] was developed within concurrent games. Our collapse theorem follows in the footsteps of the probabilistic collapse [Castellan et al. 2018], which we generalize to the quantum case.

7 Conclusions
We have introduced the structural λj-calculus, a concise but expressive λ-calculus with jumps. No prior knowledge of Linear Logic is necessary to understand λj, despite their strong connection. We have established many different sanity properties for λj, such as confluence and PSN. We have used λj as an operational framework to elaborate new characterisations of the well-known notions of full developments and L-developments, and to obtain the new, more powerful notion of XL-development. Finally, we have modularly added commutation of independent jumps, σ-equivalence and two kinds of propagations of jumps, while showing that PSN still holds.

Thus, in order to characterize normalization in λµ, we will resort to intersection and union types. Moreover, we want to do that in a quantitative way, as announced in the introduction of Part II. Indeed, the non-idempotent approach provides very simple combinatorial arguments, based only on a decreasing measure, to characterize head or strongly normalizing terms by means of typability (recall Sec. 3.4 for HN and Sec. 5.2 for SN). We show that for every typable term t with type derivation Π, if t reduces to t′, then t′ is typable with a type derivation Π′ such that the measure of Π is strictly greater than that of Π′. In the well-known case of the λ-calculus, this measure is simply based on the structure of type derivation trees, given by the number of their nodes (Definition 3.2), which strictly decreases along reduction. However, in the λµ-calculus, the creation of nested applications during µ-reduction may increase the number of nodes of the corresponding type derivations, so that such a naive definition of the measure is no longer decreasing. We then take into account not only the structure of derivations, but also the structure (multiplicity and size) of certain types appearing in the derivations, thus ensuring an overall decrease of the measure during reduction.
