

About classical logic and imperative programming

Jean-Louis Krivine

Équipe de Logique mathématique, Université Paris VII, 2 Place Jussieu, 75251 Paris cedex 05

e-mail: krivine@logique.jussieu.fr

December 22, 1992

1 Introduction

In this lecture, we shall consider a very well known typed λ-calculus system: the second order λ-calculus (also called "system F") of Girard [2], rediscovered by Reynolds [16] in a computer science setting. We shall extend it in two ways:

• Types will be formulas of second order predicate calculus, and not only, as in system F, second order propositional calculus [5, 6]. In a certain sense, this is a harmless extension, since the λ-terms which are typable are the same. This kind of extension has already been considered by D. Leivant [11].

• A much more serious extension is the following: the underlying logic will be classical logic, and not only, as in system F, intuitionistic logic.

Extraction of programs from classical proofs has been considered, for the past two or three years, by several people (C. Murthy [12], J.Y. Girard [3], . . .). Our approach has the following features:

1. We shall not try to consider general reduction of proofs (i.e. general β-reduction of λ-terms). Instead, I shall only use head reduction and, in fact, only the following particular case:

(λx t) u u1 . . . un ≻ t[u/x] u1 . . . un.

The λ-calculus with this reduction is known as "call-by-name λ-calculus" (Plotkin [15]).
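The operational consequence of this rule is that an argument is substituted unevaluated and recomputed at every use. A hypothetical Python sketch (the thunk encoding is ours, for illustration) makes this visible:

```python
# Call-by-name sketched with thunks: the argument u is passed unevaluated,
# as in (λx t) u u1 … un ≻ t[u/x] u1 … un, and is re-run at every use.
evaluations = []

def u():                       # the argument, with an observable side effect
    evaluations.append("u")
    return 3

t = lambda x: x() + x()        # t = λx (x + x): uses its argument twice

assert t(u) == 6
assert evaluations == ["u", "u"]   # u was computed once per use
```

This recomputation is exactly the drawback that storage operators, introduced below, are designed to correct.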

Now, since we want to capture classical logic, we add a new constant c, whose type will be ∀X(¬¬X → X) (the axiom of classical logic over intuitionistic logic), following an idea of T. Griffin [4]. Because of our restricted reduction, we only have to define the law of reduction of c when it is in head position. This is:

c t t1 . . . tn ≻ t (λx x t1 . . . tn) (x is a new λ-variable).

This is a particular case of the rule of reduction for control operators given by M. Felleisen [1].


These rules will be called rules of head c-reduction. They are global rules, i.e. they apply only to the whole term. These rules are particularly interesting because of the following claim: the second order λ-calculus with head c-reduction is a good model for imperative programming languages, and not for functional ones, as generally believed. We shall give later on a few examples to support this claim.

2. A crucial tool, in this approach, is the notion of storage operator, to be defined later. As is well known, in call-by-name λ-calculus, a function must compute its argument every time it uses it. Storage operators were introduced in [7, 8] in order to avoid this, and to simulate call-by-value when necessary. They are also used for simulating some features of assignment instructions in imperative programming languages. Lastly, following an idea of M. Parigot [14], we shall use them in an essential way, in order to interpret classical proofs as programs.

2 The second order λ-calculus

Types are formulas of second order predicate calculus, with function symbols, individual variables x, y, . . ., and predicate variables X, Y, . . . of each arity. The only logical connectives are ∀, →, ⊥. ¬F is defined as F → ⊥. A → (B → C) is also denoted by A, B → C.

Examples: ∀X(X1, X0 → Xx), denoted by Bool[x];

∀X[∀y(Xy → Xsy), X0 → Xx], denoted by Int[x];

∀X[∀yz(Int[y], Xz → Xcons(y, z)), Xnil → Xx], which is the type of lists of integers.

In these formulas, 1, 0, nil are constant symbols (i.e. function symbols of arity 0); s, cons are function symbols of arity 1 and 2 respectively; X is a predicate variable of arity 1.

2.1 Rules of construction of typed terms

1. x1 : A1, . . . , xk : Ak ⊢ xi : Ai.

2. If x1 : A1, . . . , xk : Ak, x : A ⊢ τ : B, then x1 : A1, . . . , xk : Ak ⊢ λx τ : A → B.

3. If x1 : A1, . . . , xk : Ak ⊢ τ : A → B, τ′ : A, then x1 : A1, . . . , xk : Ak ⊢ τ τ′ : B.

4. If x1 : A1, . . . , xk : Ak ⊢ τ : ∀x A, then x1 : A1, . . . , xk : Ak ⊢ τ : A[t/x].

5. If x1 : A1, . . . , xk : Ak ⊢ τ : A, then x1 : A1, . . . , xk : Ak ⊢ τ : ∀x A, if x does not appear in A1, . . . , Ak.

6. If x1 : A1, . . . , xk : Ak ⊢ τ : ∀X A, then x1 : A1, . . . , xk : Ak ⊢ τ : A[F/X] (F is a formula with k free variables, k being the arity of X).

7. If x1 : A1, . . . , xk : Ak ⊢ τ : A, then x1 : A1, . . . , xk : Ak ⊢ τ : ∀X A, if X does not appear in A1, . . . , Ak.

8. Finally, we may suppose we are given a set E of equations t = u between terms. Then, we have the following rule:

If x1 : A1, . . . , xk : Ak ⊢ τ : A[t/x], then x1 : A1, . . . , xk : Ak ⊢ τ : A[u/x], if t = u is an equational consequence of E.


These are the rules for second order intuitionistic logic. We get classical logic in a very simple way: do not change the rules, but add, at the left-hand side, the declaration c : ∀X(¬¬X → X) (where X is a propositional variable, i.e. a variable of arity 0). Thus, a λ-term corresponds to an intuitionistic proof if, and only if, the constant c does not appear in it.

We shall first consider intuitionistic logic, in order to define the notion of storage operator.

2.2 Data types

We shall not give here the general definition of data types. We shall use only two examples:

The type of booleans is given by the following formula:

Bool[x] ≡ ∀X(X1, X0 → Xx).

The type of integers is given by the formula:

Int[x] ≡ ∀X(∀y(Xy → Xsy), X0 → Xx).

The product and the sum of two data types A[x] and B[x] are given by the formulas:

(A × B)[x] ≡ ∀X(∀yz(A[y], B[z] → Xc(y, z)) → Xx),

(A + B)[x] ≡ ∀X(∀y(A[y] → Xinl(y)), ∀z(B[z] → Xinr(z)) → Xx).

Many other operations are defined on data types, such as lists, trees, . . .
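These second order definitions correspond to the usual Church encodings of pairs and tagged unions; a small Python sketch (the names pair, fst, snd, inl, inr are ours, for illustration):

```python
# Church encodings matching the product and sum types above: a pair is a
# function awaiting one two-argument consumer, a sum awaits two consumers.
pair = lambda y: lambda z: lambda f: f(y)(z)     # the pairing c(y, z)
fst  = lambda p: p(lambda y: lambda z: y)
snd  = lambda p: p(lambda y: lambda z: z)

inl  = lambda y: lambda f: lambda g: f(y)        # left injection
inr  = lambda z: lambda f: lambda g: g(z)        # right injection

p = pair(1)("a")
assert fst(p) == 1 and snd(p) == "a"
assert inl(5)(lambda y: y + 1)(lambda z: 0) == 6
assert inr(5)(lambda y: 0)(lambda z: z + 2) == 7
```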

A very important property of data types is the following (we express it for the data type of integers): if ⊢ τ : Int[sⁿ0], i.e. if τ is a λ-term associated with an intuitionistic proof of Int[sⁿ0], then τ is β-equivalent to the Church integer n, which is λf λx fⁿx.

In order to get a program for the function f : N → N, it is sufficient to prove ⊢ ∀x(Int[x] → Int[f x]). For example, a proof of ⊢ ∀x(Int[x] → Int[sx]) gives a program for the successor (such as λn λf λx f (n f x)); a proof of ⊢ ∀x(Int[x] → Int[px]), from the equations p0 = 0, psx = x, gives a λ-term for the predecessor in Church integers [5, 6].
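The Church integers and the successor term above can be run directly in Python (a sketch; to_int is our own decoding helper, not part of the calculus):

```python
# Church integer n = λf λx f^n x, and the successor λn λf λx f (n f x).
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

to_int = lambda m: m(lambda k: k + 1)(0)        # decode with +1 and 0

succ = lambda n: lambda f: lambda x: f(n(f)(x)) # λn λf λx f (n f x)

assert to_int(church(5)) == 5
assert to_int(succ(church(5))) == 6
```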

3 Storage operators and the strategy of reduction for λ-terms

The strategy of head reduction (call-by-name) has the following advantages:

• Its good mathematical properties, given by the standardization theorem: if a λ-term is solvable, then we obtain a head normal form by head reduction.

• The fact that we can control the sequential progress of side effects. This is not the case with call-by-value. A well known example is the use of "if ... then ... else": let P, Q be programs with side effects, and b a boolean. The program b P Q reads as "if b then P else Q". With call-by-name, the boolean b is computed first, then P or Q is performed. With call-by-value, P and Q are both performed before b is even computed! Another example is the following:

if P : A → B, Q : B → C, then Q ◦ P = λx Q(P x) ≃β λx ((λy Q(P x)) (P x)) : A → C (y is a dummy variable). Suppose that P, Q have side effects; with the call-by-name strategy, both programs perform Q, then P. With call-by-value, the first program performs P, then Q, and the second P, P, Q!
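The behaviour of b P Q under call-by-name can be sketched in Python with Church booleans and thunks (an illustration with our own stand-ins for b, P, Q):

```python
# "if b then P else Q" as b P Q: with call-by-name, b is computed first,
# and only the selected branch is ever performed.
log = []
true  = lambda p: lambda q: p
false = lambda p: lambda q: q

def b():  log.append("b"); return true        # a boolean with a side effect
def P():  log.append("P"); return "P done"
def Q():  log.append("Q"); return "Q done"

result = b()(P)(Q)()       # branches passed as thunks; the chosen one is forced

assert result == "P done"
assert log == ["b", "P"]   # b computed first, then P; Q never runs
```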


Now, a drawback of the call-by-name strategy is the fact that the argument of a function is computed as many times as it is used. This is an important drawback, but clearly less important than the preceding one. In the first place, a program must control side effects correctly; once this is ensured, we may take care of the problem of shortening computations.

The purpose of storage operators is precisely to correct this drawback. Let us first consider an example: the λ-term T = λf λn (n (λg g ◦ s) f 0), where s is a λ-term for the successor, is a storage operator for Church integers. It has the property that, if τ ≃β n, then T f τ will reduce, by head reduction, to f (sⁿ0).

Now, if you have to compute f τ, and you want to avoid computing τ several times, you merely replace f by T f, and thus compute T f τ instead. The head reduction will give f (sⁿ0), which means that τ has been computed first (in the form sⁿ0), and after that, given to f as an argument. In other words, the result of the computation of T f τ is the same as for f τ, but the integer τ has been called by value.

Another example of a storage operator for integers is T0 = λf λn (n (λh λg h(g ◦ s)) (λk k 0) f). U = λf λx (x (f 1) (f 0)) and U0 = λf λx (x (λg g 1) (λg g 0) f) are examples of storage operators for booleans. The precise definition of storage operators is given in Theorem 1.

It is a remarkable fact that we can give simple types to storage operators. We first define the Gödel translation F* of a formula F: it is obtained by replacing, in the formula F, each atomic formula A by ¬A (i.e. A → ⊥). For example:

Int*[x] ≡ ∀X(∀y(¬Xy → ¬Xsy), ¬X0 → ¬Xx).

It is well known that, if F is provable in classical logic, then F* is provable in intuitionistic logic.

Then, we have the following [7]:

Theorem 1 If ⊢ T : ∀x(¬Int[x] → ¬Int*[x]), then T is a storage operator for integers, i.e.

for every n ∈ N, there exists a λ-term αn ≃β n such that, for every τ ≃β n, T f τ reduces, by head reduction, to f αn[. . .] (αn[. . .] is a term obtained by some substitution in αn).

This result is, in fact, true for any data type.

Remark. Since αn is β-equivalent to a closed term (the Church integer n), each variable of αn is dummy, and any term substituted for it is never computed in the call-by-name strategy.

Storage operators can be used to represent some aspects of assignment instructions in imperative languages. Let us consider, for example, the following program in C:

x=3^3;

for (i=1;i<=N; i++) x=F(x);

where F is some programmed function from N to N. A straightforward way of translating this into λ-calculus is (N F (3 3)). But, with the call-by-name strategy, the order of computation is not respected: indeed, we want 3³ to be computed first, then F(3³), then F(F(3³)), and so on, whereas head reduction gives (F (F . . . (F (3 3)))). The order of computation is clearly not the one we want. Consider now the following translation: (T λx(N (T F) x) (3 3)). The only change is the introduction of two occurrences of the storage operator T, which play the role of the two assignment instructions x=... By head reduction (using the property of the storage operator T), this gives (N (T F) s²⁷0), then (s²⁷0 (λg g ◦ s) F 0 (λg g ◦ s) F 0 . . . (λg g ◦ s) F 0). The order of computation is now exactly what we want.
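This translated loop can be checked in Python on Church integers; here the storage operator rebuilds genuine Church numerals, F is a hypothetical stand-in for the programmed function (we take the successor), and the loop bound is 4 (all names ours, a sketch only):

```python
# The imperative loop x = F(x), iterated by N (T F), with every intermediate
# value forced into numeral form by the storage operator T.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

to_int = lambda m: m(lambda k: k + 1)(0)
zero = lambda f: lambda x: x
s = lambda n: lambda f: lambda x: f(n(f)(x))          # successor λ-term

T = lambda f: lambda n: n(lambda g: lambda x: g(s(x)))(f)(zero)

F = s                        # hypothetical programmed function: F(x) = x + 1
N = church(4)                # loop bound
three = church(3)

program = T(lambda x: N(T(F))(x))(three(three))       # (T λx(N (T F) x)) (3 3)
assert to_int(program) == 31                          # 3^3 = 27, then +1 four times
```

Note that (3 3), a Church integer applied to itself, really is 3³ = 27, and each (T F) step forces the current value before F uses it.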


4 Classical logic

Let us now consider the second order λ-calculus with classical logic. In fact, the rules of deduction are exactly the same, but we shall consider typed terms of the following form:

x1 : A1, . . . , xk : Ak, c : ∀X(¬¬X → X) ⊢ t : F

and the λ-term t, on the right-hand side, may contain the constant c.

For the sake of brevity, this typed term will be written:

x1 : A1, . . . , xk : Ak ⊢c t : F.

We want to consider such a term t as a program, and, thus, we must give a law of reduction for c when c is in head position. Indeed, ordinary head reduction will stop only when c is in head position. The rule of reduction is:

c t t1 . . . tn ≻ t (λx x t1 . . . tn), where x is a new variable.

Remark. This rule is a particular case of a general law of reduction for control operators, given in [1], which is: E[c t/x] ≻ t (λx E). But this general rule gives rise to problems with preservation of types under reduction, unless types are restricted to propositional calculus.

Remark. An instruction like exit(P) of the C programming language, which carries out the program P at the top level (by discarding the environment), is translated by c (λx P), where x does not appear in P. Indeed, when c (λx P) comes into head position, we get a λ-term of the form c (λx P) t1 . . . tn, which reduces immediately to P. Thus, the instruction exit is written as λy c (λx y). It has the type ⊥ → ∀X X: indeed, y : ⊥ ⊢ λx y : ¬¬X, and therefore y : ⊥ ⊢c c (λx y) : X for every formula X.
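The operational content of exit — a term in head position discards its whole applicative context — can be mimicked in Python with an exception caught at a simulated top level (our own simulation, not a translation of the λ-term itself):

```python
# exit(P): abandon the surrounding context t1 … tn and deliver P at top level.
class TopLevel(Exception):
    def __init__(self, payload):
        self.payload = payload

def exit_(p):                   # analogue of A = λy c (λx y)
    raise TopLevel(p)

def run(program):               # the "top level", where escapes land
    try:
        return program()
    except TopLevel as e:
        return e.payload

assert run(lambda: 1 + 2) == 3                        # normal termination
assert run(lambda: 1 + exit_("error")) == "error"     # context 1 + [] discarded
```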

Definition. The ordinary head reduction, completed with this rule, will be called head c-reduction.

Now, the problem is: given a typed term in classical logic, what kind of program is it?

We shall take the example of integers. Let us call a λ-term τ an "intuitionistic integer" if ⊢ τ : Int[n], and a "classical integer" if c : ∀X(¬¬X → X) ⊢ τ : Int[n]. If τ is an intuitionistic integer, we know that τ ≃β n, and thus we know the operational behaviour of τ.

But, when τ is a classical integer, it is no longer true that τ ≃β n. In order to recognize the integer n hidden inside τ, we have to make use of storage operators. Indeed, if T = λf λn (n (λg g ◦ s) f 0), then T f τ will reduce to f (sⁿ0) by head c-reduction, even if τ is a classical integer (this was proved, in the frame of the λµ-calculus, by M. Parigot [14]).

More generally, we have the following [9]:

Theorem 2 If ⊢ T : ∀x(¬Int[x] → ¬Int*[x]), then, for every n ∈ N, there exists a λ-term αn ≃β n such that, if τ is a classical integer of type Int[n], then T f τ reduces to f αn[. . .] by head c-reduction.

In other words, we can say that storage operators handle classical integers in exactly the same way as intuitionistic integers.

Example of a classical integer of type Int[1]: τ = λg λx (c λy (y (g (c λz (y (g x)))))).

Indeed, g : ∀y(Xy → Xsy), x : X0, y : ¬X1 ⊢c g x : X1, y (g x) : ⊥, (c λz (y (g x))) : X0 (in fact, of arbitrary type).


Thus, we get λy (y (g (c λz (y (g x))))) : ¬¬X1, which gives τ : Int[1].

I shall now try to explain the intuitive meaning of the previous theorem. In fact, when you consider typed terms in classical logic, only terms of type ⊥ must be considered as executable programs. Thus, the process of head c-reduction can only be applied to terms of type ⊥ (if we want types to be maintained during the process of reduction). Terms of other types are not executable alone; they are modules which must be combined in order to get a term of type ⊥.

Now, suppose you write a program τ in order to compute some integer, say the 100000th prime number, for example. This program will be a classical integer, i.e.:

c : ∀X(¬¬X → X) ⊢ τ : Int.

It is not of type ⊥, and thus cannot be executed alone. In fact, some operating system must take care of it, in order to launch it, to supervise its execution (hardware or software errors may occur) and to display the result, or give it to another program.

Let us represent this operating system by E. Then, the executable program is Eτ, which is of type ⊥. So E has, rather naturally, the type ¬Int. A program like E is usually called a continuation.

Now, an essential feature of such a program E is the fact that it must call the program τ by value: we clearly want that, during the execution of Eτ, the term τ be computed first, i.e. that E begins by dealing with τ, not by carrying out its own internal procedures, which may be very numerous and long. But we know how to ensure this property, by using storage operators: if E has the form T f for some f, then E will behave as we want. This is exactly the meaning of the above theorem. This also explains the interest of considering the head reduction of terms of the form T f τ, where T is a storage operator.

We shall now give some examples in order to show the use of classical logic, and of the λ-calculus with c, to represent (and give types to) some "escape" instructions of the C language, most often used in error management.

Example for the exit instruction: we have seen before that this instruction is represented by the λ-term A = λy c (λx y), which has the type ∀X(⊥ → X).

Let us consider the following program in C:

x=3^3;

for (i=1;i<=N; i++)

{if(B(x)) exit(P); else x=G(x);}

where B(x) is some boolean condition on x, and G some programmed function from N into N. P is a top-level program which manages the error condition (when B(x) is true), for example an error message.

It can be translated, as before, into (T λx(N (T F) x) (3 3)), with F = λx ((B x) (A P) (G x)).

As before, head reduction gives (s²⁷0 (λg g ◦ s) F 0 . . . (λg g ◦ s) F 0), then (F s²⁷0 (λg g ◦ s) F 0 . . .), then ((B s²⁷0) (A P) (G s²⁷0) (λg g ◦ s) F 0 . . .).

Now, if (B s²⁷0) is false, the computation goes on with ((G s²⁷0) (λg g ◦ s) F 0 . . .). But, if it is true, we get (A P (λg g ◦ s) F 0 . . .) and, eventually, P, by the reduction property of A. This is exactly the behaviour of the C program above.
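The behaviour of F = λx ((B x) (A P) (G x)) can be sketched in Python with Church booleans and an exception-based exit; B, G, P are hypothetical stand-ins of our own choosing:

```python
# F = λx ((B x) (A P) (G x)): test the condition, escape with P or go on with G.
class TopLevel(Exception):
    def __init__(self, payload):
        self.payload = payload

def exit_(p):
    raise TopLevel(p)

true  = lambda p: lambda q: p
false = lambda p: lambda q: q

B = lambda x: true if x > 100 else false      # error condition (hypothetical)
G = lambda x: 2 * x                           # programmed function (hypothetical)
P = "error: bound exceeded"                   # top-level error message

F = lambda x: B(x)(lambda: exit_(P))(lambda: G(x))()   # branches as thunks

def run(program):                # the top level, where the escape lands
    try:
        return program()
    except TopLevel as e:
        return e.payload

def loop(n, x):                  # for (i=1; i<=n; i++) x = F(x);
    for _ in range(n):
        x = F(x)
    return x

assert run(lambda: loop(2, 27)) == 108        # 27 → 54 → 108, no error
assert run(lambda: loop(4, 27)) == P          # third step trips B, exits with P
```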

The exit instruction is a rough "escape" instruction, since it is a jump to the top level and thus terminates the program. It will be used to manage severe errors. Also, its type ∀X(⊥ → X) is a rough application of the principle of "reductio ad absurdum".


In the C language, there exist more subtle "escape" instructions, which make it possible to manage errors without stopping the program. These are setjmp and longjmp. The first one sets a label and saves the environment; the second one jumps back to this label (like a goto) and restores the environment.

For example, we can modify our previous program in such a way that, when B(x) is true, it does not stop, but returns (with some message P) to the calling procedure:

x=setjmp(ptr); /* The environment is saved at the address */

/* pointed by ptr; setjmp returns the value 0.*/

if(x!=0) {P; return(x);} /* P is, for example, a program for printing */

/* an error message. */

x=3^3;

for (i=1;i<=N; i++)

{if(B(x)) longjmp(ptr,7); /* Conditional jump at the corresponding */

/* setjmp instruction; the environment given */

/* by ptr is restored. The second argument */

/* is the returned value; it must be an */

/* integer != 0. */

else x=G(x);}

Remarkably enough, these two instructions are represented by a single λ-term, which will be denoted by cc, by analogy with the call/cc instruction of the Scheme functional language.

This term is given by a proof of the formula ((A → ∀X X) → A) → A. Indeed, we have:

k : ¬A ⊢c λx (A (k x)) : A → ∀X X. Thus:

k : ¬A, r : (A → ∀X X) → A ⊢c r (λx (A (k x))) : A, k (r (λx (A (k x)))) : ⊥;

r : (A → ∀X X) → A ⊢c λk (k (r (λx (A (k x))))) : ¬¬A.

Thus, we set cc = λr c (λk (k (r (λx (A (k x)))))), and we have

⊢c cc : ((A → ∀X X) → A) → A for every formula A (in our example, A is the type of integers).

Therefore, if y : A → ∀X X ⊢c h : A, then (cc λy h t1 . . . tn) reduces to (h[λx (A (π x))/y] t1 . . . tn), where π = λz (z t1 . . . tn) represents the environment. When (A (π x)) is executed, with some value v substituted for x, the environment is restored and we get (v t1 . . . tn).

In other words, in the λ-term (cc λy h), cc plays the role of the setjmp instruction, and occurrences of the variable y in h are longjmp instructions. More precisely, we may translate, almost literally, in the C program above, the instruction setjmp(ptr) by cc λptr, and the instruction longjmp(ptr,7) by (ptr 7) (the variable y has been renamed ptr). It is interesting to notice that the pointer ptr has been given the type Int → ∀X X.
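The collapse of the pair setjmp/longjmp into the single operator cc can be simulated in Python with a one-shot escaping continuation (a standard exception-based simulation of our own, not Krivine's λ-term):

```python
# cc(body): run body with an escape handle ptr; calling ptr(v) ("longjmp")
# discards the current context and resumes at the cc ("setjmp") with value v.
class Jump(Exception):
    def __init__(self, value):
        self.value = value

def cc(body):
    def ptr(value):              # the captured continuation, typed Int → ∀X X
        raise Jump(value)
    try:
        return body(ptr)
    except Jump as e:
        return e.value

assert cc(lambda ptr: 1 + 2) == 3          # no jump: ordinary return value
assert cc(lambda ptr: 1 + ptr(7)) == 7     # jump: the context 1 + [] is lost
```

As in the λ-calculus account, the environment at the cc is what the jump restores, and the value passed to ptr is what the whole cc-expression returns.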

The translation into λ-calculus of the program above is therefore the following:

(cc λy (T λx(N (T F) x) (3 3))), with F = λx ((B x) (y 7) (G x)).


References

[1] M. Felleisen. The Calculi of λv-CS conversion: a syntactic theory of control and state in imperative higher order programming. Ph. D. dissertation, Indiana University, 1987.

[2] J.Y. Girard. Une extension de l'interprétation de Gödel à l'analyse. In: Proc. 2nd Scand. Logic Symp., p. 63-92. North Holland Pub. Co. (1971).

[3] J.Y. Girard. A new constructive logic: classical logic. Preprint.

[4] T. Griffin. A formulae-as-types notion of control. In: Conference Record of the 17th A.C.M. Symposium on Principles of Programming Languages (1990).

[5] J.L. Krivine, M. Parigot. Programming with proofs. J. Inf. Process. Cybern. EIK 26, 3, p. 149-167 (1990).

[6] J.L. Krivine. Lambda-calcul, types et modèles. Masson, Paris (1990). English translation: Ellis Horwood (1993).

[7] J.L. Krivine. Opérateurs de mise en mémoire et traduction de Gödel. Arch. Math. Logic 30, p. 241-267 (1990).

[8] J.L. Krivine. Lambda-calcul, évaluation paresseuse et mise en mémoire. Theoretical Informatics and Applications 25, 1, p. 67-84 (1991).

[9] J.L. Krivine. Classical logic, storage operators and second order λ-calculus. Ann. of Pure and Appl. Log. 68, p. 53-78 (1994).

[10] J.L. Krivine. A general storage theorem for integers in call-by-name λ-calculus. Theor. Comp. Sc. 129, p. 79-94 (1994).

[11] D. Leivant. Reasoning about functional programs and complexity classes associated with type disciplines. 24th Annual Symp. on Found. of Comp. Sc. p. 460-469 (1983).

[12] C. Murthy. Extracting constructive content from classical proofs. Ph. D. Thesis, Cornell Univ. (1990).

[13] M. Parigot. Free deduction: an analysis of computations in classical logic. Proc. Logic Progr. and Autom. Reasoning, St Petersbourg. L.N.C.S. 592, p. 361-380 (1991).

[14] M. Parigot. λµ-calculus: an algorithmic interpretation of classical natural deduction. Proc. Logic Progr. and Autom. Reasoning, St Petersbourg. L.N.C.S. 624, p. 190-201 (1992).

[15] G. Plotkin. Call-by-name, call-by-value, and the λ-calculus. Th. Comp. Sc. 1, p. 125-159 (1975).

[16] J. Reynolds. Toward a theory of type structures. Springer Lect. Notes in Comp. Sc. 19, p. 408-425 (1974).
