Since 2001, the applied pi calculus has been the basis for much further work, described in many research publications (some of which are cited below) and tutorials [3, 49, 87]. This further work includes semantics, proof techniques, and applications in diverse contexts (key exchange, electronic voting, certified email, cryptographic file systems, encrypted Web storage, website authorization, zero-knowledge proofs, and more). It is sometimes embodied in useful software, such as the tool ProVerif [31, 32, 35]. This tool, which supports the specification and automatic analysis of security protocols, relies on the applied pi calculus as its input language. Other software that builds on ProVerif targets protocol implementations, Web-security mechanisms, or stateful systems such as hardware devices [30, 22, 17]. Finally, the applied pi calculus has also been implemented in other settings, such as the prover Tamarin [76, 68].
MOVES, RWTH Aachen, Germany
Queen Mary University of London, UK
Abstract. We investigate the problem of deciding first-order theories of finite trees with several distinguished congruence relations, each of them given by a set of equational axioms. We give an automata-based solution for the case where the different equational axiom systems are linear and variable-disjoint (this includes the case where all axioms are ground), and where the logic does not permit expressing tree constraints of the form x = f(y, z). We show that the problem is undecidable when these restrictions are relaxed. As motivation and application, we show how to translate the model-checking problem of AπL, a spatial equational logic for the applied pi-calculus, to the validity of first-order formulas in term algebras with multiple congruence relations.
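To make the ground-axiom case mentioned in the abstract concrete, here is a minimal sketch, not taken from the paper, of deciding whether two finite trees are equal modulo a set of ground equational axioms, via naive congruence closure over the finite set of subterms. The term representation (nested tuples) and all function names are our own illustrative choices, not the paper's.

```python
# Terms are nested tuples: ('f', child1, child2) for f(child1, child2),
# ('a',) for a constant a. Axioms are pairs (lhs, rhs) of ground terms.

def subterms(t, acc):
    """Collect all subterms of t into the set acc."""
    acc.add(t)
    for c in t[1:]:
        subterms(c, acc)
    return acc

def congruent(t1, t2, axioms):
    """Decide t1 = t2 modulo the ground axioms, by congruence closure."""
    terms = set()
    for t in [t1, t2] + [s for eq in axioms for s in eq]:
        subterms(t, terms)
    parent = {t: t for t in terms}  # union-find over subterms

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    for lhs, rhs in axioms:
        union(lhs, rhs)

    # Close under congruence: f(s1..sn) ~ f(u1..un) whenever each si ~ ui.
    changed = True
    while changed:
        changed = False
        ts = list(terms)
        for a in ts:
            for b in ts:
                if (find(a) != find(b) and a[0] == b[0] and len(a) == len(b)
                        and all(find(x) == find(y)
                                for x, y in zip(a[1:], b[1:]))):
                    union(a, b)
                    changed = True
    return find(t1) == find(t2)
```

For instance, with the single axiom a = b, the terms f(a) and f(b) are identified by congruence, while a and an unrelated constant c are not. The naive quadratic closure loop suffices here because the subterm set is finite; efficient congruence-closure algorithms refine exactly this idea.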
CHAPTER 5. LINEARITY, PERSISTENCE AND TESTING SEMANTICS IN THE ASYNCHRONOUS PI-CALCULUS
[11, 34] are based on discrimination introduced by divergence, which is clearly ignored by the standard notion of weak bisimulation. Furthermore, the author of  suggests as future work extending SPL, which uses only persistent messages and replication, with recursive definitions, so as to be able to program and model recursive protocols such as those in [4, 73]. One can, however, give an encoding of recursion in SPL by an easy adaptation of the composition of the Aπ encoding of recursion  (where recursive calls are translated into linear Aπ outputs and recursive definitions into persistent inputs) with the encoding of Aπ into POAπ in . The resulting encoding is correct up to weak bisimulation. The encoding of Aπ into POAπ, however, introduces divergence, and hence the composite encoding does not seem to invalidate the justification for extending SPL with recursive definitions. The above works suggest that the expressiveness study of persistence is relevant but incomplete if divergence is not taken into account.
Keywords: Type Systems · Pi-calculus · Process Calculi · Complexity Analysis · Implicit Computational Complexity · Size Types
The problem of certifying time complexity bounds for programs is a challenging question, related to the problem of statically inferring time complexity, and it has been extensively studied in the setting of sequential programming languages. One particular approach to these questions is that of type systems, which offers the advantage of providing an analysis that is formally grounded, compositional, and modular. In the functional framework, several rich type systems have been proposed such that, if a program can be assigned a type, then one can extract from the type derivation a complexity bound for its execution on any input (see e.g. [21, 25, 22, 20, 6, 4]). The type system itself thus provides a complexity certification procedure, and if a type inference algorithm is also provided, one obtains a complexity inference procedure. This research area is also related to implicit computational complexity, which aims at providing type systems or static criteria characterizing complexity classes within a programming language (see e.g. [24, 13, 33, 18, 15]), and which has in some cases later inspired a complexity certification or inference procedure.
AKIRA YOSHIMIZU, INRIA Sophia Antipolis, France
We introduce a type system for the π-calculus which is designed to guarantee that typable processes are well-behaved, namely they never produce a run-time error and, even if they may diverge, there is always a chance for them to "finish their work", i.e., to reduce to an idle process. The introduced type system is based on non-idempotent intersections, and is thus very powerful with respect to the class of processes it can capture. Indeed, despite the fact that the underlying property is Π⁰₂-complete, there is a way to show that the system is complete, i.e., that any well-behaved process is typable, although for obvious reasons infinitely many derivations need to be considered.
This paper is organized as follows: in Section II we introduce the syntax and the labelled transition semantics of the reversible π-calculus, and in Section III we show its main properties. In Section IV, we then define the notion of equivalence up to permutation that is induced by the semantics of our calculus. We then show that backtracking can be done along any path that is equivalent to the forward computation. In Section V we discuss the notion of causality induced by our semantics and show that it is maximally liberal with respect to the structural causality of the reduction semantics. In Section VI we conclude with some perspectives that our work suggests. Although this
We claim that our approach to the semantics of the Sπ-calculus is rather natural and mathematically robust; however, we cannot claim that it is more canonical than, say, the weak, early bisimulation semantics of the π-calculus. We have chosen to explore a path following our mathematical taste, but, as in the π-calculus, other paths could be explored. In this respect, we will just mention three directions. First, one could remark that condition (B1) in definition 5 makes it possible to observe the branching structure of a program and argue that only suspended programs should be observed. This would lead us towards a failure semantics/testing scenario [13, 9] (in the testing semantics, a program that cannot perform internal reductions is called stable, and this is similar to a suspended program in the synchronous context). Second, one could require that program equivalence is preserved by all contexts, not just the static ones, and proceed to adapt, say, the concept of open bisimulation  to the present language. Third, one could plead for reduction congruence  rather than for contextual bisimulation and then try to see whether the two concepts coincide following . We refer to the literature for standard arguments concerning bisimulation vs. testing semantics (e.g., ), early vs. open bisimulation (e.g., ), and contextual vs. reduction bisimulation (e.g., ).
rule is simply to open the box.
In a joint work with Kohei Honda [HL06], the second author proposed a translation of a version of the π-calculus into proof-nets for a version of linear logic extended with the cocontraction rule. The basic idea consists in interpreting parallel composition as a cut between a contraction link (to which several emitters are connected, through dereliction links) and a cocontraction link, to which several promoted receivers are connected. Being promoted, these receivers are replicable, in the sense of the π-calculus. The other fundamental idea of this translation consists in using linear logic polarities to distinguish emitters (negative) from receivers (positive), and in imposing a strict alternation between these two polarities. This makes it possible to recast, in a polarized linear logic setting, a typing system for the π-calculus previously introduced by Berger, Honda and Yoshida in [BHY03].
Typed Conversion vs. Untyped Conversion When designing the Colored λΠ-Calculus Modulo, we have chosen to constrain the conversion to contain only weakly well-typed terms, because weak subject reduction makes the set of weakly well-typed terms easy to manipulate. Another approach would be to constrain the conversion to contain only well-typed terms. This approach is the one used by Martin-Löf's Type Theory [NPS90]. In this case, reduction is typed: rewriting and typing are mutually defined. The relation between systems with a typed reduction and systems with an untyped reduction is not easy to establish. It has been studied by Adams [Ada06] and by Siles and Herbelin [SH12]. They showed that, in the case of pure type systems with β-reduction, the two approaches (typed and untyped) are equivalent: the sets of well-typed terms are the same. Their approach relies on a proof of confluence of the β-reduction based on parallel moves. We conjecture that their proof can be adapted for the λΠ-Calculus Modulo when the rewriting relation → βΓ is parallel-
Whereas increased expressiveness of a modeling language will typically ease the development of models, it also requires additional support for developing models, e.g. to ensure type consistency, and burdens model analysis and simulation. Here we followed a tradition in concurrent programming languages of combining a process level with a sequential core language (expression level). Since only λ-calculi with types of low (first or second) order are used in practice, we believe that our extension is justified. This holds in particular when accepting the π-calculus as a starting point, since it is higher-order anyway.
Outline. In Section 2 we present the π-calculus with priorities and the stochastic π-calculus in a uniform manner. We start from an ordered set (R, <) whose elements may be either priorities or stochastic rates. We provide a unified syntax for processes in both calculi, in which communication prefixes (rather than channels) are annotated by values of R. We then present two operational semantics for the same syntax: a non-deterministic semantics as for the π-calculus with priorities, and a stochastic semantics as for the stochastic π-calculus.
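The idea of annotating communication prefixes, rather than channels, with values from an ordered set (R, <) can be sketched as follows. This is a minimal illustration, not the paper's formal syntax: the class names, the use of int for priorities and float for rates, and the min-based matching of send/receive annotations are all our own assumptions.

```python
# Sketch: a unified prefix syntax where each communication prefix carries an
# annotation r from an ordered set (R, <), read either as a priority (int,
# non-deterministic semantics) or a stochastic rate (float, stochastic
# semantics). Details are illustrative, not the paper's definitions.
from dataclasses import dataclass
from typing import Union

Annotation = Union[int, float]  # priority level or stochastic rate

@dataclass
class Send:
    channel: str
    payload: str
    ann: Annotation  # annotation on the prefix, not on the channel

@dataclass
class Receive:
    channel: str
    binder: str
    ann: Annotation

def enabled(pairs):
    """Non-deterministic (priority) reading: among the matching
    send/receive pairs, keep only those of maximal priority.
    Here we take the min of the two prefixes' annotations as the
    priority of the interaction -- an illustrative convention."""
    prios = [min(s.ann, r.ann) for s, r in pairs]
    top = max(prios)
    return [p for p, pr in zip(pairs, prios) if pr == top]
```

Under the stochastic reading, the same `ann` fields would instead feed a race between exponentially distributed delays; the point of the uniform syntax is that only the interpretation of R changes, not the process terms themselves.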