In abstract rewriting, also called abstract reduction systems, a considerable amount of theory has been developed covering these basic topics (the Church-Rosser property, Newman's lemma, etc.). However, all results developed for abstract reduction systems concern congruences, usually called Thue congruences. Here, we propose to apply rewriting techniques to compute over a class of algebraic structures larger than congruences and preorders, such as congruences closed under monotonicity and modus ponens (see Section 9 of this paper). Moreover, the idea of axiomatizing rewriting is not new and has been pursued with success, especially in the world of the λ-calculus. Such axiomatizations deal with algebraic structures which are also congruences. The main works in this area are J.-J. Lévy's residual theory [40] and its extension by P.-A. Melliès [42]. In residual theory, the structure of the objects to be rewritten is abstracted through the notion of redex (i.e. a place in rewritten objects which can be reduced by a rewrite rule). In this setting, many key properties of the λ-calculus, or of more general rewrite systems, have been generalized, such as the Church-Rosser theorem [40, 42], the standardization theorem [30] and the stability theorem [43]. The main goal of these works was not to generate convergent and complete rewrite systems which answer the validity problem. However, we share the axiomatic method with them: we formulate through axioms a small number of simple properties which are shared by the different settings and which are needed to yield the fundamental results mentioned above. Finally, concerning the completion process, we can cite N. Dershowitz and C. Kirchner's work [22, 15], which generalizes the proof-ordering method to an abstract setting of arbitrary formal systems.
This last work places itself downstream of our work, and thus completes it, in the sense that [22, 15] fix the inference rules and the ordering on proofs, whereas we give axioms to build such an ordering (see Proposition 6.11, Corollary 6.12, and Theorem 8.8). Indeed, [15, 22] aim to give an abstract form to completion processes, whereas we are interested in rewriting from every angle.
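As a concrete (if trivial) instance of the abstract-reduction-system setting discussed above, here is a sketch of a finite ARS together with the local-confluence check behind Newman's lemma; all names and the example system are ours, purely for illustration:

```python
from itertools import product

# A finite abstract reduction system (ARS) given as a one-step relation.
steps = {            # a -> b edges of the reduction relation
    "a": {"b", "c"},
    "b": {"d"},
    "c": {"d"},
    "d": set(),
}

def reducts(x):
    """All elements reachable from x in zero or more steps."""
    seen, todo = {x}, [x]
    while todo:
        y = todo.pop()
        for z in steps.get(y, ()):
            if z not in seen:
                seen.add(z)
                todo.append(z)
    return seen

def locally_confluent():
    """Newman's-lemma hypothesis: every one-step peak b <- a -> c can be
    joined, i.e. the reducts of b and of c have a common element."""
    for a, succs in steps.items():
        for b, c in product(succs, repeat=2):
            if not (reducts(b) & reducts(c)):
                return False
    return True

print(locally_confluent())  # True: the peak b <- a -> c joins at d
```

Since this system is also terminating, Newman's lemma lets one conclude it is confluent outright.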

§ Department of Electrical and Information Engineering
Politecnico di Bari, Via Orabona 4 - 70125, Bari, Italy
Abstract—Methodologies for correct-by-construction reconfigurations can efficiently solve consistency issues in dynamic software architectures. Graph-based models are appropriate for designing such architectures and methods. At the same time, they may be unfit to characterize a system from a non-functional perspective. This stems from efficiency and applicability limitations in handling time-varying characteristics and their related dependencies. In order to lift these restrictions, an extension to graph rewriting systems is proposed herein. The suitability of this approach, as well as the limitations of currently available ones, is illustrated, analysed and experimentally evaluated with reference to a concrete example. This investigation demonstrates that the conceived solution can: (i) express any kind of algebraic dependencies between evolving requirements and properties; (ii) significantly improve the efficiency and scalability of system modifications with respect to classic methodologies; (iii) provide efficient access to attribute values; (iv) be fruitfully exploited in software management systems; (v) guarantee theoretical properties of a grammar, such as its termination.

III. The Abstract Machine KGRAM
A. KGRAM's Interfaces
KGRAM accesses the graph through an abstract API that hides the graph's structure and implementation. In other words, KGRAM operates on a graph abstraction by means of abstract structures and functions, and it ignores the internal structure of the nodes and edges it manipulates when evaluating a query expression over a target graph. More precisely, the target graph is accessed by node and edge iterators that implement the Node and Edge interfaces of KGRAM. These are the very same interfaces that operationalize the NODE and EDGE expressions. As a result, KGRAM can process any kind of knowledge graph, in particular conceptual graphs (with n-ary relations) as well as RDF graphs (with binary relations).
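A hedged sketch of what such an iterator-based graph abstraction might look like; the class and method names below are ours, not KGRAM's actual API, and the backend is a toy RDF-like triple store:

```python
from abc import ABC, abstractmethod

class Node(ABC):
    @abstractmethod
    def label(self) -> str: ...

class Edge(ABC):
    @abstractmethod
    def relation(self) -> str: ...
    @abstractmethod
    def nodes(self) -> tuple: ...   # n-ary: any number of endpoints

class Producer(ABC):
    """The only way the evaluator sees the target graph: edge iterators."""
    @abstractmethod
    def edges(self, relation: str): ...

# A trivial RDF-like backend: binary edges stored as triples.
class TripleNode(Node):
    def __init__(self, l): self._l = l
    def label(self): return self._l

class Triple(Edge):
    def __init__(self, s, p, o):
        self._s, self._p, self._o = TripleNode(s), p, TripleNode(o)
    def relation(self): return self._p
    def nodes(self): return (self._s, self._o)

class TripleStore(Producer):
    def __init__(self, triples): self._ts = [Triple(*t) for t in triples]
    def edges(self, relation):
        return (e for e in self._ts if e.relation() == relation)

# The evaluator matches an EDGE expression using only the interfaces above,
# never the backend's internal representation.
store = TripleStore([("alice", "knows", "bob"), ("bob", "knows", "carol")])
matches = [tuple(n.label() for n in e.nodes()) for e in store.edges("knows")]
print(matches)  # [('alice', 'bob'), ('bob', 'carol')]
```

Swapping `TripleStore` for a conceptual-graph backend with n-ary edges would leave the evaluation loop unchanged, which is the point of the abstraction.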

In [?], an abstraction refinement method is used on transition systems for verifying invariants, with a technique combining model checking, abstraction and deductive verification. Contrary to the three previous articles, the authors do not consider liveness properties, and the spurious-counterexample analysis is done with a backward method. In [?], as part of predicate abstraction, predicates are automatically discovered by analysing spurious counterexamples. The method presented in this paper is close to the above methods, but it works on different data structures.

IIIA-CSIC, Bellaterra, Catalonia, Spain
2 INRIA, Université de Lorraine and LORIA, Nancy, France
Abstract. We introduce a combination method à la Nelson-Oppen to solve the satisfiability problem modulo a non-disjoint union of theories connected with bridging functions. The combination method is particularly useful to handle verification conditions involving functions defined over inductive data structures. We investigate the problem of determining the data structure theories for which this combination method is sound and complete. Our completeness proof is based on a rewriting approach where the bridging function is defined as a term rewrite system, and the data structure theory is given by a basic congruence relation. Our contribution is to introduce a class of data structure theories that are combinable with a disjoint target theory via an inductively defined bridging function. This class includes the theory of equality, the theory of absolutely free data structures, and all the theories in between. Hence, our non-disjoint combination method applies to many classical data structure theories admitting a rewrite-based satisfiability procedure.
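To illustrate the kind of bridging function meant here, a sketch in our own encoding (not the paper's formalism): lists built from `nil`/`cons` as an absolutely free data structure, with `length` bridging them to the arithmetic target theory via two rewrite rules:

```python
# Constructor terms of the data structure theory.
nil = ("nil",)
def cons(h, t): return ("cons", h, t)

def length(t):
    """Bridging function as a term rewrite system:
       length(nil)        -> 0
       length(cons(x, y)) -> 1 + length(y)"""
    if t[0] == "nil":
        return 0
    if t[0] == "cons":
        return 1 + length(t[2])
    raise ValueError("not a list constructor term")

xs = cons("a", cons("b", cons("c", nil)))
print(length(xs))  # 3: length rewrites to 1 + 1 + 1 + 0
```

The two rules are an inductive definition over the constructors, which is exactly the shape of bridging function the combination method targets.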

In this paper, we first define a new, simple framework for discrete probabilistic reduction systems, which properly generalizes standard abstract reduction systems [25]. In particular, what plays the role of a reduction sequence, usually a (possibly infinite) sequence a1 → a2 → . . . of states, is a sequence μ1, μ2, . . . of (multi)distributions over the set of states. A multidistribution is not merely a distribution, and this is crucial to appropriately account for both the probabilistic behaviour of each rule and the nondeterminism in rule selection. Such a correspondence does not exist in Bournez and Garnier's framework, where nondeterminism has to be resolved by a strategy in order to define reduction sequences. However, the two frameworks turn out to be equiexpressive, at least as long as every rule has finitely many possible outcomes. We then prove that probabilistic ranking functions [4] are sound and complete for proving strong almost-sure termination, a strengthening of positive almost-sure termination [4]. We moreover show that ranking functions provide bounds on expected runtimes. This paper's main contribution, then, is the definition of a simple framework for probabilistic term rewrite systems as an instance of this abstract framework. Our main aim is to study whether any of the well-known techniques for termination of term rewrite systems can be generalized to the probabilistic setting, and whether they can be automated. We give positive answers to these two questions by describing how polynomial and matrix interpretations can indeed be turned into instances of probabilistic ranking functions, thus generalizing them to the more general context of probabilistic term rewriting. We moreover implement these new techniques in the termination tool NaTT [26]. The implementation and an extended version of this paper [?] are available at http://www.trs.cm.is.nagoya-u.ac.jp/NaTT/probabilistic.
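The role of the sequence μ1, μ2, . . . can be illustrated with a small sketch in our own encoding: a multidistribution is a list of (probability, state) pairs, and a probabilistic rule maps a state to a distribution of successors. For simplicity, mass on terminal states is dropped here, so the total mass of μi is the probability of not having terminated after i steps:

```python
def rule(n):
    """A random-walk rule on naturals: n > 0 steps to n-1 or n+1, each w.p. 1/2."""
    if n == 0:
        return None          # 0 is a normal form
    return [(0.5, n - 1), (0.5, n + 1)]

def step(mu):
    """One reduction step mu_i -> mu_{i+1} on multidistributions:
    terminal states drop out, the others split according to the rule."""
    nxt = []
    for p, s in mu:
        succ = rule(s)
        if succ is not None:
            nxt.extend((p * q, t) for q, t in succ)
    return nxt

mu0 = [(1.0, 1)]
mu1 = step(mu0)   # [(0.5, 0), (0.5, 2)]
mu2 = step(mu1)   # mass on the normal form 0 is gone: [(0.25, 1), (0.25, 3)]
print(mu2)
```

Note that a list, rather than a set or a merged distribution, is used deliberately: keeping duplicate (probability, state) pairs apart is precisely what distinguishes a multidistribution from a distribution.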

c University of Bordeaux, CNRS UMR5800 LaBRI, France
Abstract
We develop an algebraic approach, based on labelled-graph strategic rewriting, for the study of social networks, specifically network generation and propagation mechanisms. This approach sheds new light on these problems and leads to new or improved generation and propagation algorithms. We argue that the relevant concepts are provided by three ingredients: labelled graphs to represent networks of data or users, rewrite rules to describe concurrent local transformations, and strategies to express control. We show how these techniques can be used to generate random networks that are suitable for social network analysis, simulate different propagation mechanisms, and analyse and compare propagation models by extracting common rules and differences, thus leading to improved algorithms. We illustrate the flexibility of the approach with examples.
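The three ingredients can be conveyed by a toy propagation example in our own encoding (not the paper's notation): a labelled graph, one local rewrite rule, and a "repeat" strategy controlling it:

```python
# Ingredient 1: a labelled graph.
edges = {("a", "b"), ("b", "c"), ("c", "d")}
labels = {"a": "active", "b": "idle", "c": "idle", "d": "idle"}

# Ingredient 2: a local rewrite rule (LHS match + RHS relabelling).
def match(labels):
    """LHS: an edge whose source is active and whose target is idle."""
    for u, v in sorted(edges):
        if labels[u] == "active" and labels[v] == "idle":
            return v
    return None

def apply_rule(labels, v):
    """RHS: relabel the matched node as active."""
    out = dict(labels)
    out[v] = "active"
    return out

# Ingredient 3: a strategy expressing control.
def repeat(labels):
    """Apply the rule as long as it matches."""
    v = match(labels)
    while v is not None:
        labels = apply_rule(labels, v)
        v = match(labels)
    return labels

final = repeat(labels)
print(final)  # propagation reaches every node: all labels are 'active'
```

Replacing `repeat` with a bounded or randomized strategy would model a different propagation mechanism over the same rule, which is the separation of concerns the approach advocates.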

In this paper, we present a solution to this challenge by mixing elements of separation logic and shape analysis, and integrating them into an abstract interpretation framework. Separation logic by itself is not adequate for describing the inter-connected heaps of JavaScript. First, separation logic is based on some additional structures, such as lists or trees. For JavaScript, such structures can be difficult to identify, as illustrated by Gardner et al. [?]. Second, JavaScript native structures tend not to separate nicely. Gardner et al. propose to remedy this through a partial separation operator ⊔⋆ ("seppish"). The formula 𝑃 ⊔⋆ 𝑄 describes a heap which can be split into two heaps, one satisfying 𝑃 and the other 𝑄; but these two heaps do not need to be disjoint. Here, we pursue this idea, but instead of introducing a new operator into separation logic, we inject ideas from shape analyses and use summary nodes for modelling the portion of memory that may be shared. In this way, we move the approximation into the shape structures while keeping a precise separation operator ⋆.

Fig. 1. Query pattern asking for events and their performers, where musical works have been performed (or conversely), with the corresponding artist(s) and release date.
an equivalent subgraph corresponding to a logical statement relating concepts and properties of the target ontology. This statement is the target part of the correspondence, if any exists in the alignment, whose source part matches the initial subpattern. But the subpatterns and the correspondences in the alignment may not have the same granularity (correspondences can be either simple or can relate smaller subgraphs). Thus, we define an algorithm, similar to a depth-first search (DFS) for traversing and searching graph data structures, over the input query patterns. It starts at the largest subgraph, i.e. the subpattern, and recursively explores its subgraphs (i.e. subpattern > RDF triples > classes and properties), until either a correspondence is found for the considered subgraph (in which case the target graph is written to the subpattern being output) or a class or property is reached. If at the end of this process there are entities that have not been translated, the whole subpattern is discarded¹.
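The largest-subgraph-first traversal just described can be sketched as follows; the encoding of subpatterns and alignments is ours and purely illustrative:

```python
# An alignment mapping source fragments to target fragments at several
# granularities: a whole triple, or a single class/property.
alignment = {
    ("triple", "?e", "performs", "?w"): ("triple", "?e", "hasPerformance", "?w"),
    "type": "type",            # identity correspondence for this property
    "Artist": "MusicArtist",   # a simple class correspondence
}

def translate(node, failures):
    """Use a correspondence if one covers this node, otherwise recurse into
    its subgraphs; record classes/properties left untranslated."""
    if node in alignment:
        return alignment[node]
    if isinstance(node, tuple) and node[0] == "pattern":
        return ("pattern", tuple(translate(c, failures) for c in node[1]))
    if isinstance(node, tuple) and node[0] == "triple":
        return tuple(translate(c, failures) for c in node)
    if isinstance(node, str) and not node.startswith("?") and node != "triple":
        failures.append(node)   # entity with no correspondence
    return node

failures = []
src = ("pattern", (("triple", "?e", "performs", "?w"),
                   ("triple", "?e", "type", "Artist")))
out = translate(src, failures)
print(out, failures)  # fully translated pattern, no failures
```

If `failures` were non-empty after the traversal, the whole subpattern would be discarded, matching the behaviour described above.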

We have also proposed a classical erasing function [Cur34, Lei83, GR88] that can be applied to RhoF in order to obtain a corresponding type inference system à la Curry. In Rho|F|, the calculus à la Curry, type information is not given in the term, and the type system is not fully syntax-directed, thus enabling a flexible polymorphic type discipline. When we look at the ρ-calculus as a kernel calculus underlying a pattern-matching-based programming language, this approach corresponds to ELAN, Maude, Obj∗, ASF+SDF, Haskell, or ML-like languages, where the user can write programs in a completely untyped language and types are automatically inferred at compile time. Type inference can also be understood as the construction of an abstract interpretation of the program, which can be used as a correctness criterion. Unfortunately, as is well known for the λ-calculus [Wel99], the type assignment problem for Rho|F| is undecidable.
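On a toy Church-style λ-syntax (our own encoding, not RhoF itself), such a classical erasing function is just a recursive traversal that drops the type annotation on every binder, yielding the corresponding term à la Curry:

```python
def erase(t):
    """|x| = x,   |λx:T. M| = λx. |M|,   |M N| = |M| |N|"""
    tag = t[0]
    if tag == "var":
        return t
    if tag == "lam":                     # ("lam", x, T, body)
        _, x, _ty, body = t
        return ("lam", x, erase(body))   # the annotation T is erased
    if tag == "app":
        return ("app", erase(t[1]), erase(t[2]))
    raise ValueError(tag)

church_id = ("lam", "x", "a", ("var", "x"))   # λx:a. x
print(erase(church_id))                       # ('lam', 'x', ('var', 'x')) = λx. x
```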

• Examples and insights: we provide more examples of machine executions, together with refined explanations and insights. In particular, we stress the commutation between evaluation and the substitution of inert terms as the key abstract property leading to reasonable machines for Open CbV.
• Minimality of the cost model: formal evidence that the number of steps in the fireball calculus is a minimal time cost model (in Sect. 10). Technically speaking, we do not prove minimality: that would require a proof of the non-existence of asymptotically faster implementations, and it is not even clear how one could prove it. Nonetheless, our rigorous examples show that a more parsimonious cost model would require some radically stronger implementation technology. At the end of the paper, Appendix A contains a glossary of rewriting theory and an explanation of some notations.

et al., 1987; Ariola and Klop, 1996) and from a categorical/logical point of view (Corradini and Gadducci, 1999) (see also (Sleep et al., 1993) for a survey of term graph rewriting).
In this context, an abstract model generalising the λ-calculus and adding cycles and sharing features has been proposed in (Ariola and Klop, 1997). Their approach consists of an equational framework that models the λ-calculus extended with explicit recursion. A λ-graph is treated as a system of recursion equations involving λ-terms, and rewriting is described as a sequence of equational transformations. This work allows for the combination of graphical structures with the higher-order capabilities of the λ-calculus. One last important ingredient is still missing: pattern matching. The possibility of discriminating by pattern matching could be encoded, in particular in the λ-calculus, but it is much more attractive to discriminate directly and indeed to use rewriting. Programs become quite compact, and the encoding of data type structures is no longer necessary.
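The recursion-equation view can be conveyed by a toy encoding of ours (far simpler than Ariola and Klop's actual framework): a cyclic graph such as g = f g becomes a named equation, and rewriting may unfold references finitely:

```python
# A cyclic term graph as a system of recursion equations.
eqs = {
    "g": ("app", ("var", "f"), ("ref", "g")),   # g = f g
}

def unfold(t, depth):
    """Finitely unfold equation references; a cycle yields ever deeper terms."""
    if depth == 0:
        return t
    if t[0] == "ref":
        return unfold(eqs[t[1]], depth - 1)
    if t[0] == "app":
        return ("app", unfold(t[1], depth), unfold(t[2], depth))
    return t

print(unfold(("ref", "g"), 2))  # f (f g), with the cycle re-entered once
```

The sharing expressed by the equation (every occurrence of `g` denotes the same node) is exactly what a plain finite λ-term cannot represent.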

Computer science involves data structures that are usually represented syntactically as simple terms. It is interesting to develop such a tool for more than just a graphic representation: using graph rewriting, term rewriting could be studied with the rich and powerful layer of language that graph structure provides. In the following, Section 2 presents the type of graphs we use; Section 3 then presents the dynamic logic for graph rewriting, its syntax and its semantics. Section 4 presents the rewriting system, together with propositions for reasoning about it logically, and the difficulties of expressing these concepts in the logic as originally introduced. Graph homomorphisms, rewriting steps and their application to a graph, together with the definition of rewriting rules, the matching of rules and normal forms, actually lead to some divergences between actual rewriting and its translation into the logic. Section 4 also discusses and proposes some solutions to these issues.

Isabelle.Gnaedig@loria.fr
ABSTRACT
Introducing priorities on rules in rewriting increases their expressive power and helps to limit computations. Priority rewriting is used in rule-based programming as well as in functional programming. Termination of priority rewriting is thus important to guarantee that programs give a result. We describe in this paper an inductive proof method for termination of priority rewriting, relying on an explicit induction on the termination property. It works by generating proof trees, modeling the rewriting relation by using abstraction and narrowing. As it specifically handles priorities on the rules, our technique allows proving termination of term rewrite systems that would diverge without priorities.
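A minimal sketch of the priority mechanism itself (the rules and names are ours, not the paper's): rules are ordered, and a lower-priority rule may fire only where every higher-priority rule fails to match:

```python
# Rules for a unary symbol f, listed from highest to lowest priority.
rules = [
    (("f", "zero"), "zero"),   # f(0) -> 0
    (("f", "any"), "one"),     # f(x) -> 1, only where the rule above fails
]

def rewrite(arg):
    """Apply the highest-priority matching rule to f(arg), if any."""
    term = ("f", arg)
    for lhs, rhs in rules:              # priority = position in the list
        if lhs[1] == term[1] or lhs[1] == "any":
            return rhs
    return term                          # no rule applies: a normal form

print(rewrite("zero"), rewrite("succ_zero"))  # zero one
```

Without the ordering, both rules would overlap on f(0) and the system would be nondeterministic; the priority resolves the overlap, which is also what makes termination analysis specific to this setting.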

Pivot model refactoring
Model transformations are implemented as rewriting operations over pivot models.
For the sake of clarity, we will present a few operations using an imperative pseudo-code style, while specific transformation languages are used in practice. The main interest of the concept hierarchy is to provide navigation mechanisms through models. For instance, it is immediate to iterate over the set of variables of a constraint, since this information is gathered in the corresponding abstract constraint concept (see e.g. Algorithm 2). It is therefore possible to manipulate models globally, which is very powerful.
Object flattening. This refactoring step replaces object instances, namely variables whose type is a class, by all elements defined in the class definition (variables, constants, constraints and other statements). In order to prevent name conflicts, named elements are prefixed with the name of the object instance.
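In the same imperative pseudo-code spirit, the flattening step might be sketched as follows; all names are illustrative, and constraint renaming is done by naive string replacement rather than a real parser:

```python
# A class declares typed fields and constraints over them.
classes = {
    "Point": {"variables": {"x": "int", "y": "int"},
              "constraints": ["x >= 0", "y >= 0"]},
}
model = {"variables": {"p": "Point", "n": "int"}, "constraints": ["n > 0"]}

def prefix(constraint, obj, field_names):
    """Rename class fields with the instance name to avoid clashes."""
    for v in field_names:
        constraint = constraint.replace(v, f"{obj}.{v}")
    return constraint

def flatten(model, classes):
    out = {"variables": {}, "constraints": list(model["constraints"])}
    for name, ty in model["variables"].items():
        if ty in classes:                          # an object instance: inline it
            cls = classes[ty]
            for v, vty in cls["variables"].items():
                out["variables"][f"{name}.{v}"] = vty
            for c in cls["constraints"]:
                out["constraints"].append(prefix(c, name, cls["variables"]))
        else:
            out["variables"][name] = ty
    return out

flat = flatten(model, classes)
print(flat["constraints"])  # ['n > 0', 'p.x >= 0', 'p.y >= 0']
```

After flattening, `p` no longer exists as a variable; only the prefixed fields `p.x` and `p.y` and their renamed constraints remain.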

1.2 Higher-dimensional rewriting
1.2.1 Squier’s theory and coherence theorems
Squier's homotopical theorem deals with the rewriting of monoids. Since then, it has been extended to other kinds of structures, such as algebras [37] or higher-dimensional categories [38, 39]. The latter is particularly interesting because it can be used to prove coherence theorems for weak structures. A mathematical structure, such as the notion of monoid or algebra, is often defined as some data satisfying relations. In the case of monoids, the data is a set and a binary operation, and the relations are the associativity and unit axioms. In category theory, one often considers relations that hold only up to isomorphism. One of the simplest examples of such a structure is that of monoidal categories, in which the product is not associative, but instead there exist isomorphisms α_{A,B,C} : (A ⊗ B) ⊗ C → A ⊗ (B ⊗ C). This additional data
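In standard notation, the strict monoid axioms are thus replaced in a monoidal category by coherence isomorphisms, namely the associator together with the left and right unitors:

```latex
\alpha_{A,B,C} \colon (A \otimes B) \otimes C \xrightarrow{\;\sim\;} A \otimes (B \otimes C),
\qquad
\lambda_A \colon I \otimes A \xrightarrow{\;\sim\;} A,
\qquad
\rho_A \colon A \otimes I \xrightarrow{\;\sim\;} A
```

These isomorphisms are themselves subject to coherence conditions (the pentagon and triangle axioms), which is where coherence theorems enter the picture.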


The efficient generation of proof hints, which we call certificates, is described elsewhere [8], along with an experimental evaluation of the overall abstract domain. The main contribution of the work described here is the design of the link between the Coq frontend and the untrusted backend. It avoids the conversion and transfer of polyhedra. This makes the coupling between the frontend and the backend very loose. As a result, building other certificate-producing backends is easy and has no impact on the Coq frontend code. Complete freedom is given in the choice of data structures: a backend could use the constraint or the double representation of polyhedra. Furthermore, since the backend does not give formal precision guarantees, a backend could implement relaxations of the domain operators [9, 10], trading precision for efficiency.

The main challenge experts address when designing the rules is whether the rules adequately model system behavior. Answering this question requires running a simulation (M3, G3). Although a user must be able to apply a single rule to a graph, it must also be possible to trigger the application of a derivation (G3, R3). Deciding which rules to apply, how to combine them and how long they should be iterated has an obvious effect on the possible computation outcome. Defining an evolution scenario (M2) translates into specifying what/when/where rules apply, which in turn corresponds to a rewriting strategy (G2). A strategy is described by a formal grammar and specifies a set of admissible rules and how they combine into sequences. We thus needed to support the definition and editing of strategies, expressed using a formal language including control structures and specific operators (R2). Designing formal languages to specify rewriting strategies is beyond the scope of this paper and forms a whole chapter of port graph rewriting. More details on PORGY's rewriting strategy language, developed by our partners, may be found in [FKN12].
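To convey the idea of a strategy language with control structures, here is a toy combinator sketch of our own invention (not PORGY's actual strategy language): rules are partial functions on a state, and combinators decide how and how long they apply:

```python
def rule_inc_small(state):
    """A 'rule': applies only to states below 3; fails (None) otherwise."""
    return state + 1 if state < 3 else None

def seq(s1, s2):
    """s1 ; s2 -- apply s1, then s2 on its result; fail if either fails."""
    def run(state):
        r = s1(state)
        return s2(r) if r is not None else None
    return run

def repeat(s):
    """repeat(s) -- apply s as long as it succeeds; never fails."""
    def run(state):
        r = s(state)
        while r is not None:
            state, r = r, s(r)
        return state
    return run

strategy = repeat(rule_inc_small)
print(strategy(0))  # 3: the rule applies three times, then fails, so repeat stops
```

Sequencing, iteration and failure handling are exactly the kind of control operators a full strategy grammar makes available over graph rewrite rules.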
