The multidisciplinary open archive HAL is intended for the deposit and dissemination of research-level scientific documents, whether published or not, originating from French or foreign teaching and research institutions, and from public or private research centers.

Reduction Operators without a Total Order. The Knuth-Bendix completion algorithm does not require a total order on terms, which implies that it can fail: at some point of the algorithm, one may obtain two normal forms t1 and t2 of a given term that cannot be compared under a fixed non-total order. The same phenomenon holds for reduction operators when we do not assume that the order on G is total. In this case, the restriction of the kernel map to reduction operators is not onto. In Section 4, we deduce two important consequences of this fact. The first is that the lattice structure may fail to exist. The second is more subtle: even if a set admits a lower bound, the latter does not necessarily have the "right shape"; that is, this lower bound does not necessarily come from the lattice structure on the set of subspaces of KG. As a consequence, the F-complement is not always defined. However, the existence of a lower bound with the right shape is sufficient to guarantee that it exists.
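A minimal Python sketch (with hypothetical string-rewriting rules, not the paper's reduction operators) of how the absence of a total order can leave a term with two distinct normal forms: the rules b → a and b → c cannot be oriented against each other, and b reduces to both.

```python
# Hypothetical string-rewriting rules; rewrite_once / normal_forms are
# illustrative helpers, not constructions from the paper.
def rewrite_once(term, rules):
    """All terms reachable from `term` in one rewrite step."""
    out = set()
    for lhs, rhs in rules:
        i = term.find(lhs)
        while i != -1:
            out.add(term[:i] + rhs + term[i + len(lhs):])
            i = term.find(lhs, i + 1)
    return out

def normal_forms(term, rules):
    """Rewrite exhaustively and collect the terms no rule applies to."""
    seen, stack, nfs = {term}, [term], set()
    while stack:
        t = stack.pop()
        succ = rewrite_once(t, rules)
        if not succ:
            nfs.add(t)
        for s in succ - seen:
            seen.add(s)
            stack.append(s)
    return nfs

# b -> a and b -> c: two normal forms that a non-total order cannot compare.
print(sorted(normal_forms("b", [("b", "a"), ("b", "c")])))  # ['a', 'c']
```

Completion would resolve such a peak by orienting an equation between the two normal forms, which is exactly what a non-total order may fail to allow.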

[ paint(car(electric, !suv)) → blue, paint(car(!diesel, !suv)) → white, paint(x) → red ]
Similarly to plain term rewriting systems (TRS), i.e. TRS without anti-patterns and ordered rules, it is interesting to analyze the extended systems w.r.t. their confluence, termination and reachability properties, for example. Generally, well-established techniques and (automatic) tools used in the plain case cannot be applied directly in the general case. There have been several works in the context of functional programming, for example [13, 9, 8, 1] to cite only a few, but they are essentially focused on powerful techniques for analyzing the termination and complexity of functional programs with ordered matching statements. We are interested here in a transformation approach which can be used as an add-on for well-established analysis techniques and tools, but also as a generic compiler for ordered TRS involving anti-patterns which could be easily integrated in any language providing rewrite rules, or at least pattern-matching primitives. For example, if we consider trucks and cars with 4 fuel types and 3 styles, the transformation we propose will provide the following order-independent set of rules:
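As an illustration, the ordered anti-pattern rules for painting can be compiled away over a finite domain. The fuel and style names below (beyond electric, diesel and suv, which appear in the rules) are hypothetical placeholders; the sketch enumerates all 4 × 3 combinations so that the resulting rule set no longer depends on rule order.

```python
# Hypothetical fuel and style names; the counts match the 4 fuel types
# and 3 styles mentioned in the text.
FUELS = ["electric", "diesel", "petrol", "hybrid"]
STYLES = ["suv", "sedan", "coupe"]

def paint_ordered(fuel, style):
    """The ordered rules, read top to bottom; !p matches anything but p."""
    if fuel == "electric" and style != "suv":  # paint(car(electric, !suv)) -> blue
        return "blue"
    if fuel != "diesel" and style != "suv":    # paint(car(!diesel, !suv)) -> white
        return "white"
    return "red"                               # paint(x) -> red

def compile_order_independent():
    """Expand the anti-patterns over the finite domain: one unconditional
    rule per (fuel, style) pair, so rule order becomes irrelevant."""
    return {(f, s): paint_ordered(f, s) for f in FUELS for s in STYLES}

RULES = compile_order_independent()
print(RULES[("electric", "sedan")], RULES[("petrol", "coupe")],
      RULES[("diesel", "sedan")])  # blue white red
```

Each of the 12 resulting rules has a plain (anti-pattern-free) left-hand side, which is the shape standard confluence and termination tools expect.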

4 Probabilistic Term Rewrite Systems
Now we formulate probabilistic term rewriting following [4], and then lift the interpretation method for term rewriting to the probabilistic case.
We briefly recap notions from rewriting; see [3] for an introduction to rewriting. A signature F is a set of function symbols f associated with their arity ar(f) ∈ N. The set T(F, V) of terms over a signature F and a set V of variables (disjoint from F) is the least set such that x ∈ T(F, V) if x ∈ V, and f(t1, . . . , t_ar(f)) ∈ T(F, V) whenever f ∈ F and ti ∈ T(F, V) for all 1 ≤ i ≤ ar(f). A substitution is a mapping σ : V → T(F, V), which is extended homomorphically to terms. We write tσ instead of σ(t). A context is a term C ∈ T(F, V ∪ {□}) containing exactly one occurrence of a special variable □. With C[t] we denote the term obtained by replacing □ in C with t. We extend substitutions and contexts to multidistributions: µσ := {{p1 : t1σ, . . . , pn : tnσ}} and C[µ] := {{p1 : C[t1], . . . , pn : C[tn]}} for µ = {{p1 : t1, . . . , pn : tn}}. Given a multidistribution µ over A, we define a mapping µ̄ : A → R≥0 by µ̄(a) := Σ_{(p:a)∈µ} p, which forms a distribution if |µ| = 1.
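The definitions above can be sketched directly, representing terms as nested tuples; the encoding and names below are illustrative, not from [4].

```python
# Terms as nested tuples ('f', t1, ..., tn), variables as strings, the hole
# of a context as the sentinel string HOLE.
HOLE = "HOLE"

def subst(t, sigma):
    """Extend sigma : V -> T(F, V) homomorphically to terms (t -> t sigma)."""
    if isinstance(t, str):
        return sigma.get(t, t)
    return (t[0],) + tuple(subst(s, sigma) for s in t[1:])

def plug(c, t):
    """C[t]: replace the unique occurrence of the hole in context c by t."""
    if c == HOLE:
        return t
    if isinstance(c, str):
        return c
    return (c[0],) + tuple(plug(s, t) for s in c[1:])

def flatten(mu):
    """The induced map: sum the probabilities of equal terms in mu."""
    out = {}
    for p, t in mu:
        out[t] = out.get(t, 0.0) + p
    return out

mu = [(0.5, ("f", "x")), (0.5, ("f", "x"))]   # a multidistribution
print(flatten([(p, subst(t, {"x": ("a",)})) for p, t in mu]))
# {('f', ('a',)): 1.0} -- a distribution, since |mu| = 1
```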

It is also common to use trees to represent hierarchical structures in symbolic music (see [15] for a survey). For instance, the GTTM [13] uses trees to analyse inner relations in musical pieces. Trees are also a natural representation of rhythms, where durations are expressed as a hierarchy of subdivisions. Computer-aided composition (CAC) environments such as Patchwork and OpenMusic [3,6] use structures called rhythm trees (RTs) for representing and programming rhythms [2]. Such a hierarchical, notation-oriented approach (see also [15]) is complementary to the performance-oriented formats corresponding to MIDI note onsets and offsets in standard computer music systems. It also provides a more structured representation of time than music notation formats such as MusicXML [9] or Guido [12], where durations are expressed with integer values. As highly structured representations, trees enable powerful manipulation and generation processes in the rhythmic domain (see for instance [11]), and enforce some structural constraints on duration sequences. In this paper, we propose a tree-structured representation of rhythm suitable for defining a set of rewriting rules (i.e. oriented equations) preserving rhythms, while allowing simplifications of notation. This representation bridges CAC rhythm structures with formal tree-processing approaches, and enables a number of new manipulations and applications in both domains. In particular, rewriting rules can be seen as an axiomatization of rhythm notation, which can be applied to reasoning on equivalent notations in computer-aided music composition or analysis.
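One such duration-preserving rewriting rule can be sketched on a naive encoding (nested lists with equal division among children, a simplification of actual RTs): splicing a node whose children all divide into the same number of parts leaves the leaf durations unchanged.

```python
# Naive rhythm-tree encoding (illustrative, simpler than OpenMusic RTs):
# a leaf is an int, a node is a list dividing its duration equally.
def durations(tree, total=1.0):
    """Leaf durations, left to right."""
    if not isinstance(tree, list):
        return [total]
    share = total / len(tree)
    return [d for child in tree for d in durations(child, share)]

def flatten_uniform(tree):
    """Rewriting-rule sketch: if every child of a node is itself a node
    with the same number of children, splice them; equal division of equal
    divisions notates the same rhythm with one fewer level."""
    if not isinstance(tree, list):
        return tree
    kids = [flatten_uniform(c) for c in tree]
    if kids and all(isinstance(c, list) and len(c) == len(kids[0]) for c in kids):
        return [g for c in kids for g in c]
    return kids

t = [[1, 1], [1, 1]]
print(flatten_uniform(t))                             # [1, 1, 1, 1]
print(durations(t) == durations(flatten_uniform(t)))  # True
```

The rule is oriented toward the notation with fewer nesting levels, which is the "simplification of notation" direction discussed in the text.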

account the innermost part of the strategy. We want to refine the equational completion procedure of [GR10] in order to obtain better precision in the over-approximation of R∗in(L0), the set of descendants of terms in L0 obtained by applications of rules in R using an innermost strategy. At the end of the paper, we will see that the technique presented here is likely to be adaptable to cover the leftmost part of the strategy, but this is ongoing work. A precise approximation of the terms reachable by leftmost innermost rewriting would be a simple and elegant alternative to the Higher Order Recursive Schemes used for the static analysis of functional programs [OR11].

R. The set of R-descendants of a set of ground terms E is R∗(E) = {t ∈ T(F) | ∃s ∈ E s.t. s →⋆R t}.
The verification technique defined in [5, 4] is based on the approximation of R∗(E). Note that R∗(E) is possibly infinite: R may not terminate and/or E may be infinite. The set R∗(E) is generally not computable [6]. However, it is possible to over-approximate it [5, 4, 7] using tree automata, i.e. a finite representation of infinite (regular) sets of terms. In this verification setting, the TRS R represents the system to verify, and the sets of terms E and Bad represent respectively the set of initial configurations and the set of "bad" configurations that should not be reached. Then, using tree automata completion, we construct a tree automaton B whose language L(B) is such that L(B) ⊇ R∗(E). If L(B) ∩ Bad = ∅, this proves that R∗(E) ∩ Bad = ∅, and thus that none of the "bad" configurations is reachable. We now define tree automata.
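For a terminating TRS and a finite E, the set R∗(E) can be computed exactly by exhaustive rewriting; tree automata are needed precisely when this fails. A small Python sketch with hypothetical Boolean rules:

```python
# Hypothetical rules not(true) -> false, not(false) -> true; ground terms
# as nested tuples. The system terminates, so R*(E) is finite here.
RULES = [(("not", ("true",)), ("false",)),
         (("not", ("false",)), ("true",))]

def step(t):
    """All one-step successors of t, rewriting at any position."""
    succ = [rhs for lhs, rhs in RULES if t == lhs]
    for i in range(1, len(t)):
        for s in step(t[i]):
            succ.append(t[:i] + (s,) + t[i + 1:])
    return succ

def descendants(E):
    """R*(E), by exhaustive exploration (finite in this example)."""
    seen, stack = set(E), list(E)
    while stack:
        for s in step(stack.pop()):
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

E = {("not", ("not", ("true",)))}
Bad = {("false",)}            # "false as a top-level result" never happens
R = descendants(E)
print(("true",) in R, R & Bad == set())  # True True
```

Since R∗(E) ∩ Bad = ∅, no bad configuration is reachable; completion produces the same kind of certificate when R∗(E) is infinite, via an automaton B with L(B) ⊇ R∗(E).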

Reachability Analysis of Innermost Rewriting
Thomas Genet ∗ Yann Salmon †
February 10, 2014
Approximating the set of terms reachable by rewriting finds more and more applications, ranging from termination proofs of term rewriting systems and cryptographic protocol verification to static analysis of programs. However, since approximation techniques do not take rewriting strategies into account, they build very coarse approximations when rewriting is constrained by a specific strategy. In this work, we propose to adapt the Tree Automata Completion algorithm to accurately approximate the set of terms reachable by rewriting under the innermost strategy. We prove that the proposed technique is sound and precise w.r.t. innermost rewriting. The proposed algorithm has been implemented in the Timbuk reachability tool. Experiments show that it noticeably improves the accuracy of static analysis for functional programs using the call-by-value evaluation strategy. In particular, for some functional programs needing lazy evaluation to terminate, the computed approximations are precise enough to prove the absence of innermost normal forms, i.e. prove non-termination of the program with call-by-value.
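The abstract's final claim can be illustrated on a two-rule system (hypothetical, not taken from the paper): with f(x) → a and b → b, the term f(b) rewrites to the normal form a, but has no innermost normal form, since call-by-value must first normalize the looping argument.

```python
# Hypothetical rules: f(x) -> a (erasing its argument), b -> b (looping).
def successors(t, innermost):
    """One-step successors; with innermost=True a redex fires only when
    its argument is already a normal form (call-by-value)."""
    succ = []
    if t == ("b",):                                   # b -> b
        succ.append(("b",))
    if t[0] == "f":
        arg = t[1]
        if not innermost or not successors(arg, innermost):
            succ.append(("a",))                       # f(x) -> a
        succ += [("f", s) for s in successors(arg, innermost)]
    return succ

def descendants(t, innermost):
    """Terms reachable from t under the chosen strategy."""
    seen, stack = {t}, [t]
    while stack:
        for s in successors(stack.pop(), innermost):
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

t = ("f", ("b",))
print(("a",) in descendants(t, innermost=False))  # True: f(b) -> a
print(("a",) in descendants(t, innermost=True))   # False: no innermost NF
```

A strategy-blind approximation of the reachable terms would include a and thus miss the non-termination; an innermost-aware one, as proposed here, does not.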

Furthermore, the complexity of interaction nets [18] has been investigated in the literature. In contrast to the term graphs considered here, interaction nets may admit cyclic structures, but on the other hand provide more control over sharing and garbage collection, via the explicit use of duplication or erasing cells. First results on runtime complexity were proposed by Perrinel in [24]. Furthermore, Gimenez and the second author study in [17] the space and time complexities of sequential and parallel computations. The resource analysis is based on user-defined sized types in conjunction with potentials that are assigned to each cell in the net. While technically quite apart from the work presented here, there are conceptual similarities: potentials are conceivable as interpretations, and the dependency pair method implicitly combines a size analysis with a runtime analysis.

3.4. The intuition behind the rules
The set of thirteen rules is divided into three modules. Notice the specificity of ATRSs, namely that in each rule the left-hand side lies at the same address as that of the right-hand side. To avoid losing this property and to prevent bad effects, an indirection is created when necessary. In L + C + F this is the case once, namely for (FVar). In some other rules, namely (AppRed), (SRed), (SelRed), and (FRed), the indirection is removed and the pointer is redirected. Notice that this can only be done inside a term, since a pointer has to be redirected and one has to know where this pointer comes from. In the names of these rules, the suffix "Red" stands for redirection.

5 Conclusion
The choice of a representation determines the range of possible operations on a given musical structure, and thereby has a significant influence on compositional and analytical processes (see [11,14] for examples in the domain of rhythm structures). In this paper we proposed a formal tree-structured representation for rhythm inspired by previous theoretical models for term rewriting. Based on this representation, tree rewriting can be seen as a means for transforming rhythms in composition or analysis processes. In a context of computer-aided composition for instance, this approach can suggest to a user various notations of the same rhythmic value, with different complexities. Similarly, the rewrite sequence of Figure 7 can be seen as a notation simplification for a given rhythm. An important open problem is the confluence of the defined rewrite relation, i.e. whether different rewritings of a single tree will eventually converge to a unique canonical form. For a quantitative approach, it is possible to use standard complexity measures for trees (involving depth, number of symbols, etc.). We can therefore imagine this framework being used as a support for rhythm quantification processes [1] in computer-aided composition environments like OpenMusic.

– specific analysis and simulation methods for rewriting systems.
This article is structured as follows. In section 2 we briefly describe the biological phenomenon that we want to model: the mechanism of classical attenuation regulation (CAR). In section 3 we introduce a class of terms and probabilistic term rewriting systems. In section 4 we represent a qualitative metamodel of the biological mechanism of CAR by a term rewriting system. In section 5 we refine the previous system and decorate its transitions with rates, thus obtaining a representation of the Markov chain by a probabilistic term rewriting system. In section 6 we show some simulation results. In section 7 we discuss some related work on term rewriting and its applications. In section 8 we conclude with a discussion of perspectives of the rewriting approach to modeling the mechanisms involving RNA secondary structures, especially regulation.

CONS are indexed by a subset of V ∪ E ∪ {G}. In this way, an empty set of attributes or constraints is not required when an element is not meant to be attributed or constrained.
Now that AC-graphs are defined, it is possible to represent a configuration of DIET as presented in section 3.
4.2.2 Modelling a Constrained Configuration of DIET
This subsection is dedicated to the definition of a DIET configuration. Concerns expressed in Sec. 3 are mapped onto the theoretical concepts previously introduced in this section. For the sake of clarity, before formally introducing architectural styles, we show in Figure 2 what a DIET configuration would look like once represented using an AC-graph. Notations are reported in Table 2.

4 Proposed Cognitive Complexity Metric for Configuration Models
As addressed earlier, existing literature proposes a number of approaches for measuring the cognitive complexity of OOPs. However, the application of this concept to UML class diagrams of product configuration models presents an important deviation from those pertaining to OOPs. As PCSs are knowledge-based systems, the complexity of business rules (BRs) has a vital impact on the development and maintenance efforts of these systems [7]. In order to account for the impact of the BRs, the authors have assumed that each BR corresponds to a single method with a single block of layered BCSs. Each constituent BCS of a BR has been assigned a cognitive weight, based on the classification proposed by Wang [8]. Table 3 summarises the cognitive weights of BCSs for each BR.
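A sketch of this weighting scheme in Python; the weight values below follow commonly cited cognitive-weight classifications but should be treated as illustrative (Table 3 is not reproduced here), as should the composition rule (nested BCSs multiply, sequential ones add).

```python
# Illustrative cognitive weights per BCS kind; treat both the values and
# the composition rule as assumptions for the sake of the sketch.
BCS_WEIGHTS = {"sequence": 1, "branch": 2, "case": 3,
               "iteration": 3, "call": 2, "recursion": 3}

def rule_weight(block):
    """Cognitive weight of one BR, given as a list of (kind, nested_block)
    pairs: same-layer structures add, nested structures multiply."""
    total = 0
    for kind, nested in block:
        w = BCS_WEIGHTS[kind]
        total += w * (rule_weight(nested) if nested else 1)
    return total

# A BR with a loop containing a branch, followed by a call: 3*2 + 2 = 8.
br = [("iteration", [("branch", [])]), ("call", [])]
print(rule_weight(br))  # 8
```

Summing rule_weight over all BRs of a configuration model would then contribute the BR term of the overall cognitive complexity metric.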

Future works. This paper lays the theoretical foundations for unification in polygraphic 2-dimensional rewriting systems and leaves many research tracks open for future work. We plan to study the precise links between our algorithm and the usual unification for terms (every term rewriting system can be seen as a polygraphic rewriting system [2]) as well as algorithms for (planar) graph rewriting. Concerning concrete applications, since these rewriting systems essentially transform circuits made of operators (the 2-generators) linked by a bunch of wires (the 1-generators), it would be interesting to see if these methods can be used to optimize electronic circuits. Finally, we plan to investigate the generalization of these methods in dimensions higher than 2, which seems to be very challenging.

1} = {a^i b^j b^k a^l | (i ≤ N ∧ j = 1 ∧ k = l = 0) ∨ (k ≤ N ∧ l = 1 ∧ i = j = 0)}.
4 The rewriting toolkit for parameterized words
To rewrite a parameterized word, we need to factor it out via the left-hand side of a rule. To test for confluence, we need to check equality of parameterized words, which requires computing their intersection. To compute critical pairs, we need to compute overlaps of parameterized words. Verifying equality and computing factors and overlaps are the main algorithmic difficulties of this framework. We choose to present the rewriting toolkit first, before introducing parameterized rewriting itself. For lack of space, we treat factorization in detail, and then sketch how intersection, equality and overlaps can be derived. Examples are shown for factorization and equality. These algorithms have a non-polynomial complexity, but in our practice rules usually have a small size.
4.1 Auxiliary algorithms
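A much-simplified Python sketch of the matching problem behind factorization: parameterized words as blocks (letter, exponent), where an exponent is an integer or a parameter name. The greedy first-assignment strategy below is an assumption for the sake of illustration and is not complete in general, which hints at why the full algorithms are non-trivial.

```python
# Parameterized word = list of (letter, exp) blocks, exp an int or a
# parameter name. Greedy matching: a fresh parameter takes the longest
# run of its letter (incomplete in general, enough for these examples).
def match(pword, word):
    """Return a parameter assignment making pword equal to word, or None."""
    env, i = {}, 0
    for letter, exp in pword:
        if isinstance(exp, str) and exp not in env:
            n = 0
            while i + n < len(word) and word[i + n] == letter:
                n += 1
            env[exp] = n
        n = env[exp] if isinstance(exp, str) else exp
        if word[i:i + n] != letter * n:
            return None
        i += n
    return env if i == len(word) else None

print(match([("a", "i"), ("b", 1)], "aaab"))    # {'i': 3}
print(match([("a", "i"), ("b", "i")], "aabb"))  # {'i': 2}
print(match([("a", "i"), ("b", "i")], "aab"))   # None
```

Equality and overlap computations would similarly solve constraints on the exponent parameters rather than on concrete letter positions.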

Other Maple implementations have also been tested: insulate is a package that implements [WS05] for computing the topology of real algebraic curves, and top implements [GVN02]. Both packages were kindly provided by their authors. We tried to modify the packages so as to stop them as soon as they compute the real solutions of the corresponding bivariate system, and hence achieve an accurate timing in every case. Finally, it should be noted that top has an additional parameter that sets the initial precision (decimal digits). A very low or very high initial precision results in inaccuracy or performance loss, but there is no easy way to choose a good value. Hence, we followed [Ker06] and recorded its performance with initial values of 60 and 500 digits.


1. General setup
1.1. The rewriting calculus
The ρ-calculus was introduced as a calculus where all the basic ingredients of rewriting are made explicit, in particular the notions of rule abstraction (represented by the operator "→"), rule application (represented by term juxtaposition) and collection of results (represented by the operator "≀"). Depending on the theory behind the operator "≀", the results can be grouped together, for example, in lists (when "≀" is associative), in multisets (when "≀" is associative and commutative) or in sets (when "≀" is associative, commutative and idempotent). This operator is useful for representing the (non-deterministic) application of a set of rewrite rules and, consequently, the set of possible results. The usual λ-abstraction λx.t is replaced by a rule abstraction P → T, where, in the most general case, P is a generic ρ-term. Usually some restrictions are imposed on the shape of P to obtain desirable properties for the calculus.
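The effect of the theory behind "≀" can be sketched by collecting the results of a nondeterministic rule application; the encoding below (constant patterns and purely syntactic matching) is a deliberate simplification of ρ-calculus application.

```python
# Constant patterns and syntactic matching only -- a deliberate
# simplification of rho-calculus rule application.
def apply_rules(rules, t):
    """All results of applying the rules (pattern, rhs) to t."""
    return [rhs for pat, rhs in rules if pat == t]

rules = [("a", "x"), ("a", "y"), ("a", "x")]
results = apply_rules(rules, "a")
print(results)               # ['x', 'y', 'x']  list: '≀' only associative
print(sorted(results))       # ['x', 'x', 'y']  multiset view: order forgotten
print(sorted(set(results)))  # ['x', 'y']       set: idempotence merges duplicates
```

The three printed views correspond to the three theories for "≀" mentioned above: associativity alone keeps order and multiplicity, adding commutativity forgets order, and adding idempotence also forgets multiplicity.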
