Completeness for domain semirings and star-continuous Kleene algebras with domain

Academic year: 2021



Completeness for Domain Semirings and Star-Continuous Kleene Algebras with Domain

Sokhna Diarra Mbacke

Supervised by

Jules Desharnais

Research report


This research report is a translation and slight adaptation of

Sokhna Diarra Mbacke, Complétude pour les demi-anneaux et algèbres de Kleene étoile-continues avec domaine, M.Sc. thesis, Université Laval, 2018.


Abstract

Due to their increasing complexity, today’s computer systems are studied using multiple models and formalisms. Thus, it is necessary to develop theories that unify different approaches in order to limit the risk of errors when moving from one formalism to another. It is in this context that monoids, semirings and Kleene algebras with domain were born about a decade ago. The idea is to define a domain operator on classical algebraic structures, in order to unify algebra and the classical logics of programs. The question of completeness for these algebras with a domain operator is still open. It constitutes the object of this thesis.

We define tree structures called trees with a top, represented in matrix form. After proving fundamental properties of these trees, we define simulation relations, which are used for comparing them. Then, we show that, modulo a certain equivalence relation, the set of trees with a top has the structure of a monoid with domain. This result makes it possible to define a model for semirings with domain and to prove its completeness. We also define a model for ∗-continuous Kleene algebras with domain and prove its completeness.


Contents

Abstract iii

Contents iv

1 Introduction 1

1.1 Contributions . . . 2

1.2 Plan of the thesis . . . 2

2 Preliminaries 4

2.1 Matrices . . . 4

2.2 Relations and direct sums . . . 5

2.3 Domain Monoids and Domain Semirings . . . 13

3 Kleene algebras and extensions 18

3.1 History and axiomatization . . . 18

3.2 Kleene algebras with tests . . . 25

3.3 Kleene algebras with domain . . . 30

4 Trees with a top 32

4.1 Alphabet and terms . . . 32

4.2 Construction of trees with a top . . . 37

4.3 Simulation . . . 46

5 Completeness 74

5.1 Preliminary results . . . 74

5.2 Models for DS and KAD∗ . . . . 106

5.3 Completeness . . . 114

6 Conclusion 127


Chapter 1

Introduction

Monoids, semirings and Kleene algebras are fundamental structures in theoretical computer science. They have been applied to many fields such as relation algebras [Ng84, Tar41], the logic and semantics of programs [Koz81, Pra88], the theory of finite automata and formal languages [Kui87, KS86], as well as the construction and analysis of algorithms [AHU74, Koz92]. The main interest of these algebraic structures is that they make it possible to carry out abstract, concise and systematic reasoning. Monoids and semirings are classical mathematical structures. Kleene algebras, for their part, were first studied by Kleene [Kle56] and have undergone considerable development, both from the theoretical and the practical viewpoints. There are many axiomatizations of Kleene algebras; the most widely used is that of Kozen [Koz94], which consists of a finite number of equations and equational implications.

In order to reconcile the algebraic approach with classical program analysis methods such as temporal logic, dynamic logic or Hoare logic, a domain operator has been defined on monoids, semirings and Kleene algebras. Beyond Kleene algebras with domain, there are many variants and extensions of Kleene algebras, including Kleene algebras with tests [Koz97], Kleene algebras with relations [Des03], and Kleene algebras with converse [BP14].

Kleene algebras with domain (KAD) were introduced by Desharnais et al. [DMS06]. The approach used in [DMS06] is to start from Kleene algebras with tests [Koz97] and add three axioms defining a domain operator. These axioms yield a considerable gain in expressiveness. Indeed, it is proved in [DMS06] that the language of Kleene algebras with domain allows the definition of modality operators (useful for the specification and analysis of programs and transition systems), the expression of Noethericity (used for program termination analysis) and the algebraic reconstruction of Hoare logic. On the other hand, another axiomatization of domain algebras, called the internal axiomatization, is given in [DS11]. This axiomatization has many advantages over that of [DMS06]. Indeed, it allows one to study the domain operator without going through the theory of Kleene algebras with tests. Moreover, with the axiomatization of [DMS06], the domain operator is unique, whereas with the internal axiomatization, one can have several distinct domain functions.


The theory of algebras with a domain operator (domain monoids, domain semirings, Kleene algebras with domain) is very rich and has been developed significantly over the last decade. However, questions about the completeness and decidability of the equational theory of these algebras are still largely open, even though they are fundamental. In [JS08], the authors study the structure of one-generated domain semirings. At the end of their study, they argue that even the case of semirings generated by two elements is complex and conclude that the approach used in their article does not work for these algebras. Moreover, the study of completeness and decidability for Kleene algebras with domain is more complex than for Kleene algebras with tests (KAT). Indeed, since for the latter the test algebra contains atoms, it is possible, in a certain sense, to enumerate, for a primitive test p, all the atomic tests α such that α ≤ p (see section 3.2). Thus the authors of [KS96] built a complete model for KAT based on "guarded strings". This approach cannot work for domain algebras, since there are no atoms in the test algebra.

1.1

Contributions

In this thesis, we study the structure of some domain algebras, generated by an arbitrary finite number of elements. We introduce a structure called tree with a top. A tree with a top is a triple of matrices built from a term. We show several important results on the structure of these trees, as well as on the link between the shape of a term and the properties of the corresponding tree. On the other hand, we define the notion of monogenic term and show that, in a certain sense, any simple term is equivalent to a product of monogenic terms. This result is very useful for the study of domain monoids, because it allows one to limit oneself to the study of terms having a very specific form.

We prove that, modulo simulation equivalence, the set of trees with a top over a finite alphabet is a domain monoid. From this result, we construct a model for domain semirings and one for ∗-continuous Kleene algebras with domain (KAD∗). Thus, we build a correspondence between terms and sets of trees. On these sets, we define another notion of simulation and use simulation equivalence as the equivalence relation.

After proving the consistency of our models, we demonstrate their completeness. These results contribute to the advancement of knowledge on the structure of algebras equipped with a domain operator.

1.2

Plan of the thesis

In chapter 2, we present the mathematical notions necessary to understand this thesis. We discuss matrices and some basic algebraic structures. Next, we present Boolean algebras, relation algebras and direct sums. Chapter 3 is an introduction to Kleene algebras. We present an axiomatization and give the main models. Then, we present Kleene algebras with tests and their classical model based on guarded strings, before giving a summary of the main ideas used to prove its completeness.


Chapters 4 and 5 contain the main contributions of this thesis. In chapter 4, we introduce trees with a top and give examples. We see how to construct, for each term of a domain monoid, the corresponding tree. In addition, we define a simulation relation over trees and prove several results about trees, terms and the link between the form of a term and the structure of the corresponding tree. In chapter 5, we show that the model of trees with a top is complete for domain semirings. Moreover, we define a model for ∗-continuous Kleene algebras with domain and prove its completeness modulo a new axiom.


Chapter 2

Preliminaries

This chapter contains the mathematical concepts necessary to understand this thesis. Section 2.1 is about matrices and the notations that we will use to manipulate them; section 2.2 is a brief introduction to relation algebra and direct sums; finally, section 2.3 presents domain monoids, domain semirings and some basic results concerning them.

2.1

Matrices

Definition 2.1.1. A matrix M over a set E possibly containing the elements 0 and 1 is a function M : {1, . . . , m} × {1, . . . , n} → E where m, n ∈ N. The shape of M is m × n and m is called the number of rows and n the number of columns. One can have m = 0 or n = 0 (or both).

Notation 2.1.2. Here are some symbols we will use. Let M be a matrix and m, n, i, j ∈ N.

M : matrix M, without indication of the shape.
M : m × n : matrix M of shape m × n.
M[i, j] : entry i, j of the matrix M.
I : identity matrix, without indication of the shape.
Im : identity matrix of shape m × m.
0 : zero matrix (all entries are 0), without indication of the shape.
0m×n : zero matrix of shape m × n.
1 : a matrix whose entries are all 1, without indication of the shape.
1m×n : matrix of shape m × n whose entries are all 1.

If m = n, then M is a square matrix. Moreover, M is a vector if n = 1 and a covector if m = 1. If v is a vector or a covector whose entries are all 0, except one which is 1, then v is said to be unitary. Finally, a binary matrix is a matrix whose entries are all in {0, 1}.


Remark 2.1.3. Let E be a set on which + and · are defined and let M1, M2 be two matrices over E such that M1 : m1 × n1 and M2 : m2 × n2. The matrix M1 + M2 : m1 × n1 exists if and only if m1 = m2 and n1 = n2, and is defined as follows, for all 1 ≤ i ≤ m1 and 1 ≤ j ≤ n1:

(M1 + M2)[i, j] = M1[i, j] + M2[i, j].

Moreover, the matrix M1 · M2 : m1 × n2, noted M1M2, exists if and only if n1 = m2 and is defined as follows:

(M1M2)[i, j] = (Σ k | 1 ≤ k ≤ n1 : M1[i, k] · M2[k, j]),

for all 1 ≤ i ≤ m1 and 1 ≤ j ≤ n2. A block matrix is a matrix broken into matrices of smaller shape.

For instance, if

M = [ a b c ]
    [ d e f ]
    [ g h i ],

then defining

N1 = [ a b ],   N2 = [ c ],   N3 = [ g h ]   and   N4 = [ i ],
     [ d e ]         [ f ]

we can write

M = [ N1 N2 ]
    [ N3 N4 ].

The addition and the product of block matrices are done exactly as with conventional matrices. For example, let

M1 = [ A B ]   and   M2 = [ E F ]
     [ C D ]              [ G H ]

where A, B, C, D, E, F, G and H are matrices. Then,

M1M2 = [ AE + BG   AF + BH ]
       [ CE + DG   CF + DH ].

We assume that the shapes of the matrices A, B, C, D, E, F, G and H allow these operations.
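As a quick sanity check, the blockwise product above can be replayed mechanically. The following Python sketch (all helper names hypothetical) multiplies two 2 × 2 integer matrices via their 1 × 1 blocks and compares the result with the direct product.

```python
def mat_mul(A, B):
    """Product of matrices given as lists of rows, as in Remark 2.1.3."""
    assert len(A[0]) == len(B)          # shapes must be compatible: n1 = m2
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def hstack(A, B):                       # glue two blocks side by side
    return [ra + rb for ra, rb in zip(A, B)]

M1 = [[1, 2], [3, 4]]                   # plays the role of [A B; C D]
M2 = [[5, 6], [7, 8]]                   # plays the role of [E F; G H]
A, B, C, D = [[1]], [[2]], [[3]], [[4]]  # 1x1 blocks of M1
E, F, G, H = [[5]], [[6]], [[7]], [[8]]  # 1x1 blocks of M2

# Blockwise product: [AE+BG AF+BH; CE+DG CF+DH]
top = hstack(mat_add(mat_mul(A, E), mat_mul(B, G)),
             mat_add(mat_mul(A, F), mat_mul(B, H)))
bot = hstack(mat_add(mat_mul(C, E), mat_mul(D, G)),
             mat_add(mat_mul(C, F), mat_mul(D, H)))

assert top + bot == mat_mul(M1, M2)    # blockwise product = direct product
```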

2.2

Relations and direct sums

In this section, we discuss some basic notions about relations. But first, let us look at some elementary algebraic structures.


2.2.1 Monoids and semirings

A monoid is a structure ⟨M, ·, 1⟩ such that M is a set, · : M × M → M and 1 ∈ M, and for all x, y, z ∈ M,

(x · y) · z = x · (y · z) and x · 1 = 1 · x = x.

In other words, · is associative and 1 is its identity. A semiring is any structure ⟨S, +, ·, 0, 1⟩ such that + : S × S → S, · : S × S → S and 0, 1 ∈ S, and for all x, y, z ∈ S, the following properties hold:

(x + y) + z = x + (y + z) (2.1)
0 + x = x (2.2)
x + 0 = x (2.3)
(x · y) · z = x · (y · z) (2.4)
1 · x = x (2.5)
x · 1 = x (2.6)
x + y = y + x (2.7)
x · (y + z) = x · y + x · z (2.8)
(x + y) · z = x · z + y · z (2.9)
0 · x = 0 (2.10)
x · 0 = 0. (2.11)

A semiring ⟨S, +, ·, 0, 1⟩ is said to be idempotent if + is, meaning:

x + x = x, (2.12)

for all x ∈ S. As usual in algebra, we write xy for x · y.

2.2.2 Binary relations and relation algebras

Let E1, E2 be two sets. A binary relation R between E1 and E2 is a subset of the Cartesian product E1 × E2, that is to say,

R ⊆ {(x, y) : x ∈ E1, y ∈ E2}.

We write x R y for (x, y) ∈ R. When E1 and E2 are finite sets, we can write the relation R as a binary matrix. The shape of the matrix R is |E1| × |E2| and for all 1 ≤ i ≤ |E1|, 1 ≤ j ≤ |E2|:

R[i, j] = 1 if i R j, and 0 otherwise.


An identity relation is an m × m matrix, for m ≥ 1, defined as follows:

I[i, j] = 1 if i = j, and 0 otherwise.

It is denoted by I, or by Im whenever we want to emphasize the shape.

Example 2.2.1. Let E1 = {1, 2}, E2 = {1, 2, 3} and R = {(1, 2), (2, 1), (2, 3)}. The matrix representation of R is

R = [ 0 1 0 ]
    [ 1 0 1 ].

In the following, we use the terms relation and binary matrix interchangeably.

Definition 2.2.2. A binary relation R : E × E is a preorder if the following properties hold:

x R x, for all x ∈ E (Reflexivity)
x R y and y R z ⇒ x R z, for all x, y, z ∈ E (Transitivity)

Moreover, R is an equivalence relation if, besides reflexivity and transitivity, it has the property of symmetry:

x R y ⇒ y R x, for all x, y ∈ E (Symmetry)

Finally, R is a partial order if R is a preorder with the property of antisymmetry:

x R y ∧ y R x ⇒ x = y, for all x, y ∈ E. (Antisymmetry)

If R : E × E is a partial order, then E is said to be partially ordered by R.

The following proposition gives a connection between preorders and equivalence relations.

Proposition 2.2.3. Let E be a set and R : E × E a preorder. Let Rsym : E × E be the relation defined by

x Rsym y ⇔def x R y ∧ y R x.

Then Rsym is an equivalence relation.

Proof. First, Rsym is reflexive because R is. Second, transitivity is proved as follows:

x Rsym y ∧ y Rsym z
⇒   ⟨ definition of Rsym ⟩
x R y ∧ y R x ∧ y R z ∧ z R y
⇔   ⟨ commutativity and associativity of conjunction ⟩
x R y ∧ y R z ∧ z R y ∧ y R x
⇒   ⟨ by hypothesis, R is a preorder, hence R is transitive ⟩
x R z ∧ z R x
⇔   ⟨ definition of Rsym ⟩
x Rsym z.

Finally, Rsym is symmetric, by definition.

The following proposition shows that it is possible to define, in any idempotent semiring, a partial order.

Proposition 2.2.4. Let ⟨S, +, ·, 0, 1⟩ be an idempotent semiring. The set S is partially ordered by the relation ≤ defined as follows:

x ≤ y ⇔def x + y = y. (2.13)

The relation ≤ is the natural order on S.
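For a concrete instance, the powerset of a finite set with ∪ as + and ∩ as · is an idempotent semiring, and the natural order of proposition 2.2.4 is exactly set inclusion. The following Python sketch (helper names hypothetical) checks this exhaustively on the subsets of {1, 2, 3}.

```python
from itertools import chain, combinations

universe = {1, 2, 3}
# All subsets of the universe, i.e. the carrier of the powerset semiring.
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(universe), r) for r in range(len(universe) + 1))]

def leq(x, y):
    """The natural order of (2.13): x <= y iff x + y = y, with + = union."""
    return (x | y) == y

for x in subsets:
    for y in subsets:
        assert leq(x, y) == (x <= y)   # natural order = subset inclusion
```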

Proposition 2.2.5. Let ⟨S, +, ·, 0, 1⟩ be an idempotent semiring and ≤ the natural order on S. For all x, y, z ∈ S, the following properties hold.

1. x + y ≤ z if and only if x ≤ z and y ≤ z.
2. Isotony of +: if x ≤ y, then x + z ≤ y + z.
3. Isotony of ·: if x ≤ y, then xz ≤ yz and zx ≤ zy.

Proof.

1. First, we assume x + y ≤ z and prove x + z = z and y + z = z, which, by (2.13), is equivalent to x ≤ z and y ≤ z. By the assumption x + y ≤ z, x + y + z = z. Using x + y + z = z, idempotence of +, and x + y + z = z again,

x + z = x + x + y + z = x + y + z = z and y + z = y + x + y + z = x + y + z = z.

Then, we assume x ≤ z and y ≤ z and prove x + y ≤ z. By the assumptions x ≤ z and y ≤ z,

x + y + z = x + z = z.


2. Assume x ≤ y. By (2.1), (2.7), idempotence of + and the assumption x ≤ y, which is equivalent to x + y = y by (2.13),

(x + z) + (y + z) = x + y + z = y + z. By (2.13), this means x + z ≤ y + z.

3. Assume x ≤ y. Using distributivity (2.9) and the assumption x ≤ y, we have xz + yz = (x + y)z = yz,

which is equivalent to xz ≤ yz. Likewise, using distributivity (2.8) and the assumption x ≤ y, we have

zx + zy = z(x + y) = zy,

which is equivalent to zx ≤ zy. 

Definition 2.2.6. A Boolean algebra [GH08] is any structure ⟨B, +, ·, ¯, 0, 1⟩ such that ⟨B, +, ·, 0, 1⟩ is an idempotent semiring and for all x, y, z ∈ B, the following properties hold.

xx = x (2.14)
xy = yx (2.15)
x + yz = (x + y)(x + z) (2.16)
x + x̄ = 1 (2.17)
xx̄ = 0 (2.18)
(x + y)‾ = x̄ ȳ (2.19)
(xy)‾ = x̄ + ȳ (2.20)
(x̄)‾ = x (2.21)
x(x + y) = x (2.22)
x + xy = x (2.23)

Now we introduce relation algebras.

Definition 2.2.7. A relation algebra [SS93] is any structure ⟨R, +, ⊓, ¯, `, ·, 0, I, ⊤⟩ such that

1. ⟨R, +, ⊓, ¯, 0, ⊤⟩ is a Boolean algebra,
2. ⟨R, +, ·, 0, I⟩ is an idempotent semiring,
3. ` and ¯ are unary operators and
4. the following rule, called the Schröder rule, holds for all Q, R, S ∈ R:

QR ≤ S ⇔ Q`S̄ ≤ R̄ ⇔ S̄R` ≤ Q̄.

In any relation algebra, the following properties hold, for all R1, R2, R3 ∈ R.

R1 ≤ ⊤ (2.24)
(R1`)` = R1 (2.25)
R1 ⊓ R̄1 = 0 (2.26)
R1 ≤ R1R1`R1 (2.27)
(R1 + R2)` = R1` + R2` (2.28)
(R1R2)` = R2`R1` (2.29)
R1 ⊓ R2 ≤ R1 (2.30)
R1 ⊓ R2 ≤ R2 (2.31)
R1 = R1 ⊓ (R2 + R̄2) (2.32)
R1(R2 ⊓ R3) ≤ R1R2 ⊓ R1R3 (2.33)
(R1 ⊓ R2)R3 ≤ R1R3 ⊓ R2R3 (2.34)
R1 ≤ R2 ⇔ R1 = R1 ⊓ R2 (2.35)
R1 ≤ R2 ⇔ R1` ≤ R2` (2.36)
R1 ≤ I ⇒ R1` = R1 (2.37)
R1 ≤ I ⇒ I = R̄1 + (I ⊓ R1) (2.38)

In addition to the above laws, the following rules are also verified:

• Dedekind rule: R1R2 ⊓ R3 ≤ (R1 ⊓ R3R2`)(R2 ⊓ R1`R3).
• If R1 and R2 are vectors, then

R1 ≤ R2⊤ ⇔ R1 ≤ R2. (2.39)

Definition 2.2.8. Let R be a relation. We call R injective if RR` ≤ I, deterministic if R`R ≤ I, surjective if I ≤ R`R and total if I ≤ RR`. If R is both total and deterministic, then R is called a mapping.

The following proposition gives some properties of relations.

Proposition 2.2.9. Let R1 and R2 be relations and I the identity matrix. The following properties hold.

1. If R1 ≤ I and R2 ≤ I, then R1 ⊓ R2 = R1R2.

2. If R1 + R2 = I and R1R2 = 0, then I ⊓ R̄1 = R2.

3. If R2 is injective, then R̄1R2 ≤ (R1R2)‾.

4. If R1 and R2 are vectors, then R1 ≤ R2 if and only if R1R1` ≤ R2R2`.

Proof.

1. Using the hypothesis R1, R2 ≤ I, (2.35), (2.33), (2.34), (2.30) and (2.31), we have

R1R2 = (R1 ⊓ I)(R2 ⊓ I) ≤ R1R2 ⊓ R1 ⊓ R2 ⊓ I ≤ R1 ⊓ R2.

Next,

R1 ⊓ R2
≤   ⟨ R1 = R1I & Dedekind rule ⟩
(R1 ⊓ R2I)(I ⊓ R1`R2)
≤   ⟨ R1 ⊓ R2 ≤ R1 ≤ I by (2.30) and the hypothesis R1 ≤ I & isotony ⟩
I ⊓ R1`R2
≤   ⟨ I ⊓ R1`R2 ≤ R1`R2 by (2.31) ⟩
R1`R2
=   ⟨ hypothesis R1 ≤ I & (2.37) ⟩
R1R2.

Hence, since R1R2 ≤ R1 ⊓ R2 and R1 ⊓ R2 ≤ R1R2, we have R1R2 = R1 ⊓ R2.

2. We assume R1 + R2 = I and R1R2 = 0. First,

I ⊓ R̄1
=   ⟨ hypothesis R1 + R2 = I ⟩
(R1 + R2) ⊓ R̄1
=   ⟨ distributivity of ⊓ over + ⟩
(R1 ⊓ R̄1) + (R2 ⊓ R̄1)
=   ⟨ R1 ⊓ R̄1 = 0 by (2.26) ⟩
R2 ⊓ R̄1
≤   ⟨ (2.30) ⟩
R2.

Next,

R2
=   ⟨ by (2.32) ⟩
R2 ⊓ (R1 + R̄1)
=   ⟨ distributivity of ⊓ over + ⟩
(R2 ⊓ R1) + (R2 ⊓ R̄1)
=   ⟨ hypotheses R1 + R2 = I and R1R2 = 0 imply R1 ⊓ R2 = 0, by proposition 2.2.9.1 & commutativity of ⊓ ⟩
R2 ⊓ R̄1
=   ⟨ hypothesis R1 + R2 = I, hence R2 ≤ I & (2.35) ⟩
R2 ⊓ I ⊓ R̄1
≤   ⟨ by (2.31) ⟩
I ⊓ R̄1.

Hence, I ⊓ R̄1 ≤ R2 and R2 ≤ I ⊓ R̄1, which is to say that I ⊓ R̄1 = R2.

3. By the definition of injectivity (definition 2.2.8), the isotony of ·, the Schröder rule and (2.25),

R2 injective ⇔ R2R2` ≤ I ⇒ R1R2R2` ≤ R1 ⇔ R̄1(R2`)` ≤ (R1R2)‾ ⇔ R̄1R2 ≤ (R1R2)‾.

4. We assume that R1 and R2 are vectors such that R1 ≤ R2. By the hypothesis R1 ≤ R2, the isotony of · and (2.36),

R1R1` ≤ R2R1` ≤ R2R2`.

Thus R1 ≤ R2 ⇒ R1R1` ≤ R2R2`. The following derivation proves R1R1` ≤ R2R2` ⇒ R1 ≤ R2.

R1R1` ≤ R2R2`
⇒   ⟨ isotony of · ⟩
R1R1`R1 ≤ R2R2`R1
⇒   ⟨ R1 ≤ R1R1`R1 by (2.27) & R2`R1 ≤ ⊤ by (2.24) & isotony & transitivity of ≤ ⟩
R1 ≤ R2⊤
⇔   ⟨ (2.39) ⟩
R1 ≤ R2.

2.3

Domain Monoids and Domain Semirings

It is possible to define a domain operator on monoids and idempotent semirings.

Definition 2.3.1. A domain monoid [DJS09] is a structure ⟨M, ·, p, 1⟩ such that · : M × M → M, p : M → M and 1 ∈ M, and the following axioms hold, for all x, y, z ∈ M:

x(yz) = (xy)z (DM1)
1x = x (DM2)
x1 = x (DM3)
px x = x (DM4)
p(xy) = p(x py) (DM5)
p(px y) = px py (DM6)
px py = py px. (DM7)

In other words, ⟨M, ·, 1⟩ is a monoid and the axioms (DM4)-(DM7) hold. The axiom system (DM1)-(DM7) will be denoted DM.

The following proposition gives some basic properties of domain monoids.

Proposition 2.3.2. Let ⟨M, ·, p, 1⟩ be a domain monoid and x ∈ M. The following properties hold.

1. p1 = 1.
2. ppx = px.
3. px px = px.

Proof.

1. By (DM3) and (DM4), p1 = p1 1 = 1.
2. By (DM3), (DM6) and proposition 2.3.2.1, ppx = p(px 1) = px p1 = px 1 = px.
3. By (DM6) and (DM4), px px = p(px x) = px.
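Binary relations give a concrete domain monoid: relational composition as ·, the identity relation as 1, and p R = {(x, x) : (x, y) ∈ R for some y}. The following Python sketch (a randomized check on a 4-element set, not a proof; all names hypothetical) tests axioms (DM4)-(DM7) in this model.

```python
import itertools
import random

E = range(4)
PAIRS = list(itertools.product(E, E))    # all possible pairs over E

def comp(R, S):
    """Relational composition R ; S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def dom(R):
    """Domain of R as a partial identity: the points with an outgoing edge."""
    return {(x, x) for (x, _) in R}

random.seed(0)
for _ in range(200):
    R = set(random.sample(PAIRS, random.randrange(len(PAIRS))))
    S = set(random.sample(PAIRS, random.randrange(len(PAIRS))))
    assert comp(dom(R), R) == R                          # (DM4)  px x = x
    assert dom(comp(R, S)) == dom(comp(R, dom(S)))       # (DM5)  p(xy) = p(x py)
    assert dom(comp(dom(R), S)) == comp(dom(R), dom(S))  # (DM6)  p(px y) = px py
    assert comp(dom(R), dom(S)) == comp(dom(S), dom(R))  # (DM7)  px py = py px
```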


Definition 2.3.3. A domain semiring is a structure ⟨S, +, ·, p, 0, 1⟩ such that ⟨S, +, ·, 0, 1⟩ is an idempotent semiring and the operator p : S → S verifies

x + px x = px x (DS1)
p(xy) = p(x py) (DS2)
px + 1 = 1 (DS3)
p0 = 0 (DS4)
p(x + y) = px + py. (DS5)

Since ⟨S, +, ·, p, 0, 1⟩ is idempotent, the axioms (DS1) and (DS3) can be reformulated as follows, by (2.13):

x ≤ px x, (2.40)
px ≤ 1. (2.41)

Hence, we will use the most suitable form, depending on the context. Moreover, the axiom system (2.1)-(2.11) together with (DS1)-(DS5) will be denoted DS. Hence, if we write

DS ⊢ x = y,

that means that the equality x = y can be proved with the axioms of DS. One can prove that the axioms (DM1)-(DM7) hold in any domain semiring.

Theorem 2.3.4. Let ⟨S, +, ·, p, 0, 1⟩ be a domain semiring. Then ⟨S, ·, p, 1⟩ is a domain monoid.

Proof. It remains only to prove (DM4), (DM6) and (DM7). In [DS11], the following properties are proved.

(i) px x = x.
(ii) px py = py px.
(iii) p(px y) = px py.

Which is to say that (DM4), (DM6) and (DM7) are all proved in [DS11].

Hence, if a theorem is provable in any domain monoid, it is also provable in any domain semiring: DM ` X ⇒ DS ` X.

Proposition 2.3.5. Let ⟨S, +, ·, p, 0, 1⟩ be a domain semiring. For all x, y, z ∈ S, the following properties hold.

1. If x ≤ y, then px ≤ py. In other words, the domain operator p is isotone.
2. If px ≤ py, then px py = px.
3. px ≤ py if and only if x ≤ py x.

Proof.

1. Using the definition of ≤ and (DS5),

x ≤ y ⇔ x + y = y ⇒ p(x + y) = py ⇔ px + py = py ⇔ px ≤ py.

2. Assume px ≤ py. First, using (DS3), the isotony of · and (2.6), we have

px py ≤ px 1 = px.

Next, by isotony of · and proposition 2.3.2.3,

px ≤ py ⇒ px px ≤ px py ⇔ px ≤ px py.

Hence, if px ≤ py, then px py = px.

3. Assume px ≤ py. By (DS1), the assumption px ≤ py and isotony of ·,

x ≤ px x ≤ py x.

It remains to prove x ≤ py x ⇒ px ≤ py. First, by (DS3) and isotony of ·,

py px ≤ py. (2.42)

Using proposition 2.3.5.1, (DM6), (2.42), (DS3) and transitivity of ≤,

x ≤ py x ⇒ px ≤ p(py x) ⇔ px ≤ py px ≤ py ⇒ px ≤ py.

2.3.1 Direct sums

Definition 2.3.6 (Direct sums). Let k ≥ 1 and let {X1, . . . , Xk} be a set of k relations. We say that ⟨X1, . . . , Xk⟩ is a direct sum if for all 1 ≤ i, j ≤ k with i ≠ j, the following properties hold:

XiXi` = I, (SD1)
XiXj` = 0, (SD2)
(Σ i | 1 ≤ i ≤ k : Xi`Xi) = I. (SD3)


Example 2.3.7. Let

X1 = [ 1 0 0 ]   and   X2 = [ 0 1 0 ]
                            [ 0 0 1 ].

We show that ⟨X1, X2⟩ is a direct sum. First,

X1X1` = [ 1 ] = I1   and   X2X2` = [ 1 0 ]
                                    [ 0 1 ] = I2.

Thus (SD1) is satisfied. Next,

X1X2` = [ 0 0 ]   and   X2X1` = [ 0 ]
                                 [ 0 ].

This proves that (SD2) is satisfied. Finally, the following shows that ⟨X1, X2⟩ satisfies (SD3):

X1`X1 + X2`X2 = [ 1 0 0 ]   [ 0 0 0 ]   [ 1 0 0 ]
                [ 0 0 0 ] + [ 0 1 0 ] = [ 0 1 0 ]
                [ 0 0 0 ]   [ 0 0 1 ]   [ 0 0 1 ].
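The computations of example 2.3.7 can be replayed mechanically. The sketch below (helper names hypothetical) uses 0/1 matrices over the integers; since no entry of any sum here exceeds 1, integer arithmetic agrees with the Boolean operations.

```python
def mul(A, B):
    """Matrix product over the integers."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

X1 = [[1, 0, 0]]
X2 = [[0, 1, 0],
      [0, 0, 1]]

assert mul(X1, transpose(X1)) == identity(1)           # (SD1) for X1
assert mul(X2, transpose(X2)) == identity(2)           # (SD1) for X2
assert mul(X1, transpose(X2)) == [[0, 0]]              # (SD2)
assert mul(X2, transpose(X1)) == [[0], [0]]            # (SD2)
assert add(mul(transpose(X1), X1),
           mul(transpose(X2), X2)) == identity(3)      # (SD3)
```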

The direct sum of example 2.3.7 is a special case of direct sums of the form ⟨X1, X2⟩ where X1 = [I | 0] and X2 = [0 | I]. The following result shows that such a pair ⟨X1, X2⟩ is necessarily a direct sum.

Proposition 2.3.8.

1. Let m and n be two natural numbers and let X1 = [Im | 0m×n] and X2 = [0n×m | In]. Then the ordered pair ⟨X1, X2⟩ is a direct sum.

2. Let ⟨X1, . . . , Xk⟩ be a direct sum. Then for all 1 ≤ p ≤ k,

I ⊓ (Xp`Xp)‾ = (Σ i | 1 ≤ i ≤ k ∧ i ≠ p : Xi`Xi).

Proof.

1. First, writing stacked blocks with a semicolon,

X1X1` = [Im | 0m×n][Im ; 0n×m] = Im and X2X2` = [0n×m | In][0m×n ; In] = In.

Thus (SD1) is satisfied. Next,

X1X2` = [Im | 0m×n][0m×n ; In] = 0m×n and X2X1` = [0n×m | In][Im ; 0n×m] = 0n×m.

Which proves that (SD2) is also satisfied. Finally,

X1`X1 + X2`X2 = [Im 0m×n ; 0n×m 0n×n] + [0m×m 0m×n ; 0n×m In] = Im+n.

Hence, (SD3) is also satisfied.

2. First,

Xp`Xp (Σ i | 1 ≤ i ≤ k ∧ i ≠ p : Xi`Xi)
=   ⟨ distributivity (2.8) ⟩
Xp`(Σ i | 1 ≤ i ≤ k ∧ i ≠ p : XpXi`Xi)
=   ⟨ by (SD2), XpXi` = 0 if i ≠ p & (2.10) & (2.11) ⟩
0.

Since, by (SD3), we have

Xp`Xp + (Σ i | 1 ≤ i ≤ k ∧ i ≠ p : Xi`Xi) = I,

from the previous derivation and proposition 2.2.9.2, we deduce I ⊓ (Xp`Xp)‾ = (Σ i | 1 ≤ i ≤ k ∧ i ≠ p : Xi`Xi).

Chapter 3

Kleene algebras and extensions

This chapter is a brief introduction to Kleene algebras (KA) and their extensions. Section 3.1 presents a brief history and an axiomatization, and gives the main models of KA; section 3.2 contains the definition and some results of Kleene algebras with tests; finally, section 3.3 presents Kleene algebras with domain and some of their models.

3.1

History and axiomatization

3.1.1 History

Kleene algebras (KA) are named after the 20th-century mathematician and logician Stephen Cole Kleene. In 1956, he posed the problem of axiomatizing the equational theory of regular languages. In 1964, Redko [Red64] proved that no finite set of equational axioms captures this theory. Over the years, several non-equivalent axiomatizations were proposed. Thus, in 1966, Salomaa [Sal66] gave two complete axiomatizations for the theory of regular languages. Unfortunately, these systems are not satisfied by some non-standard models. In a book entirely devoted to Kleene algebras [Con71, Con12], John Conway proposes five possible definitions of these algebras and studies their properties in depth. In his book, Conway uses an infinite summation to define the Kleene star. The most widely used axiomatization today is the one proposed by Dexter Kozen in 1994 [Koz94]. The latter consists of a finite number of equations and equational implications.

3.1.2 Axiomatization

Definition 3.1.1. A Kleene algebra (KA) is a structure ⟨K, +, ·, ∗, 0, 1⟩ such that ⟨K, +, ·, 0, 1⟩ is an idempotent semiring and ∗ : K → K verifies:

1 + aa∗ ≤ a∗ (3.1)

1 + a∗a ≤ a∗ (3.2)

b + ax ≤ x ⇒ a∗b ≤ x (3.3)

b + xa ≤ x ⇒ ba∗≤ x, (3.4)

where ≤ is the natural order on K.

It is possible to prove the isotony of ∗, namely that for all a, b ∈ K,

a ≤ b ⇒ a∗ ≤ b∗.

Definition 3.1.2. Let ⟨K, +, ·, ∗, 0, 1⟩ be a Kleene algebra and a ∈ K. We define aⁿ, for all n ∈ N, inductively:

a⁰ = 1, (3.5)
aⁿ⁺¹ = aⁿa. (3.6)

Another way to axiomatize Kleene algebra is to use the star-continuity axiom.

Definition 3.1.3. A ∗-continuous Kleene algebra (KA∗) is a structure ⟨K, +, ·, ∗, 0, 1⟩ such that ⟨K, +, ·, 0, 1⟩ is an idempotent semiring and ∗ : K → K is defined by

ab∗c = (Σ n | n ∈ N : abⁿc), (3.7)

where Σ refers to the supremum with respect to the partial order ≤ defined on K. The axiom (3.7) is called the star-continuity axiom. It should be noted that this axiom imposes the existence of a supremum for all sets of the form {abⁿc : n ∈ N}, for all a, b, c ∈ K. By taking a = c = 1, we obtain a definition of b∗, as opposed to the axioms (3.1)-(3.4), which give a description of the star.

Star-continuous Kleene algebras are Kleene algebras in the sense of definition 3.1.1, but the converse is false because there are Kleene algebras that are not star-continuous [Koz90]. However, KA and KA∗ have the same equational theory [Koz94].

3.1.3 Matrices over a Kleene algebra

Let ⟨K, +, ·, ∗, 0, 1⟩ be a Kleene algebra and ⟨M, K⟩ be the set of matrices over K. One can extend the operators +, · and ∗ to ⟨M, K⟩ in a natural way, as follows [Koz94]. Let M, N ∈ ⟨M, K⟩ be two matrices of shape m1 × m2 and n1 × n2 respectively, where m1, m2, n1, n2 ∈ N. The matrix M + N is defined if and only if m1 = n1 and m2 = n2 and is given by

(M + N)[i, j] = M[i, j] + N[i, j].

The matrix MN is defined if and only if m2 = n1 and is given by

(MN)[i, j] = (Σ k | 1 ≤ k ≤ m2 : M[i, k]N[k, j]).

The matrix M∗ is defined if and only if M is a square matrix. However, the definition of M∗ is a bit complicated. If M is a matrix of shape 0 × 0, then M∗ =def M. When the shape of the matrix is 1 × 1, we have

[ a ]∗ =def [ a∗ ]

for a ∈ K. The general definition is inductive. First, it should be noted that any matrix M : m × m where m ≥ 2 can be decomposed as follows:

M = [ A B ]
    [ C D ]

where A, B, C and D are matrices and A and D are square and not empty. Denote F = A + BD∗C and define [Koz94]:

M∗ =def [ F∗        F∗BD∗           ]
        [ D∗CF∗     D∗ + D∗CF∗BD∗ ]. (3.8)

The definition (3.8) looks strange at first glance. To understand it better, take the example of a matrix M of shape 2 × 2 defined by

M = [ a b ]
    [ c d ],

where a, b, c, d ∈ K. By (3.8),

M∗ = [ f∗        f∗bd∗           ]
     [ d∗cf∗     d∗ + d∗cf∗bd∗ ],

where f = a + bd∗c. Now, consider the two-state finite automaton A whose transition matrix is M.

The transition diagram of A is:

[Diagram: states 1 and 2; a is a self-loop on state 1, b goes from state 1 to state 2, c goes from state 2 to state 1, and d is a self-loop on state 2.]

The automaton A helps to understand that the expression given for M∗[i, j] is the regular expression corresponding to the set of words read by going from state i to state j, for i, j ∈ {1, 2}.

3.1.4 Models


Example 3.1.4 (Regular languages). Let Σ be a finite alphabet and let Σ∗ be the set of finite words over Σ. The structure LanΣ = ⟨2^Σ∗, ∪, ·, ∗, ∅, {ε}⟩ is a ∗-continuous Kleene algebra, where 2^Σ∗ is the set of subsets of Σ∗, ∪ is set union, · is concatenation extended word by word to sets, ∅ is the empty set and ε the empty word. In this algebra, the star is defined as follows: for all L ⊆ Σ∗,

L∗ = {w1w2 . . . wn : n ≥ 0, wi ∈ L}.

Example 3.1.5 (Matrix languages). Let Σ be a finite alphabet and let Σ∗ be the set of words over Σ. The set ⟨M, 2^Σ∗⟩ of matrices over 2^Σ∗ is an idempotent semiring. This structure can be extended to a Kleene algebra using the definition of ∗ given in section 3.1.3.

Example 3.1.6 (Relational models). Let E be a set and 2^{E×E} be the set of binary relations over E. The algebra RelE = ⟨2^{E×E}, ∪, ◦, ∗, ∅, I⟩ is a Kleene algebra, where ∪ is set union, ∗ is reflexive transitive closure and I is the identity relation, namely I = {(x, x) : x ∈ E}. In that algebra, the composition ◦ is relational composition:

R1 ◦ R2 = {(x, z) : (∃y | y ∈ E : (x, y) ∈ R1 ∧ (y, z) ∈ R2)}.

Example 3.1.7 (Path models). Let Σ be a set of states (or nodes). Then Σ∗ can be seen as the set of all possible paths in the complete graph whose nodes are exactly those in Σ. In that case, ε is the empty path and the path-fusion composition ⋈ can be defined as follows for a, b ∈ Σ and x, y ∈ Σ∗:

xa ⋈ by = xay if a = b, and is undefined otherwise.

Moreover, ε ⋈ ε = ε and the expressions

ε ⋈ xa and xa ⋈ ε

are undefined. For all A, B ∈ 2^Σ∗, the composition A ⋈ B is defined as follows:

A ⋈ B = {x ⋈ y : x ∈ A and y ∈ B}.

Moreover, A∗ = (∪ n | n ≥ 0 : Aⁿ), where A⁰ = {ε} and Aⁿ⁺¹ = Aⁿ ⋈ A for all n ∈ N. The algebra CheΣ = ⟨2^Σ∗, ∪, ⋈, ∗, ∅, {ε}⟩ is a Kleene algebra.

The following proposition gives some properties of KA.

Proposition 3.1.8 ([Koz94]). Let ⟨K, +, ·, ∗, 0, 1⟩ be a Kleene algebra and a, b, x ∈ K. The following properties hold.

1. 1 + aa∗ = a∗.
2. 1 + a∗a = a∗.
3. xa ≤ bx ⇒ xa∗ ≤ b∗x.

Proof.

1. It is enough to show a∗ ≤ 1 + aa∗, since 1 + aa∗ ≤ a∗ is axiom (3.1). First, note that

1 + a(1 + aa∗) ≤ 1 + aa∗.

Indeed, by (3.1) and isotony,

1 + aa∗ ≤ a∗ ⇒ a(1 + aa∗) ≤ aa∗ ⇒ 1 + a(1 + aa∗) ≤ 1 + aa∗.

By (3.3) and identity (2.6), this implies a∗ ≤ 1 + aa∗.

2. It is enough to prove a∗ ≤ 1 + a∗a. The proof is similar to the previous one. First, notice that by (3.2) and isotony,

1 + a∗a ≤ a∗ ⇒ (1 + a∗a)a ≤ a∗a ⇒ 1 + (1 + a∗a)a ≤ 1 + a∗a.

Hence, by (3.4) and identity (2.5), we deduce a∗ ≤ 1 + a∗a.

3. First, using distributivity (2.9), the idempotence of + (2.12) and the definition of ≤ (2.13),

x ≤ (1 + b∗b)x ⇔ x ≤ x + b∗bx ⇔ true.

Thus, we can use

x ≤ (1 + b∗b)x (3.9)

as a theorem. The following derivation shows xa ≤ bx ⇒ xa∗ ≤ b∗x.

xa ≤ bx
⇒   ⟨ isotony of · ⟩
b∗xa ≤ b∗bx
⇒   ⟨ idempotence of + & (2.13) ⟩
b∗xa ≤ b∗bx ≤ x + b∗bx
⇒   ⟨ transitivity of ≤ & distributivity (2.9) ⟩
b∗xa ≤ (1 + b∗b)x
⇒   ⟨ by (3.9) ⟩
x ≤ (1 + b∗b)x ∧ b∗xa ≤ (1 + b∗b)x
⇔   ⟨ proposition 3.1.8.2 ⟩
x ≤ b∗x ∧ b∗xa ≤ b∗x
⇔   ⟨ proposition 2.2.5.1 ⟩
x + b∗xa ≤ b∗x
⇒   ⟨ by (3.4) ⟩
xa∗ ≤ b∗x

Now we define the notion of nilpotent element in a semiring.

Definition 3.1.9. Let ⟨K, +, ·, 0, 1⟩ be a semiring and x ∈ K. We say x is nilpotent if there exists n ∈ N such that xⁿ = 0. In this case, we define the order of x as the least n such that xⁿ = 0.

The appeal of definition3.1.9is illustrated by the following theorem.

Theorem 3.1.10. Let hK, +, ·,∗, 0, 1i be a Kleene algebra and M ∈ K an idempotent element of order n. Then,

M∗= (P i | 0 ≤ i ≤ n : Mi). (3.10)

Theorem 3.1.10 gives an effective way of calculating M∗ from M and any upper bound p of the order of M. Indeed, since M^p = 0, we can replace n by any integer p ≥ n in (3.10). Hence, in the following chapters, when we manipulate matrices seen as nilpotent elements of a Kleene algebra, we can use (3.10) instead of the less convenient (3.8).
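The computation prescribed by (3.10) is easy to mechanize. The sketch below is our own illustration (not part of the thesis): it evaluates M∗ for a nilpotent 0/1 matrix over the Boolean semiring, using the matrix dimension as the upper bound p; the name star_nilpotent is an assumption.

```python
import numpy as np

def star_nilpotent(M):
    """M* = (sum i | 0 <= i <= n : M^i) as in (3.10), valid when M is nilpotent.

    Over the Boolean semiring on {0, 1}, a strictly upper triangular 0/1
    matrix is nilpotent of order at most its dimension, so the dimension
    serves as the upper bound p of the order.
    """
    n = M.shape[0]
    acc = np.eye(n, dtype=int)             # M^0 = I
    power = np.eye(n, dtype=int)
    for _ in range(n):
        power = np.minimum(power @ M, 1)   # next power, clipped to {0, 1}
        acc = np.minimum(acc + power, 1)   # Boolean sum
    return acc

# Adjacency matrix of the path 0 -> 1 -> 2: strictly upper triangular,
# hence nilpotent of order 3.
M = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
S = star_nilpotent(M)   # S[i, j] = 1 iff there is a (possibly empty) path i -> j
```

This is exactly the reflexive-transitive-closure reading of M∗ that the matrix chapters rely on.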

In order to prove theorem 3.1.10, we prove the following lemma.

Lemma 3.1.11. Let ⟨K, +, ·, ∗, 0, 1⟩ be a Kleene algebra and x ∈ K. Then, for all k ∈ N, x∗ = (Σ i | 0 ≤ i ≤ k : x^i) + x^{k+1}x∗.

Proof. Let x ∈ K. We perform a proof by induction on k. The induction predicate is P, with

P(k) : x∗ = (Σ i | 0 ≤ i ≤ k : x^i) + x^{k+1}x∗.

We have to prove (∀k | k ∈ N : P(k)).

• Base step: k = 0. By the definition of P, P(0) is

x∗ = x^0 + x^{0+1}x∗,

which is equivalent to x∗ = 1 + xx∗, which is true by proposition 3.1.8.1.

• Induction step. Assume P(k − 1) for some k ≥ 1 and prove P(k).

x∗
= ⟨ Hypothesis P(k − 1) ⟩
(Σ i | 0 ≤ i ≤ k − 1 : x^i) + x^k x∗
= ⟨ Proposition 3.1.8.1 ⟩
(Σ i | 0 ≤ i ≤ k − 1 : x^i) + x^k(1 + xx∗)
= ⟨ Distributivity (2.8) ⟩
(Σ i | 0 ≤ i ≤ k − 1 : x^i) + x^k + x^{k+1}x∗
= ⟨ Sum ⟩
(Σ i | 0 ≤ i ≤ k : x^i) + x^{k+1}x∗  □

With lemma 3.1.11, the proof of theorem 3.1.10 is simple.

Proof (of theorem 3.1.10). We assume that M ∈ K is nilpotent of order n. By lemma 3.1.11,

M∗ = (Σ i | 0 ≤ i ≤ n : M^i) + M^{n+1}M∗.

However, since M is of order n, we have M^{n+1} = 0. Because 0 is the zero of multiplication (2.10),

M∗ = (Σ i | 0 ≤ i ≤ n : M^i). □

Definition 3.1.12. Let Σ be a finite alphabet which does not contain 0 and 1. The set RegΣ of regular expressions over Σ is the least set such that

• 0, 1 ∈ RegΣ,

• Σ ⊆ RegΣ,

• s, t ∈ RegΣ ⇒ s∗, s + t, st ∈ RegΣ.

The model of regular languages, or standard model of regular expressions, is the morphism L : RegΣ → 2^Σ∗ defined as follows:

• L(0) = ∅,

• L(1) = {ε},

• L(a) = {a}, for all a ∈ Σ,

• L(x + y) = L(x) ∪ L(y),

• L(xy) = L(x)L(y),

• L(x∗) = (⋃ n | n ∈ N : L(x)^n), where L(x)^0 = {ε} and L(x)^{n+1} = L(x)^n L(x).

In [Koz94], Kozen proves that the model of regular languages is complete for KA.
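Definition 3.1.12 can be given an executable reading. Since L(x∗) is infinite as soon as L(x) contains a nonempty word, the sketch below (our illustration; the tuple encoding of RegΣ is an assumption) computes only the words of L(x) up to a given length bound.

```python
def lang(x, maxlen=4):
    """Words of L(x) of length <= maxlen, following definition 3.1.12.

    Regular expressions are encoded as tuples (an assumption of this
    sketch): ('0',), ('1',), ('sym', a), ('+', x, y), ('.', x, y), ('*', x).
    """
    tag = x[0]
    if tag == '0':
        return set()
    if tag == '1':
        return {''}                      # {epsilon}
    if tag == 'sym':
        return {x[1]}
    if tag == '+':
        return lang(x[1], maxlen) | lang(x[2], maxlen)
    if tag == '.':
        A, B = lang(x[1], maxlen), lang(x[2], maxlen)
        return {u + v for u in A for v in B if len(u + v) <= maxlen}
    if tag == '*':                       # union of the powers L(x)^n
        A = lang(x[1], maxlen)
        result, frontier = {''}, {''}
        while frontier:                  # L(x)^{n+1} = L(x)^n L(x)
            frontier = {u + v for u in frontier for v in A
                        if len(u + v) <= maxlen} - result
            result |= frontier
        return result

# (a + b)*, truncated at length 2
e = ('*', ('+', ('sym', 'a'), ('sym', 'b')))
words = lang(e, 2)
```

The length bound plays the role that ∗-continuity plays in the algebra: every word of L(x∗) is already in some finite power L(x)^n.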

3.2

Kleene algebras with tests

Kleene algebra with tests (KAT) was introduced by Kozen [Koz97] and has seen many developments at both the theoretical and application levels. The idea is to enrich Kleene algebras with elements called tests. In a Kleene algebra with tests, the set of tests is a Boolean algebra.

The definitions and results presented in this section come from [Koz97] and [KS96].

3.2.1 Definition of Kleene algebra with tests

Definition 3.2.1. A Kleene algebra with tests is a structure ⟨K, B, +, ·, ∗, ¯, 0, 1⟩ such that

1. ⟨K, +, ·, ∗, 0, 1⟩ is a Kleene algebra,

2. ⟨B, +, ·, ¯, 0, 1⟩ is a Boolean algebra and

3. B ⊆ K.

The operators +, · and ∗ are defined on K, and ¯ is defined on B only.

3.2.2 Guarded Strings

The standard model for KAT is based on structures called guarded strings. These strings can be seen as extensions of classical strings over an alphabet Σ. Given that there are two algebras K and B, the terms considered in the language of KAT are richer than regular expressions.

In the remainder of this section, we assume that we have two finite and disjoint alphabets Σ and B. Members of Σ are called primitive actions and those of B primitive tests. We also assume that 0 and 1 belong neither to Σ nor to B.

The set TB of Boolean KAT-terms is the least set such that

• 0, 1 ∈ TB,

• B ⊆ TB,

• s, t ∈ TB ⇒ s + t, st, s̄ ∈ TB.

The set TΣ,B of KAT-terms is the least set such that

• TB ⊆ TΣ,B,

• Σ ⊆ TΣ,B,

• x, y ∈ TΣ,B ⇒ x + y, xy, x∗∈ TΣ,B.

Let ⟨K, B, +, ·, ∗, ¯, 0, 1⟩ be a Kleene algebra with tests. An interpretation is any morphism I : TΣ,B → K sending the members of TB to B.

Before defining guarded strings, we need a preliminary definition.

Definition 3.2.2. An atom [KS96] over B = {p1, . . . , pk} is a string x1 . . . xk such that xi ∈ {pi, p̄i} for each 1 ≤ i ≤ k.

Hence, if α is an atom over B, then for each 1 ≤ i ≤ k, α contains exactly one member of the set {pi, p̄i}. When α contains pi, we say that pi appears positive in α.

The set of atoms over B is denoted AB. Note that this definition assumes an arbitrary but fixed order on the elements of B. Note also that the set AB is finite, of cardinality 2^k, where k = |B|.

Example 3.2.3. Let B = {p1, p2, p3}. We have:

AB = {p1p2p3, p1p2p̄3, p1p̄2p3, p1p̄2p̄3, p̄1p2p3, p̄1p2p̄3, p̄1p̄2p3, p̄1p̄2p̄3}.

Some counterexamples: p1p2p2 is not an atom because neither p3 nor p̄3 appears; p1p3p2 is not an atom because the elements do not appear in the right order; p1p2p4 is not an atom because p4 ∉ B.
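Since AB is simply the set of all 2^k sign assignments over the ordered list B, it is easy to enumerate. The sketch below is our own illustration; the prefix '~' stands in for the complement bar, a notational assumption.

```python
from itertools import product

def atoms(B):
    """All atoms over the ordered list B of primitive tests, as strings.

    Each atom picks, in order, either p or ~p for every p in B
    ('~p' stands in for the bar of definition 3.2.2).
    """
    return [''.join(lits) for lits in product(*[(p, '~' + p) for p in B])]

A = atoms(['p1', 'p2', 'p3'])   # 2^3 = 8 atoms
```

The fixed order over B guarantees that two atoms are equal exactly when they make the same sign choices.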

Definition 3.2.4. We call guarded string [KS96] any element of the language AB(ΣAB)∗. In other words, it is a string of the form

α0a1α1 . . . anαn,

where n ≥ 0, ai ∈ Σ for all 1 ≤ i ≤ n and αi ∈ AB for all 0 ≤ i ≤ n. When n = 0, the guarded string is an atom α0. The set of guarded strings over Σ and B is denoted SG and is obviously infinite if Σ ≠ ∅.

The following example illustrates definition 3.2.4.

Example 3.2.5. Let Σ = {a, b, c, d} and B = {p1, p2, p3}. Here are some examples of guarded

strings:

p1p2p3dp1p2p3dp1p2p3, p1p2p3, p1p2p3ap1p2p3cp1p2p3bp1p2p3.

And here are some counterexamples, w1, w2, w3 and w4. The problem with w1 is that it ends with an element of Σ and not with an atom; w2 contains two atoms that are not separated by a primitive action; w3 does not even contain an atom; and w4 contains two elements of Σ that are not separated by an atom.

The analogue of concatenation for guarded strings is the coalesced product.

Definition 3.2.6. The coalesced product of two guarded strings, denoted ⋄, is defined as follows:

xα ⋄ βy def= xαy, if α = β, and is undefined otherwise.

In other words, when the last atom of the first string is identical to the first atom of the second, the coalesced product concatenates the two strings and removes the repeated atom. When these two atoms are distinct, the product is undefined. This is an obvious difference with concatenation, which is defined for any two strings.

The coalesced product ⋄ can be generalized to sets of guarded strings. Let A, B ⊆ SG. Then

A ⋄ B def= {w1 ⋄ w2 : w1 ∈ A and w2 ∈ B}.

Note that for all A, B ⊆ SG, the product A ⋄ B is defined. Indeed, if w1 ⋄ w2 is undefined for all w1 ∈ A and w2 ∈ B, then A ⋄ B = ∅.
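The coalesced product and its extension to sets translate directly from the definition. The sketch below is our own illustration; modelling a guarded string as a tuple alternating atoms and primitive actions is an assumption of the sketch.

```python
def coalesce(w1, w2):
    """Coalesced product of two guarded strings, or None when undefined.

    A guarded string is modelled as a tuple (alpha0, a1, alpha1, ..., an, alphan)
    beginning and ending with an atom.
    """
    if w1[-1] != w2[0]:
        return None              # last atom of w1 differs from first atom of w2
    return w1 + w2[1:]           # concatenate and drop the repeated atom

def coalesce_sets(A, B):
    """Pointwise extension to sets: undefined products contribute nothing."""
    return {coalesce(u, v) for u in A for v in B} - {None}

alpha, beta = 'p1p2', 'p1~p2'    # two atoms over B = {p1, p2} (bar written ~)
w = (alpha, 'a', beta)
```

Filtering out None is exactly the observation above: the set-level product is always defined, and is ∅ when no pair of strings coalesces.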

The standard interpretation of KAT-terms, denoted H, sends each element of TΣ,B to a set of guarded strings.

Definition 3.2.7. The interpretation H : TΣ,B → 2^SG is defined inductively:

• H(0) = ∅.

• H(1) = AB, the set of atoms over B.

• H(p) = {α ∈ AB : p appears positive in α}, for all p ∈ B.

• H(a) = {αaβ : α, β ∈ AB}, for all a ∈ Σ.

• H(x + y) = H(x) ∪ H(y), for all x, y ∈ TΣ,B.

• H(xy) = H(x) ⋄ H(y), for all x, y ∈ TΣ,B.

• H(s̄) = AB \ H(s), for all s ∈ TB.

• H(x∗) = (⋃ n | n ∈ N : H(x)^n), where H(x)^0 = AB and H(x)^{n+1} = H(x)^n ⋄ H(x).


Let us see some examples of the construction of H(x) for x ∈ TΣ,B.

Example 3.2.8. Assume Σ = {a, b, c, d} and B = {p1, p2, p3}. Then,

H(p1) = {p1p2p3, p1p2p̄3, p1p̄2p3, p1p̄2p̄3},
H(p2) = {p1p2p3, p1p2p̄3, p̄1p2p3, p̄1p2p̄3},
H(p̄2) = AB \ H(p2) = {p1p̄2p3, p1p̄2p̄3, p̄1p̄2p3, p̄1p̄2p̄3},
H(p3) = {p1p2p3, p1p̄2p3, p̄1p2p3, p̄1p̄2p3}.

Consider the KAT-terms p1p3 and p3p2ap3p2. By the inductive definition 3.2.7,

H(p1p3) = {p1p2p3, p1p̄2p3}

and

H(p3p2ap3p2) = {p1p2p3ap1p2p3, p1p2p3ap̄1p2p3, p̄1p2p3ap1p2p3, p̄1p2p3ap̄1p2p3}.
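Sets like the ones above can be cross-checked mechanically. The sketch below is our own illustration (with '~' again standing in for the bar): it computes H for primitive tests and verifies that H(p1p3) is the intersection H(p1) ∩ H(p3), since the coalesced product of atom sets is intersection.

```python
from itertools import product

def atoms(B):
    """All atoms over the ordered list B, with '~' standing in for the bar."""
    return {''.join(t) for t in product(*[(p, '~' + p) for p in B])}

def H_test(p, B):
    """H(p) for a primitive test p: atoms in which p appears positive.

    With this B, p appears negative in an atom exactly when the literal
    '~p' occurs in it, so a substring check suffices.
    """
    return {alpha for alpha in atoms(B) if '~' + p not in alpha}

B = ['p1', 'p2', 'p3']
h13 = H_test('p1', B) & H_test('p3', B)   # H of the Boolean term p1p3
```

H(p̄) = AB \ H(p) then comes for free as a set complement relative to atoms(B).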

3.2.3 Completeness

The image of H is a ∗-continuous Kleene algebra with tests, in which the powerset 2^AB of the set of atoms is the corresponding Boolean algebra. In [KS96], the authors prove that for all x, y ∈ TΣ,B, if H(x) = H(y), then I(x) = I(y) for any KAT∗-interpretation I. In other words, H is a complete model for ∗-continuous Kleene algebras with tests.

In order to prove that this model is also complete for KAT, the authors of [KS96] show that KAT and KAT∗ have the same equational theory. The proof of this result is based on the following key lemma.

Lemma 3.2.9 ([KS96]). Let x ∈ TΣ,B. There exists x̂ ∈ TΣ,B such that

(i) KAT ⊢ x = x̂,
(ii) L(x̂) = H(x̂),

where L is the standard interpretation of regular expressions. In other words, x̂ is a regular expression over some finite alphabet, and the sets L(x̂) and H(x̂) are the same.

It is clear that, to prove this lemma, we can assume that there is no occurrence of 0 and 1 in x. The idea is to first notice that each KAT-term x which does not contain 0 and 1 can be transformed into another KAT-term x′ such that KAT ⊢ x = x′ and such that the ¯ operator only applies to primitive tests in x′. In other words, x′ is a regular expression over the alphabet Σ ∪ B ∪ B̄, where B̄ = {p̄ : p ∈ B} is a disjoint copy of B. Then, an inductive construction allows passing from x′ to x̂.

Once we have lemma 3.2.9, we can easily show that KAT and KAT∗ have the same equational theory.

Theorem 3.2.10 ([KS96]). Let x, y ∈ TΣ,B. Then,

KAT∗ ⊢ x = y ⇔ KAT ⊢ x = y.

Proof. It is clear that KAT ⊢ x = y ⇒ KAT∗ ⊢ x = y, since any ∗-continuous Kleene algebra with tests is a Kleene algebra with tests. It remains to prove the implication in the other direction. Let x̂ and ŷ satisfy the properties of lemma 3.2.9 for x and y respectively.

KAT∗ ⊢ x = y
⇔ ⟨ KAT ⊢ x = x̂ and KAT ⊢ y = ŷ ⟩
KAT∗ ⊢ x̂ = ŷ
⇔ ⟨ The model H is complete for KAT∗ ⟩
H(x̂) = H(ŷ)
⇔ ⟨ By lemma 3.2.9, L(x̂) = H(x̂) and L(ŷ) = H(ŷ) ⟩
L(x̂) = L(ŷ)
⇔ ⟨ The model L is complete for KA (see [Koz94]) ⟩
KA ⊢ x̂ = ŷ
⇒ ⟨ Any Kleene algebra with tests is a Kleene algebra ⟩
KAT ⊢ x̂ = ŷ
⇔ ⟨ KAT ⊢ x = x̂ and KAT ⊢ y = ŷ ⟩
KAT ⊢ x = y  □

Another important result of [KS96] is the completeness of relational models.

3.2.4 Decidability

The decidability of the equational theory of KAT, as presented in [KS96], is based on a reduction to propositional dynamic logic [FL79]. This reduction puts the theory in the complexity class EXPTIME. In another article [CKS96], a PSPACE algorithm is presented. Since KA is PSPACE-complete, one can deduce that KAT is also PSPACE-complete.

3.3

Kleene algebras with domain

3.3.1 History and axiomatizations

Kleene algebras with domain were introduced in [DMS06]. In this article, the authors start with Kleene algebras with tests and add the domain operator. Furthermore, in [DS11], an internal axiomatization of Kleene algebras with domain is presented. This approach is more general, since the domain operator is defined on any semiring. This is the approach we will use.

3.3.2 Definition

A Kleene algebra with domain (KAD) [DS11] is a structure ⟨K, +, ·, ∗, 0, 1, p⟩ such that

1. ⟨K, +, ·, ∗, 0, 1⟩ is a Kleene algebra and

2. ⟨K, +, ·, p, 0, 1⟩ is a semiring with domain.

When ⟨K, +, ·, ∗, 0, 1⟩ is a ∗-continuous Kleene algebra, we say ⟨K, +, ·, ∗, 0, 1, p⟩ is a ∗-continuous Kleene algebra with domain.

3.3.3 Models

We present some examples of Kleene algebras with domain.

Example 3.3.1. Let Σ be a finite alphabet. From example 3.1.4, the structure LanΣ = ⟨2^Σ∗, ∪, ·, ∗, ∅, {ε}⟩ is a ∗-continuous Kleene algebra. It is possible to extend LanΣ to a Kleene algebra with domain by defining, for all A ⊆ Σ∗,

pA = ∅, if A = ∅, and pA = {ε}, otherwise.

Example 3.3.2. Let E be a set and RelE = ⟨2^(E×E), ∪, ◦, ∗, ∅, I⟩ be the algebra defined in example 3.1.6. Defining, for all R ⊆ E × E,

pR = {(x, x) : (∃y | y ∈ E : (x, y) ∈ R)},

the structure RelE = ⟨2^(E×E), ∪, ◦, ∗, p, ∅, I⟩ is a Kleene algebra with domain.
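The relational domain of example 3.3.2 is immediate to compute. The sketch below is ours, with relations modelled as sets of pairs; the final assertion checks the standard domain law (pR)R = R on this instance.

```python
def domain(R):
    """pR = {(x, x) : (exists y : (x, y) in R)}, as in example 3.3.2."""
    return {(x, x) for (x, _) in R}

def compose(R, S):
    """Relational composition, the multiplication of RelE."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

R = {(1, 2), (1, 3), (4, 4)}
S = {(2, 5)}
```

Note that pR is a partial identity, i.e., a test below the identity relation I.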

Example 3.3.3. In example 3.1.7, we presented the Kleene algebra CheΣ = ⟨2^Σ∗, ∪, 1, ∗, ∅, {ε}⟩. Defining, for all w ∈ Σ∗,

pw = ε, if w = ε,

and, for all A ⊆ Σ∗,

pA = {px : x ∈ A},

the structure obtained is a Kleene algebra with domain.

Chapter 4

Trees with a top

In this chapter, we introduce trees with a top. These are the tree structures used in the completeness proofs of chapter 5. Trees with a top are trees (in the sense of graph theory) whose edges are labeled with elements of a finite alphabet. In this chapter, we present the inductive construction of trees with a top and some of their properties. In the first section, we define simple terms over a finite alphabet and define functions on these terms; then, in section 4.2, we present the inductive construction of trees with a top from the terms; finally, in section 4.3, we define the notion of simulation for trees with a top and prove some results.

4.1

Alphabet and terms

Definition 4.1.1. Let Σ be a finite alphabet containing neither 0 nor 1. The set MΣ of simple terms over Σ is the smallest set such that

• 1 ∈ MΣ,

• Σ ⊆ MΣ,

• x, y ∈ MΣ ⇒ px, xy ∈ MΣ.

The set test is the subset of MΣ defined inductively as follows:

• 1 ∈ test,

• x ∈ MΣ ⇒ px ∈ test,

• x, y ∈ test ⇒ xy ∈ test.

The elements of the set test are called tests. We call monogenic term any simple term of the form ax or p(ax), where a ∈ Σ and x ∈ MΣ. In addition, the monogenic terms of the form p(ax) are called monogenic tests. Moreover, we define the set DΣ of DS-terms as the smallest set such that

• 0, 1 ∈ DΣ,

• Σ ⊆ DΣ,

• x, y ∈ DΣ ⇒ px, xy, x + y ∈ DΣ.

Finally, the KAD-terms are the elements of the smallest set TΣ that satisfies the following properties:

• 0, 1 ∈ TΣ,

• Σ ⊆ TΣ,

• x, y ∈ TΣ ⇒ px, xy, x + y, x∗ ∈ TΣ.

Thus, the set of simple terms MΣ contains the KAD-terms in which + and ∗ do not appear, whereas DΣ contains the KAD-terms in which ∗ does not appear.

Remark 4.1.2. When manipulating terms, we will reason modulo the axioms of domain semirings, interpreting the symbols 0 and 1 as the constants 0 and 1, respectively. Since the goal is to prove the equality of terms modulo the axioms of DS, we can use these axioms in our reasoning.

As far as matrices are concerned, according to [DMS06, Example 2.8], if ⟨S, +, ·, 0, 1⟩ is an idempotent semiring, then the set of matrices over S is an idempotent semiring. So, DS axioms can be used on matrices as well.

We now define some functions that will be used later.

Definition 4.1.3. The function ε : MΣ → {0, 1} is defined as follows:

• ε(1) = 1,

• ε(a) = 0, for all a ∈ Σ,

• ε(ps) = 1, for all s ∈ MΣ,

• ε(st) = ε(s)ε(t), for all s, t ∈ MΣ.

Then we define |s| as the number of occurrences of elements of Σ in a simple term s. More formally, |s| is defined inductively as follows:

(38)

• |1| = 0,

• |a| = 1, for all a ∈ Σ,

• |ps| = |s|, for all s ∈ MΣ,

• |st| = |s| + |t|, for all s, t ∈ MΣ.

Remark 4.1.4. Let s be a simple term. If s ∈ test, then ε(s) = 1.

Remark 4.1.4 is easy to check using the inductive definition 4.1.1 of test. Indeed, by definition, ε(1) = ε(ps) = 1 for all s ∈ MΣ. Finally, if s, t ∈ test and ε(s) = ε(t) = 1, then ε(st) = ε(s)ε(t) = 1 · 1 = 1. □
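The functions ε and |·| of definition 4.1.3 can be sketched directly on a tuple encoding of simple terms. The encoding below is our own assumption, not part of the thesis.

```python
def eps(s):
    """epsilon(s) of definition 4.1.3 on terms encoded as tuples:
    ('1',), ('sym', a), ('p', x) for px, ('.', x, y) for xy."""
    tag = s[0]
    if tag == '1' or tag == 'p':
        return 1
    if tag == 'sym':
        return 0
    return eps(s[1]) * eps(s[2])          # tag == '.'

def size(s):
    """|s|: number of occurrences of letters of Sigma in s."""
    tag = s[0]
    if tag == '1':
        return 0
    if tag == 'sym':
        return 1
    if tag == 'p':
        return size(s[1])
    return size(s[1]) + size(s[2])        # tag == '.'

# s = p(ab) c pd, the running term of example 4.2.2
ab = ('.', ('sym', 'a'), ('sym', 'b'))
s = ('.', ('.', ('p', ab), ('sym', 'c')), ('p', ('sym', 'd')))
```

As remark 4.1.4 predicts, eps returns 1 on every test, and size(s) gives the m used in the tree T(s) of section 4.2.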

We now define a function supp, called support. In [KTW15], the authors define and study a function similar to this one. However, we cannot use their results, because they use the Hadamard product instead of the classical matrix product.

Definition 4.1.5. Let x, y ∈ DΣ be two DS-terms in which p does not appear. We define:

supp(0) def= 0, (4.1)

supp(1) def= 1, (4.2)

supp(a) def= 1, (4.3)

supp(x + y) def= supp(x) + supp(y), (4.4)

supp(xy) def= supp(x) supp(y). (4.5)

Let M : m × n be a matrix whose entries are DS-terms in which p does not appear. Then, supp(M) is the matrix of the same shape as M defined as follows:

(supp(M))[i, j] def= supp(M[i, j]), (4.6)

for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Example 4.1.6. Let Σ = {a, b, c}. If

M = [a 0 b ; 0 1 c],

then

supp(M) = [1 0 1 ; 0 1 1].
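For adjacency matrices, whose entries lie in Σ ∪ {0, 1}, the support of definition 4.1.5 reduces to an entrywise replacement, which the following sketch (our own illustration) implements:

```python
def supp_entry(x):
    """supp on an entry in Sigma ∪ {'0', '1'}: 0 stays 0, everything else maps to 1."""
    return '0' if x == '0' else '1'

def supp(M):
    """Entrywise support of a matrix, as in (4.6)."""
    return [[supp_entry(x) for x in row] for row in M]

M = [['a', '0', 'b'],
     ['0', '1', 'c']]          # the matrix of example 4.1.6
```

The idempotence supp(supp(M)) = supp(M) of proposition 4.1.7.5 is visible directly: a binary matrix is its own support.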

The following proposition gives some properties of the function supp.

Proposition 4.1.7. Let X1 and X2 be two matrices whose entries are DS-terms with no occurrence of p. The following properties hold.

1. If X1 is a binary matrix, then supp(X1) = X1.

2. X1 ≤ X2 ⇒ supp(X1) ≤ supp(X2). In other words, the function supp is isotone on matrices.

3. supp(X1 + X2) = supp(X1) + supp(X2).

4. supp(X1X2) = supp(X1) supp(X2).

5. supp(supp(X1)) = supp(X1).

Proof.

1. It is enough to notice that, by definition 4.1.5, if x ∈ {0, 1}, then supp(x) = x. Thus, if X1 : m × n is a matrix such that X1[i, j] ∈ {0, 1} for all 1 ≤ i ≤ m and 1 ≤ j ≤ n, then (supp(X1))[i, j] = X1[i, j] and hence supp(X1) = X1.

2. First, we prove that for all x, y ∈ Σ ∪ {0, 1}, we have x ≤ y ⇒ supp(x) ≤ supp(y). By definition 4.1.5 and the hypothesis x ≤ y, which is equivalent to x + y = y,

supp(x) + supp(y) = supp(x + y) = supp(y).

Hence, supp(x) ≤ supp(y). Now, consider two matrices X1 and X2 of shape m × n such that X1 ≤ X2. By definition, this is equivalent to X1[i, j] ≤ X2[i, j] for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. By what we just proved, this implies supp(X1[i, j]) ≤ supp(X2[i, j]). Thus, (supp(X1))[i, j] ≤ (supp(X2))[i, j]. We have hence proved that X1 ≤ X2 ⇒ supp(X1) ≤ supp(X2).

3. Let X1 : m × n, X2 : m × n and m, n ≥ 1. For all 1 ≤ i ≤ m and 1 ≤ j ≤ n,

(supp(X1 + X2))[i, j]
= ⟨ (4.6) ⟩
supp((X1 + X2)[i, j])
= ⟨ Matrix addition ⟩
supp(X1[i, j] + X2[i, j])
= ⟨ (4.4) ⟩
supp(X1[i, j]) + supp(X2[i, j])
= ⟨ (4.6) ⟩
(supp(X1))[i, j] + (supp(X2))[i, j]
= ⟨ Matrix addition ⟩
(supp(X1) + supp(X2))[i, j].

Thus supp(X1 + X2) = supp(X1) + supp(X2).

4. Let X1 : m × n and X2 : n × p. When one of m, n, p equals 0, the property is trivial. Hence, we assume m, n, p ≥ 1. It is enough to prove that for all 1 ≤ i ≤ m and 1 ≤ j ≤ p, we have (supp(X1X2))[i, j] = (supp(X1) supp(X2))[i, j]. Let 1 ≤ i ≤ m and 1 ≤ j ≤ p.

(supp(X1X2))[i, j]
= ⟨ (4.6) ⟩
supp((X1X2)[i, j])
= ⟨ Matrix multiplication ⟩
supp((Σ k | 1 ≤ k ≤ n : X1[i, k]X2[k, j]))
= ⟨ (4.4) ⟩
(Σ k | 1 ≤ k ≤ n : supp(X1[i, k]X2[k, j]))
= ⟨ (4.5) ⟩
(Σ k | 1 ≤ k ≤ n : supp(X1[i, k]) supp(X2[k, j]))
= ⟨ (4.6) ⟩
(Σ k | 1 ≤ k ≤ n : (supp(X1))[i, k](supp(X2))[k, j])
= ⟨ Matrix multiplication ⟩
(supp(X1) supp(X2))[i, j]

Hence, supp(X1X2) = supp(X1) supp(X2).

5. Since supp(X1) is a binary matrix, proposition 4.1.7.1 implies supp(supp(X1)) = supp(X1). □

Proposition 4.1.8. Let Q1 and Q2 be two matrices whose entries are DS-terms in which p does not appear.

1. If σ is an application, then Q1σ ≤ Q2 ⇔ Q1 ≤ Q2σ˘.

2. If σ is an application, then σ˘Q1 ≤ Q2 ⇔ Q1 ≤ σQ2.

Proof.

1. Assume that σ is an application, which, by definition, means that σ is total and deterministic. First, using isotony of ·, the assumption that σ is total and transitivity of ≤, we have

Q1σ ≤ Q2 ⇒ Q1σσ˘ ≤ Q2σ˘ ⇒ Q1 ≤ Q2σ˘.

Next, isotony of ·, the assumption that σ is deterministic and transitivity of ≤ give

Q1 ≤ Q2σ˘ ⇒ Q1σ ≤ Q2σ˘σ ⇒ Q1σ ≤ Q2.

Thus Q1σ ≤ Q2 ⇔ Q1 ≤ Q2σ˘.

2. Again, assume that σ is an application. Using isotony of ·, the assumption that σ is total and transitivity of ≤, we have

σ˘Q1 ≤ Q2 ⇒ σσ˘Q1 ≤ σQ2 ⇒ Q1 ≤ σQ2.

Moreover, using isotony of ·, the assumption that σ is deterministic and transitivity of ≤, we have

Q1 ≤ σQ2 ⇒ σ˘Q1 ≤ σ˘σQ2 ⇒ σ˘Q1 ≤ Q2.

Hence, we have proved that σ˘Q1 ≤ Q2 ⇔ Q1 ≤ σQ2. □

4.2

Construction of trees with a top

We define, for each s ∈ MΣ, a tree with a top T(s). The tree T(s) can be considered as an alternative representation of the simple term s. We begin by presenting trees in a schematic way, before moving on to the formal definitions.

A tree is a finite, directed, connected and acyclic graph. The root of a tree is the only node without any incoming arc. A tree with a top is a tree with a special node called the top. The arcs are labeled with elements of the alphabet Σ. In the following figures, it is assumed that the arcs are oriented from top to bottom. The top is represented by a hollow circle, as in the two trees below.

(Figure: two example trees T1 and T2, with arcs labeled by letters of Σ and tops drawn as hollow circles.)

The tree T(s) is built inductively.

• We define T(1) as the tree with a single node, which is both root and top.

• For all a ∈ Σ, T(a) is the tree with two nodes and a single arc labeled a, going from the root to the top.

• Let s ∈ MΣ. The tree T(ps) is built from T(s) by defining the top as the root.

• Let s, t ∈ MΣ. The tree T(st) is built by merging the top of T(s) with the root of T(t). The root of T(st) is the root of T(s), and the top of T(st) is the top of T(t).

Definition 4.2.1. A tree with a top (or simply tree) is a triplet of matrices

⟨u, M, v⟩ = ⟨[11×1 ; 0m×1], [01×1 λ1×m ; 0m×1 Am×m], [ε1×1 ; ζm×1]⟩, (4.7)

where m ∈ N, the vectors u and v are respectively the root and the top of the tree, and M is a square matrix whose entries are in Σ ∪ {0}; M is the adjacency matrix of the tree, whose root is given by u.

Now, we define inductively, for each s ∈ MΣ, a tree

T(s) = ⟨us, Ms, vs⟩ = ⟨[11×1 ; 0m×1], [01×1 λs ; 0m×1 As], [ε(s) ; ζs]⟩, (4.8)

where m = |s|.

We define T(s) by structural induction.

• T(1) = ⟨u1, M1, v1⟩ = ⟨[1], [0], [1]⟩. This structure has the shape of (4.8) with m = |1| = 0 and ε(1) = 1. The tree T(1) is a tree with a single node and no transition.

• If a ∈ Σ,

T(a) = ⟨ua, Ma, va⟩ = ⟨[11×1 ; 01×1], [01×1 a ; 01×1 01×1], [01×1 ; 11×1]⟩.

We see that T(a) also has the form of (4.8), with m = |a| = 1 and ε(a) = 0. It is a tree with two nodes and a single transition labelled a.

• The tree T(ps) is identical to T(s) except that vps = us. In other words, T(ps) = ⟨us, Ms, us⟩.

• Assume m = |s|, n = |t|, T(s) = ⟨us, Ms, vs⟩ and T(t) = ⟨ut, Mt, vt⟩, where

T(s) = ⟨[11×1 ; 0m×1], [01×1 λs ; 0m×1 As], [ε(s) ; ζs]⟩ and T(t) = ⟨[11×1 ; 0n×1], [01×1 λt ; 0n×1 At], [ε(t) ; ζt]⟩.

Then,

T(st) = ⟨ust, Mst, vst⟩ = ⟨[11×1 ; 0m×1 ; 0n×1], [01×1 λs ε(s)λt ; 0m×1 As ζsλt ; 0n×1 0n×m At], [ε(st) ; ζsε(t) ; ζt]⟩.

It is clear that T(st) also has the shape of (4.8). We denote TΣ the set of trees over a given alphabet Σ.

The construction of T(st) is based on a technique used in [BCM14] for finite automata.
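The four construction rules for T(1), T(a), T(ps) and T(st) translate directly into code. The following sketch is our own illustration (the names T_one, T_sym, T_p, T_cat are assumptions): it builds the triples ⟨u, M, v⟩ of definition 4.2.1, with M as a matrix of label strings.

```python
import numpy as np

def T_one():
    """T(1): a single node that is both root and top."""
    return (np.array([1]), np.full((1, 1), '0', dtype=object), np.array([1]))

def T_sym(a):
    """T(a): two nodes, one transition labeled a."""
    M = np.full((2, 2), '0', dtype=object)
    M[0, 1] = a
    return (np.array([1, 0]), M, np.array([0, 1]))

def T_p(t):
    """T(ps): same tree, but the top becomes the root."""
    u, M, _ = t
    return (u, M, u.copy())

def T_cat(s, t):
    """T(st): merge the top of T(s) with the root of T(t) (definition 4.2.1)."""
    us, Ms, vs = s
    ut, Mt, vt = t
    m1, n = len(us), len(ut) - 1          # m + 1 nodes in T(s), n = |t|
    M = np.full((m1 + n, m1 + n), '0', dtype=object)
    M[:m1, :m1] = Ms                      # blocks [0 lambda_s ; 0 A_s]
    M[m1:, m1:] = Mt[1:, 1:]              # block A_t
    for i in range(m1):                   # rows eps(s)*lambda_t and zeta_s*lambda_t
        if vs[i]:
            M[i, m1:] = Mt[0, 1:]
    u = np.zeros(m1 + n, dtype=int)
    u[0] = 1
    v = np.zeros(m1 + n, dtype=int)
    v[:m1] = vs * vt[0]                   # [eps(st) ; zeta_s eps(t)]
    v[m1:] = vt[1:]                       # zeta_t
    return (u, M, v)

# T(p(ab) c pd), the tree of example 4.2.2
tree = T_cat(T_p(T_cat(T_sym('a'), T_sym('b'))),
             T_cat(T_sym('c'), T_p(T_sym('d'))))
u, M, v = tree
```

Running this reproduces the 5×5 adjacency matrix of example 4.2.2, with arcs a, b, c, d and the top on the node reached by c.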

Example 4.2.2. We give an example with T(p(ab)cpd). Since a, b, c, d ∈ Σ, we have

T(a) = ⟨[1 ; 0], [0 a ; 0 0], [0 ; 1]⟩, T(b) = ⟨[1 ; 0], [0 b ; 0 0], [0 ; 1]⟩,
T(c) = ⟨[1 ; 0], [0 c ; 0 0], [0 ; 1]⟩, T(d) = ⟨[1 ; 0], [0 d ; 0 0], [0 ; 1]⟩.

Here is T(ab):

T(ab) = ⟨[1 ; 0 ; 0], [0 a 0 ; 0 0 b ; 0 0 0], [0 ; 0 ; 1]⟩.

By definition 4.1.3, |ab| = |a| + |b| = 1 + 1 = 2. This is why T(ab) has the form of (4.8) with m = 2. We continue with T(p(ab)). By definition 4.2.1, we just have to replace vab with uab. Hence,

T(p(ab)) = ⟨[1 ; 0 ; 0], [0 a 0 ; 0 0 b ; 0 0 0], [1 ; 0 ; 0]⟩.

From T(d), we build T(pd):

T(pd) = ⟨[1 ; 0], [0 d ; 0 0], [1 ; 0]⟩.

Since we have T(c) and T(pd), we can build T(cpd):

T(cpd) = ⟨[1 ; 0 ; 0], [0 c 0 ; 0 0 d ; 0 0 0], [0 ; 1 ; 0]⟩.

Finally, here is T(p(ab)cpd), built from T(p(ab)) and T(cpd):

T(p(ab)cpd) = ⟨[1 ; 0 ; 0 ; 0 ; 0], [0 a 0 c 0 ; 0 0 b 0 0 ; 0 0 0 0 0 ; 0 0 0 0 d ; 0 0 0 0 0], [0 ; 0 ; 0 ; 1 ; 0]⟩.

We give a graphic illustration of this example: first the trees T(a), T(b), T(c) and T(d); then T(ab) and T(p(ab)); then T(pd), built from T(d); then T(cpd), built from T(c) and T(pd); and finally T(p(ab)cpd), built from T(p(ab)) and T(cpd). (Figures: each tree is drawn with its arcs labeled by a, b, c, d and its top as a hollow circle.)

The following remark shows that we could have adopted a more algebraic definition of T(st).

Remark 4.2.3. Let s, t ∈ MΣ, T(s) = ⟨us, Ms, vs⟩, T(t) = ⟨ut, Mt, vt⟩, |s| = m and |t| = n. Define

ν1 = [Im+1 | 0(m+1)×n] and ν2 = [0n×(m+1) | In].

From proposition 2.3.8.1, the ordered pair ⟨ν1, ν2⟩ is a direct sum. Moreover, by definition 4.2.1,

ust = ν1˘us, (4.9)
Mst = ν1˘Msν1 + ν1˘vsλtν2 + ν2˘Atν2, (4.10)
vst = ν1˘vsε(t) + ν2˘ζt. (4.11)

As we will see in what follows, the next remark will be very useful.

Remark 4.2.4. Let s ∈ MΣ and m = |s|. Define

ν = [0m×1 | Im].

The ordered pair

⟨us˘, ν⟩ = ⟨[I1 | 01×m], [0m×1 | Im]⟩

is a direct sum by proposition 2.3.8. Moreover, from definition 4.2.1, T(s) has the form (4.8):

T(s) = ⟨us, Ms, vs⟩ = ⟨[11×1 ; 0m×1], [01×1 λs ; 0m×1 As], [ε(s) ; ζs]⟩.

Thus, we can write

Ms = usλsν + ν˘Asν, (4.12)

The following proposition gives some basic properties of trees with a top.

Proposition 4.2.5. Let a ∈ Σ, s, t ∈ MΣ, T(s) = ⟨us, Ms, vs⟩, T(t) = ⟨ut, Mt, vt⟩, |s| = m and |t| = n. The following properties hold.

1. us = [11×1 ; 0m×1]. Moreover, usus˘ ≤ I and us˘us = I1. In other words, us is injective, surjective and deterministic. Hence, us˘ is an application.

2. The vector vs is unitary. Hence, vsvs˘ ≤ Im+1 and vs˘vs = I1, which implies that vs˘ is an application.

3. Msus = 0(m+1)×1.

4. ζps = 0m×1.

5. |s| = |t| ⇒ us = ut.

6. uspt = ust and Mspt = Mst.

7. If s is a monogenic term, namely s ∈ {ar, p(ar)} with r ∈ MΣ, then λs = [a | 01×|r|] and As = Mr.

8. If s = ar, for r ∈ MΣ, then ζs = vr.

9. If ε(s) = 1, then ζs = 0m×1.

10. If s ∈ test, then us = vs.

11. If s ∈ test, then DS ⊢ s ≤ 1.

Proof.

1. By (4.8),

us = [11×1 ; 0m×1].

Using the previous equation and matrix multiplication, we have

usus˘ = [11×1 ; 0m×1] [11×1 | 01×m] = [11×1 01×m ; 0m×1 0m×m] ≤ Im+1

and

us˘us = [11×1 | 01×m] [11×1 ; 0m×1] = [1] = I1.


2. Let us prove that vs is a unitary vector. We perform a proof by induction on the structure of s. The induction predicate P is defined as follows:

P(s) : vs is a unitary vector.

• If s ∈ Σ ∪ {1}, then vs is clearly unitary. Indeed, definition 4.2.1 gives

v1 = [1] and va = [0 ; 1], if a ∈ Σ.

• Assume P(s) and prove P(ps). By definition 4.2.1 and proposition 4.2.5.1,

vps = us = [11×1 ; 0m×1].

Hence P(ps) holds.

• Let us assume P(s) and P(t) and prove P(st). By definition 4.2.1,

vs = [ε(s) ; ζs], vt = [ε(t) ; ζt] and vst = [ε(s)ε(t) ; ζsε(t) ; ζt].

We proceed by cases.

• ε(s) = 1 and ε(t) = 1. In this case, the induction hypothesis implies ζs = 0m×1 and ζt = 0n×1. Hence, by definition 4.1.3,

vst = [ε(s)ε(t) ; ζsε(t) ; ζt] = [11×1 ; 0m×1 ; 0n×1],

which proves that vst is unitary.

• ε(s) = 1 and ε(t) = 0. In this case, the induction hypothesis implies that ζs = 0m×1 and that ζt is a unitary vector. Since

vst = [ε(s)ε(t) ; ζsε(t) ; ζt] = [01×1 ; 0m×1 ; ζt],

we can conclude that vst is a unitary vector.

• ε(s) = 0 and ε(t) = 1. The induction hypothesis implies that ζs is a unitary vector and ζt = 0n×1. Hence,

vst = [ε(s)ε(t) ; ζsε(t) ; ζt] = [01×1 ; ζs ; 0n×1],

which is a unitary vector.

• ε(s) = 0 and ε(t) = 0. In this case, the induction hypothesis implies that ζs and ζt are unitary vectors. Hence,

vst = [ε(s)ε(t) ; ζsε(t) ; ζt] = [01×1 ; 0m×1 ; ζt],

which is a unitary vector.

Thus, for each s ∈ MΣ, vs is a unitary vector. Hence vsvs˘ ≤ Im+1 and vs˘vs = I1.

3. By definition 4.2.1 and proposition 4.2.5.1,

Msus = [01×1 λs ; 0m×1 As] [11×1 ; 0m×1] = 0(m+1)×1.

4. By definition 4.2.1, vps = us. Using this and proposition 4.2.5.1, we have

vps = us = [11×1 ; 0m×1] = [ε(ps) ; ζps].

Hence, ζps = 0m×1.

5. By proposition 4.2.5.1,

us = [11×1 ; 0m×1] and ut = [11×1 ; 0n×1].

Hence, if m = n, then us = ut.

6. By definition 4.1.3, |spt| = |s| + |pt| = |s| + |t| = |st|. Combining this with proposition 4.2.5.5, we get uspt = ust. It remains to prove that Mspt = Mst. Recall that, by definition 4.2.1,

Mst = [01×1 λs ε(s)λt ; 0m×1 As ζsλt ; 0n×1 0n×m At] and Mspt = [01×1 λs ε(s)λpt ; 0m×1 As ζsλpt ; 0n×1 0n×m Apt].

Moreover,

[01×1 λpt ; 0n×1 Apt] = Mpt = Mt = [01×1 λt ; 0n×1 At].

Thus, since λpt = λt and Apt = At, we have Mspt = Mst.


7. Assume r ∈ MΣ. By definition 4.2.1,

T(a) = ⟨[11×1 ; 01×1], [01×1 a ; 01×1 01×1], [01×1 ; 11×1]⟩ and T(r) = ⟨[11×1 ; 0|r|×1], [01×1 λr ; 0|r|×1 Ar], [ε(r) ; ζr]⟩.

If s ∈ {ar, p(ar)}, then, by definitions 4.2.1 and 4.1.3,

Ms = [01×1 λa ε(a)λr ; 01×1 Aa ζaλr ; 0|r|×1 0|r|×1 Ar] = [0 a 01×|r| ; 0 0 λr ; 0|r|×1 0|r|×1 Ar].

Hence, λs = [a | 01×|r|]. Moreover, since

Mr = [01×1 λr ; 0|r|×1 Ar],

we also have As = Mr.

8. Assume s = ar, where r ∈ MΣ. Again by definition 4.2.1,

[ε(a) ; ζa] = va = [01×1 ; 11×1], [ε(ar) ; ζar] = vs = [ε(ar) ; ζaε(r) ; ζr] = [ε(ar) ; ε(r) ; ζr] and vr = [ε(r) ; ζr].

Thus ζs = ζar = vr.

9. By proposition 4.2.5.2, vs is a unitary vector. Hence, since

vs = [ε(s) ; ζs],

if ε(s) = 1, then ζs = 0m×1.

10. We perform a proof by induction on the structure of s. The induction predicate P is defined as follows:

P(s) : us = vs.

• Base case: s = 1. In this case, us = [1] = vs, by definition 4.2.1.

• Assume s = pr with r ∈ MΣ. Definition 4.2.1 gives us = ur = vs.

• Finally, we assume s1, s2 ∈ test, P(s1) and P(s2), and prove P(s) with s = s1s2. By definition 4.2.1 and the induction hypothesis,

[ε(s1) ; ζs1] = vs1 = us1 = [11×1 ; 0|s1|×1] and [ε(s2) ; ζs2] = vs2 = us2 = [11×1 ; 0|s2|×1].

Hence, ε(s1) = ε(s2) = 1, ζs1 = 0|s1|×1 and ζs2 = 0|s2|×1. We infer

vs = [ε(s1s2) ; ζs1ε(s2) ; ζs2] = [11×1 ; 0|s1|×1 ; 0|s2|×1].

Using |s| = |s1| + |s2| and proposition 4.2.5.1, we get

vs = [11×1 ; 0|s|×1] = us.

11. We perform a proof by induction on the structure of s ∈ test. The induction predicate P is defined as follows:

P(s) : DS ⊢ s ≤ 1.

• The case P(1) is trivial.

• For all s ∈ MΣ, the axiom (DS3) gives ps ≤ 1. Hence, P(ps) holds.

• We assume P(s1) and P(s2) and prove P(s1s2). Using the induction hypothesis, isotony of · (proposition 2.2.5.3) and (2.5), we have

s1s2 ≤ 1 · 1 = 1.

Hence, for all s ∈ test, the inequality s ≤ 1 is provable in DS. □

4.3

Simulation


Definition 4.3.1. Let T(s) = ⟨us, Ms, vs⟩, T(t) = ⟨ut, Mt, vt⟩ and let R be a relation. We say that T(s) R-simulates T(t), written T(s) ≼R T(t), if the following conditions hold:

ut˘ ≤ us˘R, (sim1)
RMt ≤ MsR, (sim2)
Rvt ≤ vs. (sim3)

Our definition of simulation is close to De Roever’s [DREB98]. We also define

T (s) 4 T (t) ⇔ (∃R |: T (s) 4def RT (t)), (4.14)

T (s) ' T (t) ⇔ T (s) 4 T (t) ∧ T (t) 4 T (s).def (4.15) According to this definition, T (s) simulates T (t) if

1. there is a relation R such that both roots are in R;

2. if i and j are two nodes such that (i, j) is in R, then i can do anything that j can do; 3. only the top of T (s) can simulate the top of T (t).

Moreover, if T (s) ' T (t), this means that there are two relations R and R0such that T (s) 4 RT (t)

and T (t) 4R0 T (s). The relations R and R0need not be identical. Neither do they need to be the

converse of each other, as is the case with bisimulation. Thus, simulation equivalence is weaker than bisimulation.

Theorem 4.3.2. The relation ≼ is a preorder. Moreover, ≃ is an equivalence relation.

Proof. Let r, s, t ∈ MΣ. First, ≼ is reflexive since T(s) ≼I T(s). Indeed,

us˘ = us˘I, IMs = MsI and Ivs = vs.

Then, to prove the transitivity of ≼, we assume that R and R′ are two relations such that

T(r) ≼R T(s) and T(s) ≼R′ T(t)

and we prove that T(r) ≼RR′ T(t). Using the hypotheses T(r) ≼R T(s), T(s) ≼R′ T(t) and isotony, we have:

ut˘ ≤ us˘R′ ≤ ur˘RR′, RR′Mt ≤ RMsR′ ≤ MrRR′ and RR′vt ≤ Rvs ≤ vr.

Hence, ≼ is a preorder.

Finally, since ≼ is a preorder, ≃ is an equivalence relation according to proposition 2.2.3 and the definition of ≃. □

One may think that ≼ is an antisymmetric relation. The following example shows that this is not the case.

Example 4.3.3. Let s = a and t = paa. By definition 4.2.1,

T(s) = ⟨ua, Ma, va⟩ = ⟨[1 ; 0], [0 a ; 0 0], [0 ; 1]⟩

and

T(t) = ⟨upaa, Mpaa, vpaa⟩ = ⟨[1 ; 0 ; 0], [0 a a ; 0 0 0 ; 0 0 0], [0 ; 0 ; 1]⟩.

Let R and R′ be the following relations:

R = [1 0 0 ; 0 1 1] and R′ = [1 0 ; 0 0 ; 0 1].

Let us show that T(s) ≼R T(t). First,

us˘R = [1 0] [1 0 0 ; 0 1 1] = [1 0 0] = ut˘.

Hence (sim1) is satisfied. Next,

RMt = [1 0 0 ; 0 1 1] [0 a a ; 0 0 0 ; 0 0 0] = [0 a a ; 0 0 0]

and

MsR = [0 a ; 0 0] [1 0 0 ; 0 1 1] = [0 a a ; 0 0 0].

Since RMt = MsR, we deduce that (sim2) is also satisfied. Finally, (sim3) is satisfied because

Rvt = [1 0 0 ; 0 1 1] [0 ; 0 ; 1] = [0 ; 1] = vs.
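Conditions (sim1)-(sim3) can also be checked mechanically by computing with matrices over sets of words (0 = ∅, 1 = {ε}, a letter a = {a}), where ≤ is entrywise inclusion. The sketch below is our own illustration; it re-verifies T(a) ≼R T(paa) for the R of example 4.3.3.

```python
O, I = frozenset(), frozenset({''})      # 0 and 1 of the semiring of languages

def mul(A, B):
    """Matrix product over the semiring of languages (union / concatenation)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[frozenset(u + v for k in range(m)
                       for u in A[i][k] for v in B[k][j])
             for j in range(p)] for i in range(n)]

def leq(A, B):
    """Entrywise inclusion, the order <= on matrices."""
    return all(A[i][j] <= B[i][j] for i in range(len(A)) for j in range(len(A[0])))

def simulates(Ts, Tt, R):
    """Check (sim1)-(sim3): ut' <= us' R, R Mt <= Ms R, R vt <= vs."""
    (us, Ms, vs), (ut, Mt, vt) = Ts, Tt
    tr = lambda V: [list(col) for col in zip(*V)]   # converse (transpose) of a matrix
    return (leq(tr(ut), mul(tr(us), R))
            and leq(mul(R, Mt), mul(Ms, R))
            and leq(mul(R, vt), vs))

a = frozenset({'a'})
Ts = ([[I], [O]], [[O, a], [O, O]], [[O], [I]])                     # T(a)
Tt = ([[I], [O], [O]],
      [[O, a, a], [O, O, O], [O, O, O]],
      [[O], [O], [I]])                                              # T(paa)
R = [[I, O, O], [O, I, I]]                                          # the relation R above
ok = simulates(Ts, Tt, R)
```

Changing a single entry of R makes (sim2) fail, which is a quick sanity check that the three conditions really constrain the relation.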
