
Verification of the Timing Properties

of MMT Automata

by

Ekrem Sezer Söylemez

Submitted to the Department of Electrical Engineering and

Computer Science

in partial fulfillment of the requirements for the degree of

Master of Engineering in Computer Science and Engineering

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

February 1994

© Massachusetts Institute of Technology 1994. All rights reserved.

Author .................................................................
Department of Electrical Engineering and Computer Science
February 4, 1994

Certified by ...........................................................
Nancy A. Lynch
Professor
Thesis Supervisor

Certified by ...........................................................
Stephen J. Garland
Principal Research Scientist
Thesis Supervisor

Accepted by ............................................................
Leonard A. Gould
Chairman, Departmental Committee on Graduate Students


Automatic Verification of the Timing Properties of MMT

Automata

by

Ekrem Sezer Söylemez

Submitted to the Department of Electrical Engineering and Computer Science on February 4, 1994, in partial fulfillment of the

requirements for the degree of

Master of Engineering in Computer Science and Engineering

Abstract

This thesis represents the first use of the Larch tools to verify the timing properties of distributed algorithms, as specified by MMT automata. It shows general methods to formalize and verify automata and timed forward simulations. Additionally, it describes a set of libraries to aid the axiomatization of MMT automata and simulation relationships. It includes a sample verified forward simulation relationship. Finally, it details the difficulties that will face future users, and offers some solutions.

Thesis Supervisor: Nancy A. Lynch
Title: Professor

Thesis Supervisor: Stephen J. Garland
Title: Principal Research Scientist


Acknowledgments

I would like to thank all the people who made this possible. First and foremost, this means my thesis advisors, whose expertise in the material was invaluable in doing the research, and whose comments helped improve the quality of the document immeasurably. Stephen Garland in particular helped me throughout the proving process, and was subjected to many pages of unpolished document. I would also like to thank my family, who gave me the motivation I needed to sit down and write the first draft, and to keep writing until it was done. Without them, the thesis might never have been written.


Contents

1 Introduction
  1.1 Automata
  1.2 Proving Properties of MMT Automata
    1.2.1 Operational Proofs
    1.2.2 Invariant Proofs
  1.3 Difficulties of Invariant Proofs
    1.3.1 Proof Obligation Difficulties
    1.3.2 Uncertainty About Correctness
  1.4 Solution: Machine-Assisted Proofs
    1.4.1 Previous Research
  1.5 My Research
    1.5.1 The Goal
    1.5.2 Method
    1.5.3 The Example

2 Background
  2.1 Input/Output Automata
  2.2 Forward Simulation
  2.3 Timed Automata
  2.4 MMT Automata
    2.4.1 Invariants
  2.5 Timed Forward Simulation
    2.5.1 Invariant Definition

3 The Larch Tools
  3.1 Overview and Purpose of Each Tool
  3.2 Formalizing the Model with LSL
    3.2.1 Notation in Larch
    3.2.2 Generic Background for LSL
    3.2.3 The Layered Approach to LSL Specification
    3.2.4 I/O Automata
    3.2.5 MMT Automaton
    3.2.6 Simulations
    3.2.7 The Specific Traits
    3.2.8 Using the LSL Checker
  3.3 Using LP
    3.3.1 LP Concepts
    3.3.2 Commands
    3.3.3 A Guided Tour of a Simple Proof

4 Example - Counting Automaton
  4.1 Hand Proof
    4.1.1 Automata Definitions
    4.1.2 Mapping
    4.1.3 Theorem
    4.1.4 Proof
  4.2 LSL Formalization
  4.3 Commented Proof Scripts
    4.3.1 Verification of the Implementation Automaton's Invariant
    4.3.2 Verification of the Simulation Relationship

5 Techniques for Using LP
  5.1 Hand Proof
  5.2 Breaking Down Proofs
    5.2.1 Top Down Proof Design
    5.2.2 Proof Tree
  5.3 Pitfalls in LP Use
  5.4 Getting Stuck
  5.5 Slogans
  5.6 Polishing the Proof

6 Conclusions
  6.1 Future Work


List of Figures

3-1 Pictorial Overview of the Formalization Process
3-2 Module Dependency Diagram of a Timed Forward Simulation Relationship
3-3 LSL trait defining basics of input-enabled I/O automata
3-4 LSL trait defining executions of I/O automata
3-5 LSL trait defining traces for I/O automata
3-6 LSL trait bringing together the I/O automata definition
3-7 LSL trait defining requirements for I/O automata
3-8 LSL trait defining the properties of time
3-9 LSL trait defining a single class of the boundmap
3-10 LSL trait for checking if the automaton has a time in its state
3-11 LSL trait describing the creation of a Timed Automaton
3-12 LSL trait defining a timed forward simulation relationship
4-1 LSL trait defining the automata's common actions
4-2 LSL trait defining the specification automaton
4-3 LSL trait defining the implementation automaton


Chapter 1

Introduction

The purpose of this thesis is to explore the idea of using the Larch facilities to verify hand proofs of distributed algorithms. It represents a first attempt at using these tools in this context, and opens avenues for future exploration in this area. This section informally discusses the mathematical model, previous research, and the problem the thesis addresses.

1.1 Automata

Algorithms can be described using automata. The most fundamental type of automaton is the I/O automaton [9]. This model is used to describe algorithms without any properties that involve time. Once an algorithm has been described using I/O automata, a variety of techniques can be used to prove the algorithm's correctness. In this paper, the most commonly used type of automaton is the Merritt, Modugno, and Tuttle (MMT) automaton [12]. This model provides a simple way to describe basic timing properties of an algorithm. Unlike other models that can be used to describe more complex timing properties, MMT automata are very similar to I/O automata.

It is helpful to think of an MMT automaton as an I/O automaton that is built out of an ordinary I/O automaton together with a set of timing bounds, with the timing information carried in the automaton's state.

1.2 Proving Properties of MMT Automata

Since MMT automata are very similar to I/O automata, timing properties of MMT automata can be proved in much the same way as other properties of I/O automata. The timing requirements become additional proof obligations.

1.2.1 Operational Proofs

The most obvious way to reason about a program is to discuss its possible executions. Thus, to prove a certain property of the automaton, one must show that every execution has that property. Such a proof is called an operational proof. Unfortunately, it is very difficult to create a rigorous operational proof. This is because there are many executions, and it is hard to be certain that all have been considered. Furthermore, it is at best very hard, and maybe impossible, to create a standard form for operational proofs, as they in general depend on the possible executions.

1.2.2 Invariant Proofs

Fortunately, there is an alternate proof technique, called invariant reasoning, that alleviates some of the difficulties of operational reasoning. In order to write an invariant proof of some property p of an automaton, one finds some logical statement that is true in every reachable state of the automaton and that implies p. This logical statement,

called the invariant, corresponds to the intuitive reason why the automaton has the property. Finding the invariant can be difficult, but once it is done, most of the rest of the proof follows a general pattern. If one makes sure to prove the invariant in

the automaton's initial states, that every action preserves the invariant, and that the

invariant implies the property, one can be sure the property is true of any execution

of the automaton.

Frequently, it is more convenient to express the goal of the proof as a high level automaton, rather than as a set of properties. This high level automaton is referred to as the specification automaton, or sometimes simply the specification. Essentially, the specification describes, at a high level, everything the algorithm is allowed to do.

Once one has a specification written, proving correctness amounts to showing that the implementation simulates the specification. In other words, anything the implementation does could also have been done by the specification on the same input. This relationship can be stated formally using an abstraction function from the states and actions of the implementation to the states and actions of the specification. More precisely, the function must map each state of the implementation to a set of states of the specification, and each action of the implementation to a sequence of actions of the specification, such that the state before the sequence is an element of the abstraction function of the state before the implementation's action, and the same condition holds for the states after the action. Thus, most invariant proofs of simulations have very similar obligations and structures.

1.3 Difficulties of Invariant Proofs

Although their similar structure makes invariant proofs easier to write and understand, there are still many difficulties with them. The primary difficulty is that it is hard to create a complete proof. There are several reasons for this, most of which involve the quantity of things to be proved.

1.3.1 Proof Obligation Difficulties

One major contributor to this excess of things to be proved is the quantity of things to be proved about each action performed by the automaton. One must show that each action of the implementation preserves the relationship defined by the abstraction function and preserves the invariants, both those associated with the base automaton and with the timing properties. Furthermore, one must show that for any action in the implementation there is a legal sequence of actions in the specification within the abstraction function of the implementation's action. Many of these proof obligations are extremely easy. The consequence of the quantity and relative ease of the proof obligations is that it is often tempting to omit certain portions of the proof that seem "obvious," but which may in fact be somewhat subtle. Worse still, it is sometimes


easy to forget a proof obligation entirely, or to make an unjustified, but "obvious" assumption; and it is very difficult, when reading someone else's proof, to be sure that they cover all of their obligations and make no unjustified assumptions.

1.3.2 Uncertainty About Correctness

Thus, it is at worst difficult and at best extremely tedious to construct a completely

rigorous hand proof of an I/O automaton's properties. Furthermore, the addition of

timing information into MMT automata dramatically increases the number of proof obligations for the easy steps. It also increases the number of actions by introducing an action that represents the passage of time. Therefore, although invariant proofs offer the possibility of complete rigor, it is difficult to achieve in practice.

Worse still, when one is examining someone else's proof, there is no way to be sure that it is completely correct other than by carefully examining each case to be sure that the proof correctly deals with each obligation. This can be almost as difficult as writing the proof in the first place, and is much more tedious.

1.4 Solution: Machine-Assisted Proofs

In some ways, the fact that most of the proof steps are uninteresting is a blessing in disguise. While it means that much of the proving is boring, it also means that a substantial amount of it can be automated with a machine assistant. The general concept of machine assisted proving is to get a computer to keep track of the proof obligations and to perform the easy steps of the proof. With a machine assistant, the user will only need to do the interesting steps and provide general guidelines. Interesting steps correspond to providing invariants, defining the abstraction function, deciding on important lemmas to prove, applying key facts, etc. Machine-assisted proving seems to have the potential to alleviate the two main difficulties of invariant proofs. It eliminates many of the tedious steps, while guaranteeing complete rigor.


1.4.1 Previous Research

The appeal of automatic proving is obvious. It allows anyone who comes in contact

with the proof to be certain of the proof's correctness, to the extent that they can

trust the proof assistant, which is usually more than they can trust any single proof. Furthermore, it alleviates some of the difficulties of writing a proof by hand.

There are many different approaches to automatic proving. These range from fully automated proving to computer assistance. This section briefly discusses some research done in the area before my thesis.

Perhaps the most popular method is to use the Higher Order Logic (HOL) theorem prover [3] in proving simulations [11]. This prover provides mechanisms for very general systems of logic, which the user defines. Some attempts have been made to perform proofs of timed systems with it [1].

Another approach, taken by [5], is to describe a concurrent algorithm using two component finite state machines described with process algebra. This is significantly different, as it does not involve simulation relations at all.

Of the many approaches, the most similar work to mine is [13]. This is similar because it uses the Larch tools [4, 2] as a computer assistant. Furthermore, it proves a simulation relationship between I/O Automata. The primary difference is that the automata involve no timing properties. The absence of timing properties simplifies proofs enormously. It dramatically reduces the proof obligations in small examples. Furthermore, it greatly reduces the amount of algebraic and logical manipulations necessary. The proofs performed here are among the first to make heavy use of LP's quantifier facilities.

1.5 My Research

1.5.1 The Goal

Ideally, we would like to publish proofs that have been computer verified. Thus, whenever a proof is published, the reader would know that it is error free. Before


this goal can be attained, however, several questions must be resolved. First of all, is machine-assisted proving of timing properties even possible, and if so, how? Even if it is possible with current provers, it may be too difficult to be practical. Another issue is the readability of computer-verified proofs, which tend to be long and difficult to read. What is the best trade-off between clarity, brevity, and completeness?

1.5.2 Method

In order to answer these questions, I used the Larch Tools to verify an invariant proof of a simulation between MMT automata. This thesis answers questions of how the process works, including the difficulties encountered during the process of verifying a proof.

Tools

The Larch tools were originally developed for rigorous reasoning about sequential programs, and have been used on relatively easy untimed I/O automata simulations [13]. Using the Larch tools to develop a rigorous proof consists of two phases: formalizing the axioms and proof goals with the Larch Shared Language (LSL), and proving these goals using the Larch Prover (LP) computer assistant.

The LSL formalization of the model is much like the hand proof model. Each concept, such as I/O Automaton or forward simulation, is defined by a trait, in terms of lower level traits and primitive concepts. For example, the TimedAutomaton trait is built upon the IOAutomaton and Time traits, among others. The TimedAutomaton trait is, in turn, used to define the properties of specific automata. A simulation relationship is then defined between two specific automata. Thus, since the LSL traits for timed automata and simulation relationships have already been created, defining a new MMT automaton is as simple as writing an LSL description of the

automaton's properties, and stating that it is an MMT automaton. Once the traits

have been defined, the LSL checker is used to produce a set of axioms and proof obligations. For example, if the trait defining a specific automaton claims that a set of invariants hold throughout its operation, then running the LSL checker on


that trait produces a proof obligation that states that the invariant holds in every reachable state; the user must verify this obligation with the Larch Prover.

The Larch Prover is the machine assistant for the proof itself. In many ways, it can be thought of as an interpreter for the proof. LP keeps track of known facts and goals to be proved. The facts that LP remembers include axioms from the LSL traits, lemmas proved to make later sections easier, and context-dependent hypotheses. The job of the user is to provide significant proof steps that the prover cannot figure out by itself. At present, there are many steps for which LP needs guidance, but it may not ultimately need it. For example, many algebraic manipulations are difficult in LP and must be guided through specific steps (e.g., use transitivity here, add c to each side there, etc.). This difficulty should soon be overcome, as LP's developers are working on a module to handle these proof steps without user guidance. Even in the long run, however, the user will need to provide an overall proof strategy for LP. This includes such things as whether to use induction or contradiction, providing and using lemmas, and dealing with quantifiers.

1.5.3 The Example

The proof I have verified is a relatively simple simulation relationship. The proof

shows that an automaton that counts down at a certain speed and outputs a report when it gets to zero must output the report within a certain time range. This was

chosen as the example for several reasons. First of all, it is a fairly easy proof, so

that I could focus on the difficulties inherent to automatic verification rather than the

difficulties specific to the proof. Secondly, the hand proof has been worked through many times in many variants. One appears in [8]. Finally, despite its relative ease, it captures many elements common to most simulation proofs.


Chapter 2

Background

This chapter presents a brief technical introduction to the concepts and methods used

in hand proofs. To this end, it discusses a basic model of untimed algorithms and an approach to correctness proofs with it. The chapter then discusses an extension that adds some timing information to the model, and how this extension changes the proof techniques. This chapter presents only the tip of the iceberg. A more detailed understanding of the material would be helpful for continuing research in this area but is not necessary for the remainder of the thesis. For a more complete explanation of the untimed model and proof methods, see [9]. For a more complete explanation of the timed model and proof methods, see [8], [10], or [7].

2.1 Input/Output Automata

An Input/Output (I/O) automaton is a way of describing the asynchronous execution of a process. An automaton A has two components: actions(A) and states(A).

An action is a transition from one state to another. The automaton is allowed to

begin in a set of start states: start(A) ⊆ states(A). The set actions(A) is divided

into internal and external subsets. The external subset is further split into input

and output actions. Finally, the actions are partitioned into equivalence classes by

part(A). These equivalence classes are referred to as the classes of the automaton; their significance will be seen in the timed setting. The ways in which an action may affect the state are defined by steps(A). This is a set of (state, action, state) triples. If there is a triple of the form (s, a, s'), then a is said to be enabled in s, and the effects of a in s are said to be s'. I/O automata are required to be input enabled. This means that every input action is enabled in every state. We represent an execution of the I/O automaton as a sequence

s_0, a_0, s_1, a_1, ...

such that s_0 ∈ start(A) and, for every i, (s_i, a_i, s_{i+1}) ∈ steps(A). An execution may be either finite or infinite. An execution fragment is a similar sequence, but s_0 need not be a start state. The trace of an execution fragment is the list of all external actions in that fragment. An extended step is a triple (s, β, s') such that there is an execution fragment β that starts in s and ends with s'. A state s is said to be reachable if there is some execution that ends in s. Often, we wish to establish a property for all reachable states of an automaton. Such properties are called invariants. Since invariants are often used, certain techniques have developed for proving them. The most common is to prove the invariant in every start state, and then prove that every action preserves the invariant. In other words, if the invariant is true before the action, then it will be true after the action. These two steps ensure that the invariant is true in every reachable state s by induction on the length of the execution fragment necessary to reach s. Invariants are also discussed in [7].
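Stated as explicit proof obligations (a restatement of the technique just described, in the notation used above), an invariant I of an automaton A is established by discharging:

1. ∀ s ∈ start(A): I(s)
2. ∀ s, a, s': I(s) ∧ (s, a, s') ∈ steps(A) ⇒ I(s')

Together these give I(s) for every reachable state s, by induction on the length of an execution leading to s.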

2.2 Forward Simulation

Often, proving the correctness of an automaton that implements an algorithm is done

by showing a forward simulation relationship between that implementation automaton and a specification automaton that provides a high-level description of what it means for the implementation to be correct. A forward simulation from an implementation to a specification captures the idea that anything the implementation can do, the specification can also. This can be expressed somewhat more formally by saying that the traces of the implementation are a subset of the traces of the specification.

The formal definition of a forward simulation from A to A' relies on an abstraction function. This is a function f from the states of A (the implementation) to sets of states of A' (the specification). The requirements on f are

1. ∀s ∈ start(A) ∃u ∈ start(A') such that u ∈ f(s), and

2. If s is a reachable state of A, u is a reachable state of A', u ∈ f(s), and (s, a, s') ∈ steps(A), then there is an extended step (u, β, u') such that u' ∈ f(s') and trace(a) = trace(β).

This ensures the condition described informally above by induction on the length of an execution. In other words, the basis is the initial states, and the inductive step is an action of the implementation.

The definition above is expressed in terms of the reachable states of the automata. One can alternatively express the second obligation as

2'. If s is a state of A such that s ∈ I_A, u is a state of A' such that u ∈ f(s) ∩ I_A', and (s, a, s') ∈ steps(A), then there is an extended step (u, β, u') such that u' ∈ f(s') and trace(a) = trace(β),

where I_A and I_A' are the sets of states that satisfy the invariants of A and A', respectively. This second phrasing is actually stronger than the first, but it is also easier to show. Section 2.5.1 discusses a similar transformation for the timing case in more detail.

2.3 Timed Automata

A timed automaton is an augmented I/O automaton that has additional information to allow discussion of timing properties. The additional information takes the form of a boundmap that expresses the time requirements for each class of the automaton. The


timed automaton consisting of I/O automaton A and boundmap b is represented by

(A, b). The boundmap associates a lower and upper bound with each class (sometimes

referred to as b_l(C) and b_u(C), respectively). Informally, the bounds associated with a class signify the interval of time during which, if an action in the class is enabled, an action must occur. For example, consider a class c that has a lower bound of a and an upper bound of b, and that becomes enabled at time t (and remains enabled). In this case, some action in c must happen between the times t + a and t + b. Just as I/O automata have executions, traces, and execution fragments, timed automata

have timed executions, timed traces and timed execution fragments that have a time

associated with each action. E.g.,

s_0, (a_1, t_1), s_1, (a_2, t_2), s_2, ...

In such a sequence, the subscripts are referred to as the indices. From any such sequence α of states, actions, and times, one can produce the ordinary sequence (signified ord(α)) that contains the same states and actions, but omits the times. The formal use of the boundmap is defined as follows (taken from [8]).

Suppose (A, b) is a timed automaton. Then a timed sequence α is a timed execution of (A, b) provided that ord(α) is an execution of A and α satisfies the following conditions, for each class C ∈ part(A) and every action with index i in class C in the execution α.

1. If b_u(C) < ∞ then there exists j > i with t_j ≤ t_i + b_u(C) such that either π_j ∈ C or s_j ∈ disabled(A, C).

2. There does not exist j > i with t_j < t_i + b_l(C) and π_j ∈ C.

2.4 MMT Automata

In order to carry out assertational reasoning on timed systems in the same way as it is done on untimed systems, it is convenient to incorporate the time into the state


of the automata. This section presents the MMT automaton, which can be created from a regular timed automaton.

The MMT automaton for a timed automaton (A, b) is denoted by time(A, b) in [8] and [12], where a more thorough explanation of MMT automata can be found. An action of an MMT automaton created from (A, b) must be either an action of

A, augmented by the time at which it occurred, or a special NULL(t) action, which

advances time to t. The input and output subsets of time(A, b) are divided the same way as in A, and the internal actions subset is internal(A) ∪ {NULL}.

The state of A is kept in a basic component of the state of time(A, b). Furthermore, the state is augmented with a now component that represents the current time. Finally, each class C has values first(C) and last(C) in the state. We use record notation to signify values in the state. For example, in a state s, the basic component is represented as s.basic. If s is a start state, then for every enabled class C, s.first(C) = b_l(C) and s.last(C) = b_u(C). For the classes that are not enabled in s, s.first(C) = 0 and s.last(C) = ∞, as a default. The values of first and last outside of the start states, as well as the definition of a step (s, (π, t), s'), are defined as follows (taken from [8]).

1. If π ∈ actions(A), then:

   (a) s'.now = s.now

   (b) (s.basic, π, s'.basic) ∈ steps(A)

   (c) For each class C ∈ part(A):

       i. If π ∈ C, then s.first(C) ≤ t.

       ii. If s'.basic ∈ enabled(A, C), π ∉ C, and s.basic ∈ enabled(A, C), then s'.first(C) = s.first(C) and s'.last(C) = s.last(C).

       iii. If s'.basic ∈ enabled(A, C) and either π ∈ C or s.basic ∈ disabled(A, C), then s'.first(C) = t + b_l(C) and s'.last(C) = t + b_u(C).

       iv. If s'.basic ∈ disabled(A, C), then s'.first(C) = 0 and s'.last(C) = ∞.

2. If π = NULL, then:

   (a) s'.now = t (and s.now < t)

   (b) s'.basic = s.basic

   (c) ∀C ∈ part(A): t ≤ s.last(C).

   (d) ∀C ∈ part(A): s'.first(C) = s.first(C) and s'.last(C) = s.last(C).
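As a small worked instance of rule 1(c)iii, using made-up numbers: suppose b_l(C) = 1 and b_u(C) = 3, and an action occurring at time t = 2 newly enables class C. Then s'.first(C) = 2 + 1 = 3 and s'.last(C) = 2 + 3 = 5, so some action from C must occur in the interval of time [3, 5] unless C becomes disabled first.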

2.4.1 Invariants

The following invariants, taken from [8], hold for all reachable states of all MMT automata. They are easy to check, but are not proved here.

For every class C and any reachable state s of an MMT automaton time(A, b):

1. s.last(C) ≥ s.now

2. if s.basic ∈ enabled(A, C), then s.last(C) ≤ s.now + b_u(C)

3. if s.basic ∈ disabled(A, C), then both s.first(C) = 0 and s.last(C) = ∞

2.5 Timed Forward Simulation

This is simply an extension of the normal forward simulation which ensures the timed correctness of the simulation. In other words, we want to guarantee that any time the implementation takes an action, the specification could have taken an equivalent action. It is also called a strong possibilities mapping in [8]. The formal definition listed below comes directly from [8].

Let (A, b) and (A', b') be timed automata with the same set Π of external actions. Let f be a mapping from the states of time(A, b) to sets of states of time(A', b'). Then f is a strong possibilities mapping from time(A, b) to time(A', b') provided that the following conditions hold:

1. For every start state s of time(A, b) there is a start state u of time(A', b') such that u ∈ f(s).

2. If s is a reachable state of time(A, b), u ∈ f(s) is a reachable state of time(A', b'), and (s, (π, t), s') is a step of time(A, b), then there is an extended step (u, β, u') of time(A', b') such that u' ∈ f(s') and the timed traces of π and β are equal.

3. If s and u are reachable states of time(A, b) and time(A', b'), respectively, and u ∈ f(s), then u.now = s.now.

2.5.1 Invariant Definition

The definition above of timed forward simulations is expressed in terms of reachable states. In this context, it is more convenient to use an equivalent definition that is instead expressed in terms of invariants of the system. This is because it is easier to express and verify invariants than reachability with the Larch tools. Consequently, we will use the following definition of timed forward mapping, taken loosely from [7], where it is referred to as "weak forward simulation."

Let (A, b) and (A', b') be timed automata with the same set Π of external actions. Let I_A and I_A' be invariants of (A, b) and (A', b'), respectively. Let f be a mapping from the states of time(A, b) to sets of states of time(A', b'). Then f is a strong possibilities mapping from time(A, b) to time(A', b') provided that the following conditions hold:

1. For every start state s of time(A, b) there is a start state u of time(A', b') such that u ∈ f(s).

2. If s is a state of time(A, b) such that s ∈ I_A, u ∈ f(s) ∩ I_A' is a state of time(A', b'), and (s, (π, t), s') is a step of time(A, b), then there is an extended step (u, β, u') of time(A', b') such that u' ∈ f(s') and the timed traces of π and β are equal.

3. If s and u are states of time(A, b) and time(A', b'), respectively, such that s ∈ I_A and u ∈ f(s) ∩ I_A', then u.now = s.now.

In order to use this definition, we must first show that if a function satisfies the conditions of the invariant definition, then it will also satisfy the conditions of the reachability definition. This is true because each of the conditions of the invariant definition implies the same condition of the original definition:

1. This condition of the invariant definition clearly implies the same condition of the original definition, as the conditions are identical.

2. The invariant definition requires this condition to hold whenever the states s and u of the automata satisfy the invariants. However, if these states are reachable, then they must satisfy the invariants. Therefore, the same

condition of the original definition must hold for s and u also.

3. This condition holds for the same reason as the prior one.

Therefore, the invariant definition is a stronger requirement than the original one, and if a function satisfies the invariant definition, then it must also satisfy the original definition.


Chapter 3

The Larch Tools

The Larch Tools support a method of systematically constructing and verifying proofs. The process consists of two phases. Informally, the first is describing the proof goals. This phase, commonly called formalizing the model, is done in the Larch Shared Language (LSL). The formalization provides the axioms and proof obligations. Proof obligations are facts that must be proved to show the correctness of a trait. Once the formalization is done, the proof itself is carried out using the Larch Prover (LP). Note that these phases may be intertwined, especially if one discovers an error in the formalization after having worked within LP for a while.

This chapter is intended to give an introduction to the Larch Tools, and to provide an understanding of the role each plays in constructing and verifying proofs. It provides a basis for the next chapter, which contains an actual proof, as an example of how the Larch Tools work when dealing with timed automata. To that end, this chapter is broken down into sections that

* provide an idea of what each tool is for, and a basic understanding of its use,

* describe the use of the Larch Shared Language in formalization, including the overall approach and some examples, and

* describe the use of the Larch Prover, including its basic concepts, its commands, and a guided tour of a simple proof.

3.1 Overview and Purpose of Each Tool

Figure 3-1: Pictorial Overview of the Formalization Process (formalize the model manually; run the Larch Shared Language Checker automatically; verify with the Larch Prover, computer assisted).

The Larch Tools provide a method for formalizing an informal proof. This process consists of formalizing the specification, and performing the proof itself. Figure

3-1 provides an overview of the process and shows the role of each tool. The Larch Shared Language is used to define the axiomatization and proof obligations, and the Larch Prover carries out the proof.

The Larch Shared Language provides a method for describing the automata and proof goals at a high level. This description must be done manually, and consists of creating a number of traits. Each of the traits describes a conceptual object. The tool itself takes the form of a compiler: once the traits have been entered, the LSL checker is run on them to produce the axioms and conjectures that need to be proved in a way that the Larch Prover can understand. By contrast, the Larch Prover is interactive.



Once the axioms and proof goals have been defined, LP attempts to perform the proof. However, if it comes to a proof requirement that it cannot perform automatically, it asks the user for input to help it with the proof. The user's input can be remembered and used again in the future to handle the same or similar obligations. Sequences of commands can be thought of as a program, and LP as their interpreter.

3.2 Formalizing the Model with LSL

As described in the previous section, LSL is a language to formally describe the model used and the theorem to be proved at a higher level than the Larch Prover. It produces a sequence of LP commands that assert the axioms and introduce the conjectures to be proved. The process itself is referred to as formalization, and the model used in the hand proof is called the informal model, in contrast to the formal model

described in LSL.

3.2.1 Notation in Larch

In this thesis, Larch code or names are printed in typewriter font, in order to differentiate them from regular text. Furthermore, Larch code has often been printed with mathematical symbols where the real Larch code actually contains ASCII strings. This section provides a guide for translating the symbolic definitions shown here to their ASCII equivalents.

Printed Symbol    ASCII Equivalent
∨                 \/
∧                 /\
⇒                 =>
⇔                 <=>
<                 \<
>                 \>
∀                 \A
∃                 \E
≤                 <=
≥                 >=
∈                 \in
⊖                 \ominus

There is one exception. The first ∀ in the asserts section uses the string \forall rather than \A.
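For instance, applying this table, the axiom isExternal(a) ⇔ isInput(a) ∨ isOutput(a) from the AutomatonBasics trait shown later in this chapter would appear in the actual trait file as

  isExternal(a) <=> isInput(a) \/ isOutput(a)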

3.2.2 Generic Background for LSL

This section presents a conceptual overview of the Larch Shared Language, without going into details related to automata. The goal is to give the reader a sufficient understanding of the issues to write most specifications, and to understand more advanced works on LSL if the information here is insufficient for the reader's needs. For one such treatment see [4]. An LSL formalization consists of a group of traits. Each trait represents a conceptual object. For example, a trait might describe the integers, or associativity. Each trait is allowed to be parameterized. Thus, you could say that i is an integer or that + is associative. In practice, each trait is kept in its own file. This allows users to easily find the traits they need. Each trait defines two different kinds of things: sorts, and operators. Sorts are basically types, in the programming sense. Operators are functions on zero or more arguments. (An

operator with zero arguments is a constant).

When a trait has been completed, it can be tested for errors and compiled into an LP-compatible format using the LSL checker. This will produce proof obligations for the trait. The remainder of this section will describe traits at a high level, and discuss how they are mapped into LP.


Defining Sorts

One way to define sorts is while defining an operator. If the name of a previously undefined sort is specified in the signature of an operator (see the section on Defining Operators), then that name is taken to be the name of a new sort.

Another method of defining a sort within a trait is explicitly. For example, the trait that represents the bounds on a single class contains the following line:

Bounds tuple of first:Time, last: Time

This line introduces the sort Bounds.

The final way to define a sort is by referencing another trait. If, for example, you needed to use time in a trait, then you could simply refer to the time trait. Later in this section is a more detailed discussion of interrelating traits.

Defining Operators

The introduces section defines a list of operators (and, by extension, constants, since they are simply operators without arguments) for use in the trait. For example, the

introduces section for the trait Time includes the following lines.

introduces
  0 : -> T
  __ + __, __ - __ : T, T -> T

To understand these lines one must know that each one is divided into two sections by the colon. The first section describes the name and format of the operator (+ or - are names). Operators are allowed to be prefix, postfix, or mixfix. The __ symbol

represents the location of arguments in this section. The second half of each line

tells the signature of the operator, i.e., the sorts of the arguments and the result.

Note that if the second half says there are arguments, but the first half does not show any locations for them, then they follow the name of the operator in function notation. The default form of named operators is OPERATOR( ARG1, ARG2, ...

(28)

the + and - operators go in between two items of sort T, and the result of adding or subtracting two items of sort T is another item of sort T.
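As an illustration of the two forms (hypothetical operators, not taken from the thesis's traits), the following lines declare one named operator and one mixfix (infix) operator over the same sorts:

  introduces
    min      : T, T -> T       % named form: used as min(t1, t2)
    __ <= __ : T, T -> Bool    % infix form: used as t1 <= t2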

Facts

LSL provides two means of defining facts about a trait. The first is for axioms, which are true by definition. The second is for facts that can be proved from the other facts in the trait. When the LSL checker is run on a trait, the axioms become facts, and the provable facts become proof obligations. However, if the trait is referenced from another trait (how to do this is discussed in the next section), then both become facts.

The asserts section found in each trait states the axioms of the trait. There

are two parts of the asserts section. The first tells how things are generated. This is approximately an enumeration of the possible values something can take. For example, in every automaton, there is a generated by statement such as

Actions generated by act1, act2, act3

where act1, act2, and act3 would be replaced by a complete list of the actions of

the automaton.

The rest of the asserts section describes general facts about the trait.

This section is fairly easy to read without any particular explanation. However, for an explanation of the details of this section, see [4]. The information in [4] is slightly out of date. Statements in the asserts section are now allowed to have existential and universal quantifiers. Also, the == operator has been eliminated. Ambiguities in the parsing of = should be resolved with parentheses instead. The major useful but non-intuitive fact about this section is that when you put in some fact such as a = b, the LSL checker takes this as a strong hint that a is defined as b. Thus, the rewrite rule that LP creates will probably be ordered as a -> b. Rewrite rules are discussed in more detail in section 3.3.
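A minimal sketch of a complete asserts section (a hypothetical two-state sort, not part of the thesis's formalization) might read:

  Switch: trait
    introduces
      off, on : -> S
      toggle  : S -> S
    asserts
      S generated by off, on
      \forall s: S
        toggle(off) = on;
        toggle(on) = off

Given the hint described above, LP would be expected to orient these equations left to right, e.g. toggle(off) -> on.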

The implies section of a trait lists some useful, supposedly provable facts about that trait. This section differs from the asserts section only in that when the LSL


checker is run on the trait, the facts in the implies section become proof obligations rather than facts. Thus, almost all the remarks in the previous paragraph hold true for this

section as well.

Relating to Other Traits

Most often, a trait does not exist in a vacuum, but is instead defined in terms of other traits. There are three methods of doing this. The first two methods are the assumes and includes sections of a trait. The difference between these two is somewhat subtle. If another trait is included, then the trait included is a part of the definition of the new trait. However, if another trait is assumed, then the definition of the new trait does not make sense unless the assumed trait is included elsewhere. The best way to see this distinction is to see it in use. One example is the Executions trait, where the distinction is discussed in context (see page 32.) The third method is as a

statement in the implies section. This requires the user to prove the properties in

the trait. For example, when one wishes to prove a simulation relationship between two automata A and B, one usually uses a line such as Simulation(A, B) in the implies section of the complete trait for the system.

3.2.3 The Layered Approach to LSL Specification

The informal description of an MMT automaton is based on several other concepts, such as I/O automata, and time. Similarly, a specific automaton is described as an

MMT automaton that has certain additional properties. In other words, the general

approach is to build up the description in layers, so that each layer is defined in terms of the lower levels. Thus, understanding a concept necessitates understanding

the lower level concepts. This same approach is used in LSL. In LSL each concept is defined in its own trait. Thus, the specific simulation relationship to be proved is expressed as a trait that is defined in terms of lower level traits such as the specific automata and simulation relationships in general. Figure 3-2 shows a module dependency diagram for the proof I carried out. One of the major benefits of this approach is the reusability of traits. This means that, in effect, there is a library of


predefined traits which now includes things like MMT automata and timed simulation relationships. Because of this library, defining a new simulation will really only require describing the details of the specific automata and the relationship between them. The traits that need to be redefined when doing this are shown in dotted boxes in figure 3-2. This reusability cuts down on a lot of work, both in formalizing the simulation, and in the proving process, as the traits in the library have already been tested and honed. Furthermore, the use of predefined libraries increases the user's confidence in results proved, as the axioms will be less likely to have inconsistencies.

Figure 3-2: Module Dependency Diagram of a Timed Forward Simulation Relationship.

The rest of this section describes in detail each of the traits used in defining a specific automaton. In other words, this section describes the reusable library for automata. Traits for the automata used in the example can be found in the example chapter. This should be useful both as a reference and as an example of what real traits look like.


AutomatonBasics (A): trait
  introduces
    start      : A$States → Bool
    enabled    : A$States, A$Actions → Bool
    effect     : A$States, A$Actions, A$States → Bool
    isExternal : A$Actions → Bool
    isInternal : A$Actions → Bool
    isInput    : A$Actions → Bool
    isOutput   : A$Actions → Bool
  asserts ∀ a: A$Actions, s: A$States
    isExternal(a) ⇔ isInput(a) ∨ isOutput(a);
    ¬(isInput(a) ∧ isOutput(a));
    isInternal(a) ⇔ ¬isExternal(a);
    isInput(a) ⇒ enabled(s, a)

Figure 3-3: LSL trait defining basics of input-enabled I/O automata

3.2.4 I/O Automata

This section shows the traits that specify an I/O Automaton. The first four of these collectively describe input-enabled I/O Automata in general. The fifth discusses the invariants of automata, and provides some proof obligations that are required of a trait for it to actually describe an I/O automaton.

The AutomatonBasics trait provides the basic vocabulary of automata, as well as some basic concepts that arise directly from the definitions. Note that since it is defined in terms of nothing else, but used in other definitions, it does not include or assume any other traits. Just as in the informal definition, the start states are a subset of the states. In the formal model this subset is defined by the boolean start

operator. Thus,

∀s ∈ states: s ∈ start ⇔ start(s) = TRUE

The constructs isExternal, isInternal, isInput, and isOutput express the notions of the external, internal, input, and output subsets of actions in the same way. Furthermore, the trait says that external actions are partitioned into input and output actions, and

Executions (A): trait
  assumes AutomatonBasics(A)
  introduces
    isStep      : A$States, A$Actions, A$States → Bool
    null        : A$States → A$StepSeq
    __(__, __)  : A$StepSeq, A$Actions, A$States → A$StepSeq
    execFrag    : A$StepSeq → Bool
    first, last : A$StepSeq → A$States
  asserts
    ∀ s, s': A$States, a, a': A$Actions, ss: A$StepSeq
      isStep(s, a, s') ⇔ enabled(s, a) ∧ effect(s, a, s');
      execFrag(null(s));
      execFrag(null(s)(a, s')) ⇔ isStep(s, a, s');
      execFrag((ss(a, s))(a', s')) ⇔ execFrag(ss(a, s)) ∧ isStep(s, a', s');
      first(null(s)) = s;
      last(null(s)) = s;
      first(ss(a, s)) = first(ss);
      last(ss(a, s)) = s

Figure 3-4: LSL trait defining executions of I/O automata

it expresses the idea that input actions are always enabled. The effects of an action are expressed as a subset of all possible pre/post pairs of states. Thus, the effects of an action of the automaton are not required to be deterministic, as an action can take the automaton from one pre-state to any of several post-states. This property is also true of the informal model, but is not often used in practice.

The Executions trait formalizes the informal notion of executions, and execution fragments. Within the context of this trait, null(s) refers to the empty sequence

of actions beginning at the state s, rather than the passage of time action discussed

later in the timed setting. The Executions trait lets one define a formal analog to

the sequence s_0, a_0, s_1, a_1, ..., a_{n-1}, s_n. To express this sequence in LSL, one would use

(...((null(s_0))(a_0, s_1))(a_1, s_2)...)(a_{n-1}, s_n)
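For instance, a small concrete case of the same construction: the two-step fragment s_0, a_0, s_1, a_1, s_2 is written null(s_0)(a_0, s_1)(a_1, s_2), and the trait's axioms make execFrag of it equivalent to isStep(s_0, a_0, s_1) ∧ isStep(s_1, a_1, s_2), with first of the fragment equal to s_0 and last equal to s_2.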

Note that this trait assumes AutomatonBasics, rather than including it. This is

because the notion of executions is intended to make sense on top of the definition of an existing automaton, rather than on an automaton defined by this trait.


Traces (A): trait
  assumes AutomatonBasics(A), Executions(A)
  introduces
    common  : A$Actions → CommonActions
    empty   : → Traces
    __ ^ __ : Traces, CommonActions → Traces
    trace   : A$Actions → Traces
    trace   : A$StepSeq → Traces
  asserts
    Traces generated by empty, ^
    ∀ s: A$States, a: A$Actions, ss: A$StepSeq
      trace(null(s)) = empty;
      trace(ss(a, s)) = (if isExternal(a) then trace(ss) ^ common(a) else trace(ss));
      trace(a) = (if isExternal(a) then empty ^ common(a) else empty)

Figure 3-5: LSL trait defining traces for I/O automata

The trait Traces formalizes the notion of the behavior of an execution. In this trait, the trace of an execution fragment is the sequence of all external actions in that fragment. It shows how to construct the trace of any finite sequence of actions.

We introduce a function common to map the actions of an automaton into a new sort CommonActions. This is necessary because LSL requires sorts to repre-sent disjoint, non empty sets. This allows the traces of an automaton A1 (which have actions of sort Al$Actions) to be compared with the traces of an automaton A2 (which have actions of sort A2$Actions), via the common actions. The definition of CommonActions must be provided when specifying the particular automaton. This is usually a fairly mechanical process.

The IOAutomaton trait gathers these three traits and introduces the idea of equivalence classes of actions. This is the complete description of an input-enabled I/O

automaton. It includes all the axioms. The AutomatonBasics, Executions, and

Traces traits define most of these axioms, so the IOAutomaton trait merely needs to add a few finishing touches. These are the definition of classes, and the addition of a

function inv that maps states to booleans, to express the invariant of the automaton. Even though the IOAutomaton trait defines all the axioms about input enabled I/O


IOAutomaton (A): trait
  includes AutomatonBasics(A), Executions(A), Traces(A)
  introduces
    class   : A$Actions → A$Classes
    enabled : A$States, A$Classes → Bool
    inv     : A$States → Bool
  asserts ∀ s: A$States, a: A$Actions, c: A$Classes
    enabled(s, c) ⇔ ∃ a (enabled(s, a) ∧ class(a) = c)

Figure 3-6: LSL trait bringing together the I/O automata definition

Invariants (A): trait
  assumes IOAutomaton(A)
  asserts ∀ s, s': A$States, a: A$Actions
    start(s) ⇒ inv(s);
    inv(s) ∧ enabled(s, a) ∧ effect(s, a, s') ⇒ inv(s');
    enabled(s, a) ⇒ ∃ s' effect(s, a, s')

Figure 3-7: LSL trait defining requirements for I/O automata

automata, it alone is not sufficient for defining an I/O automaton. This is because I/O automata must satisfy certain requirements in order to be correct. The Invariants trait expresses these requirements. The first two statements in this trait deal with proving the correctness of the invariant. The final statement requires that an action may only be enabled in a certain state if there is a post-state for that action in that

state.

Thus, defining an I/O automaton requires two interactions with other traits. First, one must include the IOAutomaton trait and imply the Invariants trait. For a

complete description of defining an automaton, see section 3.2.7.

3.2.5 MMT Automaton

Before showing how an MMT automaton is defined, we must first show three auxiliary traits. These are Time, Bounds, and NowExists.

The Time trait provides a basic definition of time. It has probably received more attention than most of the other traits in the formalization. This is because earlier


Time (T): trait
  includes TotalOrder(T), Natural(- for ⊖), AC(+, T)
  introduces
    0 : → T
    __ + __, __ - __ : T, T → T
    __ * __ : N, T → T
  asserts ∀ t, t1, t2: T, n: N
    0 ≤ t;
    0 + t = t;
    0 * t = 0;
    succ(n) * t = (n * t) + t;
    t ≤ (t + t1);
    (t + t1) - t1 = t;
    (t + t1) ≤ (t + t2) ⇔ t1 ≤ t2;
    (t + t1) < (t + t2) ⇔ t1 < t2;
    (t + t1) = (t + t2) ⇒ t1 = t2;
    0 < n ⇒ (n * t) ≥ t;
    n ≥ 1 ⇒ ((n - 1) * t) + t = n * t

Figure 3-8: LSL trait defining the properties of time

versions attempted to allow time to be infinite, in order to allow classes without upper bounds. This led to some inconsistencies in earlier versions of the axioms, as the requirement t ≠ ∞ was omitted accidentally. When this requirement was inserted in the proper places, it became difficult to perform algebra involving time. Thus, to solve this problem, infinity was moved to the Bounds trait, and time is required to be finite.

One of the notable things about the Time trait is its heavy reliance on the library of traits. It should be noted that AC (associative/commutative) has special support in LP. Thus using it is more powerful than an equivalent formalization of the same properties.

The Bounds trait represents the concept of the period of time during which an action from a class must occur. The trait also includes a definition of what it means for a class's occurrence to be unbounded in time, and what it means to add a time to the bounds. Both of these are useful in the TimedAutomaton trait. As discussed in the previous paragraph, time must be finite, but the concept of an infinite upper bound is allowed by this trait.
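For instance, with made-up values, if b = [true, 2, 5] then t ∈ b exactly when 2 ≤ t ≤ 5, b + 3 = [true, 5, 8], and b ⊆ b' holds for any b' with ¬b'.bounded; each of these follows directly from the assertions in the trait below.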


Bounds: trait
  includes Time(Time)
  Bounds tuple of bounded: Bool, first, last: Time
  introduces
    __ + __ : Bounds, Time → Bounds
    __ + __ : Bounds, Bounds → Bounds
    __ * __ : N, Bounds → Bounds
    __ ⊆ __ : Bounds, Bounds → Bool
    __ ∈ __ : Time, Bounds → Bool
  asserts ∀ b, b1, b2: Bounds, t: Time, n: N
    b.first ≤ b.last;
    b + t = [b.bounded, b.first + t, b.last + t];
    b1 + b2 = [b1.bounded ∧ b2.bounded, b1.first + b2.first, b1.last + b2.last];
    n * b = [b.bounded, n * b.first, n * b.last];
    b1 ⊆ b2 ⇔ (b1.bounded ∧ b2.bounded ∧ b2.first ≤ b1.first ∧ b1.last ≤ b2.last) ∨ ¬b2.bounded;
    t ∈ b ⇔ (b.first ≤ t ∧ t ≤ b.last) ∨ ¬b.bounded

Figure 3-9: LSL trait defining a single class of the boundmap

NowExists (A): trait
  introduces
    __.now : A$States → Time

Figure 3-10: LSL trait for checking if the automaton has a time in its state

The NowExists trait is provided as support for the TimedForward trait, which formalizes the notion of timed simulation relationships described in Chapter 2. In the TimedForward trait (figure 3-12, described in more detail later), NowExists is used to check if the I/O automata involved in the simulation are actually MMT automata which always have a now component in their state. Thus, the TimedForward trait assumes NowExists for each of the two automata involved in order to make sure that they are MMT automata. Although the requirements of MMT automata are, in reality, more stringent than simply having a now component, it is not necessary for

an I/O automaton to meet these other requirements for the TimedForward trait to

make sense. Thus, using NowExists leaves open the possibility of using another type of automaton with timing properties in conjunction with the TimedForward trait.

The TimedAutomaton trait is the largest and most complex of the traits used in our formalization. It describes how an MMT automaton may be created from an

untimed I/O automaton and a boundmap, which relates a bound to every class. It

follows the definition of MMT automata outlined in Chapter 2 and [8]. Because of this, it is much like the definition of most other I/O automata, only more complex. The next several paragraphs discuss the TimedAutomaton trait's various noteworthy and difficult aspects.

As with the informal model, the actions of an MMT automaton are the null action, along with the actions of the untimed I/O automaton, each associated with the time of execution. Much of the TimedAutomaton trait is devoted to explicitly stating the various facts associated with this. Some of the facts included are

* the boundmap must define bounds for every class,

* the null action is internal,

* the effect of the null(t) action is to advance time to t,

* how far forward the null(t) action can advance time,

* an action from the I/O automaton is enabled in the timed automaton if it is enabled in the same state of the untimed automaton,

TimedAutomaton (A, b, TA): trait
  assumes IOAutomaton(A)
  includes
    IOAutomaton(TA), Bounds,
    FiniteMap(A$Bounds, A$Classes, Bounds, __[__] for apply)
  TA$States tuple of basic: A$States, now: Time, bounds: A$Bounds
  introduces
    b       : A$Classes → Bounds
    null    : Time → TA$Actions
    addTime : A$Actions, Time → TA$Actions
  asserts
    TA$Actions generated by null, addTime
    ∀ s, s': TA$States, c: A$Classes, a: A$Actions, t: Time
      defined(s.bounds, c);
      isInternal(null(t));
      isInternal(addTime(a, t)) ⇔ isInternal(a);
      isInput(addTime(a, t)) ⇔ isInput(a);
      start(s) ⇔ start(s.basic) ∧ s.now = 0
        ∧ ∀ c ((enabled(s.basic, c) ⇒ s.bounds[c] = b(c))
             ∧ (¬enabled(s.basic, c) ⇒ ¬(s.bounds[c]).bounded));
      enabled(s, null(t)) ⇔ s.now < t ∧ ∀ c (t ∈ s.bounds[c]);
      effect(s, null(t), s') ⇔
        s'.now = t ∧ s'.basic = s.basic ∧ s'.bounds = s.bounds;
      enabled(s, addTime(a, t)) ⇔
        s.now = t ∧ enabled(s.basic, a)
        ∧ (¬isInput(a) ⇒ t ∈ s.bounds[class(a)]);
      effect(s, addTime(a, t), s') ⇔
        s'.now = t ∧ effect(s.basic, a, s'.basic)
        ∧ ∀ c ((enabled(s'.basic, c) ∧ enabled(s.basic, c) ∧ class(a) ≠ c
                 ⇒ s'.bounds[c] = s.bounds[c])
             ∧ (enabled(s'.basic, c) ∧ class(a) = c ⇒ s'.bounds[c] = b(c) + t)
             ∧ (enabled(s'.basic, c) ∧ ¬enabled(s.basic, c) ⇒ s'.bounds[c] = b(c) + t)
             ∧ (¬enabled(s'.basic, c) ⇒ ¬(s'.bounds[c]).bounded));
      trace(addTime(a, t)) = trace(a);
      common(addTime(a, t)) = common(a);
      inv(s) ⇔ ∀ c (s.now ∈ s.bounds[c]
                  ∧ (¬enabled(s.basic, c) ⇒ ¬(s.bounds[c]).bounded))
  implies
    Invariants(TA)
    ∀ a: A$Actions, t: Time
      isOutput(addTime(a, t)) ⇔ isOutput(a:A$Actions)

Figure 3-11: LSL trait describing the creation of a Timed Automaton

* the effects of an action from the I/O automaton on the times at which other actions may occur,

* all other actions are internal or external just as they were in the I/O automaton,

* the trace of the timed automaton is the same as the trace of the I/O automaton, and

* the common actions of the timed automaton are the same as the common actions

of the untimed automaton.

It is quite easy to recognize which line in the trait corresponds to each of these facts. The boundmap is defined with the FiniteMap trait, which is taken from the LSL library of standard traits. It is used to introduce a function b from classes to time bounds. Note that the use of the FiniteMap trait restricts our definition of MMT automata to those which contain only a finite number of classes, as the FiniteMap trait only allows a finite number of entries. The boundmap associates a time bound b(c) with each class c in the untimed automaton. The values of each of the bounds must be defined with the trait defining the specifics of the simulation relationship.
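For example, a specific trait might fix the bounds of a single class with an assertion of roughly the following shape (the class name count and the Time constants d1 and d2 are hypothetical, introduced by that trait):

  b(count) = [true, d1, d2]    % actions in count must occur between d1 and d2 after enabling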

Invariants that hold for all MMT automata are listed at the end of the asserts section. As with the definition of any I/O automaton, invariants must be set up as a

function inv that maps states of the automaton to boolean values. This allows the

Invariants trait to be used to verify the invariants. They will be useful for future

users of the library, as they allow us to use these properties as needed from LP. The fact that they are preserved has been verified using LP. In fact, the proof of one of them is discussed in the LP section of this chapter as an example.

3.2.6 Simulations

This section describes the TimedForward trait used to define the notion of a timed simulation between two automata Al and A2. The requirements of a formally defined timed simulation are exactly the same as the requirements put forth in section 2.5.1.


TimedForward (A1, A2, f): trait
  assumes IOAutomaton(A1), IOAutomaton(A2), NowExists(A1), NowExists(A2)
  introduces
    f : A1$States, A2$States → Bool
  asserts ∀ s, s': A1$States, u: A2$States, a: A1$Actions, alpha: A2$StepSeq
    start(s) ⇒ ∃ u (start(u) ∧ f(s, u));
    f(s, u) ⇒ u.now = s.now;
    f(s, u) ∧ inv(s) ∧ inv(u) ∧ isStep(s, a, s') ⇒
      ∃ alpha (execFrag(alpha) ∧ first(alpha) = u
             ∧ f(s', last(alpha)) ∧ trace(alpha) = trace(a))

Figure 3-12: LSL trait defining a timed forward simulation relationship

Note that the assumes line requires that the two automata be I/O automata

that have a now component. We take this approach because we don't explicitly have access to the untimed versions of the automata and the boundmaps, which would be necessary in order to have an assumes section like

assumes

TimedAutomaton(UntimedAl, Boundmapl, A),

TimedAutomaton(UntimedA2, Boundmap2, A2),

While the approach we take does not require the automata in the relationship to be MMT automata, it does guarantee everything necessary for the TimedForward trait to make sense. This allows the trait to be somewhat more general: any form of timed

automata that contains a now component can use the trait.

3.2.7 The Specific Traits

Now that the library has been displayed, the natural question is, what is left for

the user to do? The user needs to do the following to formalize a timed simulation

relationship:

* Write formal descriptions of two untimed automata that include the IOAutomaton trait and imply the Invariants trait.

* Write a CommonActions trait that has all the common actions of the two automata.

* Write a simulation trait that includes the traits defining the specific automata, defines the simulation relation f, and implies the TimedForward trait.

An example of this can be seen in the next chapter.
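As a rough sketch of the shape such a simulation trait takes (all names and the particular relation are hypothetical; the thesis's real example appears in the next chapter):

  CountSim: trait
    includes Impl, Spec                    % traits defining the two specific MMT automata
    introduces
      f : Impl$States, Spec$States -> Bool
    asserts \forall s: Impl$States, u: Spec$States
      f(s, u) <=> (u.now = s.now /\ u.reported = s.reported)   % hypothetical relation
    implies
      TimedForward(Impl, Spec, f)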

3.2.8 Using the LSL Checker

The LSL checker is run on a specific trait. It checks the syntax and static semantics of the trait and all its subtraits. Furthermore, if the -lp option is specified, the checker generates axioms and proof obligations for the trait. These are put in two files, as follows.

* trait_Axioms.lp: This file gives an LP description of all the axioms of the trait.

Generally, it is run once, and a freeze file is produced. (See next section for a description of what this means.)

* trait_Checks.lp: This is a list of proof obligations for the trait. These generally result from the assumes and implies statements in that trait. This file generally

is taken and edited extensively to prove the theorems.

One notable fact about the LSL Checker is that it only produces checks for the trait itself, and not for the subtraits. Thus, in order to prove a simulation relationship, you must verify the proof obligations in the Simulation trait and in the traits defining the two automata, which have their own proof obligations due to the implied Invariants trait. Only after checking all three traits can you be sure that the simulation actually holds, as otherwise the invariants of the automata may be left unproved. Usually, proving these invariants is quite easy, as they will normally follow quickly by an inductive argument.
