
4.2 Model Checking

4.2.4 Compositional Refinement Verification

Given the state-space explosion problem, there has long been the idea that design should be approached top-down, moving from the abstract to the implementation.

Automated verification tools are envisioned to be the enabling link that allows the designer to incrementally verify the design as implementation detail is added, so that the final implementation is consistent with the abstract specification.

The compositional aspect comes from the idea that the design is decomposed into small parts whose properties can be verified in isolation, and the final system is taken as the composition of these elements. Properties that must hold at the system level are proven in terms of the properties of the individual elements, and a compositional rule specifies exactly when the composition of properties can be used to infer the properties of the entire system.

A compositional proof is usually built using the following reasoning. If P and Q are processes, φ is an assumption about the environment, and ψ is the property that we wish to prove, then we must first show that, given the environment assumption φ, the property ψ holds in P, written as P, φ |= ψ. We must also show that the environment assumption φ holds in Q, or Q |= φ. Then we can infer that the composition of P and Q maintains the property ψ. This is written as P ∥ Q |= ψ.
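For reference, the non-circular rule just described can be displayed as an inference rule (a schematic rendering of the text above; ∥ denotes parallel composition):

\[
% Assume-guarantee rule without circularity: phi is the environment
% assumption discharged by Q, psi the property established in P.
\frac{P, \varphi \models \psi \qquad Q \models \varphi}
     {P \parallel Q \models \psi}
\]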

The difficulty in such a proof arises from the fact that the environment assumptions needed to verify interacting processes are interdependent. For example, take P and Q to be sender and receiver entities, respectively, in a protocol system. We wish to prove that property ψ holds in P given some assumption about the environment – here we might take the environment to be the high-level behavior of Q.

Informally, the proof is as follows: “given that Q exhibits the proper behavior up to time t, show that P exhibits the proper behavior up to time t+1.” However, the behavior of Q up to time t depends on the behavior of P up to time t, so we have circularity if we attempt to apply the above compositional rule.

Recent work by McMillan [McM97][McM98] overcomes the circularity for the case of hardware processes involving zero-delay gates and unit-delay latches. He defines a compositional rule for Mealy machines by defining a signal to be a sequence of values, and a machine to be a collection of assertions about a set of signals. That is,

\[
M \;=\; \bigwedge_{\sigma \in S} M_\sigma
\]

where σ is a signal in the finite collection S, and M_σ is a component. Composition of processes in this framework is simply conjunction; that is, each component can be viewed as a process that produces a single signal, so that the system is composed of the conjunction of all the components.
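As an illustration (with hypothetical symbols f, g, a, b, not taken from McMillan's examples), a machine with one zero-delay gate driving signal a and one unit-delay latch driving signal b is simply the conjunction of two single-signal components:

\[
% Illustrative only: M_a constrains a combinationally (zero delay),
% M_b constrains b through a unit-delay latch.
M_a = \forall t.\; a_t = f(b_t), \qquad
M_b = \forall t.\; b_{t+1} = g(a_t), \qquad
M = M_a \wedge M_b .
\]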

The goal of this approach is to show that a detailed implementation Q implies an abstract specification P. To show this, sets of models of the temporal logic assertions are associated with the specification P and with the implementation Q. The proof amounts to showing that the set of models associated with P contains the set of models associated with Q.

Abadi and Lamport break the circularity problem described above by using induction over time. Letting m^A_τ stand for “m holds up to time t = τ”, the following induction is used [AL93]: if each of two properties, assumed up to time τ-1, implies the other up to time τ,

\[
m_1{}^A_{\tau-1} \models m_2{}^A_{\tau}
\qquad \text{and} \qquad
m_2{}^A_{\tau-1} \models m_1{}^A_{\tau},
\]

then by induction over τ the properties of the conjunction m_1 ∧ m_2 are satisfied at all times.
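To see why the circularity is harmless, the induction can be unrolled as follows (a sketch of the standard argument, not a quotation from [AL93]):

\[
% Base case: "up to time -1" is vacuously true, so each premise yields
% its conclusion at time 0.  Step: the conjunction up to tau yields each
% property, and hence the conjunction, up to tau+1.
\begin{array}{ll}
\tau = 0: & m_2{}^A_{-1} \text{ vacuous} \;\Rightarrow\; m_1{}^A_{0}, \qquad
            m_1{}^A_{-1} \text{ vacuous} \;\Rightarrow\; m_2{}^A_{0} \\[4pt]
\tau \to \tau + 1: & (m_1 \wedge m_2)^A_{\tau} \;\Rightarrow\; m_1{}^A_{\tau+1} \wedge m_2{}^A_{\tau+1}
\end{array}
\]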

McMillan’s contribution to this approach was to define an inference rule for composition that allows cyclic use of environment signals with synchronous, zero-delay processes, as opposed to the interleaving, unit-delay model of Abadi and Lamport. Further, his inference rule is sound even when the environment contains many processes that constrain the same signal. Other approaches [Kur94][GL94] do not allow cyclic assumptions, and his is the first work to define a sound inference rule in the presence of multiple assignments to the same signal (the value of which will be explored in the next chapter).

Without undue detail, the inference rule is presented here for completeness and to lay the theoretical groundwork for the refinement techniques presented in Chapter 5. The reader is referred to [McM97][McM98] for the proof.

Let P and Q be sets of processes, and ≺ a given well-founded order on the signals in P. We will call P the specification and Q the implementation. If p_1 ≺ p_2, then p_2 depends on p_1, and we use p_1^A_τ to prove p_2^A_τ; otherwise we use p_1^A_{τ-1}. Define

\[
Z_p \;=\; Q \,\cup\, \{\, p' \in P : p' \prec p \,\}
\]

to be the set of processes that can be used to prove p with zero delay. Then, to prove a particular p up to the present time τ, an arbitrary set of environment assumptions ε_p ⊆ P ∪ Q may be chosen, with those signals in Z_p assumed up to the present time and the rest assumed up to the last instant τ-1. Then, with a process understood to be the conjunction of its components, the following theorem holds.

Theorem 1 [McM98]: For all p ∈ P, choose ε_p ⊆ P ∪ Q. The following inference rule is sound:

\[
\frac{\text{for all } p \in P: \quad
      \bigl(\varepsilon_p \cap Z_p\bigr)^A_{\tau} \;\wedge\;
      \bigl(\varepsilon_p \cap Z_p^{\,c}\bigr)^A_{\tau-1} \;\models\; p^A_{\tau}}
     {Q^A_{\omega} \;\models\; P^A_{\omega}}
\]

where the superscript ω indicates that the assertions hold at all times.

In words, this says that for all p in P, if the environment assumptions, taken up to the appropriate time (depending on whether the assumed signal is in Z_p), imply p, then the implementation implies the specification.
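As a concrete illustration (hypothetical signals, not an example from [McM98]), suppose the specification contains two signals with p_1 ≺ p_2 and the implementation is a single process q. Then Z_{p_1} = {q} and Z_{p_2} = {q, p_1}, and choosing ε_{p_1} = {q, p_2} and ε_{p_2} = {q, p_1}, the premises of the rule become:

\[
% q may be assumed up to the present time in both proofs; p_2 only up to
% tau-1 when proving p_1, while p_1 may be assumed up to tau when proving p_2.
q^A_{\tau} \wedge p_2{}^A_{\tau-1} \;\models\; p_1{}^A_{\tau},
\qquad
\bigl(q \wedge p_1\bigr)^A_{\tau} \;\models\; p_2{}^A_{\tau},
\]

from which the rule concludes that the implementation q implies the specification p_1 ∧ p_2 at all times.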

McMillan used this approach to verify Tomasulo’s algorithm using an automated model checking system (SMV). In his example, the specification is a machine P that executes instructions in order as they arrive, stalling non-deterministically before emitting the answer. The implementation Q is a pipelined arithmetic unit that executes a stream of operations on a register file. Tomasulo’s algorithm allows execution of instructions in data-flow order, rather than sequential order, so the proof entails showing that the implementation produces the correct stream of output answers. By combining this approach with the symmetry reductions described in the preceding section, it was possible to verify the system for 32-bit data, 16 32-bit registers, and 16 32-bit reservation stations.

In the next chapter, we apply this technique to solve two problems. First, we would like to relate an SDL model with asynchronous, atomic message-passing semantics to a synchronous, sequential message transfer implementation. Second, we would like to verify properties about our hardware implementation of data transfer protocols. Due to the state space explosion problem, verifying properties about large blocks of data moving through a system has to date been out of reach of automated verification tools.

Chapter 5

A Hardware Implementation Methodology Using Refinement Verification

5.1 Overview

The design methodology presented in this work has as its goal linking formal specification to implementation. This chapter focuses on an approach by which it is possible to refine a high-level finite state machine with large-granularity, atomic-transfer message passing semantics to an implementation in synchronous hardware where message transfers take place in a sequence of smaller transactions. Further, the approach will allow us to formally verify the equivalence of the high-level specification and the low-level implementation.

The basic problem is that the high-level specification must be refined in both time granularity and behavioral detail. A state machine described in the SDL language, for example, has execution semantics in which signals arrive instantaneously over channels with non-deterministic delay. SDL processes react to a signal arrival by executing a transition that completes in zero time and in which other signals may be emitted.

The convenience of dealing with packet-level transfers using these execution semantics is that it allows one to focus on what the system does rather than how it does it. It frees the designer from worrying about shared resources such as system busses and memories, and instead allows the focus to be on message-passing, transaction-level interactions.

In the implementation, however, one must face design details such as passing messages a single word at a time over a shared bus, allowing many messages to be interleaved and simultaneously in transit. And, not surprisingly, it is at this detailed level that the greatest chance for design error arises.

In this chapter, we consider the problem of relating an SDL specification to a hardware implementation. We will start with an SDL specification for a finite state machine to define the asynchronous, atomic message-passing behavior of the processes in our system. This SDL specification is used to manually (i.e., informally) create a high-level, synchronous atomic message-passing specification for the hardware implementation in the SMV language. The SMV specification is then refined to a cycle-accurate model of the synchronous hardware implementation in which the high-level “atomic” message transfers correspond to a sequence of smaller transfers. We will use the compositional refinement verification technique outlined in the preceding chapter to check that the implementation-level SMV model is consistent with the specification-level SMV model (Figure 5-1).
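To make the refinement relation concrete, the following is a minimal sketch in NuSMV-style SMV syntax; the module names, signal names, and the four-word message size are hypothetical and are not taken from the thesis models. The specification module treats a message transfer as a single atomic event, while the implementation module spreads the same transfer over four word-sized bus cycles; the refinement check of the preceding chapter would then relate completion of the word sequence in the implementation to the atomic transfer of the specification.

-- Minimal sketch, NuSMV-style syntax; all names are hypothetical.
-- Specification: a message transfer completes atomically in one step.
MODULE spec_channel(send)
VAR
  delivered : boolean;            -- message visible at the receiver
ASSIGN
  init(delivered) := FALSE;
  next(delivered) := case
    send : TRUE;                  -- atomic, large-granularity transfer
    TRUE : delivered;
  esac;

-- Implementation: the same transfer takes four word-sized bus cycles.
MODULE impl_channel(send)
VAR
  word : 0..4;                    -- number of words already transferred
ASSIGN
  init(word) := 0;
  next(word) := case
    send & word = 0     : 1;      -- first word placed on the shared bus
    word > 0 & word < 4 : word + 1;
    TRUE                : word;
  esac;
DEFINE
  delivered := word = 4;          -- abstraction map: done when the last word arrives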

Figure 5-1. A “semi-formal” refinement methodology

Figure 5-2. Generalized data transfer network

5.2 Refinement and verification of a generalized data transfer network