
We addressed the situation of students in programming lessons during distance learning studies. The problem here is the usually missing or insufficient direct communication between learners and teachers and among learners, which makes it harder to resolve problems during self-tests and homework assignments.

In this paper we have presented the AT(x) approach, which is capable of automatically analyzing programs with respect to given tests and a reference solution. In the framework of small homework assignments with precisely describable tasks, the AT(x) instances are able to find many of the errors typically made by students and to communicate them in a manner understandable to beginners in programming (in contrast to the error messages of most compilers).

The AT(x) framework is designed to be used in combination with WebAssign, which is available at the FernUniversität Hagen and provides a general framework for all activities occurring in the assignment process. Consequently, AT(x) is constructed from two main components: an analysis component (often written in the target language) and a uniform interface component written in Java.

By implementing the necessary analysis components, instances of AT(x) for different programming languages are generated. This was presented for the instance AT(S), which performs the analysis task for Scheme programs. This analysis component is robust against programs causing infinite loops and runtime errors, and is able to generate appropriate messages in these cases. The general interface to WebAssign makes it easy to implement further instances of AT(x), for which the required main properties are also given in this paper.

During the next semesters, AT(S) will be applied in courses at the FernUniversität Hagen and its benefit for Scheme programming courses in distance learning will be evaluated.

Future work on AT(S) can address the following topics. While the current system aids the students in preparing their homework assignments, an automatic assessment stage comparable to [5] can reduce the effort required by the teacher to correct them. This, however, makes it necessary to understand errors not only in terms of the I/O behaviour, but also in terms of the source code. The precise assessment can be calculated as the similarity of the student's solution to a specimen program according to some appropriate distance function. Understanding errors in terms of the source code is also necessary in order to extend AT(S) towards an ITS. Furthermore, an ITS needs a model of the student's programming skills and possible misunderstandings in order to find reasons for certain errors and to provide more specialized help. In all these extensions we believe that useful online assistance to the students should always be one of the most important aims (or even the most important aim) in distance learning.
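To make the distance-function idea concrete, the following sketch scores a student's submission against a specimen solution by the similarity of their token sequences. The function name, the whitespace tokenisation, and the use of `difflib.SequenceMatcher` are our own illustration under stated assumptions, not part of AT(S):

```python
import difflib

def solution_similarity(student_src: str, specimen_src: str) -> float:
    """Score in [0, 1]; 1.0 means the submission is token-identical
    to the specimen.  Whitespace tokenisation is a crude stand-in
    for a real parser-based comparison of the source code."""
    student = student_src.split()
    specimen = specimen_src.split()
    # ratio() = 2 * (number of matching tokens) / (total tokens)
    return difflib.SequenceMatcher(None, student, specimen).ratio()

identical = solution_similarity("(define (sq x) (* x x))",
                                "(define (sq x) (* x x))")   # 1.0
variant = solution_similarity("(define (sq x) (* x x))",
                              "(define (sq y) (* y y))")     # below 1.0
```

A production assessment stage would of course compare abstract syntax trees rather than raw tokens, so that renamed variables or reordered definitions are not penalised.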

REFERENCES

[1] C. Beierle, M. Kulaš, and M. Widera. Automatic analysis of programming assignments. In Proceedings of the 1. Fachtagung "e-Learning" der Gesellschaft für Informatik (DeLFI 2003). Köllen Verlag, Bonn, 2003.

[2] J. Brunsmann, A. Homrighausen, H.-W. Six, and J. Voss. Assignments in a Virtual University – The WebAssign-System. In Proc. 19th World Conference on Open Learning and Distance Education, Vienna, Austria, June 1999.

[3] K. Claessen and J. Hughes. QuickCheck: a lightweight tool for random testing of Haskell programs. In Proceedings of the ACM SIGPLAN International Conference on Functional Programming (ICFP-00), volume 35.9 of ACM SIGPLAN Notices, pages 268–279, N.Y., Sept. 18–21, 2000. ACM Press.

[4] M. Flatt. PLT MzScheme: Language Manual, Aug. 2003.

[5] S. Foubister, G. Michaelson, and N. Tomes. Automatic assessment of elementary Standard ML programs using Ceilidh. Journal of Computer Assisted Learning, 1996.

[6] A. Homrighausen and H.-W. Six. Online assignments with automatic testing and correction facilities (abstract). In Proc. Online EDUCA, Berlin, Germany, October 1997.

[7] R. Kelsey, W. Clinger, and J. Rees (editors). Revised⁵ report on the algorithmic language Scheme. ACM SIGPLAN Notices, 33(9):26–76, Sept. 1998.

[8] R. Lelouche. Intelligent tutoring systems from birth to now. Künstliche Intelligenz, 13(4):5–11, Nov. 1999.

[9] R. Lütticke, C. Gnörlich, and H. Helbig. Vilab – a virtual electronic laboratory for applied computer science. In Proceedings of the Conference Networked Learning in a Global Environment. ICSC Academic Press, Canada/The Netherlands, 2002.

[10] Homepage LVU, FernUniversität Hagen, http://www.fernuni-hagen.de/LVU/, 2003.

[11] PLT DrScheme: Programming Environment Manual, May 2003. Version 204.

[12] Homepage WebAssign, http://www-pi3.fernuni-hagen.de/WebAssign/, 2003.

[13] H. Zhu, P. Hall, and J. May. Software unit test coverage and adequacy. ACM Computing Surveys, 29(4):366–427, Dec. 1997.

Chapter 8

Testing Reactive Systems with GAST

Pieter Koopman and Rinus Plasmeijer¹

Abstract G∀ST is a fully automatic test system. Given a logical property, stated as a function, it is able to generate appropriate test values, to execute tests with these values, and to evaluate the results of these tests. Many reactive systems, like automata and protocols, however, are specified by a model rather than in logic. There exist tools that are able to test software described by such a model-based specification, but these tools have limited capabilities to generate test data involving data types. Moreover, in these tools it is hard or even impossible to state properties of these values in logic. In this paper we introduce some extensions of G∀ST to combine the best of logic-based and model-based testing. The integration of model-based testing and testing based on logical properties in a single automated system is the contribution of this paper. The system consists only of a small library rather than a huge stand-alone system.

8.1 INTRODUCTION

Within the fully automatic test system G∀ST [15], properties over functions and data types are expressed in first-order logic. These properties are written as functions in the functional programming language CLEAN [18]. Based on the types used in these functions, G∀ST automatically and systematically generates test values. It evaluates the property for these values and analyses the test results. This avoids the burden of designing and evaluating a test suite by hand and makes it easy to repeat the test after changing the program (regression tests). This automatic and systematic generation of test data is a distinguishing feature of G∀ST that even allows proofs for finite types by exhaustive testing. In [15] we focused mainly on the concepts and implementation of G∀ST.
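The generate-execute-evaluate loop described above can be imitated in a few lines. This Python sketch mimics only the shape of G∀ST's testing loop, not its type-driven value generation; the function name and verdict tuples are our own invention. The key idea it illustrates is that exhausting a finite value space turns a test run into a proof:

```python
from itertools import islice, count

def for_all(values, prop, max_tests=1000):
    """Evaluate prop on systematically generated values.
    ('Proof', n)  - the value space was exhausted after n tests
                    (exhaustive testing of a finite type),
    ('Pass', n)   - n tests succeeded but values remain untested,
    ('Fail', x)   - x is a counterexample."""
    n = 0
    for x in islice(values, max_tests):
        if not prop(x):
            return ('Fail', x)
        n += 1
    if n < max_tests:
        return ('Proof', n)   # generator ran dry: property proven
    return ('Pass', n)

# Finite type Bool: exhaustive testing yields a proof.
print(for_all([False, True], lambda b: b or not b))   # ('Proof', 2)
# Infinite value stream: only a finite number of tests pass.
print(for_all(count(), lambda n: n >= 0, max_tests=10))  # ('Pass', 10)
```

Note the conservative edge case: a space with exactly `max_tests` values is reported as `Pass`, since the loop cannot tell that the generator is exhausted.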

¹Nijmegen Institute for Computer and Information Science, Nijmegen University, The Netherlands. Email: {pieter,rinus}@cs.kun.nl

It is possible to specify the behaviour of reactive systems, like the famous coffee-vending machines and protocols [5], in logic, as demonstrated by Z [20].

However, these reactive systems are usually specified by a model instead of by a property in logic. Many formalisms are used in the literature to specify reactive systems. We use labelled transition systems (LTSs), since they have been shown to be very general and effective for testing [13, 10].

G∀ST was originally designed for logic-based testing, not for model-based testing. In this paper we introduce some extensions that make G∀ST suitable for model-based testing. We introduce a general format to specify labelled transition systems as a data structure in CLEAN. The specification of an LTS by a function is shown to be more concise and can handle an unbounded number of labels and states.
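The two specification styles can be contrasted with a small sketch. The CLEAN types are not shown in this chapter, so everything below (state names, labels, the `spec` function) is our own illustration: a finite LTS is a transition table, while a function-based specification copes with unboundedly many states:

```python
# Finite coffee-machine LTS as a data structure: transitions map
# (state, input label) to (output labels, next state).
lts = {
    ('idle', 'coin'):   ([],         'paid'),
    ('paid', 'button'): (['coffee'], 'idle'),
}

# Function-based specification of an unbounded machine whose state
# is the number of coins credited; no table could enumerate it.
def spec(state, label):
    if label == 'coin':
        return [], state + 1              # credit one more coin
    if label == 'button' and state > 0:
        return ['coffee'], state - 1      # deliver and debit
    return None                           # input refused in this state
```

The function form is also more concise: one clause per rule instead of one table entry per reachable (state, label) pair.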

To test conformance effectively, these specifications are used as a basis for test-case generation. These test cases are much more effective for this purpose than the systematic generation of all possible inputs, which in turn is more effective than the random generation of inputs. For each deterministic and finite LTS it becomes possible to prove that the implementation behaves as specified, or to spot an error, under the assumption that the implementation is an LTS that does not contain more states than the specification [25].
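A minimal conformance check along input traces might look as follows. The deterministic counter spec, the deliberately buggy implementation, and all helper names are hypothetical stand-ins for G∀ST's LTS machinery; the point is only that running spec and implementation in lock-step over generated traces finds behavioural differences:

```python
def counter_spec(state, label):
    """Deterministic spec: state counts paid coins (illustrative)."""
    if label == 'coin':
        return [], state + 1
    return (['coffee'], state - 1) if state > 0 else ([], state)

def buggy_impl(state, label):
    """Faulty implementation: only delivers coffee from state 1."""
    if label == 'coin':
        return [], state + 1
    return (['coffee'], state - 1) if state == 1 else ([], state)

def conforms(step_spec, step_impl, traces):
    """Run each input trace against spec and implementation in
    lock-step; return the shortest failing prefix, or None."""
    for trace in traces:
        s = i = 0
        for k, label in enumerate(trace):
            want, s = step_spec(s, label)
            got, i = step_impl(i, label)
            if want != got:
                return trace[:k + 1]
    return None

traces = [['coin', 'button'], ['coin', 'coin', 'button']]
print(conforms(counter_spec, buggy_impl, traces))
# prints ['coin', 'coin', 'button']: the fault only shows after two coins
```

The example also shows why trace selection matters: the single-coin trace alone would miss this fault entirely.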

An advantage of extending G∀ST to enable testing of products specified by an LTS is that the original ability to test data types is preserved and can be combined with the new possibilities. The generation of data to test properties involving data types is a weak point of existing automatic model-based test systems.

Unlike model checkers such as SPIN [12], we assume that the given specification is correct. In practice, however, differences between the specification and the actual implementation also turn out to be caused by incorrect specifications. So testing also increases the quality of, and the confidence in, the specification.