
HAL Id: hal-01708363 (https://hal.archives-ouvertes.fr/hal-01708363), submitted on 13 Feb 2018



To cite this version: Vincent Rossignol, Francois-Xavier Dormoy. A Deterministic Approach for Embedded Human-Machine Interfaces (HMI) Testing Automation. 9th European Congress on Embedded Real Time Software and Systems (ERTS 2018), Jan 2018, Toulouse, France. hal-01708363.


A Deterministic Approach for Embedded Human-Machine Interfaces (HMI) Testing Automation

Vincent Rossignol, Francois-Xavier Dormoy (ANSYS)

ANSYS Esterel Technologies, 9, rue Michel Labrousse, 31100 Toulouse, France

vincent.rossignol@ansys.com, francois-xavier.dormoy@ansys.com

Keywords: Embedded, Software, HMI, Model, Tool, Verification, Testing, Automation, SCADE

Abstract

Human-Machine Interfaces (HMI) have become a central piece of most embedded control systems in many industrial domains. New design techniques have emerged that speed up the development phases, but the thorough testing of embedded HMIs today remains a largely human-based, time-consuming and error-prone task. During the last decade, many improvements in control software testing have been made, but HMI testing is still an area where companies are looking for industrial solutions that help them reduce their costs and thus secure their projects.

This paper presents an innovative embedded HMI testing approach, which aims at reducing the time spent on HMI verification phases, by automating test creation and test results comparison, while ensuring a very high level of determinism and reproducibility.

1 Introduction

Human-Machine Interfaces (HMI) have become a central piece of most embedded control systems in many industrial domains, as illustrated in Figure 1 below.

From pure “displays”, whose objective was to synthesize and present on screen information coming from the system(s) to the operator (pilot, driver, etc.), they have become more and more interactive, with additional means to send commands and orders to the system via interactive media such as mice, trackballs, keyboards or touch screens.

While new design techniques have emerged – such as model-based HMI software design – that considerably speed up the development of embedded HMIs, the thorough testing of embedded HMIs, mandated by most design and safety standards for embedded systems, today remains a largely human-based, time-consuming and error-prone task.

Figure 1: Airbus A380 Cockpit

This paper presents an innovative embedded HMI testing approach, which aims at reducing the time spent on HMI verification phases by automating test creation and test results comparison, while ensuring a very high level of determinism and reproducibility. The innovation lies in the foundation of our approach: a synchronous, clock-based execution model that ensures deterministic execution.

2 Current practices and State of the art

2.1 Current practices

HMI verification and validation (V&V) is the process of evaluating HMI software to verify whether it satisfies the requirements, and to ensure that it is bug-free.

In industry, most HMI V&V activities are still performed manually on the target platform, as described in a recent PhD thesis [11]. Testing is performed by following test procedures, interacting manually with the HMI under test, and checking by eye that behavior and rendering are correct with respect to the requirements. These practices are very time-consuming and unattractive for test engineers when the same test campaign has to be repeated for each intermediate HMI release (or standard).

2.2 State of the art

There are several known GUI V&V techniques (e.g., model-based testing, static analysis) to ensure the quality and reliability of GUIs. They have been developed to find failures that affect GUIs or to measure the quality of GUI code. The most widely used technique is testing. GUI testing aims at validating whether a GUI has the expected behavior: does the GUI conform to its requirements specification?

Most GUI testing approaches focus on automated GUI testing to provide effective bug detection [1], [2], [6], but, as stated in [1], “so far none of the academic approaches and tools has been adopted by the industry to test commercial software products”.

The main reason is that model-based GUI testing solutions rely on surrogate HMI models of the real HMI to drive test case generation. Crawler approaches, or any other model-extraction approach, may be used, but the reliability of the testing then depends on the reliability of the test model. Extracting models at runtime (e.g., Murphy [1] or GUITAR [12]) is itself problematic because of extraction errors.

In any case, all approaches that require building such an HMI model are complex [5] and time-consuming in operational usage. Furthermore, in industrial usage, maintaining consistency between this surrogate model and the actual HMI under test is difficult or impossible over the whole project lifecycle.

3 Our Contribution: Deterministic HMI Testing Automation

To avoid the main drawback of crawler approaches (or of any surrogate model usage), we propose in this paper to use the original model (the actual model used to produce the HMI) for testing purposes.

The first prerequisite for this approach is a model-based development process in which the final HMI is automatically generated from the model. When automatic code generation is complete (no line of code remains to be written to develop the HMI) and qualifiable according to DO-178B/C, EN 50128, IEC 61508 and ISO 26262, the verification activity can then focus on the model.

In addition to review and analysis, safety standards like DO-178C [9] require dynamic verification. Simulation (or model testing) can be used as a means for verifying compliance of models to their higher-level requirements and algorithm accuracy. We propose to use simulation to achieve HMI Testing.

Our approach relies on three important steps, which are explained in more detail in the following sections:

1. Test execution on host (Model Simulation on development computer) (section 4)

2. Model Coverage (Dynamic analysis methods to check completeness, reachability of the model) (section 5)

3. Test execution on target (Testing of the executable object code on target hardware environment) (section 6)

During simulation, testing consists in setting the inputs of the HMI under test using a traditional cyclic approach and observing the HMI behavior and rendering to assess its correctness with regard to its requirements, as depicted in Figure 2.


When talking about inputs, not only the functional inputs need to be considered, but also all the data related to user interaction. Indeed, pointing-device data (pointer position, button pressed/released, etc.) and keyboard-device data (key, pressed/released, etc.) must be considered as HMI SUT inputs.

In a similar way, for HMI systems, the outputs of the HMI SUT are not only the functional outputs. A key output of such an HMI SUT is the “rendered image” itself, which is quite specific with respect to other systems. It represents what the operator actually sees, and this output must also be considered for testing.

Atif M. Memon [3] stated that “Specialized tools are needed to generate and run test cases, models are needed to quantify behavioral coverage, and changes in the environment, such as the operating system, virtual machine or system load, as well as starting states of the executions, impact the repeatability of the outcome of tests making tests appear flaky”, which illustrates well the difficulty of achieving deterministic test case execution.

Because we are addressing safety-critical GUIs, test case execution determinism is not optional, and it must furthermore be consistent with target execution. So how can we build an approach where test execution behaves in the same way on every machine, and where, for instance, system load does not impact the repeatability (determinism) of the test outcome? This is the condition for trusting the output of the test execution when it reports that a test has passed.

The deterministic testing approach described in this paper takes root in the SCADE® model-based design paradigm for HMIs, and its underlying formally defined language. We decided to bind our testing approach to the cycle-based execution model of SCADE to ensure functional determinism.

The cycle-based execution model of SCADE – SCADE Display combined with SCADE Suite – is a direct computer implementation of the ubiquitous sampling-actuating model of control engineering [4]. It consists of a continuous loop, as illustrated in Figure 3.

In this loop, there is a strict alternation between environment actions, including user interactions, and program actions. Once the input sensors (or interactions) are read, the cyclic function starts computing the cycle outputs, including the drawing of the rendered image. During that time, the cyclic SCADE function is blind to environment changes. When the outputs are ready, or at a given time determined by a clock, the output values are fed back to the environment, the computed image is rendered on screen, and the oracle checks can then be performed; the program then waits for the start of the next cycle.

Figure 2: The HMI System Under Test

Figure 3: Cycle-based execution model of SCADE

An HMI designed as a SCADE model can be executed according to the cycle-based approach. It can thus be tested with the same approach (a short code sketch follows the list below):

1. Feed the inputs (including user interaction)
2. Execute the display computation cycle
3. Observe/check the outputs (including the rendered image)
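To make this concrete, here is a minimal host-side Python sketch of the three-step loop. The HmiModel class and its set_input/cycle/get_output methods are hypothetical stand-ins for the code generated from a SCADE model (they are not the SCADE Test API), and the stub computes only one output to keep the example short.

# Illustrative sketch of the cycle-based test loop (hypothetical HmiModel stand-in).

class HmiModel:
    """Stub standing in for an HMI generated from a SCADE model."""
    def __init__(self):
        self.inputs = {}
        self.outputs = {"symbology_layer.AltitudeBox.IsNegative": False}

    def set_input(self, name, value):
        # Step 1: feed the inputs (including user-interaction data)
        self.inputs[name] = value

    def cycle(self):
        # Step 2: execute one display computation cycle
        altitude = self.inputs.get("Altitude", 0.0)
        self.outputs["symbology_layer.AltitudeBox.IsNegative"] = altitude < 0.0

    def get_output(self, name):
        # Step 3: observe the outputs (functional outputs; rendering omitted here)
        return self.outputs[name]

def run_scenario(model, steps):
    """Run (inputs, expected_outputs) pairs, one display cycle per step."""
    for inputs, expected in steps:
        for name, value in inputs.items():
            model.set_input(name, value)
        model.cycle()
        for name, value in expected.items():
            assert model.get_output(name) == value, f"check failed on {name}"

scenario = [
    ({"Altitude": 10.5}, {"symbology_layer.AltitudeBox.IsNegative": False}),
    ({"Altitude": -10.5}, {"symbology_layer.AltitudeBox.IsNegative": True}),
]
run_scenario(HmiModel(), scenario)

Because the model only reacts when cycle() is called, the same scenario produces the same outputs on any machine, independently of timing or system load.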

This cycle-based approach to the execution of SCADE-generated HMIs is not only key to ensuring determinism of the interactive HMI application itself, but also to setting up a deterministic and reproducible testing strategy. Unlike with most other HMI design frameworks and tools, which rely on surrogate models, the same test scenarios will always lead to the same execution sequence and the same results (both functional outputs and rendered images), whatever the operating system and target platform, given the same HMI model.

Following this approach, we have defined a testing language with the following characteristics:

• Modularity: allowing dedicated initialization or preamble sequences as well as reusable test scenarios in test procedures

• Readability: offering both tabular and textual formats, which eases test reviews

• Observability: allowing expected results to be checked, including invariant checking and accuracy-tolerance customization (per data or data type)

• Implementation-name independence: an alias mechanism associates requirement names with implementation names, ensuring independence from the SCADE model implementation; this reduces expensive rework when interfaces change and facilitates test data maintenance (the sketch after this list illustrates the alias and tolerance mechanisms)
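As a minimal illustration of the last two bullet points, the sketch below applies an alias table and per-data tolerances when evaluating a check. The dictionary-based layout and the names in it are hypothetical; they only illustrate the concept, not the SCADE Test file format.

# Illustrative alias resolution and per-data accuracy tolerance (hypothetical names and layout).

ALIASES = {                              # requirement-level name -> implementation name
    "Altitude": "symbology_layer.AltitudeBox.Value",
    "IsNegative": "symbology_layer.AltitudeBox.IsNegative",
}
TOLERANCES = {"Altitude": 0.05}          # absolute tolerance, per requirement-level name

def check(outputs, req_name, expected):
    """Compare an actual output with its expected value, honoring aliases and tolerances."""
    actual = outputs[ALIASES[req_name]]
    tolerance = TOLERANCES.get(req_name, 0.0)
    if isinstance(expected, float):
        return abs(actual - expected) <= tolerance
    return actual == expected

# If the implementation path changes, only the ALIASES table is updated;
# the test data written against requirement names stays untouched.
outputs = {"symbology_layer.AltitudeBox.Value": 10.47,
           "symbology_layer.AltitudeBox.IsNegative": False}
assert check(outputs, "Altitude", 10.47003173828125)
assert check(outputs, "IsNegative", False)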

The root element of the test language is the test procedure. A test procedure identifies the HMI SUT and contains a list of test case records.

A so-called Test Case Record contains a list of test scenarios together with the corresponding test instructions. All test scenarios in a Test Case Record are executed in sequence. This sequence may start with an initialization or preamble section.

A test scenario is then a sequence of:

• Input settings (which may include pointing-device data for an interactive display), using the [SSM::set] instruction

• Check requests (expected values are checked after the cyclic execution), using the [SSM::check] instruction

• Cycle executions (1 or n cycles), using the [SSM::cycle] instruction

Figure 4 illustrates an example test scenario written in the testing language:

SSM::set Altitude 0.0
SSM::set AltitudeValid 0
SSM::set DisplayMode 3
SSM::cycle 1
SSM::cycle 1
SSM::set Altitude 20.9449462890625
SSM::cycle 1
SSM::set Altitude 10.47003173828125
SSM::check_image {ref=images\references\Ref_Green.bmp}{filter=images\filters\Filter_NEG.png}
SSM::check symbology_layer.AltitudeBox.IsNegative false
SSM::cycle 1
SSM::set Altitude -0.0013113416498526931
SSM::cycle 1
SSM::set Altitude -10.472655296325684
SSM::check symbology_layer.AltitudeBox.IsNegative true

Figure 4: Example of Test Scenario
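The execution semantics of such a scenario can be pictured with a small interpreter: SSM::set instructions feed input values, SSM::cycle triggers the display computation, and pending SSM::check requests are evaluated after the cyclic execution, in line with the rules listed above. This illustrative sketch reuses the hypothetical HmiModel interface sketched earlier and omits SSM::check_image; it is not the actual SCADE Test execution engine.

# Illustrative interpreter for set/cycle/check scenarios (image checks omitted).

def run(scenario_lines, model):
    pending_checks = []                          # checks are evaluated after the next cycle
    for line in scenario_lines:
        if not line.strip():
            continue
        op, *args = line.split()
        if op == "SSM::set":
            model.set_input(args[0], float(args[1]))
        elif op == "SSM::check":
            pending_checks.append((args[0], args[1] == "true"))   # boolean checks only, for brevity
        elif op == "SSM::cycle":
            for _ in range(int(args[0])):
                model.cycle()
            for name, expected in pending_checks:
                assert model.get_output(name) == expected, f"check failed: {name}"
            pending_checks.clear()
        # SSM::check_image and other instructions are ignored in this sketch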

4 HMI Host Testing and graphical oracles

Test execution on host is the first step of our proposed approach. Its main benefit is early testing, on the left-hand side of the V model [13]. The objective of this step is to verify that the HMI model complies with its requirements.


To achieve this goal, our HMI testing methodology needs to provide a way to check that the GUI application is working as expected. In the proposed approach, both the “rendered images” and the SCADE model “functional outputs” / graphical properties constitute the expected results of the HMI tests. Because HMI testing needs to be requirements-based, the choice between functional outputs and bitmaps is driven by the requirements: when a requirement concerns interactive functional behavior, functional outputs are checked; when a requirement concerns the graphical rendering of an image or part of an image, bitmap checking is performed.

For rendering oracles, we propose to focus on the parts of the image relevant to the requirements under test, as depicted in Figure 5.

Figure 5: Illustration of identified part of image comparison

For this purpose, image filtering capabilities have been integrated into our testing environment in order to define the areas where image comparison must be performed according to the requirements.

This allows parts of the HMI images to be checked by defining mask images with inclusion and exclusion areas. The mask image has the same dimensions as the original HMI image and uses the following color coding: inclusion areas are drawn in white and represent zones of the HMI image that must be checked by the image check instruction, whereas exclusion areas are drawn in any color other than white and represent zones that the image check instruction must ignore.
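The following sketch illustrates the masking principle with a plain pixel loop using the Pillow imaging library: only pixels that are white in the mask are compared. It is an illustration of the filtering concept under that assumption, not the comparison code used in SCADE Test.

# Illustrative masked image check with Pillow: white mask pixels mark the zones to compare.
from PIL import Image

def masked_mismatches(actual_path, reference_path, mask_path):
    actual = Image.open(actual_path).convert("RGB")
    reference = Image.open(reference_path).convert("RGB")
    mask = Image.open(mask_path).convert("RGB")
    mismatches = 0
    for x in range(actual.width):
        for y in range(actual.height):
            if mask.getpixel((x, y)) == (255, 255, 255):        # inclusion area
                if actual.getpixel((x, y)) != reference.getpixel((x, y)):
                    mismatches += 1
    return mismatches                                           # 0: the checked zones match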

Image Oracle production

One might assume that a “reference” image is given in the requirements, but in general the images found in requirements are far from the actual rendering. In industrial projects, we noticed that a first version of the GUI application is often used to produce the “reference” images. These captured images are of course reviewed against the requirements and then constitute well-defined reference images. This is the reason why SCADE Test provides a way for a tester to record a test scenario while executing a test procedure. During this recording, the tester can pause and add any oracle (value comparison or filtered image).

Any graphical property of any graphical object within the SCADE Display model can be checked. When a requirement relates to the appearance of a graphical element on the screen (e.g., the color of an LED), the V&V engineer can check, at any cycle of the test scenario, the expected value of the “color” property of the circle representing the LED.

This approach is supported by extending the current capabilities of the SCADE Test Environment tool to HMI-intensive applications. The SCADE Test Environment is an integrated environment for V&V engineers to automate the creation and management of test cases, run the test cases created from the High-Level Requirements (HLR), and automate the generation of conformance reports, as illustrated in Figure 6.


Figure 6: Illustration of a conformance (Passed/Failed) report

With our approach, we propose to define HMI test scenarios (our tooling assists in this area) using the simple language we have defined, and to automate their execution on the host, ensuring deterministic execution and results. A conformance (Passed/Failed) report is then automatically produced at the end of the execution. In the Result column, a ratio of differences is provided: 0.00 means no differences. In the first line, marked Failed with a red rectangle, 0.032 is the error ratio between the expected and the actual image.

Image Comparison Algorithm

The ratio computed for image comparison results is based on a specific algorithm that computes the average color distance over the pixels of the image. The resulting ratio ranges from 0 (no color differences) to 100 (all colors different).

Let n be the number of pixels of an image and i the index of those pixels. Let r1_i, g1_i, b1_i (red, green, blue) be the color components of each pixel of the image to compare, and r2_i, g2_i, b2_i those of the reference image. The expression used to compute the difference between the two images, in percent, is given in Figure 7.

Figure 7: Calculation of the image comparison result

In addition, the image comparison builds an image showing the differences, computed from the per-pixel color differences using the expressions given in Figure 8.

Figure 8: Calculation of the image differences
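Because the exact expressions are only reproduced graphically in Figures 7 and 8, the sketch below shows one plausible reading of the algorithm: a mean absolute per-channel difference normalized to a 0-100 percentage, plus a per-pixel absolute-difference image. The formulas in the sketch are assumptions for illustration; the tool's actual expressions may differ.

# Illustrative image comparison: percentage ratio and difference image (assumed formulas).
from PIL import Image, ImageChops

def comparison_ratio(image_path, reference_path):
    img = Image.open(image_path).convert("RGB")
    ref = Image.open(reference_path).convert("RGB")
    n = img.width * img.height
    total = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(img.getdata(), ref.getdata()):
        total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return 100.0 * total / (3 * 255 * n)        # 0.0: identical, 100.0: completely different

def difference_image(image_path, reference_path):
    img = Image.open(image_path).convert("RGB")
    ref = Image.open(reference_path).convert("RGB")
    return ImageChops.difference(img, ref)      # |c1 - c2| per pixel and per channel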

5 Introduction to HMI Model Coverage

Once you have run your test cases and checked that they all pass on the host, meaning that the software behaves correctly with regard to its requirements, you generally have to answer the following question: have I written enough test cases and caught all existing errors?


Model-level coverage is the established way to measure whether the set of test cases you created covers the whole model according to a defined criterion.

Model coverage analysis should not be confused with structural (code) coverage analysis. Model coverage analysis concerns the low-level requirements expressed by a model and operates at a higher level of abstraction than structural coverage analysis. It concerns the functional aspects of the model regardless of the code that implements it, whose structure is highly dependent on the coding technique. Moreover, the code coverage criteria that are required by standards and supported by tools are limited to control structures and Boolean expressions: they do not address very important functional aspects such as the definition and use of data.

As we did for SCADE Suite (control software), we are currently defining criteria for HMI model coverage. HMI model coverage measures the activity of a model in terms of graphical object drawing, and the following criteria apply (a bookkeeping sketch after the list illustrates the idea):

• All graphical objects must be drawn

• Activation of priorities (call to the top-level drawing function)

• Activation of objects (enable/visible/maskActivity)

• Activation of children of conditional containers (switch-case)

• Activation of parametrized parts of objects: outline, fill, texture

• Activation of parametrized appearance states: halo, modulate, polygon smooth
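One simple way to picture such a measure is to log, during every test run, which objects and activation states were exercised and to compare that log against what the model declares. The sketch below is an illustrative bookkeeping example with hypothetical object names and activation labels; it does not reflect the instrumentation used by SCADE Test Model Coverage.

# Illustrative coverage bookkeeping: which declared objects/states were activated by the tests.

DECLARED = {                       # hypothetical objects and activations declared in the HMI model
    "AltitudeBox": {"drawn", "visible", "outline", "fill"},
    "WarningPanel": {"drawn", "visible", "switch:case_warning", "switch:case_ok"},
}

def coverage_report(activation_log):
    """activation_log: iterable of (object_name, activation) pairs collected over all test runs."""
    activated = {}
    for obj, activation in activation_log:
        activated.setdefault(obj, set()).add(activation)
    report = {}
    for obj, expected in DECLARED.items():
        missing = expected - activated.get(obj, set())
        report[obj] = ((len(expected) - len(missing)) / len(expected), sorted(missing))
    return report                  # per-object coverage ratio and the activations still to justify

# Example: a test run that never activated the warning case.
log = [("AltitudeBox", "drawn"), ("AltitudeBox", "visible"), ("AltitudeBox", "outline"),
       ("AltitudeBox", "fill"), ("WarningPanel", "drawn"), ("WarningPanel", "visible"),
       ("WarningPanel", "switch:case_ok")]
print(coverage_report(log))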

Once the test cases are created and executed on the host, the objective of the SCADE Test Model Coverage is to provide the capability to measure the model coverage of all test cases.

Model-level coverage analysis can show how thoroughly the HMI model has been tested by the requirements-based tests. The analysis pinpoints each test case's role in covering the SCADE model, which is especially valuable for efficiently correcting errors in the software design. Coverage resolution enables the correction or justification of all uncovered parts, pinpointing shortcomings in the requirements-based test procedures, inadequacies in the software requirements, dead software parts and “deactivated” software parts.

6 Target Testing

Target testing is the final and most important goal of the testing activity. Because HMI software can run in a wide variety of target environments, the test execution infrastructure must adapt to a range of hardware targets.

The objective is to run all the test cases that were previously written and executed on the host (see Section 4) on the target hardware environment.

This approach has been deployed for many years for control software (SCADE Suite) but is innovative for HMIs. The difficulty lies not in target execution but in checking the image rendering on the target. Several possibilities exist:

- The target test harness performs the whole bitmap comparison on the target, but sizing limits may then be reached

- The image comparison is performed on the host, thanks to host-target communication implemented in the TDP (Target Deployment Port)

The latter option is ongoing work for which we are currently partnering with tool providers that are experts in target testing environments. Details of this work cannot be disclosed for now; the sketch below only illustrates the general principle of host-side comparison.
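As a generic illustration of the second option, the sketch below shows a host-side helper that receives a raw framebuffer capture from the target over TCP and compares it against a reference image on the host. The framing, port, image size and pixel format are hypothetical assumptions; this is not the ANSYS TDP protocol.

# Illustrative host-side check of a target-rendered frame received over TCP.
# Assumed framing: the target sends width * height * 3 bytes of raw RGB pixels.
import socket
from PIL import Image

WIDTH, HEIGHT, PORT = 640, 480, 5025            # hypothetical values

def receive_target_frame(port=PORT):
    expected_size = WIDTH * HEIGHT * 3
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            data = b""
            while len(data) < expected_size:
                chunk = conn.recv(65536)
                if not chunk:
                    raise ConnectionError("target closed the connection early")
                data += chunk
    return Image.frombytes("RGB", (WIDTH, HEIGHT), data[:expected_size])

def check_target_image(reference_path):
    frame = receive_target_frame()
    reference = Image.open(reference_path).convert("RGB")
    return frame.tobytes() == reference.tobytes()   # reuse host-side comparison, as in Section 4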

7 Conclusion and Future Work

The extension of the SCADE Test Environment for SCADE generated HMIs combines a cycle-based HMI testing approach enabling a deterministic testing execution framework with the automation of a large part of the testing process itself: automatic bitmap production, automatic replay of test scenarios, automatic conformance results generation from both image comparison and functional results checks. This shall result in significant time and cost savings over manual verification strategies for embedded HMI applications.

Future areas of interest for SCADE-based HMI testing automation include: extensive reuse of HMI test scenarios on the target platform, model coverage analysis from the same set of test scenarios, and extension of the test environment to the ARINC 661 HMI design paradigm, for which dedicated SCADE Solutions for ARINC 661 are particularly fitted.


8 References

[1]: Pekka Aho, Matias Suarez, Teemu Kanstrén, and Atif M. Memon. 2014. Murphy Tools: Utilizing Extracted GUI Models for Industrial Software Testing. In Proceedings of the 2014 IEEE International Conference on Software Testing, Verification, and Validation Workshops (ICSTW '14). IEEE Computer Society, Washington, DC, USA, 343-348.

[2]: Bao N. Nguyen, Bryan Robbins, Ishan Banerjee, and Atif Memon. 2014. GUITAR: An Innovative Tool for Automated Testing of GUI-driven Software. Automated Software Engineering 21(1), March 2014, 65-105.

[3]: Atif M. Memon and Myra B. Cohen. 2013. Automated Testing of GUI Applications: Models, Tools, and Controlling Flakiness. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 1479-1480.

[4]: Valéria Lelli Leitão Dantas. Testing and Maintenance of Graphical User Interfaces. PhD Thesis, Human-Computer Interaction [cs.HC], INSA de Rennes, 2015. HAL Id: tel-01232388, version 2.

[5]: Valeria Lelli, Arnaud Blouin, and Benoit Baudry. Classifying and Qualifying GUI Defects. In 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation (ICST), pages X–X, April 2015.

[6]: Leonardo Mariani, Mauro Pezzè, Oliviero Riganelli, and Mauro Santoro. AutoBlackTest: A Tool for Automatic Black-box Testing. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11), pages 1013-1015. ACM, 2011.

[7]: Arnaud Blouin and Olivier Beaudoux. Improving Modularity and Usability of Interactive Systems with Malai. In Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '10), pages 115-124, 2010.

[8]: [DO-330/ED-215] Software Tool Qualification Considerations, DO-330, RTCA Inc./EUROCAE, 2011.

[9]: [DO-178C/ED-12C] Software Considerations in Airborne Systems and Equipment Certification, DO-178C, RTCA Inc./EUROCAE, 2011.

[10]: [DO-331] Model-Based Development and Verification Supplement to DO-178C and DO-278A, RTCA, Dec 2011.

[11]: Emil Alegroth. Visual GUI Testing: Automating High-Level Software Testing in Industrial Practice. PhD Thesis, Technical Report No 117D, 2015.

[12]: Bao N. Nguyen, Bryan Robbins, Ishan Banerjee, and Atif Memon. GUITAR: An Innovative Tool for Automated Testing of GUI-driven Software. Automated Software Engineering, 21(1):65-105, 2014.

[13]: Qaisar Ahmad Malik. Combining Model-Based Testing and Stepwise Formal Development. ISBN 978-952-12-2467-6, September 2010.

9 Glossary

GUI: Graphical User Interface
LED: Light-emitting diode
HLR: High Level Requirements
HMI: Human-Machine Interfaces
SCADE: Safety Critical Development Environment
SUT: System Under Test
TDP: Target Deployment Port
V&V: Validation and Verification
