2. a refinement of these goals into human actions.
There are many reasons for developing task models. A task model of the human activities required to operate an existing system can be created in order to better understand the underlying design, to analyse its potential limitations and how to overcome them (Paternò, 2004). Task models can also be recorded and used in a complementary way with other data collection methods (Law et al., 2009) such as interviews and scenario-based approaches (Rosson & Carroll, 2001). Task analysis and modelling techniques provide a unique way of understanding and analysing, in a systematic way, users' roles and activities. They can be useful for supporting the analysis and assessment of the usability (Greenberg, 2004; Law et al., 2009) and resilience (Hollnagel, 2006) of partly autonomous interactive systems. As illustrated in Figure 2, the complexity of the real work can be analysed through task analysis. The complexity of the real work is filtered out through a purple filter that represents a task analysis. This "task analysis" filter allows focusing on specific pieces of information about the real work by using task analysis notation elements to describe real work activities. The aim of this filtering operation is to obtain a description of the assumed real work. This description relies on the chosen notation, i.e. on aspects that are relevant to the analyst. The analyst may then choose between several task modelling techniques to record this description in task models. Each task modelling technique can provide a different way to represent the task. This point is illustrated in Figure 2, where there are two purple glasses representing different task modelling techniques and their specific notations, such as Task Architect (Stuart & Penn, 2004) in the upper part, or HAMSTERS 1.0 (Martinie, Palanque, & Winckler, 2011) in the lower part (Section 2.1 presents a state of the art on existing notations). Depending on the choice of notation, the task modelling operation will result in a different model of the activities, since each type of model highlights particular aspects of the assumed real work. For this reason, it is important to choose the most suitable task modelling technique. Thanks to the selected task modelling technique, the analyst can highlight the aspects that are relevant to the goals of his/her analysis.
HANDLING AUTOMATION IN THE PROCESS
The case study has presented a re-design opportunity based on changing operator tasks and the procedures of the socio-technical system. However, other opportunities based on automation exist. The operators' tasks that are resource-consuming (e.g. time-consuming) would be good candidates for migration into the system. For this case study of the ATC clearances, the list of possible clearances could be sent all at once to the ATC via the data link. For this re-design opportunity, the modelling phase would be more complex, involving the production of new ICO models that would add new functions to the system. In such a case, no modification would be made to the user interface, but design alternatives could also require changes there (for instance, showing progress in the sending of clearances). The stage called "integration of models" would be performed again, as well as the downstream stages. The proposed process thus provides support for analyzing and re-designing automation at a fine-grained level thanks to the integration of the three types of models (human, system, organization). Each step of the process can deal with automation design, as even the definition phase can issue an objective related to particular automation levels. More details on how operators' tasks, user interfaces and system automation can be integrated are presented in [16].
The Attack Tree Notation
We selected attack trees to address security aspects, as they are a major tool for analyzing the security of a system [41]. They can be decorated with expert knowledge, historical data and estimations of critical parameters, as demonstrated in [22], even though early versions of them lacked formal semantics [41]. The attack tree notation is a formal method for describing the possible threats, or combinations of threats, to a system. B. Schneier [49] provided a first description of an attack tree, where the main goal of the attack is represented by the top root node and where combinations of leaves represent different ways to achieve that goal. In the original notation, OR nodes represent alternative attacks for achieving the goal, whilst AND nodes represent combinations of attacks. Nishihara et al. [41] proposed to extend the notation with the potential effects of attacks and with a logical operator, SAND, to represent constraints on the temporal ordering of combined attacks. Other elements of the notation include a rectangle to represent an event (such as a threat or an attack), an ellipse to represent an effect, and a triangle to represent the fact that a node is not further refined. All elements of the attack tree notation are shown in Fig. 4.
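As an illustration of these operators, the following sketch (hypothetical, not taken from [41] or [49]) encodes an attack tree with OR, AND and SAND refinements and checks whether a set of achieved basic attacks, each with a completion time, satisfies the root goal:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical encoding of a Schneier-style attack tree extended with
# the sequential SAND operator described by Nishihara et al.

@dataclass
class Node:
    label: str
    op: str = "LEAF"                  # "LEAF", "OR", "AND", or "SAND"
    children: List["Node"] = field(default_factory=list)

def completion_time(node: Node, achieved: Dict[str, float]) -> Optional[float]:
    """Return the time at which `node` is achieved, or None if it is not.

    `achieved` maps leaf labels to the time the basic attack succeeded.
    OR  -> earliest achieved child; AND -> all children, latest time;
    SAND -> all children, approximating the temporal ordering constraint
            by requiring non-decreasing completion times."""
    if node.op == "LEAF":
        return achieved.get(node.label)
    times = [completion_time(c, achieved) for c in node.children]
    if node.op == "OR":
        reached = [t for t in times if t is not None]
        return min(reached) if reached else None
    if None in times:
        return None
    if node.op == "SAND" and times != sorted(times):
        return None                   # temporal ordering violated
    return max(times)                 # AND, or SAND with valid ordering

# Root goal: steal data = (phish credentials OR crack password) SAND exfiltrate
root = Node("steal data", "SAND", [
    Node("get access", "OR", [Node("phish"), Node("crack password")]),
    Node("exfiltrate"),
])
print(completion_time(root, {"phish": 1.0, "exfiltrate": 3.0}))  # 3.0
print(completion_time(root, {"exfiltrate": 0.5, "phish": 1.0}))  # None: wrong order
```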
When the virtual objects are not readily available to the user, he/she can select them through a specific interaction technique. One of the first techniques designed for interaction with virtual worlds is the ray-casting technique. This technique was introduced by Bolt 1980 [32] and enriched over the years by other researchers. The ray-casting technique is based on the virtual ray metaphor: an infinite laser ray from a virtual hand crosses the entire virtual world, and the first object it intersects becomes available for selection. Zhai et al. 1994 [308] proposed a new interaction technique based on the ray-casting technique: they added a 3D semi-transparent cursor at the end of the ray, whose purpose is to make the virtual ray easier to distinguish in the scene. Later, De Amicis et al. 2001 [88] replaced the cursor by a spherical volume. Techniques based on the virtual pointer metaphor have the advantage of being cognitively simple and easy to use, but have a major drawback for the selection of small and distant objects. Liang and Green 1994 [183] proposed to use a cone instead of the ray to solve this problem: if distant objects become smaller with distance, then the selection tool must be larger to select them easily. Thereafter, Forsberg et al. 1996 [104] proposed to modify the opening angle of the cone as a function of the object to be selected and its position in the virtual environment. The selection cone must be wider for distant objects than for close ones. This technique takes advantage of Fitts' law, which states that selection time decreases as the surface to be selected increases. During the selection process, the user may face obstacles that hide the objects he/she wants to select. To avoid this difficulty, Olwal and Feiner 2003 [212] proposed the virtual flexible pointer technique, which is an extension of the virtual ray technique. This technique allows a user in a 3D environment to point more easily to fully or partially obscured objects, and to indicate objects to other users more clearly. The flexible pointer can also reduce the need for disambiguation and can make it possible for the user to point to more objects than with the other egocentric techniques presented (Figure 2.12).
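A minimal sketch of the basic ray-casting selection loop, assuming objects are approximated by bounding spheres (the scene and object names below are illustrative, not from the cited techniques):

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest intersection, or None.
    `direction` is assumed to be a unit vector."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                       # ray misses the sphere
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None          # ignore hits behind the origin

def pick(origin, direction, objects):
    """Return the name of the first object intersected by the ray."""
    hits = []
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, np.asarray(center, float), radius)
        if t is not None:
            hits.append((t, name))
    return min(hits)[1] if hits else None  # nearest hit along the ray

scene = [("cube", (0, 0, 5), 1.0), ("sphere", (0, 0, 9), 1.0)]
ray_origin = np.zeros(3)
ray_dir = np.array([0.0, 0.0, 1.0])
print(pick(ray_origin, ray_dir, scene))    # "cube": first object crossed
```

Cone-based variants such as the ones discussed above would replace the ray-sphere test with a test against a selection volume whose opening angle grows with distance.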
1 ICS-IRIT, University of Toulouse 3, 118 route de Narbonne,
31062 Toulouse Cedex 9, France {martinie, palanque, navarre, barboni}@irit.fr
Abstract. While a significant effort is being undertaken by the Human-Computer Interaction community in order to extend current knowledge about how users interact with computing devices and how to design and evaluate new interaction techniques, very little has been done to improve the reliability of software offering such interaction techniques. However, malfunctions and failures occur in interactive systems, leading to incidents or accidents that, in aviation for instance, are attributed 80% of the time to human error [22], demonstrating the inadequacy between the system and its operators. As an error may have a huge impact on human life, strong requirements are usually set both on the final system and on the development process itself. Interactive safety-critical systems have to be designed taking into account, on an equal basis, several properties including usability, reliability and operability, while their associated design process is required to handle issues such as scalability, verification, testing and traceability. However, software development solutions in the area of critical systems are not adequate, leading to defects, especially when the interactive aspects are considered. Additionally, the training program development is always carried out independently from the system development, leading to operators trained with inadequate material. In this paper we propose a new iterative design process embedding multiple design and modeling techniques (both formal and informal) advocated by the HCI and dependable computing domains. These techniques have been adapted and tuned for interactive systems and are used in a synergistic way in order to support the integration of factors such as usability, dependability and operability, and at the same time to deal with scalability, verification and traceability.
1.1.2 Fault Removal
Fault removal aims at removing faults from the considered system. It can occur during the development phase, where faults are found and corrective actions are taken before deployment of the system; or during its use, where faults that have occurred are removed (corrective maintenance) and actions are taken to avoid the occurrence of new faults (predictive maintenance). In this section, we will focus mainly on the fault removal techniques that can be used during the development phase. Removing faults from a system is done in three steps. The verification step checks that the system conforms to some properties. If the verification reveals a property violation, the diagnosis step identifies what fault led to the property violation, and the correction step brings the necessary changes to the system to remove the fault. While the diagnosis and correction steps are very dependent on the considered system, the verification techniques are more generic — we will detail the most common ones in the following. Verification techniques can be classified into two categories: static verification, which does not need to execute the system, and dynamic verification, which does. Autonomous systems are more challenging to verify than classical control systems [Pecheur, 2000]. A full model of the system is usually not available or is hard to build, the number of scenarios to be analyzed is infinite, and the software is non-deterministic (mostly due to concurrency). However, formal methods are widely used with success for the verification of autonomous systems, and the reader can refer to the extensive survey in [Luckcuck et al., 2018] for more references.
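As a toy illustration of dynamic verification (our own sketch, not an example from the cited surveys), the snippet below checks a safety property over a recorded execution trace; a violation marks the entry point of the diagnosis step. The property and the trace format are assumptions made for the example:

```python
def monitor(trace, prop):
    """Dynamic verification as a runtime monitor: check a state predicate
    on every state of an execution trace.

    Returns (True, None) if the property holds on the whole trace, or
    (False, i) pointing at the first violating state -- the input that the
    diagnosis step would then analyze."""
    for i, state in enumerate(trace):
        if not prop(state):
            return False, i
    return True, None

# Hypothetical safety property: the robot never moves with its gripper open.
prop = lambda s: not (s["moving"] and s["gripper_open"])
trace = [
    {"moving": False, "gripper_open": True},
    {"moving": True,  "gripper_open": False},
    {"moving": True,  "gripper_open": True},   # faulty state
]
ok, where = monitor(trace, prop)
print(ok, where)   # False 2: verification fails, diagnosis starts at state 2
```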
CCSL is a formal declarative language to specify causal and temporal relationships between events. This language was first introduced in MARTE [5] to represent functional and extra-functional constraints over the time modeling of embedded systems. In MARTE, it is possible to define clocks, which are ordered sets of instants. These clocks are used to represent the relevant changes in a system, on which constraints can be specified. For instance, a clock can represent the entering of a state, a function call, or a data write. Based on such clocks, relations can be specified to represent causal or temporal aspects of the system. A clock can be of two types: chronometric or logical. Logical clocks represent functional time. For instance, based on clocks we can specify that the execution of an application is caused by touching the screen of a smartphone. In this example, the clock associated with the screen touch is in a causal relationship with the application execution. It is also possible to specify logical periodicity between clocks, for instance specifying that a task is started every 100th cycle of a processor. Depending on the energy management in a computer, the start of the task may or may not be periodic in physical time. When we want to specify something related to a physical dimension, like physical time or a distance, a chronometric clock is used. It is then possible to state, for example, that the CPU cycle is periodic with a period of 3 ms.
In CCSL, it is possible to define clocks, which are possibly infinite ordered sets of instants. These clocks represent relevant changes in a system, on which constraints can be specified. A clock can be either chronometric or logical. The former is employed to specify a constraint associated with a physical dimension, like physical time or distance. The latter defines a terminology referring to events (an event being a sequence of event occurrences, just as a clock is a sequence of clock ticks). Distinct clocks can be independent (fully asynchronous) or partially ordered. The goal is that, after completion of the design, the specification represents a set of partially ordered clocks; eventually, at runtime, all clocks are mapped onto a single, most fundamental and totally ordered master clock representing the simulation/execution steps. Before that, however, designing with independent logical clocks is usually highly beneficial.
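A small sketch (our own approximation, not CCSL syntax) of how two such relations could be checked over finite clock prefixes, each clock being given as the ordered instants, on a common master clock, at which it ticks:

```python
def is_periodic_on(sub, base, period):
    """Logical periodicity: does `sub` tick exactly on every
    `period`-th tick of `base`?"""
    expected = set(base[period - 1::period])   # every period-th instant of base
    return set(sub) == expected

def precedes(cause, effect):
    """Causality: the i-th tick of `cause` is never later than the i-th
    tick of `effect`, and `effect` never ticks more often than `cause`."""
    if len(effect) > len(cause):
        return False
    return all(c <= e for c, e in zip(cause, effect))

cpu_cycle = list(range(1, 501))                # master instants 1..500
task_start = [100, 200, 300, 400, 500]
print(is_periodic_on(task_start, cpu_cycle, 100))   # True: every 100th cycle

touch = [10, 250]                              # screen touches
app_exec = [12, 260]                           # application executions
print(precedes(touch, app_exec))               # True: touch causes execution
```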
Jurriaan M. Peters, Mustafa Sahin, Benoit Macq, Simon K. Warfield November 14, 2013
Abstract
Diffusion tensor imaging (DTI) is unable to represent the diffusion signal arising from multiple crossing fascicles and freely diffusing water molecules. Generative models of the diffusion signal, such as multi-fascicle models, overcome this limitation by providing a parametric representation for the signal contribution of each population of water molecules. These models are of great interest in population studies to characterize and compare brain microstructural properties. Central to population studies is the construction of an atlas and the registration of all subjects to it. However, the appropriate definition of registration and atlasing methods for multi-fascicle models has proven challenging. This paper proposes a mathematical framework to register and analyze multi-fascicle models. Specifically, we define novel operators to achieve interpolation, smoothing and averaging of multi-fascicle models. We also define a novel similarity metric to spatially align multi-fascicle models. Our framework enables simultaneous comparisons of different microstructural properties that are confounded in conventional DTI. The framework is validated on multi-fascicle models from 24 healthy subjects and 38 patients with tuberous sclerosis complex, 10 of whom have autism. We demonstrate the use of the multi-fascicle model registration and analysis framework in a population study of autism spectrum disorder.
use of the real value of wood raw material. Other authors (e.g. Gerwing et al. 1996; Abebe and Holm, 2003; Fomethé, 1997; Akande et al. 2006; Eshun et al. 2012) have reported significant amounts of wood fiber loss along the forest value chain. Meanwhile, Chan et al. (2002) stress the importance of raw material management in supply chain performance. Though different options for tackling environmental concerns exist, the use of modelling techniques such as the Supply Chain Operations Reference (SCOR) model appears to be more plausible. It is rated as one of the most rigorous and useful models for performance evaluation in the supply chain management literature (Huan et al. 2004; Hwang et al. 2008; Milletia et al. 2009). The GreenSCOR model, which is a component of the original SCOR, emphasizes environmental aspects along the supply chain (Bai et al. 2010; Hwang et al. 2010; Bai et al. 2012; Xiao et al. 2012). It is considered the closest approach to standard supply chain mapping (Gardner and Cooper, 2003; Huan et al. 2004; Lockamy and McCormack, 2004; Ntabe et al. 2015). The model enables users to understand the processes involved in a business organization and identifies the vital features that lead to customer satisfaction. GreenSCOR has been used in other disciplines to achieve supply chain performance and company objectives (Bai et al. 2010; Hwang et al. 2010; Tramarico et al. 2017). It is highlighted by several authors (e.g. Liu, 2009; Wang et al. 2010; Li et al. 2011; Ntabe et al. 2015) as a reference model for environmental performance improvement of business processes, and comprises environmental metrics that can provide a modelling foundation for measuring and reporting wood waste generation along the forest value chain.
C1&C2: Agreement between coder 1 and coder 2.
C1&AP: Agreement between coder 1 and arbitrated protocol.
C2&AP: Agreement between coder 2 and arbitrated protocol.
However, even with such a large sample, the protocol analysis method is known to be influenced by the interpretation of the individual who performs the coding. To reduce this interpretation effect and tend towards a form of objectivity, we relied on a specific coding process, as described in [13]. The protocols of both case studies have therefore been coded by two different individuals and then arbitrated more than ten days after the coding. We then computed a level of agreement between the different protocols based on Cohen's Kappa coefficient. Table 2 shows the Kappa coefficients we obtained. Coefficients between 0.61 and 0.8 indicate substantial agreement between the coders. Given our context, the data can be considered reliable. All the following analyses were performed on the arbitrated protocol (AP).
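For reference, a minimal sketch of the computation behind Table 2; the category labels and codings below are invented for illustration, not the case-study data:

```python
from collections import Counter

def cohen_kappa(coder1, coder2):
    """kappa = (p_o - p_e) / (1 - p_e), with observed agreement p_o and
    chance agreement p_e derived from each coder's marginal label frequencies."""
    n = len(coder1)
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    f1, f2 = Counter(coder1), Counter(coder2)
    p_e = sum(f1[c] * f2[c] for c in f1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

c1 = ["goal", "action", "action", "eval", "goal", "action", "eval", "goal"]
c2 = ["goal", "action", "eval",   "eval", "goal", "action", "eval", "action"]
print(round(cohen_kappa(c1, c2), 2))   # ~0.63: within the 0.61-0.8 band
```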
4 Formalism and Creative Design: A Promising Combination
The challenge of our research is to gain a better understanding of the benefits of exploring the design space through the use of a model of MIS. During the last five years, we tested and refined an original concept-driven method dedicated to the design of MIS. The approach we developed is called Model Assisted Creativity Session (MACS). The goal of a MACS is to identify and describe a set of alternative solutions to a design problem. Such creative sessions usually involve between 5 and 7 participants, including a facilitator. The core principle of MACS is to collaboratively take advantage of the concepts expressed and characterized in an existing model of mixed interaction. The participants thus generate ideas and encode them on the fly in the model's notation. So far, this method has been successfully used [6] with two different interaction models: ASUR [11] and MIM [9]. However, the literature mentions many other models that are candidates for use in a MACS: each one describes different aspects of MIS, adopts a different paradigm, uses different concepts and is represented through various notations [18, 26, 29].
Philippe Palanque is Professor in Computer Science at University of Toulouse 3. He has been teaching HCI and task engineering classes for 20 years and is head of the Interactive Critical Systems group at the Institut de Recherche en Informatique de Toulouse (IRIT) in France. Since the late 80s he has been working on the development and application of formal description techniques for interactive systems. He has worked on research projects to improve interactive Ground Segment Systems at the Centre National d'Études Spatiales (CNES) for more than 10 years and is also involved in the development of software architectures and user interface modeling for interactive cockpits in large civil aircraft (funded by Airbus). He is also involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR program, which targets the building of the future European air traffic management system. The main driver of Philippe's research over the last 20 years has been to address in an even way Usability, Safety and Dependability in order to build trustable safety-critical interactive systems. As for conferences, he is a member of the program committee of conferences in these domains such as SAFECOMP 2013 (32nd conference on Computer Safety, Reliability and Security), DSN 2014 (44th conference on Dependable Systems and Networks), EICS 2014 (21st annual conference on Engineering Interactive Computing Systems), and was co-chair of CHI 2014 (32nd conference on Human Factors in Computing Systems) and research papers co-chair of INTERACT 2015.
2 State of the art
2.1 Automatic path planning
The automatic path planning issue has been extensively studied in robotics. These works are strongly based on the Configuration Space (CS) model proposed by (Lozano-Perez, 1980). This model aims at describing the environment from the point of view of a robot's Degrees of Freedom (DoF). The robot is described using a vector where each dimension represents one of its DoF. A value of this vector is called a configuration, and all the possible values of this vector form the CS. The CS can be split into free space and colliding space (where the robot collides with obstacles of the environment). With this model, path planning from a start point to a goal point consists in finding a trajectory between these two points that lies in the free space of the CS.
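A minimal sketch of this idea, under the assumptions of a 2-DoF robot and a discretized CS (the obstacle predicate is invented for illustration): each grid cell is a configuration, colliding cells are excluded, and a breadth-first search finds a path in the free space.

```python
from collections import deque

def plan(start, goal, size, in_collision):
    """BFS over the free part of a discretized 2-DoF configuration space.
    Returns the path as a list of configurations, or None if unreachable."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        q = frontier.popleft()
        if q == goal:                      # reconstruct path back to start
            path = []
            while q is not None:
                path.append(q)
                q = parent[q]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nq = (q[0] + dx, q[1] + dy)
            if (0 <= nq[0] < size and 0 <= nq[1] < size
                    and nq not in parent and not in_collision(nq)):
                parent[nq] = q
                frontier.append(nq)
    return None                            # goal not reachable in free space

# Colliding space: a wall of configurations at x == 5, open only at y == 8.
collides = lambda q: q[0] == 5 and q[1] != 8
print(plan((0, 0), (9, 0), 10, collides))  # path routed through the opening
```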
5.1 Parameters of interest
Considering the feedback effect of thermal-hydraulics on core behaviour leads to identifying three main parameters of interest: the coolant temperature and/or void fraction, and the cladding temperature (i.e. the boundary condition for the heat conduction problem in fuel rods) resulting from the heat transfer coefficient between the fuel rod and the coolant. These parameters highly depend on the flow regime: single-phase turbulent flow, nucleate boiling or film boiling (post-CHF conditions). Hence Phase II should provide uncertainties not only on these parameters, but also on the transition criteria between the three flow regimes. As an example, one can explain the methodology for the Critical Heat Flux (or boiling transition) and for the void fraction.
6 Related works
As already mentioned in Section 1, one of our first areas of interest was Autonomic Computing (AC), more precisely self-healing systems, i.e., systems which are able to repair themselves. However, as [22] also points out, many of the "hot" issues within AC have been at the core of other disciplines, e.g., fault tolerant computing and artificial intelligence, for a long time. The novelty lies in the "holistic aim" of regrouping all relevant research areas in this common project. Focusing on the intersection between AC and fault tolerant computing, which is our main research axis, the same author, in [23], reaches the conclusion that dependability and fault tolerance are not only "specifically aligned to the self-healing facet" of AC but, on a closer view, "all facets of Autonomic Computing are concerned with dependability" (i.e., self-configuration, self-optimization and self-protection as well).