

Department of Informatics University of Fribourg (Switzerland)

uMove:

A wholistic framework to design and implement ubiquitous computing systems supporting user’s

activity and situation

THESIS

Submitted to the Faculty of Science, University of Fribourg (Switzerland) to obtain the degree of Doctor Scientiarum Informaticarum

Pascal Bruegger

from

Graben BE (Switzerland)

Thesis No. 1711, UniPrint, Fribourg

2011


Accepted by the Faculty of Science of the University of Fribourg following the proposal of:

- Prof. Ulrich Ultes-Nitsche, University of Fribourg (Jury President)

- Prof. Béat Hirsbrunner, University of Fribourg, Switzerland (Thesis Director)

- Dr. Denis Lalanne, University of Fribourg, Switzerland (Expert)

- Prof. Alan Dix, Lancaster university, UK (External expert)

- Prof. Peter Kropf, University of Neuchâtel, Switzerland (External expert)

Fribourg, 6 June 2011

Thesis Director Faculty Dean

Prof. Béat Hirsbrunner Prof. Rolf Ingold

© 2011 by Pascal Bruegger. All Rights Reserved.


Acknowledgments

A PhD thesis is a long-term process full of particular moments which are sometimes emotionally intense, sometimes desperately frustrating, but also very motivating. In the end, the result is an extraordinary experience from both a personal and a scientific point of view. This type of challenge would not have been possible without the support of valuable people who guided and advised me during this research. My deepest gratitude goes to my supervisor, Prof. Béat Hirsbrunner, who gave me the opportunity to study in the Department of Informatics of the University of Fribourg and trusted me with this thesis. I would also like to thank my PhD committee for their expertise, and the Swiss National Science Foundation (SNSF) for the financial support during the last four years.

I would like to sincerely thank my two colleagues, Dr. Agnes Lisowska Masson and Dr. Apostolos Malatras, for their constructive comments and advice during the writing of the thesis.

I take this opportunity to also thank all my colleagues in the department, Elisabeth Brügger, Silviane Pilloud, Bruno Dumas, Denis Lalanne, Maurizio Rigamonti, Florian Evéquoz, Amos Brocco, Fulvio Frapolli, Muriel Bowie, Oliver Schmid, Momouh Khadraoui and Nicolas Juillerat, for all the enjoyable moments we spent together on different occasions.

I wish to also thank Benjamin Hadorn, Samuel Vonlanthen, Adriana Wilde and Saïd Mechkour for their precious collaboration on this research and their contributions to different projects.

Of course, nothing would have been possible without my beloved family: my wife Prisca, my son Samuel and my mother Yolande. They always supported me, especially in stressful moments, and I am deeply grateful and proud to be loved so much.

A chapter ends, a new one starts.



Abstract

This thesis presents a framework that offers tools for the design and the implementation of Ubiquitous computing systems supporting user motions, activities and situations. With the rapid development of context-aware mobile computing and sensor-based interaction, many new challenges arise, three of which are particularly addressed in this thesis. The first is the need for wholistic tools to develop Ubiquitous computing infrastructures. The second concerns smart applications allowing users to benefit from the distributed computing power in their environment, and the third is the integration of enriched human-computer interaction using motions, activity and situation, provided by the increasing sensing capabilities of the user environment or mobile devices. We propose the uMove framework, a comprehensive solution for designing and developing Ubicomp systems representing different kinds of physical or virtual environments based on a systemic approach. uMove proposes both theoretical foundations and implementation tools and is divided into three specific facets.

The first facet is the conceptual model describing a Ubiquitous computing system made of entities and observers within their physical or logical environment. The second facet is a system architecture which offers designers and developers the tools to theoretically define a logical system, including the types of contexts taken into consideration. The third facet is a set of development tools that allow programmers to implement their systems, sensors, applications and services. The uMove framework is evaluated and validated in an interactive manner through four projects.

Keywords: Ubiquitous computing, pervasive computing, context-aware computing, mo- bile computing, HCI, middleware.



Résumé

Cette thèse présente un ensemble d'outils (un framework) qui permettent la définition, la création et la réalisation de systèmes informatiques ubiquitaires pouvant intégrer la prise en charge des activités des utilisateurs ainsi que la détection de leur situation. Avec le rapide développement de l'informatique intégrant les contextes des utilisateurs ainsi que de l'informatique mobile, de nouveaux défis sont apparus et, parmi ceux-ci, trois sont adressés dans cette thèse. Le premier est le besoin d'un ensemble d'outils permettant le développement de systèmes ubiquitaires, partant de leur définition théorique jusqu'à leur réalisation. Le deuxième défi consiste à développer des applications intelligentes qui intègrent les nouvelles technologies telles que les senseurs et l'accès à des systèmes informatiques répartis. Le troisième défi est l'intégration d'interactions homme-machine enrichies par la prise en compte des mouvements, des activités et des situations des utilisateurs, ceci par le biais de senseurs de plus en plus présents dans nos environnements et sur les dispositifs informatiques mobiles.

Dans cette thèse, nous décrivons uMove, un ensemble d'outils permettant la définition et le développement de systèmes ubiquitaires représentant différentes sortes d'environnements physiques ou logiques. uMove comporte trois facettes qui décrivent les concepts fondamentaux ainsi que les outils logiciels nécessaires à leur développement. La première facette est consacrée à la définition du modèle conceptuel décrivant des systèmes ubiquitaires composés d'entités et d'observateurs, ceci en utilisant une approche systémique. La deuxième facette présente une architecture qui permet aux concepteurs et développeurs de formaliser leurs systèmes. La troisième facette décrit les outils logiciels qui permettront d'implémenter les projets définis de manière systémique et en respectant l'architecture uMove. Finalement, uMove est évalué et son modèle validé à travers quatre projets qui ont été implémentés avec l'ensemble de ces outils.

Mots-clés : informatique ubiquitaire, informatique pervasive, informatique contextuelle, informatique mobile, interaction homme-machine, plateforme de développement.



Acronyms

ABC : Activity-based computing

ACD : Activity-centered Design

AI : Artificial Intelligence

API : Application Programming Interface

AT : Activity Theory

GIS : Geographic Information Systems

GPS : Global Positioning System

GST : General System Theory

GUI : Graphical User Interface

HCI : Human-Computer Interaction

IDE : Integrated Development Environment

iHCI : implicit Human-Computer Interaction

JSON : JavaScript Object Notation

KUI : Kinetic User Interface

LCD : Liquid Crystal Display

MVC : Model-View-Controller

OWL : Web Ontology Language

PDA : Personal Digital Assistant

SQL : Structured Query Language

SUI : Surface User Interface

TUI : Tangible User Interface

Ubicomp : Ubiquitous Computing

UCD : User-Centered Design

UML : Unified Modelling Language

URL : Uniform Resource Locator

UUI : Ubicomp User Interface

WIMP : Windows, Icons, Menus, Pointer

XML : Extensible Markup Language



Contents

Acknowledgments i

Abstract iii

R´esum´e v

Acronyms vii

1 Introduction 1

1.1 Research challenges . . . 2

1.1.1 Tools for developing and deploying Ubicomp systems . . . 3

1.1.2 Smart and adaptive applications and services . . . 3

1.1.3 User interaction . . . 4

1.2 Goals . . . 4

1.3 Focus of the thesis . . . 5

1.4 Contribution . . . 6

1.4.1 System modelling . . . 6

1.4.2 System architecture . . . 6

1.4.3 Implementation tools . . . 6

1.4.4 Validation scenario and applications . . . 7

1.5 Outline of the thesis . . . 7

2 Background and related work 9

2.1 Ubiquitous and pervasive computing . . . 9

2.2 Ubiquitous computing: definition of the paradigm . . . 10

2.2.1 Ubiquitous computing is not nomadic computing . . . 11

2.2.2 From Weiser’s vision to now: where do we stand? . . . 11

2.3 Context-aware computing . . . 13

2.3.1 Context: concept and definitions . . . 13

2.3.2 Context-aware architectures and middlewares . . . 16

2.3.3 Context-aware applications . . . 18

2.3.4 Sensing contexts . . . 20


2.4 Human-computer interaction in ubiquitous computing . . . 23

2.4.1 Post-desktop paradigm of interaction . . . 23

2.4.2 From GUI to UUI: a new opportunity for human-ubicomp system in- teraction . . . 24

2.5 Activity-based computing . . . 26

2.5.1 Activity Theory: concepts and applications . . . 27

2.5.2 Models and tools . . . 27

2.6 Reasoning on situation: an evolution of activity-based computing . . . 30

2.6.1 Definition of situation . . . 30

2.6.2 Situation theory . . . 31

2.6.3 Application of situation theory . . . 31

2.7 Summary . . . 32

3 Conceptual model 35

3.1 System modelling . . . 36

3.2 General System Theory . . . 38

3.3 System . . . 38

3.4 uMove system . . . 39

3.4.1 Environment and entities . . . 40

3.4.2 Observation . . . 48

3.5 Kinetic dimension . . . 51

3.5.1 Separation between activity and situation . . . 52

3.5.2 Motion . . . 53

3.5.3 Activities . . . 54

3.5.4 Contexts . . . 54

3.5.5 Situations . . . 55

3.6 Summary . . . 57

4 System Architecture, Design and Evaluation 59

4.1 uMove middleware: a multilayer architecture . . . 60

4.1.1 Sensor layer . . . 61

4.1.2 Entity layer . . . 63

4.1.3 Observation layer . . . 64

4.1.4 Message processors . . . 66

4.1.5 Activity and situation manager . . . 67

4.2 Mobile uMove system . . . 68

4.3 Coordination and communication in uMove . . . 68

4.4 Applications and services . . . 69


4.5 IWaT: methods and tools to test the uMove system . . . 70

4.5.1 Using IWaT with uMove . . . 71

4.5.2 How it works . . . 72

4.5.3 Advantages and drawbacks of using IWaT . . . 73

4.6 Summary . . . 74

5 Implementation tools 77

5.1 uMove API . . . 78

5.1.1 UMoveSystem . . . 78

5.1.2 Message processor . . . 81

5.1.3 Activity and situation managers . . . 82

5.1.4 Relation manager . . . 82

5.2 Coordination and communication . . . 82

5.2.1 Coordination manager . . . 83

5.2.2 Communication . . . 84

5.2.3 Services: definition and monitoring . . . 85

5.3 Mobile monitoring . . . 86

5.3.1 Monitoring mobile devices . . . 87

5.3.2 Services list update . . . 87

5.3.3 System service . . . 87

5.3.4 Public services . . . 87

5.4 Mobile uMove system . . . 89

5.4.1 Type of service: local versus global . . . 90

5.4.2 Mobile uMove as a service manager . . . 90

5.4.3 Smart environment finder . . . 91

5.4.4 Mobile service manager . . . 91

5.5 uMove-enabled applications . . . 92

5.6 uMove System Editor . . . 93

5.6.1 Entity management . . . 95

5.6.2 Saving and loading a system configuration . . . 95

5.6.3 System monitoring . . . 96

5.6.4 Application and service loader . . . 96

5.7 Summary . . . 98

6 Prototypes and validation 99

6.1 Methods of evaluation and validation . . . 99

6.2 Smart Heating System . . . 101

6.2.1 User’s activities and contexts . . . 101

6.2.2 Software architecture . . . 101

6.2.3 Hardware . . . 102


6.3.2 Gathering contextual information . . . 104

6.3.3 Robin architecture . . . 104

6.3.4 IWaT session . . . 105

6.3.5 Session results . . . 106

6.3.6 The prototype . . . 106

6.3.7 Global results . . . 111

6.4 EMS project: Elderly Monitoring System . . . 111

6.4.1 General requirement . . . 112

6.4.2 Setup . . . 112

6.4.3 Server application . . . 113

6.4.4 Mobile application . . . 114

6.4.5 Evaluation of uMove . . . 114

6.5 SMSService: a concrete use case of a uMove service . . . 115

6.5.1 Server part . . . 116

6.5.2 Client part . . . 117

6.6 Summary . . . 118

7 Conclusions and Perspectives 121

7.1 Thesis orientation . . . 122

7.2 uMove framework: a promising wholistic tool . . . 122

7.2.1 Conceptual model . . . 123

7.2.2 uMove system architecture . . . 124

7.2.3 Integration of a mobile server-based uMove . . . 125

7.2.4 Implementation tools . . . 125

7.2.5 Validation projects . . . 126

7.3 Perspectives . . . 127

7.3.1 uMove API . . . 127

7.3.2 uMove System Editor . . . 128

7.3.3 Mobile uMove middleware . . . 129

7.3.4 Development of services . . . 129

7.3.5 Activity and situation management . . . 130

7.3.6 Full evaluation . . . 130

7.4 Epilogue . . . 131

Bibliography 135

Terms and definitions 147

List of Tables 148



List of Figures 149

List of Listings 153

Curriculum Vitae 157



Chapter 1

Introduction

Contents

1.1 Research challenges . . . . 2

1.2 Goals . . . . 4

1.3 Focus of the thesis . . . . 5

1.4 Contribution . . . . 6

1.5 Outline of the thesis . . . . 7

Ubiquitous Computing (Ubicomp) is radically changing our everyday activities by bringing computing power into our living environment. Computers are more and more distributed throughout the environment and tend to disappear into everyday objects. They are enhanced by technologies able to sense the environment, communicate, and provide information to a user anytime and anywhere.

In the late '80s, Mark Weiser put forward the idea of invisible computing. He predicted that in the near future we would see a shift in computer systems: from the concept of “one computer, one user” we would move to “one user, many computers” [Weiser, 1991], and that the desktop computer would be replaced by many specialised computing devices scattered in the space around us, able to sense our environment and provide help in our everyday lives. As Weiser wrote in his seminal article in Scientific American:

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”

Weiser's vision of Ubiquitous computing is partly becoming a reality, and there exist plenty of Ubicomp applications, components and infrastructures such as GPS navigation in cars, electronic agendas synchronised with computers, or mobile communication systems including laptops, netbooks, netpads, mobile and smart phones, and PDAs. During the last decades, we have seen a shift from the traditional desktop computer toward heterogeneous technologies, from small and mobile interconnected devices to large wall-sized displays as well as car-embedded computing systems.

These technological examples show that Ubiquitous Computing has evolved in the last twenty years. However, many questions about Ubicomp remain open: What really evolved, concepts or technologies? Where do we stand with Weiser's concept of invisible computing [Bell and Dourish, 2007, Rogers, 2006]? Have we fundamentally changed our way of interacting with computing systems? What research challenges are still relevant after thirty years of Ubicomp development?

As mentioned by Schmidt in [Schmidt, 2002, ch. 2], research in the domain of pervasive or ubiquitous computing is diverse and this field is still not properly defined. Ubicomp includes aspects of Distributed Systems, Mobile Computing, Software Architecture, Human-Computer Interaction (HCI) and Artificial Intelligence (AI), as well as engineering, as it deals with hardware such as sensors (Fig. 1.1).

Figure 1.1: Overview of the different computing domains that are included in the concept of Ubicomp

1.1 Research challenges

Because it is a multidisciplinary domain, Ubicomp has no formal or unique definition and includes several research approaches. Many challenges are addressed in Ubicomp, and among them there are three aspects which we found particularly important to explore: 1) wholistic tools to develop Ubicomp infrastructures, 2) smart applications allowing users to benefit from the distributed computing power in their environment, and 3) the integration of enriched human-computer interaction using motions, activity and situation, provided by the increasing sensing capabilities of the user environment or mobile devices.


1.1.1 Tools for developing and deploying Ubicomp systems

There are at least two problems in the development of Ubicomp systems. The first one is the lack of generic modelling tools to properly define complete systems (from the sensors to the application). Projects such as CAMUS [Hung et al., 2004] or MUSIC [Reichle et al., 2008] propose frameworks for context modelling and ontology-based representation. They are predominantly oriented towards functional aspects of the system and, even if they adopt a user-centered and context-aware approach, they do not necessarily propose tools for designing the system as a whole, including users, environments (places) and the relations between the entities which constitute the system.

The second problem is the integration of heterogeneous hardware and software technologies in order to make them work together and provide coordinated services to users. Due to the distributed and dynamic nature of Ubicomp systems, this challenge includes dynamic coordination and communication of devices and applications active in the user environment, as well as service discovery and security [Coulouris et al., 2001, p. 6-7], as proposed in the GAIA project [Roman et al., 2002] or HP's CoolTown project [Kindberg and Barton, 2001].

There is a strong need for standardised platforms, frameworks and middlewares that make it possible to develop infrastructures, connect several kinds of sensors and computational devices, and run contextualised applications and services.

There exist different tools and environments to support the development of such systems. Among them are Georgia Tech's Context Toolkit [Dey et al., 2001], MIT's Oxygen1, Carnegie Mellon's Aura [Garlan et al., 2002] and ActiveCampus [Griswold et al., 2003]. These projects focus on the implementation of systems integrating different types of interacting physical devices, including smart phones, sensors and large-scale public displays.

The challenge consists of developing integrated tools covering three aspects of the development process: 1) the definition of the Ubicomp system (the model), 2) the evaluation of the model and its validation, and 3) the implementation of the system modelling the user's environment and applications.

1.1.2 Smart and adaptive applications and services

Ubicomp system interfaces should adapt their behaviour according to the situation, be aware of the context and not require much user attention. Recently, we have witnessed an increasing number of mobile devices such as smartphones, and academic applications such as CyberGuide [Abowd et al., 1997], GUIDE [Cheverst et al., 2000] or UbiCicero [Ghiani et al., 2008], which use information about user contexts gathered through sensors in order to do the right things at the right time. There are also commercial applications such as Google AdSense2 which contextualise information and services according to a user's location. Context-awareness and context-aware systems have been extensively explored, and many articles have been written about them since the '90s [Shilit and Theimer, 1994].

1http://oxygen.lcs.mit.edu/Overview.html

2https://www.google.com/adsense/

Generally, context-aware applications use contexts such as location, time, temperature, light intensity or, nowadays, accelerometer data to trigger events. For example, in some smartphones, the silent mode can be enabled when the phone is placed in a given position or, in the best case, in a given location. In the first case, the result is obtained by direct user interaction with the device (turning the phone face down), and in the second through the location context.

However, we believe that applications could be smarter if more user characteristics, such as motions and activity, were combined with other contexts.

1.1.3 User interaction

As a new paradigm, Ubicomp has also changed the way the user interacts with computing devices. Computer systems and devices offer different modes of interaction, sometimes with only a minimal portion of the ordinary desktop interface (screen, keyboard, mouse). Moreover, the growing number of devices surrounding users no longer allows each of them to capture the user's full attention. Following the idea of “computer everywhere”, also called Everyware by Greenfield [Greenfield, 2006], Weiser proposed the concept of calm technology or calm computing [Weiser and Brown, 1996], which suggests that users should not be overloaded by information and that some of it can be put at the periphery, leaving user attention for the main user activity. For Weiser, “A calm technology will move easily from the periphery of our attention, to the center, and back”.

This also influences the mode of interaction with computer systems and creates a need for more implicit, rather than the current explicit, interaction. Consequently, the user-computer interfaces and, more importantly, the interaction mode must be adequately designed in order to reach this goal. With the development and miniaturisation of sensors, we tend toward more sensor-based and implicit interaction [Dix, 2002], and therefore move from the commonly used Graphical User Interface (GUI) toward a Ubicomp User Interface (UUI) [Krumm, 2010].

In order to reach this goal in the next generation of Ubicomp and context-aware systems, there is a need to enrich the contexts by taking into consideration user behaviour and intention. There is already an increasing interest in the integration of new parameters such as user motions, gestures, activities and situations in order to make applications smarter and more adaptive.

As pointed out by Sparacino in [Sparacino, 2005]: ”[...] computation and sensing are moving from computers and devices into the environment itself. The space around us is instrumented with sensors and displays, and this tends to reflect a widespread need to blend together the information space with our physical space”.



1.2 Goals

The first goal of this research was to explore an interaction paradigm we called the Kinetic User Interface (KUI), which includes user activity and situation as input modalities for context-aware systems, and to propose a platform to support this paradigm. However, before reaching this goal, we found two aspects which need to be addressed first, as they are often missing in the Ubicomp development process. The first is the lack of tools for modelling the (user's) environments in which Ubicomp systems are set. In many projects, the proposed solutions include all the components, from the sensors to the application, in one concept instead of clearly separating the environment, the technologies which gather information about the environment, and the applications (which can be heterogeneous and specialised). The second aspect concerns the development of integrated tools that designers and programmers can use to develop systems which allow a seamless integration of user activity and situation.

1.3 Focus of the thesis

In this thesis, we focus on two particular aspects. The first concerns the development of a comprehensive framework to design and develop Ubicomp systems representing different kinds of physical or virtual environments. This framework, called uMove, proposes both theoretical foundations and implementation tools, and is divided into three specific facets. The first facet is the conceptual model, which describes a Ubicomp system made of entities and observers within their physical or logical environment (Fig. 1.2). The second facet proposes the system architecture, which allows designers and developers to theoretically define a logical system, including the types of contexts taken into consideration. The third facet is the set of development tools allowing programmers to implement their systems and applications.

Figure 1.2: The three facets of the uMove development framework: semantic modelling, architecture of the system, and the implementation.

The second aspect focuses on the way to integrate, within the framework, the management of the kinetic properties (motions and activity) of entities in order to enrich the interaction with context-aware computing systems, and to allow the development of applications and services which adapt their behaviour to the situation of the moving entity.

1.4 Contribution

The main contribution of this thesis is the creation of a comprehensive development framework for Ubicomp systems and context-aware applications. The specific contributions are:

- The definition of a semantic model for the modelling of physical or virtual environments (a city, a university campus, computer games, web sites) into logical representations.

- The definition of an architecture model for Ubicomp systems and a set of tools to support designers in the validation of their model before the implementation phase.

- A set of programming tools for the implementation of systems and applications.

- Proof-of-concept applications, implemented with the framework, integrating activity, motion and situation.

1.4.1 System modelling

The approach chosen for our model of the environment follows systemic concepts, and the semantic model is based on von Bertalanffy's General System Theory (GST) [von Bertalanffy, 1969]. Von Bertalanffy was a biologist who developed a theory generalising the definitions of systems used in specific scientific disciplines such as physics, (bio-)chemistry, mathematics, biology, economics or the social sciences. A modelled environment becomes a system, in the systemic sense.

A system models the physical or virtual world where objects (entities), possibly living things capable of motion, interact naturally with their environment and are observed by agents (observers).
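As a minimal illustration of this systemic view, the following Java sketch models entities that live in an environment (itself an entity) and notify an observer when they move. All names here (`Entity`, `Observer`, `moveTo`) are our own illustrative assumptions for the purposes of this sketch, not the actual uMove API.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the systemic model: entities form a tree rooted in an
// environment, and registered observers (the "agents" of the model) are
// informed when an entity's kinetic state changes. Names are illustrative.
public class SystemicModelSketch {

    interface Observer {
        void entityMoved(Entity e, String newLocation);
    }

    static class Entity {
        final String name;
        String location;
        final List<Entity> children = new ArrayList<>();
        final List<Observer> observers = new ArrayList<>();

        Entity(String name, String location) {
            this.name = name;
            this.location = location;
        }

        void add(Entity child) { children.add(child); }

        void attach(Observer o) { observers.add(o); }

        // A motion is a change of the location context; each observer
        // watching this entity receives the event.
        void moveTo(String newLocation) {
            this.location = newLocation;
            for (Observer o : observers) o.entityMoved(this, newLocation);
        }
    }

    // Builds a tiny campus environment and returns the event recorded by a
    // logging observer, purely for demonstration.
    static String demo() {
        Entity campus = new Entity("campus", "Fribourg");
        Entity alice = new Entity("alice", "office-A12");
        campus.add(alice);

        final StringBuilder log = new StringBuilder();
        alice.attach((e, loc) -> log.append(e.name).append("->").append(loc));

        alice.moveTo("cafeteria");
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The point of the sketch is the separation the model insists on: the entity knows nothing about who observes it, and the observer reasons only on the events it receives.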

1.4.2 System architecture

Based on the semantic model, we propose an architecture which defines different layers of abstraction, including a system made of interacting entities, the sensors gathering the different entity contexts, the system observation, and the context-aware applications which handle the events received from the sensors. We also present a methodology to evaluate the design and component architecture of a Ubicomp system and application, to ensure that the various algorithms, strategies, inferences (of activities or context) and sensors operate together smoothly, satisfy user requirements, take into account technical and infrastructure limitations, and form a coherent and comprehensive system.


1.4.3 Implementation tools

Once modelled and validated, a system can be implemented with a set of Java-based programming tools. We developed APIs that offer the necessary classes and methods to build the middleware on which the system will run. These APIs allow developers to connect sensors and context-aware applications which interact with the entities, and they offer functionality for the monitoring and integration of mobile devices running on the Android platform. We also propose a graphical user interface which can instantiate and monitor a system and dynamically load services for mobile devices.
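To make the role of such a middleware concrete, here is a hedged sketch of how a context-aware application might receive events that originate at a sensor: the sensor reading updates an entity's context inside the middleware, which then notifies the listening application. The class `UMoveSystemStub`, its method names and the event shape are hypothetical stand-ins, not the published uMove classes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical usage sketch of a middleware: sensor -> context update ->
// application notification. All names are illustrative assumptions.
public class MiddlewareSketch {

    static class UMoveSystemStub {
        private final Map<String, Map<String, String>> contexts = new HashMap<>();
        private BiConsumer<String, Map<String, String>> appListener;

        void registerEntity(String entityId) {
            contexts.put(entityId, new HashMap<>());
        }

        void onContextChange(BiConsumer<String, Map<String, String>> listener) {
            this.appListener = listener;
        }

        // Called by a sensor adapter: the middleware stores the new context
        // value and pushes the updated context up to the application.
        void sensorEvent(String entityId, String key, String value) {
            Map<String, String> ctx = contexts.get(entityId);
            ctx.put(key, value);
            if (appListener != null) appListener.accept(entityId, ctx);
        }
    }

    static String demo() {
        UMoveSystemStub system = new UMoveSystemStub();
        system.registerEntity("resident-7");

        StringBuilder decisions = new StringBuilder();
        // A toy context-aware application: react when the inferred activity
        // context becomes "fallen" (in the spirit of the elderly-monitoring
        // scenario described later in the thesis).
        system.onContextChange((id, ctx) -> {
            if ("fallen".equals(ctx.get("activity"))) {
                decisions.append("alert:").append(id);
            }
        });

        system.sensorEvent("resident-7", "location", "bathroom");
        system.sensorEvent("resident-7", "activity", "fallen");
        return decisions.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The application never talks to the sensor directly; it only subscribes to context changes, which is the decoupling the middleware layers are meant to provide.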

1.4.4 Validation scenario and applications

We propose a set of validation projects that use the uMove framework, implement the concepts of systems, and test the capability of the proposed concepts to adequately address the research goals. Through these projects, we also experiment with the concept of the Kinetic User Interface (KUI), using scenarios which imply a mode of interaction where location and motion tracking, including user activity, can be used as first-order input modalities to a Ubicomp system. The goal of a KUI is to allow users to interact with Ubicomp systems in a more implicit way, using their kinetic properties to trigger events at the application level.

The first project, called Robin, focuses on the observation of a rescue team helped by a semi-autonomous robot. The robot is sent ahead of the team and gathers contextual information in a building (in case of fire, for instance) to send back to the server for situation analysis and activity recommendation, or possibly alarms. The second project provides a smart environment for a nursing home. It focuses on the activity tracking of elderly persons who are still independent but are monitored by medical staff in case of problems. Finally, we describe an activity recognition module which can be plugged into a KUI system in order to track and analyse predefined categories of activities.

1.5 Outline of the thesis

This dissertation is organised as follows:

Chapter 2 We present the related work and research on ubiquitous and pervasive computing, context-awareness, context-aware computing systems, and middlewares for pervasive systems. We also present the state of the art in the field of human-computer interaction with Ubicomp systems, including mobile computing. In particular, we give an overview of activity-based and motion-based interaction as well as the situation-reasoning perspective.

Chapter 3 This chapter presents the uMove conceptual model and the approach we have chosen to represent and describe the moving entities interacting with the environment. It describes General System Theory, the fundamental theory on which we have based our model of uMove systems. The semantic model focuses on 1) the entities populating a system and all their properties, including their contexts, activities and the relations between the entities and their environment, and 2) the concept of system observers and viewers which analyse the situation of entities. We also present a functional model of activity and situation representation and integration in uMove.

Chapter 4 This chapter presents the architecture of the uMove system and tools which designers and developers can use to theoretically define and design their system and all of its components (users, physical spaces, sensors, contexts, activities and situations).

Chapter 5 We present a programming tool that allows the implementation and the setup of a uMove-enabled system in which the physical (or logical) world is virtually represented. These APIs allow developers to create a middleware with which uMove-enabled applications or services can interact, gathering contextual information such as the location or activity of active users or objects in order to trigger appropriate events and actions. This chapter also describes the prototype of the uMove System Editor, which allows developers to 1) set up a uMove system using a graphical user interface and 2) load uMove-enabled services for Android smartphones.

Chapter 6 Applications implementing the uMove concept are presented in this chapter. We describe the projects in detail, together with the issues we wanted to test with each application.

Chapter 7 We draw conclusions and present future perspectives for this research.

Figure 1.3 shows how the whole thesis is structured around the two aspects, three facets and the integration of the kinetic properties within the uMove framework.

Figure 1.3: Structure of the thesis: the three facets and the integration of the kinetic properties


Chapter 2

Background and related work

Contents

2.1 Ubiquitous and pervasive computing

2.2 Ubiquitous computing: definition of the paradigm

2.3 Context-aware computing

2.4 Human-computer interaction in ubiquitous computing

2.5 Activity-based computing

2.6 Reasoning on situation: an evolution of activity-based computing

2.7 Summary

In this chapter, we analyse existing research, concepts, paradigms, middlewares and technologies used in ubiquitous or pervasive computing projects. We present some aspects and components which generally constitute Ubicomp systems and which are related to the uMove project and model. We also review the evolution of Weiser’s vision and see where we stand now. The chapter also includes the HCI aspect in the context of Ubicomp, as well as current research in User-Ubiquitous Computing Interfaces and, in particular, we study some theories and models that support the integration of user motions, activity and situations as possible interaction paradigms.

2.1 Ubiquitous and pervasive computing

Thirty years ago, the concept of ”computer” almost exclusively referred to mainframes processing input data and producing output results. At that time, the interaction between users and computers was extremely limited and users were more often considered as operators than clients. That was the first computing era, with the concept of ”one computer-many users”.

Then, with the development of personal computers (PCs), the second era of computing began: ”one computer-one user”. At that time, we saw the user not only operating the computer but also clearly interacting with it through graphical user interfaces, keyboards and mice. In the late ’80s, Mark Weiser, working at Xerox PARC, proposed a new computing concept called ubiquitous computing (Ubicomp) [Weiser, 1991]. This new way of understanding computer technology came from the fact that computers became smaller and could be embedded into everyday things. It was a paradigm shift from ”one computer-one or many users” to ”many computers-one user”, and the beginning of the third computing era (Fig. 2.1).

Figure 2.1: The three modern computing eras (source: [Krumm, 2010, ch.1])

The term ”Pervasive Computing” was proposed by IBM in the mid-90s (IBM Mobile and Pervasive Computing) and had almost the same meaning as ubiquitous computing. Ubiquitous computing tends to integrate disparate technologies to meet a design goal, while pervasive computing tends to develop wireless and mobile platforms running standardised operating systems deployed in the form of smartphones (IBM WebSphere or J9, Android and Java platforms). However, Want states in [Want, 2010, p.11]: ”more than 10 years later, any unique position described by either party has been slowly integrated into the shared vision and by the mid-2000s any publications that set out to describe this topic presented fundamentally the same position”. From now on, we will use the term Ubicomp to describe both paradigms.

2.2 Ubiquitous computing: definition of the paradigm

What fundamentally changed between desktop computing and Ubicomp? Ubicomp is a subarea of Distributed Systems and its main focus is research on how heterogeneous, networked computing devices can be embedded in objects of daily use to enable applications to create new user experiences. Weiser’s vision of Ubicomp was that computing will be embedded in everyday artifacts, used to support daily activities, applicable to our work, managing our homes, and for play [Want, 2010, p.4]. In other words, the computer, as we know it now, will disappear into our environment and the computing power will fade inside the network infrastructure. Consequently, we will have more heterogeneous computing devices ranging from small, specialised and interconnected devices to high-performance servers scattered in our environment.

Weiser’s vision of ubiquitous computing has been largely adopted by the scientific community [Dey et al., 2001, Loke, 2007, Abowd and Mynatt, 2000, Abowd et al., 2002, Greenfield, 2006, Bellotti et al., 2002] and Weiser is probably the most cited author in the field of Ubicomp and Pervasive Computing.

2.2.1 Ubiquitous computing is not nomadic computing

Ubiquitous computing is often confused with nomadic computing and mobile computing [Kleinrock, 1997]. Nomadic computing is the form of computing where the user can access his data anywhere and at any time while on the move. In the past fifteen years, we have witnessed a substantial increase in mobile computing devices such as laptops, personal digital assistants (PDAs) and smartphones. The main trigger of this change was the significant evolution of networking and (tele)communication capabilities. The Internet became part of our lives and wireless communication is now available almost everywhere in different forms such as GPRS, UMTS or WIFI. Other factors that have boosted the development of these technologies are the need for mobility in our societies and the decreasing cost of hardware and services.

Access to information (private or professional) and the ability to work everywhere or to be reachable at all times have become an integral part of our lives. However, even considering the paradigm shift between desktop computing and nomadic computing, the latter is still a kind of (mobile) desktop computing: there is still a direct and explicit interaction between the human and computers.

Ubiquitous computing goes beyond this concept. In Ubicomp, as we will see later on, the user does not necessarily interact explicitly with the computer through a screen, keyboard and/or mouse. As stated by Weiser1:

”[ubiquitous computing] is different from PDAs, dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort, but is in the woodwork everywhere.”

2.2.2 From Weiser’s vision to now: where do we stand?

Rare are the papers on Ubiquitous or Pervasive Computing that do not refer to Mark Weiser’s fundamental article [Weiser, 1991]. Twenty years of research and a substantial number of projects have taken their inspiration from the central idea of ”computer everywhere” and ”calm technology” [Weiser and Brown, 1996]. The questions now are: Is this vision of computer everywhere a reality? Do we have concrete projects that implement the concept of calm technology or ubiquitous computing? Did we reach the goals of calm technology? For the first question, it is undoubtedly the case. For the other two, the answer is rather negative.

1http://www.ubiq.com/hypertext/weiser/UbiHome.html

As mentioned by Rogers in [Rogers, 2006], considerable efforts have been made to realise Weiser’s idea by developing frameworks, technologies and infrastructure to support people in their daily life, but ambitious projects such as HP’s cooltown, IBM’s BlueEyes or MIT’s Oxygen are still far from reaching the goal. And, even if our environment is augmented and sensed by several sensors creating smart homes or smart environments, they do not match up to a world of calm computing.

Bell and Dourish in [Bell and Dourish, 2007] argue that a gap exists between Weiser’s vision of Ubicomp, particularly in the technological development, and the present time. The invocation of this vision ”neglects the significant difference between then and now, and [the] changing techno-social contexts”. In other words, they say that ”today’s technological landscape is quite radically different than that of the late 1980s”.

Bell, Dourish and Rogers’ arguments highlight two main problems. The first one is situated at the interaction level. Calm technology is supposed to relieve users of unnecessary cognitive load while they interact with computing systems and carry out other human activities at the same time. Because users are surrounded by several computing devices simultaneously, those devices should not take a user’s full attention. It means that the interaction mode with Ubicomp systems must be smooth and implicit and that the technology should be put at the periphery of user attention [Weiser and Brown, 1996] if any action is required. The level of attention required by the different computing systems is adaptable and moves back and forth. The problem we face today is that computer systems are intrusive and capture the attention of the user (e.g. large public displays, mobile phones, GUIs on laptops or desktops) or require direct interaction too often.

The second problem is situated at the technological level. When Weiser published his article in 1991, his vision was already implemented in lab conditions, ”complete with photographs of devices that had already been designed and built, and reports on their use” [Bell and Dourish, 2007]. But technology did not evolve exactly as Weiser predicted, and we moved from desktop computing to nomadic computing. With the miniaturisation of desktops, which were transformed into portable computers, laptops, PDAs and netbooks, we have not reached the point where computers fade into the infrastructure; rather, they physically surround us and are part of our lives.

However, Want argues that smartphones are not so far from the original Xerox ParcTabs2 [Want, 2010, p.30]. Telecommunication has also completely changed our way of living with the exponential development of mobile phones, and in particular smartphones. As mentioned by Bell and Dourish, the first cellular service began in 1983 in the US and by 1988 there were approximately 1.6 million subscribers. According to Want, in 2008, 1.2 billion cell phones were shipped and we reached the level of 3.3 billion subscribers around the globe.

2http://sandbox.xerox.com/parctab/

(31)

The market and the development of mobile technology have clearly driven research on Ubicomp. Instead of experiencing Weiser’s original idea of calm technology, we rather have desktop applications adapted to mobile computers (with a small portion of screen) that allow users to communicate or get contextual information and services while moving. Even if the surrounding user environment is enriched with sensors and computing power appears in everyday life objects, as proposed in the MediaCup project [Beigl et al., 2001], we are far from the idea of a user surrounded by technology observing and acting in an unobtrusive manner according to user needs. In this thesis, we explore the possibility for Ubicomp systems to take advantage of this new generation of mobile, interconnected and sensor-equipped technologies for the development of applications that stick to the original idea of calm technologies and interfaces. We propose an architecture which uses recent technologies such as smartphones or smart environments and supports the development of context-aware applications able to provide useful and non-intrusive services for users.

2.3 Context-aware computing

Since the ’90s, there has been a growing interest in context-aware computing. A significant number of journals and conferences have published articles in this area [Hong et al., 2009]. The fact that the computing paradigm has changed from personal computing toward mobile and distributed computing has initiated the development of a new type of application, smarter and more adaptive to a user’s contextual needs. Context-awareness started in 1992 with the ActiveBadge project of Want et al. [Want et al., 1992], which is considered the first context-aware application [Baldauf et al., 2007]. The ActiveBadge project was an indoor location system based on a badge transmitting a signal every 15 seconds. This signal was recognised and located within a building, giving the physical position of the person carrying the badge.

The initial application of this system was intended to be an aid for a telephone receptionist to locate people and automatically transfer incoming calls to the nearest phone extension. In this example, the main contextual information was the location, but context-awareness includes more than one context. As mentioned by Shilit and Theimer, context-aware applications adapt according to the location of use, neighbouring people, hosts and accessible devices, and can examine the computing environment and react to context changes [Shilit and Theimer, 1994].

2.3.1 Context: concept and definitions

Even though many researchers have studied ”context”, there is still no common definition. For Shilit and Theimer [Shilit and Theimer, 1994], context is defined by the location, identity and changes of nearby entities (people or objects). For Brown et al. [Brown et al., 1997], as for Ryan et al. [Ryan et al., 1998], context also includes the identities of the people around the user, the season, the time and the temperature, for instance.

(32)

There is also a different approach proposed by Schmidt, Beigl and Gellersen [Schmidt et al., 1998]. They define a working model for context in which context has a unique name and defines a situation that a user or device is in. The context contains a set of relevant features. For example, a physical environment context has conditions (e.g. light, pressure, audio), infrastructure and location as important features.

For Chen and Kotz [Chen and Kotz, 2000], context is a set of user-relevant environmental states and settings that determines an application’s behaviour. They consider five classes of contexts: 1) computing context, 2) user context, 3) physical context, 4) time context and 5) context history.

Korkea-aho [Korkea-Aho, 2000] considers context as situational information and states that ”Almost any information available at the time of an interaction can be seen as context information”. Some examples are:

• identity

• spatial information - e.g. location, orientation, speed, acceleration

• temporal information - e.g. time of day, date, season of the year

• environmental information - e.g. temperature, air quality, light or noise level

• social situation - e.g. who you are with, people that are nearby

• resources that are nearby - e.g. accessible devices, hosts

• availability of resources - e.g. battery, display, network, bandwidth

• physiological measurements - e.g. blood pressure, heart rate, respiration rate, muscle activity, tone of voice

• activity - e.g. talking, reading, walking, running

• schedules and agendas

Lieberman and Selker [Lieberman and Selker, 2000] propose an approach where context is an implicit input and output for an application and is used with the explicit input to affect the computation and the output (Fig. 2.2).
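Lieberman and Selker's model can be pictured as a function whose result depends on both the explicit input and the implicit context. The Java fragment below is an illustration only; the class, record and method names are ours and do not correspond to any published API:

```java
// Sketch of Lieberman and Selker's model: the application's output is a
// function of the explicit input AND the implicit context.
// All names here are illustrative, not part of any published API.
public class ContextAwareFunction {

    // Implicit input: context gathered from the environment, not typed by the user.
    record Context(String location, int hourOfDay) {}

    // The explicit input is combined with the implicit context to shape the output.
    static String respond(String query, Context ctx) {
        if (ctx.hourOfDay() >= 22 || ctx.hourOfDay() < 7) {
            return "[silent notification] " + query + " @ " + ctx.location();
        }
        return "[audible notification] " + query + " @ " + ctx.location();
    }

    public static void main(String[] args) {
        System.out.println(respond("meeting reminder", new Context("office", 10)));
        System.out.println(respond("meeting reminder", new Context("bedroom", 23)));
    }
}
```

Here the explicit input (the query) is unchanged, while the implicit context (time and place) silently alters the form of the output, as in Figure 2.2.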

Figure 2.2: Context is an implicit input that influences the computation and the output of a context-aware application, [Lieberman and Selker, 2000]

Dix et al. in [Dix et al., 2000] focus on the use of context in the design of mobile systems. They consider four different types of context. The infrastructure context concerns the environment in which the mobile device and the application run. The variability in the infrastructure (wireless communication quality, service availability) can dramatically affect interaction, and it is essential that interaction styles and interfaces also reflect the state of the infrastructure. The system context covers two aspects. The first one is the possibility of having applications distributed over several computing devices. The second is the capability for devices to be aware of other devices and, to some extent, for applications to be aware of other applications. The domain context considers the semantics of the application domain.

It concerns the relationship between the application and the user, and the determination of the appropriate interfaces. The physical context is the surroundings the mobile computing system is aware of or embedded into. For instance, embedded computing systems run inside application-specific devices and may need to know their environmental context (e.g. the speed of the car). This information may be used to modify the interfaces, the behaviour or the light intensity, or simply be delivered to the user.

As mentioned by Loke in [Loke, 2004], the work of Shilit et al. provides a generic definition of what context is. Dey et al. [Dey and Abowd, 1999] propose a more operational and broader definition of context:

[Context] is any information that can be used to characterise the situation of entities (i.e. whether a person, place or object) that are considered relevant to the interaction between a user and an application, including the user and the application themselves. Context is typically the location, identity and state of people, groups and computational and physical objects.

Context can also be seen as a way to represent a problem. As mentioned by Dourish in [Dourish, 2004], software systems being representational, a concern with context naturally leads to a concern with how contexts can be represented and encoded. Taking into account the different definitions of Shilit, Dey and Ryan [Shilit and Theimer, 1994, Dey and Abowd, 1999, Ryan et al., 1998], Dourish identifies four assumptions that underlie the notion of context.

First, context is a form of information. It can be known, represented and encoded. Second, context is delineable. It is possible to define in advance, for some set of applications, what counts as the context of the activities the application supports. Third, context is stable. Context does not vary from instance to instance of an activity or event, although it may vary from application to application. Fourth, context is separable from activities. Activities happen within a given context.

Even if there is no universal definition of context, we have pointed out four contexts, or types of context, that are presented by the selected authors and with which we work in this thesis:

1. Location: an entity (user or object) is always in a location. It is one of the most used contexts.

2. Identity: the entity must be identified in order for an application to adapt its behaviour.

3. People nearby: who or what is around the entity and what are the relations between them. It can help to deal with privacy issues for instance.

4. Environmental context: what are the physical conditions around the entity.

With these four categories of contexts, we can answer the ”where”, ”who”, ”who’s around” and ”in which condition” questions. These types of context are used in this thesis to characterise the situation of an entity (Fig. 2.3).
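As an illustration, these four context types can be grouped into a single data structure characterising an entity's situation. The Java sketch below is a hypothetical model of ours, not part of uMove or of any cited framework:

```java
// Illustrative-only data model of the four retained context types:
// identity (who), location (where), people nearby (who's around) and
// environmental conditions (in which condition). Names are invented.
import java.util.List;

public class EntitySituation {

    record Environment(double temperatureC, double lightLux) {}

    record Situation(String identity,      // who
                     String location,      // where
                     List<String> nearby,  // who's around
                     Environment env) {    // in which condition

        boolean isAlone() { return nearby.isEmpty(); }
    }

    public static void main(String[] args) {
        Situation s = new Situation("user-42", "office B305",
                List.of("user-17"), new Environment(21.5, 400));
        System.out.println(s.identity() + " is in " + s.location()
                + (s.isAlone() ? " alone" : " with " + s.nearby()));
    }
}
```

A context-aware application could query such a structure to answer the four questions above before adapting its behaviour.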

Figure 2.3: Types of context characterising the situation of an entity

2.3.2 Context-aware architectures and middlewares

The following review refers to work and projects that might seem outdated, but they represent the fundamental principles of context-aware computing and are still mentioned in recent papers and books [Krumm, 2010, Cipriani et al., 2011]. Context-aware architectures present interesting aspects for the development of Ubicomp systems where 1) user environments are enriched by (possibly sensed) contexts, 2) mobile devices are integrated within the environment and 3) user-computer interactions tend to be as implicit as possible. Context-aware architectures are frameworks and middlewares that support the development of context-aware applications. Among the best-known architectures developed in the past twenty years, we find Shilit et al.’s PARCTAB [Shilit and Theimer, 1994], which is considered the first architecture (software and hardware) a context-aware application could exploit. PARCTAB is a small handheld device using an infrared-based cellular network for communication and acting as a graphics terminal. The applications run on remote hosts and information is sent to the portable device. Applications adapt their content only according to the location of the device. This system can be considered a ”location-based” system rather than a context-aware system as it deals with a limited number of contexts. Recent context-aware systems include many other contexts such as the user’s profile, the time and the people nearby.

The Context Toolkit, developed at the Georgia Institute of Technology [Dey and Abowd, 1999, Dey et al., 2001], is another well-known context architecture. It supports context-aware applications such as the Conference Assistant or the Intercom, which keeps track of the locations of people and enables them to send messages to each other using voice commands. The Context Toolkit provides a strong formalism for describing contexts at different levels of abstraction and contains three types of objects: 1) Widgets, implementing the sensor abstraction, 2) Servers, responsible for the aggregation of contexts and 3) Interpreters, responsible for the interpretation of context (Fig. 2.4). The interesting aspect of the Context Toolkit is the clear separation of data gathering from single or multiple sensors through widgets3, the fusion of those data and the high-level description of context. However, it does not provide user environment modelling, which is also important in the process of context-aware application development.
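The division of roles between these three object types can be pictured with a strongly simplified sketch: widgets wrap sensors, servers aggregate widget readings for an entity, and interpreters raise the aggregated data to a higher level of abstraction. The code below mirrors these roles, but the interfaces and signatures are our own invention, not the actual toolkit API:

```java
// Much-simplified sketch of the Context Toolkit's three abstractions.
// Widgets wrap sensors, Servers aggregate readings, Interpreters raise
// them to a higher abstraction level. All names are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class ToolkitSketch {

    interface Widget {                     // sensor abstraction
        String attribute();
        Object poll();
    }

    interface Interpreter {                // raises the abstraction level
        String interpret(List<Object> raw);
    }

    static class Server {                  // aggregates widget readings
        private final List<Widget> widgets = new ArrayList<>();
        void attach(Widget w) { widgets.add(w); }
        List<Object> gather() {
            List<Object> raw = new ArrayList<>();
            for (Widget w : widgets) raw.add(w.poll());
            return raw;
        }
    }

    public static void main(String[] args) {
        Server server = new Server();
        server.attach(new Widget() {       // a fake location widget
            public String attribute() { return "location"; }
            public Object poll() { return "room B305"; }
        });
        Interpreter presence = raw -> raw.contains("room B305") ? "in-office" : "away";
        System.out.println(presence.interpret(server.gather()));
    }
}
```

The value of the separation is that an application only talks to the interpreter's high-level vocabulary ("in-office") and never to the raw sensors.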

The Easyliving project developed at Microsoft Research [Brumitt et al., 2000] proposes an architecture able to support and dynamically aggregate heterogeneous I/O devices (TV, video, audio) within a single room. It focuses on a middleware facilitating distributed computing using asynchronous messaging, geometric knowledge (relations between people, places, devices and things), detection of the people in the room and service description. The interesting aspect of this project is the management of the relations between users (people) and between a user and the smart environment. However, Easyliving is strongly focused on multimedia technologies and no downloads are available.

3Usually, widget stands for window gadget, but in the Context Toolkit it represents a sensor abstraction

Figure 2.4: Context Toolkit architecture

The University of Illinois at Urbana-Champaign has developed GAIA [Roman et al., 2002], ”a distributed middleware infrastructure that coordinates software entities and heterogeneous networked devices contained in a physical space. GAIA is designed to support the development and execution of portable applications for active spaces”. An active space is a physical boundary containing objects, heterogeneous networked devices, and users performing a range of activities. It is coordinated by a responsive context-based software infrastructure that enhances a mobile user’s ability to interact with and configure their physical and digital environments seamlessly. GAIA is a meta-operating system and an implementation of a CORBA4 middleware. The interesting aspect of this project is the coordination of multiple situated devices and distributed applications. Its limitations are a proprietary scripting language and the fact that no downloads are available.

Reichle et al. [Reichle et al., 2008] propose a context model called MUSIC (Self-Adapting Applications for Mobile Users in Ubiquitous Computing Environments). The authors describe the project as ”a comprehensive open-source computing infrastructure and an associated software development methodology that facilitates the development of self-adapting, context-aware applications in ubiquitous and pervasive computing environments”. MUSIC is built on three layers of abstraction. The conceptual layer allows the definition of context artifacts such as elements, scopes, and entities and their representation based on standard specification languages like UML [Fowler, 2004] and OWL [OWL]. The exchange layer concerns the representation of context (e.g. XML, JSON). Finally, the functional layer is the actual implementation of the context model and can use different platforms such as Java or .Net. The model is interesting for its context management, including the users, but does not seem to include the concept of activity associated with the users.

2.3.3 Context-aware applications

4CORBA (Common Object Request Broker Architecture) uses an interface definition language (IDL) to specify the interfaces that objects will present to the outside world.

As mentioned by Dey [Dey, 2010], context-aware applications look at the who’s, what’s, where’s and when’s: they adapt their behaviour according to the context in which they run (user, activity, location, time). Here we review applications in two domains where contexts are useful for automatic behaviour adaptation: tour guides and healthcare. As we are interested in location context and activity, the following applications propose interesting solutions for tracking users in their environment and providing adapted services. As mentioned above, the first context-aware application was Want et al.’s ActiveBadge [Want et al., 1992]. Nowadays, this application would be put into the category of location-based services as it deals only with the user’s location. Location (physical and symbolic) is probably the most used context and many projects and devices were developed for this purpose. As mentioned by Hightower and Borriello [Hightower and Borriello, 2001],

”to serve us well, emerging mobile computing applications will need to know the physical location of things so that they can record them and report them to us”.

Tour guides are typically a type of application that uses the location context and the user profile, such as the language choice, to provide relevant information to the user.

The Cyberguide of the Georgia Institute of Technology [Abowd et al., 1997] proposes a stand-alone application preinstalled on a portable device with all information. The Cyberguide receives beacons with a location ID and retrieves locally stored relevant information. The main limitation of this application is the static behaviour of the system due to the local storage of information; moreover, it does not take any context other than location into consideration.

The GUIDE5 project of Lancaster University [Cheverst et al., 2000] proposes a context-sensitive tourist guide for visitors to the city of Lancaster. Users carry a laptop connected via WIFI to retrieve information. Based on user preferences and the user’s environment (location), the user obtains 1) broadcast information about the region or 2) specific user-requested information. GUIDE is an evolution of the Cyberguide but still does not use contexts other than the user profile and location.

Ghiani’s UbiCicero [Ghiani et al., 2008] proposes an environment which aims at supporting multi-device interaction and games, integrated with location-aware support exploiting RFID technology. The goal is to improve a user’s experience while visiting a museum by facilitating access to the available information and increasing the interactivity of the environment (Fig. 2.5). In this application, the concept of user-environment interaction is important and reinforces the user experience in a specific environment through a Ubicomp system. Also, the RFID-based tracking system corresponds to our approach to user tracking. This application does not, however, include user activity.

5http://www.guide.lancs.ac.uk

Figure 2.5: UbiCicero environment: the Museum Mobile Guide (Courtesy Ghiani et al.)

Another active research field is context-aware applications in health care [Bardram, 2004, Gong et al., 2005, Catarinucci et al., 2009]. The information needs of hospital workers are highly dependent on contextual information such as location, role, time of day, and activity [Favela et al., 2007]. There exist commercial solutions such as Cisco Context-Aware Healthcare [Cisco, 2009], which enables hospitals to integrate real-time contextual information such as the location and status of medical equipment and staff into the workflow. Cisco proposes a complete architecture supporting zone inventory and management, presence applications and condition monitoring. Those applications are interesting as they include the user activity in addition to other contexts such as the location and time of day. However, they often consider static or predefined activities rather than processing real-time motion in order to infer the user’s physical activity.

There exist many other domains where contexts are used, and the increasing popularity of smartphones running platforms such as Android or iOS (iPhone OS) and equipped with various sensors has boosted the development of applications such as Aloqa6 or the Jigsaw Beta Context Aware App7. On Android, ”Aloqa solves both the search and discovery issues by utilising a user’s context - their location, time, preferences and relationships - to notify them in real time of friends, places, events and entertainment opportunities around them without delays” and ”Jigsaw can determine our location and our actions no matter where we are and it would store the data for other applications to use”. This is now possible in mobile computing because of the rapid development of several miniaturised sensors embedded in handheld devices.

6http://www.aloqa.com/

7http://www.bukisa.com/articles/405690 download-jigsaw-beta-context-aware-app-for-symbian-and- iphone

2.3.4 Sensing contexts

Another important issue in context-aware computing is of course the acquisition and the processing of contexts in a broad sense. Integrating contexts into an application also means acquiring contextual data in different ways. We consider two main ways that are relevant in this thesis. One way uses the static description of the context, such as the user profile, for instance using the Microsoft Active Directory8 or even, at a certain level, social profiles available on Facebook or Twitter. The profile of a user can provide administrative or social information that can help a context-aware application to adapt its behaviour.

The second way is by sensing the environment and getting real-time environmental information. We have identified two types of sensing. The first one consists in using sensors placed in the user environment and the second one is the use of body or mobile device embedded sensors. Body sensors can provide personal information such as heart rate or blood pressure and are typically used in healthcare context-aware applications. Mobile devices are also equipped with many sensors such as a GPS, a light detector, an accelerometer, a compass and a thermometer, but also Bluetooth and WIFI used to locate people [Benford et al., 2005].

There is still work to be done at the level of data processing and representation [Gellersen et al., 2002, Hung et al., 2004]. In this thesis, we focus on location sensors such as RFID locators or wireless location techniques such as the one presented in the RedPin project9, an indoor positioning system providing room-level accuracy, developed at the ETH in Zurich [Bolliger, 2008]. RedPin is a fingerprint system providing symbolic identifiers such as the number or name of a room. This project is very interesting because it is a zero-configuration Java-based project running on Android and iOS mobile devices.
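The fingerprinting principle behind systems like RedPin can be sketched as a nearest-neighbour match between a measured vector of WiFi signal strengths and stored, room-labelled fingerprints. The sketch below is a deliberately naive illustration of that principle only; RedPin's actual matching and training are more elaborate:

```java
// Naive sketch of WiFi fingerprinting: match a measured RSSI vector
// against stored fingerprints and return the symbolic room label of the
// closest one. Illustration only; not RedPin's actual algorithm.
import java.util.Map;

public class FingerprintSketch {

    // Euclidean distance between two RSSI maps keyed by access-point id.
    static double distance(Map<String, Integer> a, Map<String, Integer> b) {
        double sum = 0;
        for (String ap : a.keySet()) {
            int d = a.get(ap) - b.getOrDefault(ap, -100); // -100 dBm if AP unseen
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Return the room label of the nearest stored fingerprint.
    static String locate(Map<String, Integer> measured,
                         Map<String, Map<String, Integer>> fingerprints) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (var e : fingerprints.entrySet()) {
            double d = distance(measured, e.getValue());
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        var db = Map.of(
            "room A", Map.of("ap1", -40, "ap2", -80),
            "room B", Map.of("ap1", -85, "ap2", -45));
        System.out.println(locate(Map.of("ap1", -42, "ap2", -78), db));
    }
}
```

Note that the output is a symbolic identifier (a room name) rather than coordinates, which matches the symbolic location context used in this thesis.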

We also consider the accelerometer as the main sensor for activity recognition, with algorithms and classifiers such as k-nearest neighbour (KNN), Hidden Markov Models (HMMs), naïve Bayes networks, decision trees and Support Vector Machines (SVM) [Mathie et al., 2003, Ravi et al., 2005, Long et al., 2009].
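As a minimal illustration of one of these classifier families, the sketch below applies k-nearest neighbour to two accelerometer-derived features (mean magnitude and variance of the acceleration signal); the feature values and activity labels are invented for the example:

```java
// Toy k-nearest-neighbour activity classifier over two accelerometer
// features. Training values and labels are invented for illustration.
import java.util.*;

public class KnnActivity {

    record Sample(double meanMag, double variance, String label) {}

    static String classify(double meanMag, double variance,
                           List<Sample> training, int k) {
        // Sort training samples by distance to the query point.
        List<Sample> sorted = new ArrayList<>(training);
        sorted.sort(Comparator.comparingDouble(s ->
                Math.hypot(s.meanMag() - meanMag, s.variance() - variance)));
        // Majority vote among the k nearest neighbours.
        Map<String, Integer> votes = new HashMap<>();
        for (Sample s : sorted.subList(0, k))
            votes.merge(s.label(), 1, Integer::sum);
        return Collections.max(votes.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        List<Sample> train = List.of(
            new Sample(9.8, 0.1, "still"), new Sample(9.9, 0.2, "still"),
            new Sample(11.0, 4.0, "walking"), new Sample(11.5, 4.5, "walking"),
            new Sample(14.0, 12.0, "running"), new Sample(14.5, 13.0, "running"));
        System.out.println(classify(10.0, 0.3, train, 3));
        System.out.println(classify(13.8, 11.0, train, 3));
    }
}
```

In a real system the features would be computed over a sliding window of accelerometer readings, and the cited work uses richer feature sets and larger training corpora.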

2.3.5 Ambient intelligence and smart environments

User environments are also benefiting from the rapid development and miniaturisation of sensors: they are becoming smart. The so-called ”smart environments” are combinations of network-enabled devices and applications capable of adapting their behaviour in order to provide context-aware services and to make the life of users more comfortable. As proposed by Das and Cook [Das and Cook, 2006], a smart environment can be defined as ”one that is able to autonomously acquire and apply knowledge about the environment and adapt to its inhabitant’s preferences and requirements in order to improve their experience”. They point out four main components that constitute smart environments and which are part of this thesis: smart devices and embedded systems, wireless mobile communication, a computing paradigm and a middleware.

Smart environments have become a dynamic field of research and many projects have been developed around this topic. Based on the survey by Endres et al. [Endres et al., 2005], some projects related to this research are considered here. Among them is the EasyLiving project from Microsoft [Brumitt et al., 2000], presented in Section 2.3.2, as well as

8http://technet.microsoft.com/en-us/library/bb727067.aspx

9http://www.redpin.org/


the Aware Home from Georgia Tech [Lesser et al., 1999]. The project provides a three-story, 5040-square-foot (470 m2) home that functions as a living laboratory for the interdisciplinary design, development and evaluation of applications such as Aging in Place, Technology Coach, Family Video Archive, PowerLine Positioning, Event Detection, Baby Steps and others. The interesting aspects of this project are its deployment in real conditions rather than only in a lab, and its work on detecting the user's "activity", which is one concern of our research.

The Aura10 [Sousa and Garlan, 2002] project from Carnegie Mellon University provides a digital 'halo' of computing and information, aiming to maximise the use of available resources while minimising the distraction of the user.

HP's CoolTown11 represents real-world objects (people, places, devices) through web pages. The web pages automatically update themselves when new information about the real-world entity they represent becomes available. Web servers are used for the representation of real-world entities, and sensing mechanisms (bar-code readers, infrared, etc.) for obtaining URLs from real-world objects and accessing their web representation. This project is probably the closest to our goal in the sense that it deals with the virtual representation of people and their relations with their environment. However, it is limited to web-based technology and does not seem to integrate the concept of activity.

MoCA [Viterbo et al., 2007] proposes a service-oriented solution for smart environments and focuses on applications which need to find appropriate services (such as a printing service) and which do not necessarily involve a user. Metaglue [Phillips, 1999, Coen et al., 1999] from MIT is part of the Oxygen project12 and is a framework which seems particularly suitable for developing distributed information systems spread over many devices and users, using agents as the basic underlying paradigm. Several aspects within those topics are investigated, such as security and authentication [Abdallah et al., 2007] and the monitoring of user activity [Hussain et al., 2009].

2.3.6 Discussion

In this section we have presented a concept which has changed the way applications are developed. Context-awareness has become a major trend in computer science since the 1990s, and we have reviewed three aspects that are related to this research.

The first aspect is the definition of context. We have seen that there is no universal definition of context, but we have identified types of context that are commonly used in the computer science community. We believe that the first context to consider is location.

To our knowledge, there are no context-aware applications that do not integrate the location of an entity (user, place or object) as context. The second type concerns the identity of the entity. The identity context helps to adapt the application behaviour according to the entity

10http://www.cs.cmu.edu/ aura/

11http://www.hpl.hp.com/techreports/2001/HPL-2001-22.pdf

12http://oxygen.lcs.mit.edu/Overview.html
