

2.2 Health care Monitoring

2.2.1 Automatic monitoring for health care

Over the last several years, much effort has been put into developing and employing a variety of sensors to monitor activities in the health care domain. These sensors include camera networks for people tracking [Sidenbladh and Black, 2001], cameras and microphones for activity recognition [Clarkson et al., 1998], and embedded sensors for activity detection [Wang et al., 2007], [Zouba et al., 2009], [Biswas et al., 2010b].

A change in the manner of performing the activities of daily living is a good indicator of declining health, so patient monitoring is receiving considerable interest from medical experts. Video technology can be used in this context to recognise users' activities. Nait-Charif and McKenna [Nait-Charif and McKenna, 2004] introduced a method to recognise activity by tracking the user's position with an overhead camera. They tracked the person with an ellipse and inferred 'unusual inactivity' when the person was detected as inactive outside a normal inactivity zone, such as a sofa. Diet is also an important factor affecting health. Kim et al. [Kim et al., 2010] proposed a design concept for an image-based dietary assessment tool implemented on a mobile phone. Food images are recorded using the phone's built-in camera and sent to a server, where the portion size of the meal is estimated (fig. 2.22).

This information is subsequently used to keep personal dietary records as a means of monitoring energy and nutrient intake. However, many issues are not addressed in this work. First, the image recording step is heavily constrained, as several recommendations are imposed on the users:

- Full image acquisition: foods should be captured large enough, and close to the centre of the image, so that they can easily be recognised during analysis.

- Highlight and shadow conditions: highlights and shadows hinder automatic analysis by making the recognition of food areas difficult.

- Spatial consistency: when several foods appear in the images, keeping them at the same position helps the analysis stage to automatically recognise foods across different images of the same eating occasion.

Figure 2.22: Overview of an image-based dietary assessment system in a server-client architecture [Kim et al., 2010].

Second, users are heavily involved in the recognition process, as they are asked to confirm and adjust the food tags whenever the analysis results contain errors.
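
To make this workflow concrete, the sketch below mimics, in Python, the server-client loop described above: the phone uploads a food photo, the server returns food labels and portion estimates, the user confirms or corrects the tags, and the confirmed portions feed an energy-intake record. All function names, labels and values are hypothetical placeholders for illustration, not the actual interface of the system by Kim et al.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FoodEstimate:
        label: str            # e.g. "rice"
        portion_grams: float  # estimated portion size

    # Hypothetical energy densities (kcal per gram) used to turn confirmed
    # portion estimates into an energy-intake record.
    KCAL_PER_GRAM = {"rice": 1.3, "chicken": 1.65}

    def analyse_on_server(image_bytes: bytes) -> List[FoodEstimate]:
        """Stand-in for the server-side analysis: segment the food items in the
        photo and estimate their portion sizes (dummy values here)."""
        return [FoodEstimate("rice", 150.0), FoodEstimate("chicken", 120.0)]

    def confirm_with_user(estimates: List[FoodEstimate]) -> List[FoodEstimate]:
        """The manual step criticised above: the user reviews the automatically
        generated tags and corrects errors. Here they are accepted unchanged."""
        return estimates

    def log_meal(image_bytes: bytes, diary: list) -> float:
        """Client-side flow: upload the photo, receive estimates, confirm them,
        store the record and return the estimated energy intake for this meal."""
        confirmed = confirm_with_user(analyse_on_server(image_bytes))
        diary.append(confirmed)
        return sum(KCAL_PER_GRAM.get(f.label, 0.0) * f.portion_grams
                   for f in confirmed)

    diary: list = []
    kcal = log_meal(b"<jpeg bytes>", diary)  # one eating occasion, 393 kcal here

The sketch also makes the two criticisms visible: the quality of the whole record depends on how the photo is taken, and the user remains in the loop at every meal.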

Fall detection is currently a well-researched topic, and sensor-based solutions are available and commercialised. The majority of such solutions are based on accelerometers, but one drawback of these systems is that the user must always wear the sensor: the system will not work if the user forgets to wear the device. Alternative solutions have been proposed, such as passive fall detection using floor vibration sensors (fig. 2.23), sound [Alwan et al., 2006]

or video-based monitoring [Foroughi et al., 2008]. In [Foroughi et al., 2008], the authors use a neural network for motion classification, but they only recognise short actions (e.g. sitting down) and do not deal with complex behaviours involving temporal relations.
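
To illustrate why accelerometer-based fall detection is simple to implement yet depends entirely on the device being worn, here is a minimal, purely illustrative Python sketch of the usual baseline: an acceleration spike followed by a period of near-rest is flagged as a possible fall. The thresholds and window length are invented for illustration and are not taken from any of the cited systems.

    import math

    # Hypothetical thresholds (illustrative only): an impact spike followed by
    # a period of very little movement is flagged as a possible fall.
    IMPACT_THRESHOLD_G = 2.5   # magnitude suggesting an impact, in g
    REST_THRESHOLD_G = 1.2     # magnitude close to 1 g means the body is nearly still
    REST_WINDOW = 50           # number of samples inspected after the impact

    def magnitude(sample):
        """Euclidean norm of a 3-axis accelerometer sample (ax, ay, az), in g."""
        ax, ay, az = sample
        return math.sqrt(ax * ax + ay * ay + az * az)

    def detect_fall(samples):
        """Return the index of a suspected fall in a list of (ax, ay, az)
        samples, or None: a large spike followed by sustained inactivity."""
        for i, sample in enumerate(samples):
            if magnitude(sample) > IMPACT_THRESHOLD_G:
                window = samples[i + 1:i + 1 + REST_WINDOW]
                if window and all(magnitude(s) < REST_THRESHOLD_G for s in window):
                    return i
        return None

Commercial devices add filtering, posture checks and user confirmation on top of this principle, but a forgotten or removed device still silently disables the detector, which is precisely the weakness motivating the passive alternatives above.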

In [Biswas et al., 2010a] the authors discuss an approach toward building a system for assisting people with dementia in their home. They use the concept of micro-context, i.e. information about objects and activities in a smart space, generated by sensors (e.g. RFID, accelerometers) in the ambient environment. In further work [Biswasa et al., 2011], the authors present the design of a prototype that deploys a sensor network for obtaining micro-context information. The sensors incorporated in the network are pressure sensors, RF tags, reed switches, and acoustic, motion and inertial sensors. For activity recognition, the authors adopt Dynamic Bayesian Networks to infer high-level activities.

Figure 2.23: Schematic Representation of the Working Principle of the Floor Vibration Based Fall Detector [Alwan et al., 2006].

In [Tolstikov et al., 2008] the authors worked on activity recognition and assistance with the activities of daily living (ADLs) of the elderly. The approach relies on multi-modal information fusion and primitive activity recognition at the low level. They chose a Dynamic Bayesian Network (DBN) model for the detection of activities. However, they only deal with primitive activities, which are very short in duration, and they do not address high-level activity recognition.
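
Both of the works above rely on Dynamic Bayesian Networks to infer activities from sensor readings. As a rough illustration of the kind of computation involved, the following Python sketch performs forward filtering in the simplest two-slice DBN (equivalent to a hidden Markov model) over hypothetical activities and a single binary kitchen motion sensor; the structure and probabilities are invented and do not reproduce the cited models.

    # Minimal forward filtering in a two-slice DBN (equivalently, an HMM):
    # a hidden activity state and one observed binary sensor reading per step.
    # All states and probabilities below are hypothetical.
    ACTIVITIES = ["sleeping", "cooking", "watching_tv"]

    # P(activity_t | activity_{t-1})
    TRANSITION = {
        "sleeping":    {"sleeping": 0.8, "cooking": 0.1, "watching_tv": 0.1},
        "cooking":     {"sleeping": 0.1, "cooking": 0.7, "watching_tv": 0.2},
        "watching_tv": {"sleeping": 0.2, "cooking": 0.2, "watching_tv": 0.6},
    }

    # P(kitchen motion sensor fires | activity)
    OBSERVATION = {"sleeping": 0.05, "cooking": 0.9, "watching_tv": 0.1}

    def forward_filter(belief, kitchen_motion):
        """One filtering step: predict with the transition model, then weight
        the prediction by the likelihood of the observed sensor reading."""
        predicted = {
            a: sum(belief[prev] * TRANSITION[prev][a] for prev in ACTIVITIES)
            for a in ACTIVITIES
        }
        likelihood = {
            a: OBSERVATION[a] if kitchen_motion else 1.0 - OBSERVATION[a]
            for a in ACTIVITIES
        }
        unnormalised = {a: predicted[a] * likelihood[a] for a in ACTIVITIES}
        total = sum(unnormalised.values())
        return {a: p / total for a, p in unnormalised.items()}

    # Start from a uniform belief and observe two motion events in the kitchen.
    belief = {a: 1.0 / len(ACTIVITIES) for a in ACTIVITIES}
    for reading in [True, True]:
        belief = forward_filter(belief, reading)
    print(max(belief, key=belief.get))  # most likely current activity: "cooking"

Real deployments use richer network structures and many sensors per time slice, but the limitation noted above remains: each hidden state covers a short, primitive activity, not a long-term behaviour with explicit temporal relations.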

The activity, physiological and dietary information detected by the systems described above can subsequently be fed back to users to promote health behaviour change, or sent to carers. These technologies can be part of a mechanism to support the self-management of people suffering from chronic problems such as cognitive deficiency.

Health care applications are typically based on solutions employing sensors either embedded in the environment or worn on the body. The most widely used sensors for health care applications are reported in Table 2.3, along with their domains of application. To reach the level of analysis required for health care applications (e.g. recognising activities), a large network of embedded sensors is generally needed; such systems are therefore usually costly to maintain, relatively obtrusive (e.g. sensors fitted to every cupboard door) and highly sensitive to the performance of the individual sensors. Another approach is to use body-worn sensors; however, user compliance with wearable systems is recognised to be poor.

Recently a number of researchers and companies have been looking at developing solutions based on video cameras and computer vision approaches. A monitoring system based on video cameras has potential advantages. In principle a single camera in a room could pick up most of the activities performed in the room and, consequently, could replace a large number of sensors.

Remarkably, the last five years have shown a large interest in video-based solutions.

Sensor               | Application domains                                         | Embedded/Body worn
Motion detector      | Fall detection, Activity monitoring, Security               | Embedded
Door open            | Activity monitoring, Security                               | Embedded
Electrical appliance | Activity monitoring, Safety                                 | Embedded
Microphone           | Fall detection                                              | Embedded
Accelerometer        | Fall detection, Activity monitoring                         | Body worn
RFID                 | Activity monitoring                                         | Body worn/Embedded
Temperature          | Safety                                                      | Embedded
Smoke sensor         | Safety                                                      | Embedded
Camera               | Fall detection, Activity monitoring, Security, Health care | Embedded

Table 2.3: Commonly used sensors and their domains of application.

One of the major reasons for this gain of interest is the cost of video cameras. The price of video cameras has dropped drastically within the last decade, and good-quality video cameras are now available at very low cost. Another important factor in favour of video technology is the maturity of computer-vision-based technologies. The latest research in computer vision, motivated by needs in domains such as security and surveillance, provides new means of interpreting the rich information delivered by video cameras.
