(1)

MINISTÈRE DE LA DÉFENSE

Integrating human / machine interaction into robot control architectures for defense applications

D. Dufourd (DGA/SPART), A. Dalgalarrondo (DGA/CEV)

(2)

Outline of the presentation

• Introduction

• Operational context for HRI (Human / Robot Interaction)

• Overview of existing HRI mechanisms within robot control architectures (defense applications)

• Description of our work concerning HARPIC

• Perspectives and open issues about HRI within software architectures

• Conclusion

(3)

Introduction

Given recent advances in robotics technologies, the range of operational missions for robots is getting wider:

• reconnaissance and scout missions

• surveillance, target acquisition and illumination

• demining, breaching, security missions (EOD, IED neutralization…)

• supply delivery, mule applications, obstacle clearing, retrieval of injured people, telemanipulation

• communication relays, diversion...

(4)

French Defense robotics projects

AMX30B2DT, Mini-RoC, PEA ROB, TAROT / BOA, MMSR, SYDERA, SYRANO (+ PEAs Minidrones, SUAV, Evolution, Action, OISAU...)

(5)

Why introduce HRI into military operations?

• Basic teleoperation may induce a heavy workload on the operator (e.g. TMR robots at the WTC disaster)
-> need for autonomy

• However, technology is not mature enough to enable full autonomy during complex missions + autonomy is not desirable in some operational situations
-> need to keep humans in the loop

• Humans and robots have orthogonal strengths
-> the best solution seems to be a good collaboration between humans and robots

(6)

Existing control modes

[Diagram: eight control modes arranged along two opposing axes, operator workload vs. autonomy — teleoperation, shared control, traded control, behavior-based teleoperation, supervisory control, collaborative control, mixed-initiative control, adjustable autonomy.]

(7)

Existing demonstrators (defense applications)

• Most of them include various control modes

• Mostly based on well-known architectures

• Examples:
- Swedish small UGV: sliding autonomy - SSS
- TMR (INEEL-ARL): safeguarded teleoperation / sliding autonomy - subsumption architecture (at the lower level)
- PRIMUS-D (UBM-Dornier): teleoperation + behaviors - 4D/RCS
- Demo III (ARL-NIST): sliding autonomy - 4D/RCS
- MARS (CMU): cooperative control - message-based architecture
- TMR (Georgia Tech): multi-robot schema-based control - AuRA (+ MissionLab)

• But difficult to compare - more feedback needed

(8)

Description of our work on HARPIC (CEP Arcueil)

Goals:

- Investigate different ways to control a mobile robot, using various levels of autonomy
- Integrate these various modes into an operator control unit suited for a PDA
- Demonstrate the potential of the control strategies
- Get feedback from operational forces
- Express requirements for future systems

(9)

The selected control modes

We have selected the following control modes for our application:

→ manual control (total control) - teleoperation

→ assisted control (anti-collision and obstacle avoidance ensured by the robot) - shared control (safeguarded teleoperation)

→ waypoint or goal-directed navigation - supervisory teleoperation

→ behavior-based autonomy (wall following, corridor following...) - behavior-based teleoperation

→ sequences of behaviors - full autonomy

→ different modes within the same system allowing dynamic swapping - adjustable autonomy
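As a rough illustration of the last item, the dynamic swapping behind adjustable autonomy can be sketched as an enumeration of the five modes plus a controller that accepts swap requests at run time. The class and member names below are invented for illustration, not taken from HARPIC.

```cpp
#include <cassert>

// Hypothetical sketch (not HARPIC's actual code): the selected control
// modes as an enum, with a controller allowing dynamic swapping between
// them at run time (adjustable autonomy).
enum class ControlMode {
    Manual,          // teleoperation: total operator control
    Assisted,        // shared control: anti-collision / obstacle avoidance
    GoalDirected,    // waypoint or goal-directed (supervisory) navigation
    BehaviorBased,   // wall following, corridor following...
    FullyAutonomous  // sequences of behaviors
};

class ModeController {
public:
    ControlMode mode() const { return mode_; }
    // Dynamic swapping: the operator (or the architecture itself) may
    // request any mode at any time; a real system would also check
    // preconditions, e.g. that a map exists before GoalDirected.
    void requestMode(ControlMode m) { mode_ = m; }
private:
    ControlMode mode_ = ControlMode::Manual;  // safest default
};
```

In a real architecture the swap request would come either from the operator interface or from the robot's own evaluation of the situation, which is exactly what makes the autonomy "adjustable".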

(10)

Constraints

→ Software adapted to multi-robot applications

→ Allow simple access to robot perceptions

→ Small-size interface (fit to the PDA screen): 320x240 pixels

→ Use 2 laptops with wireless LAN (1 on the robot and 1 for the operator)

→ The agents of the multi-agent control architecture can be executed on either laptop

→ The interface must be compatible with our simulator and work on a robot equipped with a color camera, sonar range sensors and a laser range finder

(11)

Underlying control architecture: HARPIC

• HARPIC is a multi-agent hybrid architecture which allows communication and task allocation between operators and robots.

• The interface itself is an agent of the architecture.

(12)

HARPIC implementation

• Multi-agent formalism + object-oriented language (C++) -> modularity, encapsulation, scalability

• POSIX threads -> parallel execution

• Common structure for all agents (communication)
- A special Administrator agent records information about the others -> modularity
- Two specific agents bind the architecture to the hardware (software interface to the physical robot + image acquisition) -> modularity
- Perception and action agents
- Attention agent
- Behavior selection agent
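The common agent structure and the Administrator registry described above can be sketched roughly as follows. All class and member names are invented for illustration, and std::thread stands in for the POSIX threads the slides mention.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <string>
#include <thread>
#include <vector>

// Illustrative sketch of a common agent structure: each agent runs its
// own loop in a separate thread, and a special Administrator agent
// records information about the others so new agents can be added
// without touching existing ones (modularity). Not HARPIC's actual code.
class Agent {
public:
    explicit Agent(std::string name) : name_(std::move(name)) {}
    virtual ~Agent() { stop(); }
    const std::string& name() const { return name_; }
    void start() {
        running_ = true;
        thread_ = std::thread([this] { while (running_) step(); });
    }
    void stop() {
        running_ = false;
        if (thread_.joinable()) thread_.join();
    }
protected:
    virtual void step() = 0;  // one cycle of the agent's control loop
private:
    std::string name_;
    std::atomic<bool> running_{false};
    std::thread thread_;
};

// Example agent that just idles each cycle (stand-in for perception,
// action, attention or behavior-selection agents).
class IdleAgent : public Agent {
public:
    using Agent::Agent;
    ~IdleAgent() override { stop(); }  // join before the vtable is torn down
private:
    void step() override {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
};

// The Administrator keeps a registry of the other agents.
class Administrator {
public:
    void record(Agent* a) { agents_.push_back(a); }
    std::size_t count() const { return agents_.size(); }
private:
    std::vector<Agent*> agents_;
};
```

The one-thread-per-agent design is what gives the parallel execution the slide claims, while the shared base class provides the common communication structure.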

(13)

Mode: assisted remote control

[Diagram: the operator's speed and direction commands pass through safeguarded teleoperation (collision avoidance) and reflexive teleoperation (using contextual information) before reaching the robot; SLAM runs alongside.]
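A minimal sketch of the collision-avoidance filter behind safeguarded teleoperation: the operator's speed command is scaled down as the nearest obstacle reported by the range sensors (sonar or laser) gets closer, and forced to zero inside a safety radius. The two distance thresholds are made-up values, not HARPIC's.

```cpp
#include <cassert>

// Illustrative safeguarded-teleoperation filter (assumed behavior, not
// HARPIC's actual implementation).
double safeguardSpeed(double commandedSpeed, double nearestObstacleM) {
    const double stopDist = 0.5;  // full stop inside this range (m), assumed
    const double slowDist = 2.0;  // start braking inside this range (m), assumed
    if (nearestObstacleM <= stopDist) return 0.0;
    if (nearestObstacleM >= slowDist) return commandedSpeed;
    // Linear ramp between the stop and slow-down thresholds.
    double scale = (nearestObstacleM - stopDist) / (slowDist - stopDist);
    return commandedSpeed * scale;
}
```

The operator keeps full directional control; the robot only vetoes or attenuates commands that would cause a collision, which is what distinguishes shared control from plain teleoperation.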

(14)

Mode: goal-oriented navigation

• Automatic planning and trajectory following

• Localization and mapping still running

• The goal point can be changed anytime (selected either in the map or in the image)

(15)

Mode: autonomous behaviors

The operator selects, activates or stops behaviors:
- obstacle avoidance
- wall following
- corridor following
- ...

The operator also selects, activates or stops behavior sequences.

(16)

Mode: multirobot

→ The operator can connect to and control different robots.

(17)

Experiments

JNRR'03:
• Navigation in a machine-tool hall about 60 m × 60 m
• Trajectories with loops and abrupt heading changes

Other demonstrations: CEP, Eurosatory, Le Bourget, RND, colloque AAT…

(18)

Perspectives concerning HARPIC

• Improvement of existing modules, implementation of new behaviors (both for navigation and mission purposes) and multi-robot cooperation

• Development of semi-autonomous transition mechanisms between modes:
- using a priori evaluation (fixed rules for transition)
- using on-line evaluation of the quality of perception or behaviors, learning

• Introduction of HRI at other levels of the architecture: assistance to perception, attention and action…
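A fixed-rule (a priori) transition between modes could look like the following sketch: when an on-line estimate of perception quality drops below a threshold, the architecture degrades from autonomous behavior to safeguarded teleoperation, and recovers when quality returns. The threshold value and mode names are assumptions for illustration, not taken from HARPIC.

```cpp
#include <cassert>

// Hypothetical a priori transition rule between two of the control modes.
enum class Mode { AutonomousBehavior, SafeguardedTeleop };

Mode nextMode(Mode current, double perceptionQuality) {
    const double minQuality = 0.4;  // assumed acceptance threshold in [0, 1]
    if (current == Mode::AutonomousBehavior && perceptionQuality < minQuality)
        return Mode::SafeguardedTeleop;   // hand control back to the operator
    if (current == Mode::SafeguardedTeleop && perceptionQuality >= minQuality)
        return Mode::AutonomousBehavior;  // resume autonomy
    return current;
}
```

Replacing the fixed threshold with a learned or on-line-evaluated criterion is precisely the second, more ambitious option listed above.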

(19)

General open issues about HRI

• How can we modify existing control architectures to introduce efficient HRI? What kind of HRI is supported by existing architectures? Are some architectures more adapted to HRI than others?

• Is it possible to build general standard architectures allowing any kind of HRI? Do we need to distinguish different classes of architectures dedicated to different HRI modalities?

• What kind of software technologies could support HRI development?

(20)

HRI within robot control architectures

• Any function, any level may be concerned by HRI: since many architectures are oriented towards full autonomy, they may not support the whole range of HRI

• Various modes (the 8 modes above, ANS, ALFUS…)

• Constraints and limitations for HRI design:
- Security
- Communication bandwidth and real-time constraints
- Ergonomics and human capabilities
- Hardware
- Dependence on missions, platforms, payloads and interface devices

(21)

New software technologies for HRI

• Ubiquitous, ambient and pervasive computing for soldiers

• Multi-agent systems:
- Autonomy, interaction, organization, emergence…
- Combine several emerging technologies: distributed computing (grid computing), AI, constraint programming, data mining, planning, scheduling, the semantic web…
- Decentralized, open, pro-active (and bottom-up) design
-> extensibility, portability, robustness, fault tolerance…
-> interesting, especially for network-centric warfare

• Multi-threading and object-oriented languages

• Integration platforms: specification, validation, simulation…

(22)

HRI scalability and standardization

• Scalability (extension capabilities) – cf. NCW:
- Specialization, communication and hierarchy: globally OK for software systems
- Adaptation, negotiation, re-organization: more challenging (cf. communication services, "Autonomic Computing"…?)
- What kind of information and representations should be used?
- Future network-centric systems ≡ Web services???

• Standardization:
- Beyond using standard technologies: modular design, scalability of components…
- Is it possible to build generic logical views (components + communications / relationships) for architectures including any kind of HRI?

(23)

Conclusion

• Open and challenging issues for HRI:
- How to develop efficient and scalable HRI?
- How to introduce HRI into existing systems?
- How to support HRI development for future systems (software technologies, standardization…)?
-> Even though today's research and new technologies are promising, a major effort is still needed to meet tomorrow's needs in defense applications

• Varied and challenging contexts in the field of defense robotics (variety of platforms, missions, environments)

(24)

Thank you for your attention!

Any questions ?
