HAL Id: hal-01061228
https://hal.archives-ouvertes.fr/hal-01061228
Submitted on 5 Sep 2014


Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

SIGGRAPH 2014, August 10 – 14, 2014, Vancouver, British Columbia, Canada. 2014 Copyright held by the Owner/Author.

ACM 978-1-4503-2961-3/14/08

MaD: Mapping by Demonstration for Continuous Sonification

Jules Françoise, Norbert Schnell, Frédéric Bevilacqua∗
STMS Lab, IRCAM–CNRS–UPMC, Paris, France.

1 Introduction

Gesture has become a ubiquitous input modality in computer systems, via multi-touch interfaces, cameras, or inertial measurement units. Whereas gesture control often relies on the visual modality for feedback, sound feedback has been significantly less explored. Nevertheless, sound can provide both discrete and continuous feedback on gestures and the results of their action. We present here a system that significantly simplifies and speeds up the design of continuous sonification. Our approach is based on the concept of mapping by demonstration. This methodology enables designers to create motion-sound relationships intuitively, avoiding the complex programming usually required by common approaches. Our system relies on multimodal machine learning techniques combined with hybrid sound synthesis. We believe that the system we describe opens new directions in the design of continuous motion-sound interactions. We have developed prototypes in several fields: performing arts, gaming, and sound-guided rehabilitation.

2 Technology

Our system is innovative on two levels. The first concerns the user and/or designer: the system allows rapid prototyping of motion-to-sound mappings by demonstration. The mapping is automatically learned by the system when movement and sound examples are jointly recorded (see Figure 1). In particular, our applications focus on using vocal sounds, recorded while performing an action, as primary material for interaction design. The second level is technical and concerns the integration of specific probabilistic models of the mapping (Multimodal and Hierarchical Hidden Markov Models [Françoise et al. 2013; Françoise et al. 2012]) with hybrid sound synthesis models. In particular, the system supports the simultaneous use of granular, concatenative, and physical modeling synthesis, which offer a diverse set of sound qualities. Importantly, the system is independent of the type of motion/gesture sensing device and can be used with different sensors such as cameras, contact microphones, and inertial measurement units.
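The core idea of learning a mapping from jointly recorded examples can be illustrated with a deliberately simplified sketch. Instead of the Multimodal Hidden Markov Models used in the actual system, the sketch below fits a single joint Gaussian over concatenated motion and sound feature frames, then conditions on a new motion frame to predict sound synthesis parameters — a minimal form of multimodal regression. All function names, dimensions, and features are illustrative, not part of the described system:

```python
import numpy as np

def fit_joint_gaussian(motion, sound):
    """Fit a joint Gaussian over concatenated (motion, sound) frames.

    motion: (T, Dm) array, sound: (T, Ds) array -- one demonstration,
    with movement and sound features recorded jointly, as in
    mapping by demonstration.
    """
    joint = np.hstack([motion, sound])
    mu = joint.mean(axis=0)
    cov = np.cov(joint, rowvar=False)
    return mu, cov

def condition_on_motion(mu, cov, x, dm):
    """Predict sound parameters for a new motion frame x by Gaussian
    conditioning: E[s | m = x] = mu_s + C_sm C_mm^{-1} (x - mu_m)."""
    mu_m, mu_s = mu[:dm], mu[dm:]
    C_mm = cov[:dm, :dm]            # motion-motion covariance
    C_sm = cov[dm:, :dm]            # sound-motion cross-covariance
    return mu_s + C_sm @ np.linalg.solve(C_mm, x - mu_m)
```

At performance time, `condition_on_motion` would be called once per incoming sensor frame and its output sent to the synthesizer. The real system replaces this single Gaussian with temporal probabilistic models, but the conditioning step conveys why jointly recorded examples are enough to define a continuous mapping.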

3 Applications

Our system is currently applied in different fields, which demonstrates its versatility. First, it is used in applications that require learning specific movements, such as performing arts. As shown in the videos¹, the use of vocalization is particularly interesting since it is inherently used in these activities to support action or describe movement. Second, the system is currently being evaluated for rehabilitation programs that could benefit from auditory feedback. In this case, the use of either vocal sounds or physically inspired models allows for the design of motion-sound interaction metaphors that should facilitate movement training. Third, we believe that our system can provide vocally impaired persons with alternative ways to express themselves sonically through movement.

∗{jules.francoise, norbert.schnell, frederic.bevilacqua}@ircam.fr

Figure 1: Mapping by Demonstration workflow. Training: jointly recorded movement and sound examples form a training set used to estimate the parameters of a probabilistic model. Performance: input movement drives the model, whose output parameters control the synthesizer.
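The temporal side of the mapping — the role played by the Hidden Markov Models cited above — can be sketched with a toy left-to-right HMM that tracks a performer's progress through a single recorded gesture template. This is not the authors' hierarchical model, only a minimal illustration, under simplifying assumptions (one scalar feature, one Gaussian state per template frame), of how a forward pass yields a continuous position that could index time-based synthesis such as granular playback:

```python
import numpy as np

def track_progress(template, stream, sigma=0.05, p_stay=0.5):
    """Track progress through a recorded gesture template.

    Each template frame is a state of a left-to-right HMM with a Gaussian
    observation model (std `sigma`); at each step the state either stays
    or advances by one. The forward pass returns, per input frame, the
    expected normalized position (0..1) within the template.
    """
    n = len(template)
    alpha = np.zeros(n)
    alpha[0] = 1.0                  # start at the beginning of the gesture
    positions = []
    for x in stream:
        # likelihood of the observation under each state's Gaussian
        lik = np.exp(-0.5 * ((x - template) / sigma) ** 2)
        # transition: stay in the same state or advance to the next one
        advanced = np.concatenate(([0.0], alpha[:-1]))
        alpha = (p_stay * alpha + (1.0 - p_stay) * advanced) * lik
        alpha /= alpha.sum()        # normalize the forward distribution
        positions.append(np.arange(n) @ alpha / (n - 1))
    return np.array(positions)
```

Feeding the template back in as the input stream makes the estimated position sweep from 0 toward 1, which is exactly the kind of continuous temporal parameter that can sonically guide a learner through a movement sequence.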

4 Presentation at SIGGRAPH

We propose two gaming scenarios that allow attendees to try the system in a playful manner. In the first game (Sonic Stories), players first record vocal sketches that they can then use to communicate messages through their gestures. A set of cards symbolizing a variety of characters, actions, and objects invites players to invent stories together. The goal of the second game (Sonic Choreographies) is to challenge the players' ability to learn complex movements with the help of auditory feedback. The players are sonically guided through movement sequences that become more and more challenging from one level to the next. How fast can you learn?

Acknowledgements

We acknowledge support from the ANR project Legos 11-BS02-012 and the Labex SMART (ANR-11-LABX-65).

References

FRANÇOISE, J., CARAMIAUX, B., AND BEVILACQUA, F. 2012. A Hierarchical Approach for the Design of Gesture-to-Sound Mappings. In Proceedings of the 9th Sound and Music Computing Conference, 233–240.

FRANÇOISE, J., SCHNELL, N., AND BEVILACQUA, F. 2013. A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis. In Proceedings of the 21st ACM International Conference on Multimedia (MM'13), 705–708.

¹Additional material and examples can be found on:
