
HAL Id: hal-01061312

https://hal.archives-ouvertes.fr/hal-01061312

Submitted on 5 Sep 2014


Gesture-based Control of Physical Modeling Sound Synthesis: a Mapping-by-Demonstration Approach

Jules Françoise
STMS Lab IRCAM–CNRS–UPMC, 1 Place Igor Stravinsky, 75004 Paris, France
jules.francoise@ircam.fr

Norbert Schnell
STMS Lab IRCAM–CNRS–UPMC, 1 Place Igor Stravinsky, 75004 Paris, France
norbert.schnell@ircam.fr

Frédéric Bevilacqua
STMS Lab IRCAM–CNRS–UPMC, 1 Place Igor Stravinsky, 75004 Paris, France
frederic.bevilacqua@ircam.fr

ABSTRACT

We address the issue of mapping between gesture and sound for gesture-based control of physical modeling sound synthesis. We propose an approach called mapping by demonstration, allowing users to design the mapping by performing gestures while listening to sound examples. The system is based on a multimodal model able to learn the relationships between gestures and sounds.

Categories and Subject Descriptors

H.5.5 [Information Interfaces And Presentation]: Sound and Music Computing; J.5 [Arts and Humanities]: Music

Keywords

music, gesture, sound synthesis, physical modeling, HMM, multimodal

1. INTRODUCTION

Gestural interaction with audio and/or visual media has become ubiquitous. Many applications, including music performance, gaming, sonic interaction design, and rehabilitation, involve mapping from physical gestures to sound, requiring solutions for quick prototyping of gesture-based sound control strategies. The relationship between gesture and sound, often called mapping, has been recognized as one of the crucial aspects of such interactive systems, since its design shapes the interaction possibilities. In this paper, we address the issue of mapping for gesture-based control of physical modeling sound synthesis. This is a companion paper to a short paper presented at ACM Multimedia 2013 [3]; it demonstrates concrete examples of a general multimodal probabilistic model for gesture-based control of sound synthesis.

Physical modeling sound synthesis aims at simulating the acoustic behavior of physical objects. Gestural control of such physical models remains difficult, since the captured gestural parameters are generally different from the physical input parameters of the sound synthesis algorithm. For example, sensing gestures using accelerometers might pose difficulties for controlling the physical model of a bowed string, where force, velocity, and pressure are the expected inputs. The design of the mapping between gesture and sound is therefore complex, and can hardly be achieved through direct relationships between input and output parameters.

To tackle this issue, we propose an approach we call mapping by demonstration, based on machine learning, which allows users to craft control strategies by demonstrating gestures associated with sound examples. The system therefore supports a design of the mapping driven by listening and interaction: in many cases, the training examples are defined by gestures performed while listening to sound examples, as proposed by Caramiaux [1] or Fiebrink [2].

Our approach places the user at the center of an interaction loop integrating training and performance. During training, sounds can be designed using a graphical editor. By performing gestures while listening to the sounds, the user feeds the system with examples of the mapping he intends to create, therefore translating his intentions through the direct demonstration of specific control strategies. During performance, the learned mapping can be used for sound control, allowing the user to explore the control strategies defined by the training examples.

2. APPLICATION OVERVIEW

2.1 Gesture capture

We use the Modular Musical Objects (MO) for gesture capture [5]. These wireless devices include an accelerometer and a gyroscope, and can be integrated into various objects, or extended with additional sensors, for example piezoelectric sensors.

Figure 1: MO: Modular Musical Objects
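As an illustration only, the following Python sketch shows how raw frames from such a 3-axis accelerometer and gyroscope could be packaged and lightly smoothed into 6-dimensional feature vectors before being passed to the mapping model of section 2.3. The MO transport protocol is not detailed here, and the names MotionFrame and GestureFeatureExtractor are hypothetical.

```python
import numpy as np
from collections import deque
from dataclasses import dataclass

@dataclass
class MotionFrame:
    """One sample from a 3-axis accelerometer and a 3-axis gyroscope."""
    accel: tuple  # (ax, ay, az)
    gyro: tuple   # (gx, gy, gz)

class GestureFeatureExtractor:
    """Turns raw motion frames into smoothed 6-D feature vectors
    using a short moving-average (low-pass) window."""

    def __init__(self, window: int = 5):
        self.window = deque(maxlen=window)

    def process(self, frame: MotionFrame) -> np.ndarray:
        self.window.append(np.array(frame.accel + frame.gyro, dtype=float))
        # average over the last few frames to attenuate sensor noise
        return np.mean(np.stack(list(self.window)), axis=0)
```

Each smoothed vector would then feed the multimodal model described in section 2.3.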


2.2 Modal synthesis

Our system uses Modalys, software dedicated to modal synthesis, i.e. simulating the acoustic response of vibrating structures under an external excitation. It allows building virtual instruments by combining modal elements – e.g. plates, strings, membranes – with various types of connections and exciters – e.g. bows, hammers, etc. Each model is governed by a set of physical parameters – e.g. speed, position and pressure of a bow. Specific sounds and playing modes can be created by designing time profiles combining these parameters.
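Modalys is controlled through its own models and parameters, which are not reproduced here. As a hedged illustration of the principle only, the sketch below renders a struck modal resonator as a bank of exponentially decaying sinusoids, and samples a breakpoint time profile for a continuous exciter parameter; all mode frequencies, decay times, gains, and breakpoints are made-up values.

```python
import numpy as np

def modal_impulse_response(freqs, decays, gains, sr=44100, dur=1.0):
    """Sum of exponentially damped sinusoids: the response of a modal
    resonator to an impulsive excitation (e.g. a hammer strike).
    freqs  -- mode frequencies in Hz
    decays -- time in seconds for each mode to decay by 60 dB
    gains  -- linear amplitude of each mode
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, t60, g in zip(freqs, decays, gains):
        alpha = np.log(1000.0) / t60      # decay rate giving -60 dB after t60 seconds
        out += g * np.exp(-alpha * t) * np.sin(2 * np.pi * f * t)
    return out / max(np.max(np.abs(out)), 1e-12)

def time_profile(breakpoints, dur, rate=100):
    """Sample a breakpoint function, e.g. a bow-pressure profile,
    at a fixed control rate (breakpoints: list of (time, value))."""
    times, values = zip(*breakpoints)
    t = np.arange(int(rate * dur)) / rate
    return np.interp(t, times, values)

# Made-up modes of a small plate, and a made-up bow-pressure profile
plate = modal_impulse_response(freqs=[220.0, 530.0, 910.0, 1480.0],
                               decays=[1.2, 0.8, 0.5, 0.3],
                               gains=[1.0, 0.6, 0.4, 0.25])
bow_pressure = time_profile([(0.0, 0.0), (0.2, 0.8), (1.0, 0.3)], dur=1.0)
```

In the actual system, such time profiles are drawn in the graphical editor and drive the Modalys models directly.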

2.3 Gesture–sound mapping

Our goal is to learn the mapping between gestures, captured with accelerometers, and specific time profiles of the control parameters of the physical models. We adopt a multimodal perspective on gesture–sound mapping based on a probabilistic multimodal model. This approach is inspired by recent work in other fields of multimedia, such as speech-driven character animation [4]. The system is based on a single multimodal Hidden Markov Model (HMM) representing both gesture and sound parameter morphologies. The model is trained with one or multiple gesture performances associated with sound templates. It captures the temporal structure of gesture and sound as well as the variations that occur between multiple performances. During performance, the model is used to predict, in real time, the sound control parameters associated with a new gesture. Additional details about the model and its implementation can be found in [3].
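The full model and its implementation are described in [3]. As a simplified, hedged sketch of the underlying idea only, the code below assumes a trained joint Gaussian HMM over concatenated gesture and sound feature vectors (its parameters are taken as given) and predicts sound control parameters frame by frame: the forward variable is updated from the gesture dimensions only, and the sound output is the posterior-weighted combination of the states' sound means.

```python
import numpy as np

class MultimodalHMMRegressor:
    """Simplified sketch of multimodal HMM regression: each hidden state
    carries a diagonal Gaussian over the joint [gesture, sound] vector.
    At prediction time, the forward variable is updated from the gesture
    dimensions only, and the sound output is the posterior-weighted
    combination of the states' sound means."""

    def __init__(self, startprob, transmat, means, variances, n_gesture_dims):
        self.startprob = np.asarray(startprob)   # (K,)
        self.transmat = np.asarray(transmat)     # (K, K)
        self.means = np.asarray(means)           # (K, D_gesture + D_sound)
        self.variances = np.asarray(variances)   # (K, D_gesture + D_sound)
        self.g = n_gesture_dims
        self.alpha = None                        # running forward variable

    def _gesture_likelihood(self, x):
        """Diagonal-Gaussian likelihood of one gesture frame under each state."""
        mu = self.means[:, :self.g]
        var = self.variances[:, :self.g]
        logp = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)
        return np.exp(logp - logp.max())         # common rescaling, removed by normalization

    def reset(self):
        """Start a new gesture."""
        self.alpha = None

    def predict(self, gesture_frame):
        """Update the forward variable with one gesture frame and return
        the predicted sound control parameters."""
        b = self._gesture_likelihood(np.asarray(gesture_frame, dtype=float))
        if self.alpha is None:
            self.alpha = self.startprob * b      # initial step of the forward algorithm
        else:
            self.alpha = (self.alpha @ self.transmat) * b
        self.alpha = self.alpha / self.alpha.sum()
        # read out the sound part of the state means, weighted by the state posterior
        return self.alpha @ self.means[:, self.g:]
```

In the actual system, the joint HMM is trained from the recorded demonstrations, i.e. gesture performances aligned with the designed sound parameter profiles, and the model is richer than this sketch; see [3] for the full formulation.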

2.4 Workflow

The workflow of the application is an interaction loop integrating a training phase and a performance phase. It is illustrated in figure 2, and a screenshot of the software is depicted in figure 3. In the training phase, the user can draw time profiles of control parameters of the physical models to design particular sounds. Each of these segments can be visualized, modified, and played using a graphical editor (top left of figure 3). Then, the user can perform one or several demonstrations of the gesture he intends to associate with the sound example (figure 2a). Gesture and sound are recorded to a multimodal data container for storage, visualization, and editing (bottom left of figure 3). Optionally, segments can be manually altered using the user interface. The multimodal HMM representing gesture–sound sequences can then be trained using several examples. During the performance phase, the user can gesturally control the sound synthesis. The system allows for the exploration of all the parameter variations that are defined by the training examples. Sound parameters are predicted in real time to provide the user with instantaneous audio feedback (figure 2b). If needed, the user can switch back to training and adjust the training set or model parameters.

Figure 2: Application workflow. (a) Training: sounds are designed using a graphical editor, and reference gestures can be recorded while listening. (b) Performance: the model is used to predict the sound parameters associated with a live gesture.

Figure 3: Screenshot of the system. (1) Graphical Editor. (2) Multimodal data container. (3) Multimodal HMM: control panel and results visualization. (4) Sound synthesis.
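As a small end-to-end usage example of the performance phase described above, and reusing the hypothetical MotionFrame, GestureFeatureExtractor, and MultimodalHMMRegressor sketches from sections 2.1 and 2.3, the code below streams simulated gesture frames through the regressor; all numeric values are placeholders rather than trained parameters.

```python
import numpy as np

K, G, S = 3, 6, 2                        # states, gesture dims, sound dims
rng = np.random.default_rng(0)

# Placeholder HMM parameters (in the real workflow these would come from
# training on the recorded demonstrations).
model = MultimodalHMMRegressor(
    startprob=np.full(K, 1.0 / K),
    transmat=np.full((K, K), 1.0 / K),
    means=rng.normal(size=(K, G + S)),
    variances=np.ones((K, G + S)),
    n_gesture_dims=G,
)

extractor = GestureFeatureExtractor()
for _ in range(100):                     # simulated live sensor frames
    frame = MotionFrame(accel=tuple(rng.normal(size=3)),
                        gyro=tuple(rng.normal(size=3)))
    params = model.predict(extractor.process(frame))
    # `params` (here 2 values) would be sent to the physical model,
    # e.g. as bow pressure and bow velocity.
```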

3. ACKNOWLEDGMENTS

We acknowledge support from the French National Research Agency (ANR project Legos 11 BS02 012).

4. REFERENCES

[1] B. Caramiaux. Études sur la relation geste-son en performance musicale. PhD dissertation, Université Pierre et Marie Curie, 2012.

[2] R. Fiebrink, P. R. Cook, and D. Trueman. Play-along mapping of musical controllers. In Proceedings of the International Computer Music Conference, 2009.

[3] J. Françoise, N. Schnell, and F. Bevilacqua. A Multimodal Probabilistic Model for Gesture-based Control of Sound Synthesis. In Proceedings of the 21st ACM International Conference on Multimedia (MM'13), Barcelona, Spain, 2013.

[4] G. Hofer. Speech-driven animation using multi-modal hidden Markov models. PhD dissertation, University of Edinburgh, 2009.

[5] N. Rasamimanana, F. Bevilacqua, N. Schnell, E. Fléty, and B. Zamborlin. Modular Musical Objects: Towards Embodied Control of Digital Music. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '11), pages 9–12, 2011.
