
Modeling multimodal processing in Simultaneous Interpreting


Academic year: 2022



Poster

Reference

Modeling multimodal processing in Simultaneous Interpreting

SEEBER, Kilian

Abstract

Simultaneous conference interpreting is a complex language-processing task combining language comprehension and language production in real time. One of the factors with the potential to compound the complexity of this task is the integration of visual information in the process. Whether it is facial expressions, gestures, images or text, the processing of visual information has been shown to interact with auditory information when processed simultaneously. The extent to which the availability of such visual information helps or hinders comprehension in simultaneous interpreting remains unclear.

This is an attempt to model these parameters.

SEEBER, Kilian. Modeling multimodal processing in Simultaneous Interpreting. In: 75e anniversaire de la FTI, Genève (Suisse), 2016.

Available at:

http://archive-ouverte.unige.ch/unige:135393

Disclaimer: layout of this document may differ from the published version.


Modeling multimodal processing in Simultaneous Interpreting

Kilian G. Seeber

FTI – University of Geneva, Faculté de traduction et d'interprétation

Summary

Simultaneous conference interpreting is a complex language-processing task combining language comprehension and language production in real time. One of the factors with the potential to compound the complexity of this task is the integration of visual information in the process. Whether it is facial expressions, gestures, images or text, the processing of visual information has been shown to interact with auditory information when processed simultaneously. The extent to which the availability of such visual information helps or hinders comprehension in simultaneous interpreting remains unclear. This is an attempt to model these parameters.

References

Groh, J. M. & Werner-Reiss, U. (2002). Visual and auditory integration. In Encyclopedia of the Human Brain (Vol. 4, pp. 739-752). San Diego: Academic Press.

Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000/01). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115.

Lu, S. A., Wickens, C. D., Sarter, N. B., Thomas, L. C., Nicolic, M. I., & Sebok, A. (2012). Redundancy gains in communication tasks: A comparison of auditory, visual, and redundant auditory-visual information presentation on NextGen flight decks. Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, 1476-1480.

Maragos, P., Gros, P., Katsamanis, A., & Papandreou, G. (2008). Cross-modal integration for performance improving in multimedia: A review. In Maragos, P., Potamianos, A., & Gros, P. (Eds.), Multimodal Processing and Interaction: Audio, Video, Text (pp. 3-48). New York: Springer.

Massaro, D. W. (1998). Perceiving Talking Faces: From Speech Perception to a Behavioral Principle. Cambridge, MA: MIT Press.

Seeber, K. G. (2007). Thinking outside the cube: Modeling language processing tasks in a multiple resource paradigm. Interspeech 2007, Antwerp, Belgium (pp. 1382-1385).

Seeber, K. G. (2011). Cognitive load in simultaneous interpreting: Existing theories – new models. Interpreting, 13(2), 176-204.

Seeber, K. G. (2015c). Cognitive load. In Pöchhacker, F., Grbic, N., Mead, P., & Setton, R. (Eds.), Encyclopedia of Interpreting Studies (pp. 60-61). New York: Routledge.

Van Wassenhove, V., Grant, K. W., & Poeppel, D. (2007). Temporal window of integration in auditory-visual speech perception. Neuropsychologia, 45, 598-607.

Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159-177.

Background and Theory

§ In natural conversation settings auditory speech is normally accompanied by visual speech information provided by the speaker

§ Visual information supports and facilitates the perception of auditory speech

§ Speech reading is robust in the comprehension of both spoken and sung text

§ The human processor seems capable of integrating different cues within one and the same modality (-> multicue integration) as well as between or among several modalities (-> multimodal integration) without much effort

§ Single-resource theory predicts early overload during multicue and multimodal integration

§ Multiple-resource theory predicts early overload only during multicue integration

Model: SI with visual (spatial) input

Model: SI with visual (verbal) input

Audio-visual information can be integrated even when the information coming from the two channels is not perfectly synchronous.

Audio-visual information can be integrated when presented between a 40 ms audio lead and a 240 ms audio lag, in other words within a window of just under 300 ms.

Leading auditory information hinders integration while leading visual information enhances integration.
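The asynchrony bounds above can be sketched as a simple check. This is a minimal illustration, not part of the poster: the sign convention (negative offsets denote an audio lead, positive offsets an audio lag) and the function name are assumptions made for the sketch.

```python
# Temporal window of audio-visual integration reported above
# (after Van Wassenhove et al., 2007): integration tolerates up to
# a 40 ms audio lead and a 240 ms audio lag.
AUDIO_LEAD_LIMIT_MS = -40  # negative offset: audio leads the visual signal
AUDIO_LAG_LIMIT_MS = 240   # positive offset: audio lags the visual signal

def within_integration_window(audio_offset_ms: float) -> bool:
    """Return True if an audio-visual offset falls inside the window."""
    return AUDIO_LEAD_LIMIT_MS <= audio_offset_ms <= AUDIO_LAG_LIMIT_MS

# Total width of the window: just under 300 ms.
WINDOW_WIDTH_MS = AUDIO_LAG_LIMIT_MS - AUDIO_LEAD_LIMIT_MS  # 280 ms

print(within_integration_window(-60))  # audio leads by 60 ms -> False
print(within_integration_window(100))  # audio lags by 100 ms -> True
print(WINDOW_WIDTH_MS)                 # 280
```

Under this reading, SI with text stays within the facilitation regime only while the interpreter's visual scanning of the text remains roughly aligned with the incoming speech.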

Ø A multimodal facilitation effect during SI with text is contingent on the synchronicity of the two signals (e.g., text and speech)

Ø Asynchrony of signals on different channels is expected to increase cognitive load

Ø The interpreter's inability to maintain this synchrony during SI with text would explain the perceived increase in task difficulty compared to SI without text

Discussion and Conclusion
