
Citation for published version (Harvard): Gabarró-López, S. & Meurant, L. 2014, 'When nonmanuals meet semantics and syntax: towards a practical guide for the segmentation of sign language discourse', 6th Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel, Reykjavik, Iceland, 31/05/14.


We would like to thank Christophe de Clerk, Susana Sánchez, Aurélie Sinte and Bruno Sonnemans for their collaboration, as well as Gemma Barberà for her valuable comments. This research is funded by an F.R.S.-FNRS Research Fellow Grant, an F.R.S.-FNRS Outward Mobility Grant for Short Stays and the F.R.S.-FNRS Incentive Grant for Scientific Research no. F.4505.12.

When nonmanuals meet semantics and syntax: a practical guide for the segmentation of sign language discourse

Sílvia Gabarró-López & Laurence Meurant

silvia.gabarro@unamur.be laurence.meurant@unamur.be

Introduction

Previous studies on the segmentation of signed productions (Crasborn, 2007; Fenlon et al., 2007; Hansen & Heßmann, 2007; Herrmann, 2009; Hochgesang, 2009; Jantunen, 2007; Nicodemus, 2006; 2009) take a prosodic perspective to work out how manual and non-manual cues participate in the syntactic organisation of sign languages (SLs).

Our question: Is there a stable set of cues that can be identified and shared as boundary indicators of discourse segments?

Our goal: create a set of guidelines for discourse segmentation that can be shared among researchers of different SLs, across different SL corpora and within the same SL corpus.

How? By checking whether a stable set of criteria can be extracted from the spontaneous segmentation performed by signers of French Belgian Sign Language (LSFB).

Methodology

- A 1-hour corpus from a single signer, containing 2 argumentative (A1 & A2), 2 explicative (E1 & E2), 2 metalinguistic (M1 & M2) and 2 narrative (N1 & N2) discourses.

- 3 deaf (2 native and 1 non-native) and 2 hearing non-native LSFB signers were involved in the study as segmenters.

- Two-stage process:

1. Copy test: the 3 deaf segmenters watched a 3-minute sample of each genre and repeated it, stopping the clip whenever they wanted, to an experimenter who did not see the video and who coded their fragments in ELAN.

2. Cut test: the whole 1-hour corpus was segmented into discourse units in ELAN by both hearing and deaf segmenters according to their intuitions (see the sketch below for how such tiers can be processed).
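For illustration, the ELAN files produced in the cut test can be processed programmatically. Below is a minimal sketch assuming the pympi-ling library; the file name and the tier names S1_cut and S2_cut are hypothetical placeholders, not the actual annotation scheme.

```python
# Minimal sketch: read per-segmenter discourse units from an ELAN file.
# Assumes pympi-ling; file and tier names are hypothetical placeholders.
import pympi

eaf = pympi.Elan.Eaf("lsfb_corpus_A1.eaf")  # hypothetical file name

for tier in ("S1_cut", "S2_cut"):  # hypothetical tier names
    # Each annotation is one discourse unit: (start_ms, end_ms, value).
    units = eaf.get_annotation_data_for_tier(tier)
    # Treat the end of each unit as a segmentation boundary.
    boundaries = sorted(end for _, end, _ in units)
    print(f"{tier}: {len(units)} units, {len(boundaries)} boundaries")
```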

References

Crasborn, O. 2007. How to recognize a sentence when you see one. Sign Language & Linguistics 10(2): 103–111.

Fenlon, J., Denmark, T., Campbell, R. & Woll, B. 2007. Seeing sentence boundaries. Sign Language & Linguistics 10(2): 177–200.

Hansen, M. & Heßmann, J. 2007. Matching propositional content and formal markers: Sentence boundaries in a DGS text. Sign Language & Linguistics 10(2).

Herrmann, A. 2009. Prosody in German Sign Language. Talk presented at the Workshop on Prosody and Meaning, 17–18 September, Frankfurt am Main, Germany.

Herrmann, A. 2010. The interaction of eye blinks and other prosodic cues in German Sign Language. Sign Language & Linguistics 13(1): 3–39.

Hochgesang, J. A. 2009. Is there a 'sentence' in ASL? Insight on segmenting signed language data. Talk presented at the Sign Language Corpora: Linguistic Issues Workshop, 24 July, London, UK.

Jantunen, T. 2007. The equative sentence in Finnish Sign Language. Sign Language & Linguistics 10(2): 113–143.

Nicodemus, B. 2006. Prosody and utterance boundaries in ASL interpretation. Paper presented at the DGfS (Deutsche Gesellschaft für Sprachwissenschaft) 2006 workshop "How to recognize a sentence when you see one: methodological and linguistic issues in the creation of sign language corpora", 23–24 February, Bielefeld, Germany.

Nicodemus, B. 2009. Prosodic markers and utterance boundaries in American Sign Language interpretation. Washington, DC: Gallaudet University Press.

Eye blinks layered with head nods: caution is needed!

- This cue can either mark a boundary or act as a linker within a discourse unit, in the middle of temporal syntactic structures or at the end of a parenthetical comment.

- As a non-boundary cue, it is easily isolable:

1. It occurs close after a boundary.

2. There is no other associated cue.

3. The chin and the eyebrows go up (ce-up) in the first part of the segment before the eye blink layered with a head nod occurs (only in temporal structures, as in the example below).

Example (the ce-up line scopes over the first clause):

  ____________________________________________ce-up
  COMMUNICATION-SUPPORT-WORKERS SIGN-WRONG OUT  [eye blink + head nod]  PEOPLE SEE GOOD IT GOOD
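To make criteria 1–3 concrete, here is a minimal sketch of how non-boundary (linker) occurrences of this cue could be filtered automatically. The 800 ms window for "close after a boundary", the input format and the function name are all assumptions for illustration, not the authors' procedure.

```python
# Minimal sketch: flag a blink+head-nod token as a linker (non-boundary)
# using criteria 1-3 above. Times in ms; the 800 ms window is an assumed
# value, and the whole interface is illustrative.
def is_linker(blink_ms, boundaries, other_cues, ce_up_spans,
              temporal_structure=False, close_ms=800):
    close_after_boundary = any(0 <= blink_ms - b <= close_ms
                               for b in boundaries)            # criterion 1
    no_other_cue = all(abs(blink_ms - t) > close_ms
                       for t in other_cues)                    # criterion 2
    ce_up_before = any(start <= blink_ms <= end + close_ms
                       for start, end in ce_up_spans)          # criterion 3
    if temporal_structure:  # criterion 3 only applies to temporal structures
        return close_after_boundary and no_other_cue and ce_up_before
    return close_after_boundary and no_other_cue

print(is_linker(2000, boundaries=[1500], other_cues=[9000],
                ce_up_spans=[(1600, 1900)], temporal_structure=True))
# -> True
```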

Copy test

Manual & non-manual cues at discourse unit boundaries spotted by any of the 3 deaf segmenters:

Cue                                                            Appearances   %
Pause (1)                                                      64            67
Eye blink layered with head nod (3)                            38            40
Sign hold (2)                                                  23            24
Change in head position layered with a change in eye gaze (4)  19            20
Eye blink (8)                                                  17            18
Role shift (5)                                                 14            15
Palm-up (9)                                                    11            12
Head nod (10)                                                  5             5
Bracketing repetition (6)                                      4             4
Head movement (11)                                             4             4
Change in eyebrow position (13)                                3             3
Buoy (14)                                                      3             3
Rhetorical question (7)                                        2             2
Change in eye gaze (12)                                        1             1

Number of segments per discourse and per segmenter:

Segmenter             A1    A2    E1     E2    M1     M2    N1    N2
S1                    124   38    93     63    69     163   123   88
S2                    122   35    128    32    64     114   139   69
Overlaps L+R S1–S2    129   30    100    13    50     100   154   94
Average S             123   36.5  110.5  40.3  45.75  115   131   78.5

S3: 26 segments; S4: 67 segments; S5: 116 segments (single totals only).
[Bar chart comparing segment counts per segmenter (S1–S5), S1–S2 overlaps and average, across discourses.]

Cut test

Manual & non-manual cues at discourse unit boundaries spotted by at least 2 segmenters:

Cue                                                            Appearances   %
Pause (1)                                                      304           51.4
Eye blink layered with head nod (3)                            266           45
Change in head position layered with a change in eye gaze (4)  187           31.6
Sign hold (2)                                                  142           24
Role shift (5)                                                 137           23.2
Eye blink (8)                                                  81            13.7
Palm-up (9)                                                    77            13
Head movement (11)                                             43            7.3
Head nod (10)                                                  27            4.6
Change in eyebrow position (13)                                21            3.6
Bracketing repetition (6)                                      18            3
Rhetorical question (7)                                        17            2.9
Change in eye gaze (12)                                        13            2.2
Buoy (14)                                                      12            2
Repetition of a sign (AA or AAA) (15)                          2             0.3

- Cues 1 and 2: very common and similar in their function.

- Cue 3: the most usual non-manual boundary marker, in line with Herrmann (2010).

- Cue 5: commonplace in narratives and an indicator of boundaries.

- Cue 9: found in all discourses, even if less common as a boundary marker.

Inter-segmenter agreement and distribution of boundary cues (copy test)

- 3 segmenters (32.63%): 31 boundaries (30 pauses and 1 sign hold).

- 2 segmenters (9.47%): 9 boundaries (8 pauses and 1 role shift).

- 1 segmenter (57.89%): 55 boundaries, of which 33 (60%) were also found by at least 2 segmenters in the cut test. These 33 boundaries include 18 pauses, 12 role shifts and 5 combinations of cues including blinking (see the sketch below).
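As a rough illustration of how such agreement figures could be derived, the sketch below matches boundary timestamps across segmenters within a tolerance window and counts how many segmenters support each boundary. The 500 ms tolerance and the input format are assumptions, not the authors' actual procedure.

```python
# Minimal sketch: count boundaries shared by k segmenters, matching
# timestamps within an assumed 500 ms tolerance window.
from itertools import chain

TOLERANCE_MS = 500  # assumed matching window

def agreement_counts(boundaries_per_segmenter):
    """boundaries_per_segmenter: list of sorted lists of boundary times (ms).
    Returns {k: number of distinct boundaries marked by exactly k segmenters}."""
    all_times = sorted(chain.from_iterable(boundaries_per_segmenter))
    counts = {}
    used = [set() for _ in boundaries_per_segmenter]
    for t in all_times:
        # Count segmenters with a not-yet-matched boundary within tolerance.
        supporters = 0
        for i, times in enumerate(boundaries_per_segmenter):
            match = next((x for x in times
                          if abs(x - t) <= TOLERANCE_MS and x not in used[i]),
                         None)
            if match is not None:
                used[i].add(match)
                supporters += 1
        if supporters:
            counts[supporters] = counts.get(supporters, 0) + 1
    return counts

# Example: three segmenters, times in ms.
print(agreement_counts([[1000, 5000, 9200], [1100, 9100], [960]]))
# -> {3: 1, 1: 1, 2: 1}
```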

Distribution of boundary cues (cut test):

- Total: 591 segments.

- Results do not diverge from the copy test table: 1. similar top seven cues; 2. similar percentages.

Proposed guidelines for discourse segmentation

I. As a general rule, segment at every pause and at every sign hold.

II. For narrative discourses, segment at the end of every constructed dialogue and role shift.

III. Segment systematically at every eye blink layered with a head nod (or at every combination of a blink in the close context of a change in eye gaze and head position).

IV. Remove all the eye blinks layered with head nods acting as discourse unit linkers.
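A minimal sketch of how guidelines I–IV could be operationalized over cue annotations is given below. The Cue record, the cue labels and the narrative flag are hypothetical data structures for illustration, not the authors' implementation.

```python
# Minimal sketch: apply guidelines I-IV to a list of cue annotations.
# All labels and structures are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Cue:
    kind: str        # e.g. "pause", "sign_hold", "role_shift_end", "blink+nod"
    time_ms: int
    is_linker: bool = False  # blink+nod acting as a discourse-unit linker (IV)

def segment(cues, narrative=False):
    """Return sorted boundary times (ms) following guidelines I-IV."""
    boundaries = set()
    for cue in cues:
        if cue.kind in ("pause", "sign_hold"):                   # I
            boundaries.add(cue.time_ms)
        elif narrative and cue.kind in ("constructed_dialogue_end",
                                        "role_shift_end"):       # II
            boundaries.add(cue.time_ms)
        elif cue.kind == "blink+nod" and not cue.is_linker:      # III + IV
            boundaries.add(cue.time_ms)
    return sorted(boundaries)

print(segment([Cue("pause", 1200), Cue("blink+nod", 3400, is_linker=True),
               Cue("blink+nod", 5600)]))
# -> [1200, 5600]
```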

Open issues

- We want to test these guidelines on a larger LSFB sample containing different discourses and signers.

- We would appreciate it if researchers working on other SLs tested these guidelines on their data and gave us feedback!

Inter-segmenter agreement (cut test)

- The number of segments differs across videos, due to video length and situation (monologue vs. dialogue).

- High agreement between at least 2 segmenters in all videos.

Extract of the annotation scheme in ELAN. One tier per segmenter for each test. Common_units: the common boundaries for the copy test. Common_cues: the cues spotted by at least two segmenters in the cut test.
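A tier like Common_cues could be derived automatically by intersecting two segmenters' cue tiers, e.g. with pympi-ling as sketched below; the file and tier names are hypothetical placeholders and the 500 ms matching window is an assumed value.

```python
# Minimal sketch: build a "Common_cues" tier from two segmenters' cue tiers.
# Assumes pympi-ling; file and tier names are hypothetical placeholders.
import pympi

TOLERANCE_MS = 500  # assumed matching window

eaf = pympi.Elan.Eaf("lsfb_corpus_A1.eaf")
s1 = eaf.get_annotation_data_for_tier("S1_cues")  # [(start, end, value), ...]
s2 = eaf.get_annotation_data_for_tier("S2_cues")

eaf.add_tier("Common_cues")
for start, end, value in s1:
    # Keep a cue if the other segmenter marked the same cue type nearby.
    if any(abs(start - s) <= TOLERANCE_MS and value == v for s, _, v in s2):
        eaf.add_annotation("Common_cues", start, end, value)

eaf.to_file("lsfb_corpus_A1_common.eaf")
```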

Conclusions of both tests

- The same cues influence the segmentation regardless of the instruction given.

- Both tests are highly consistent.
