
Chapter 1. Introduction

1.3. Theoretical framework

1.3.3. AVT studies: SDH

AVT scholars have argued that AVT modes help users overcome barriers, whether linguistic or sensorial, when accessing audiovisual content. According to this understanding, subtitling could be aimed at any type of audience. However, both scholars and the industry have traditionally differentiated between subtitles (for hearing viewers) and SDH (for viewers with hearing loss). From a theoretical point of view, we aim for a universal definition of subtitling, as stated in subsection 1.3.1. However, it would be an oversight to exclude the concept of SDH from the present study, as it has played a key role in its development.

As stated above, subtitles for hearing viewers have usually been described as interlingual, and SDH as intralingual. Gambier (2003, p. 174) defines SDH as follows: “[i]ntralingual subtitling (or closed caption) is done for the benefit of the deaf and hard of hearing (HH), or sometimes to help migrants, for instance, to learn or to improve their command of their new language.” In this definition, the author treats the target audience as a defining trait of this type of subtitling and characterizes these subtitles as “intralingual.”

Díaz-Cintas (2003, pp. 199-200) also refers to this differentiation between subtitling for hearing users and SDH:

Traditionally, and grosso modo, two types of subtitles have come to the fore: interlingual subtitles, which imply transfer from a source language (SL) to a TL and intralingual subtitles (also known as captioning) where there is no change of language and, as far as television is concerned, where transmission is by means of an independent signal activated by accessing page 888 of teletext. Intralingual subtitling is intended to meet the needs of the deaf and hard of hearing, and involves turning the oral content of actors' dialogues into written speech, without loss of all the paratextual information which contributes to the development of the plot or setting the scene, which deaf people are unable to access from the soundtrack, for example telephones ringing, knocks on the door, and the like.

In this definition, Díaz-Cintas once again distinguishes between interlingual and intralingual subtitles, emphasizing that the target audience of the latter is persons with hearing loss. He also notes the importance of the extralinguistic information intended to help viewers with hearing loss follow the plot. However, Díaz-Cintas (2003) points out that this differentiation does not reflect reality: it would imply that persons with hearing loss only watch programs produced in their mother tongue, which is certainly not the case. According to the author, viewers with hearing loss are therefore forced to consume interlingual subtitles that are not suited to their needs.

The definition by Pereira (2005) is broader in this respect, as it acknowledges the possibility of interlingual SDH, stating that it “sometimes occurs between languages” (Pereira, 2005, as cited in Utray, Pereira & Orero, 2009, p. 249):

subtitling for the deaf and hard of hearing (SDHH) could be defined as a modality of mode transaction (from oral to writing), which sometimes occurs between languages. It consists of displaying a written text on screen which offers a semantic account of what is happening during a given programme: not only what is said but also how it is said (emphasis, tone of voice, accents, foreign languages, different noises, etc.) and who says it. It includes contextual sounds such as environmental music and noises, and also the numerous discursive elements which appear on the screen (letters, words on posters, etc.).

Interlingual SDH has been further studied by Szarkowska (2013), whose work contributes to the definition of SDH. The author defines the concept as follows:

While intralingual SDH is sometimes thought to consist merely in transcribing a text and supplementing it with information on sounds and speakers, interlingual subtitling consists in actually translating a text. Interlingual SDH would thus combine translation skills with the knowledge of how to cater for the needs of hearing-impaired spectators. This points to the process of merging of the two well-established AVT modalities − regular interlingual subtitling and intralingual SDH − and the emergence of a new one: interlingual SDH. (Szarkowska, 2013, p. 69)

In that study, the author presents the differences between interlingual subtitles, intralingual SDH and interlingual SDH (in English and Polish) on the basis of a small-scale descriptive study, and encourages the AVT research community to investigate this topic further. Differences are found not only in form (for example, SDH includes sound representation and character identification) but also in content (SDH conveys more information than interlingual subtitles).

More recently, Neves (2018b, p. 82) described SDH as follows: “[s]uch subtitles are designed for people with hearing impairment because, in addition to rendering speech, they identify speakers and provide extra information about sound effects and music.” The author introduces the concepts of “enriched content” and “responsive design,” both common in digital environments, in relation to subtitling. Neves (2018b, p. 83) states that the new terms:

[…] speak for a convergent and user-centred reality in which subtitles are part of a complex yet flexible multi-layered media landscape. ‘Enriched’ speaks for all the added elements that make subtitles relevant to specific users and ‘responsive’ for the standardized properties enabling subtitles to travel across platforms and media. The term also accounts for the growing interaction between the person and technology, at both ends of the process: production and reception.

Neves’ user-centered approach to subtitling is in line with the approach adopted in this Ph.D. dissertation.

We consider the term “enriched responsive subtitling” adequate in the context of this study because it accommodates the new subtitling features that appear in VR environments. Interaction with VR content leads to new levels of enriched content, for example the need to include directions in subtitles that guide users towards the speakers or sounds. This element is not present in previous definitions. Moreover, the author delves deeper into the concept of user-centeredness, arguing that:

the effectiveness of SDH revolves around three main criteria: readability, understanding and enjoyment. Achieving these will guarantee a fulfilling ‘user experience’—a concept used in reference to human-computer interaction but that can also be applied to the active consumption of subtitled audiovisual content. (Neves, 2018b, p. 91)


Although these criteria could be discussed further, we agree with Neves in defining subtitling as a ‘user experience’ that needs to be fulfilled. Neves (2018b) states that, in order to achieve better-quality subtitles, it is necessary to better understand the profile and needs of the audience.

Two common denominators emerge from all the SDH definitions discussed so far: the target audience and the additional information provided. Regarding the target audience, the authors cited above refer to SDH recipients as the deaf and hard of hearing (Gambier, 2003; Díaz-Cintas, 2003; Pereira, 2005), hearing-impaired spectators (Szarkowska, 2013), or people with hearing impairment (Neves, 2018b). Some AVT scholars have addressed the description and identification of the different groups of people with hearing loss (Neves, 2005; Báez Montero & Fernández Soneira, 2010). One of the issues highlighted is the lack of homogeneity among recipients (Neves, 2008). Neves (2008, p. 131) identified several groups among SDH users:

− deaf and hard of hearing viewers;

− pre-lingually and post-lingually deaf;

− oralising and signing deaf;

− deaf who feel they belong to hearing majority social group and (capital D) Deaf who assume themselves as a linguistic minority;

− deaf for whom the written text is a second language;

− deafened viewers who have residual hearing and/or hearing memory.

Regarding the additional information found in SDH, the cited authors identified: “paratextual information […] for example telephones ringing, knocks on the door, and the like” (Díaz-Cintas, 2003, pp. 199-200), “not only what is said but also how it is said (emphasis, tone of voice, accents, foreign languages, different noises, etc.) and who says it” (Pereira, 2005, as cited in Utray et al., 2009, p. 249), “contextual sounds such as environmental music and noises, and also the numerous discursive elements which appear on the screen (letters, words on posters, etc.)” (Pereira, 2005, as cited in Utray et al., 2009, p. 249), and “they [SDH] identify speakers and provide extra information about sound effects and music” (Neves, 2018b, p. 83).

For a more systematic analysis of SDH features, we adopted the SDH taxonomy proposed by Arnáiz-Uzquiza (2012), which is based on the subtitling taxonomy by Bartoll (2008). The author proposes six categories (Arnáiz-Uzquiza, 2012), illustrated in the sketch that follows the list:


− Linguistic: related to language and density.

− Pragmatic: related to author, recipient, moment of creation, and intention.

− Aesthetic: related to location, position, font, color, and justification.

− Technical: related to method of creation, format, storage, distribution, and medium.

− Aesthetic-technical: related to speed, integration, and optionality.

− Extralinguistic sound parameters: related to all non-verbal sound information included in the audiovisual work, such as character identification, paralinguistic information, information on sound effects, and musical information.
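To illustrate how this taxonomy could be operationalized, the following is a minimal, hypothetical sketch (in TypeScript) of the six categories as a typed annotation record for a descriptive study. The field names mirror the parameters listed above; the concrete value types and the example are our own illustrative assumptions, not part of Arnáiz-Uzquiza's (2012) proposal.

```typescript
// Arnáiz-Uzquiza's (2012) six SDH categories as a typed annotation record.
// Value types and the sample below are illustrative assumptions only.
interface SubtitleSample {
  linguistic: { language: string; density: "reduced" | "standard" | "verbatim" };
  pragmatic: {
    author: string;
    recipient: string;
    momentOfCreation: "pre-recorded" | "live";
    intention: string;
  };
  aesthetic: {
    location: string;
    position: string;
    font: string;
    color: string;
    justification: "left" | "center" | "right";
  };
  technical: {
    methodOfCreation: string;
    format: string;
    storage: string;
    distribution: string;
    medium: string;
  };
  aestheticTechnical: { speedCps: number; integration: "open" | "closed"; optional: boolean };
  extralinguisticSound: {
    characterIdentification: boolean;
    paralinguisticInformation: boolean;
    soundEffects: boolean;
    music: boolean;
  };
}

// Example annotation of one fictional SDH sample from an immersive clip.
const sample: SubtitleSample = {
  linguistic: { language: "es", density: "standard" },
  pragmatic: {
    author: "professional subtitler",
    recipient: "viewers with hearing loss",
    momentOfCreation: "pre-recorded",
    intention: "accessibility",
  },
  aesthetic: {
    location: "bottom",
    position: "fixed to field of view",
    font: "sans-serif",
    color: "#FFFFFF",
    justification: "center",
  },
  technical: {
    methodOfCreation: "manual",
    format: "IMSC/TTML",
    storage: "sidecar file",
    distribution: "VoD",
    medium: "VR headset",
  },
  aestheticTechnical: { speedCps: 15, integration: "closed", optional: true },
  extralinguisticSound: {
    characterIdentification: true,
    paralinguisticInformation: true,
    soundEffects: true,
    music: false,
  },
};
console.log(Object.keys(sample));
```

Annotating every sample of a corpus with such a record would make the parameters directly comparable across subtitle versions.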

Taxonomies are a useful analytical tool, especially in descriptive studies (see Chapter 3). The distinction between subtitling for the hearing and subtitling for persons with hearing loss has been imposed by the industry and academia. However, from a theoretical point of view, we consider that there should be no difference between subtitling and SDH, and the user-centered definition suggested in subsection 1.3.2 also holds when referring to SDH:

Subtitling is an instrument that serves the purpose of supporting users in understanding any type of audiovisual content by displaying fragments of written text (integrated in the images and conveying the relevant linguistic and extralinguistic aspects of the content for its understanding) that are legible, readable, comprehensible and accessible for the intended users.

This user-centered approach to subtitling can be enabled by new technologies and the customization possibilities they bring. The media industry could shift towards a highly customizable subtitling approach in which users choose between subtitles with the script only, the script plus character identification, the script plus sound information, and so on. The possibilities would be endless, and subtitling would become a single product with different layers that can be activated or deactivated depending on users’ preferences and needs (a minimal sketch of such a layered configuration is given at the end of this subsection). Returning to the concept of “enriched responsive subtitling,” Neves (2018b, p. 92) concludes that:

As user-centred technological environments become ever more ubiquitous, viewers will be able to choose specific formats that suit their personal needs. The traditional disability-oriented ‘subtitling for the deaf and hard of hearing’ paradigm will thus shift towards a more encompassing framework characterized by the use of ‘enriched (responsive) subtitles’. Adopting a non-discriminating terminology will signal a greater degree of respect for diversity, consistent with the spirit of the 2001 UNESCO Universal Declaration on Cultural Diversity and the 2006 UN Convention on the Rights of Persons with Disabilities. This should not mark ‘the end’ of SDH, but rather the incorporation of SDH standards as a subtitling variety to be made available to every viewer on demand.

According to the author, we should move away from the distinction between subtitles for the hearing and subtitles for persons with hearing loss towards a more user-centered approach to subtitling, one that can only be enabled by technological advances. This study takes a similar, user-centered view of subtitling, which is put into context in the following section on the UCT model.
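As a concrete illustration of the layered, user-configurable subtitling discussed above, the following is a minimal, hypothetical sketch in TypeScript of a subtitle cue carrying optional layers (speaker identification, sound information, a directional cue for VR) that the viewer can toggle at playback time. All names, fields and the textual rendering are our own illustrative assumptions; they do not correspond to any existing subtitle format or player API.

```typescript
// A layered subtitle cue: the script is always present, the other layers
// are optional and can be switched on or off by the viewer. Illustrative
// sketch only, not an existing format or player API.
interface LayeredCue {
  startMs: number;
  endMs: number;
  script: string;            // the dialogue itself, always present
  speaker?: string;          // character-identification layer
  soundInfo?: string;        // sound-effect / music layer
  directionDegrees?: number; // VR layer: angle towards the speaker or sound source
}

interface LayerPreferences {
  showSpeaker: boolean;
  showSoundInfo: boolean;
  showDirection: boolean;
}

// Assemble the text actually displayed for one cue, given the user's preferences.
function renderCue(cue: LayeredCue, prefs: LayerPreferences): string {
  const parts: string[] = [];
  if (prefs.showSpeaker && cue.speaker) {
    parts.push(`[${cue.speaker}]`);
  }
  if (prefs.showDirection && cue.directionDegrees !== undefined) {
    // Crude verbalization; a real player might draw an arrow or a radar instead.
    parts.push(cue.directionDegrees >= 0 ? "(to the right)" : "(to the left)");
  }
  parts.push(cue.script);
  if (prefs.showSoundInfo && cue.soundInfo) {
    parts.push(`(${cue.soundInfo})`);
  }
  return parts.join(" ");
}

// The same cue rendered for two different preference profiles.
const cue: LayeredCue = {
  startMs: 1000,
  endMs: 3500,
  script: "Did you hear that?",
  speaker: "ANNA",
  soundInfo: "door creaks",
  directionDegrees: 90,
};
console.log(renderCue(cue, { showSpeaker: true, showSoundInfo: true, showDirection: true }));
// -> "[ANNA] (to the right) Did you hear that? (door creaks)"
console.log(renderCue(cue, { showSpeaker: false, showSoundInfo: false, showDirection: false }));
// -> "Did you hear that?"
```

In practice the rendering of each layer (plain text, icons, arrows or a radar for directions in VR) would depend on the platform; the sketch only shows that all layers can coexist in a single subtitle resource and be activated on demand.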
