
HAL Id: hal-01453985

https://hal.archives-ouvertes.fr/hal-01453985

Submitted on 22 Nov 2017


Identification Reaction Times of Voiced/Voiceless Continua: A Right-Ear Advantage for VOT Values near the Phonetic Boundary

V Laguitton, J de Graaf, C Chauvel, C Liegeois-Chauvel

To cite this version:

V Laguitton, J de Graaf, C Chauvel, C Liegeois-Chauvel. Identification Reaction Times of Voiced/Voiceless Continua: A Right-Ear Advantage for VOT Values near the Phonetic Boundary. Brain and Language, Elsevier, 2000, 75, pp. 153–162. doi:10.1006/brln.2000.2350. hal-01453985


Brain and Language 75, 153–162 (2000)

doi:10.1006/brln.2000.2350, available online at http://www.idealibrary.com

Identification Reaction Times of Voiced/Voiceless Continua: A Right-Ear Advantage for VOT Values near the Phonetic Boundary

V. Laguitton, J. B. De Graaf, P. Chauvel, and C. Liegeois-Chauvel

INSERM E 9926, Laboratoire de Neurophysiologie et Neuropsychologie, Marseille, France

We explored the degree to which the duration of acoustic cues contributes to the respective involvement of the two hemispheres in the perception of speech. To this end, we recorded the reaction time needed to identify monaurally presented natural French plosives with varying VOT values. The results show that a right-ear advantage is significant only when the phonetic boundary is close to the release burst, i.e., when the identification of the two successive acoustical events (the onset of voicing and the release from closure) needed to perceive a phoneme as voiced or voiceless requires rapid information processing. These results are consistent with the recent hypothesis that the left hemisphere is superior in the processing of rapidly changing acoustical information. © 2000 Academic Press

Key Words: speech; hemispheric specialization; categorical perception; reaction time; VOT; temporal order.

INTRODUCTION

Voice onset time (VOT), defined as the time between the release burst and laryngeal pulsing (Abramson & Lisker, 1970), is probably the most intensively studied of phonetic features, reflecting its important role in the perception and discrimination of various consonantal sounds in the majority of the world's languages. While there are considerable differences in its production and perception across different languages, one aspect of VOT perception appears universal: the perception of a difference in VOT between two speech stimuli generally occurs only when the stimuli belong to different phonetic categories. For example, the phonetic boundary for the French voiced–voiceless contrast is located at shorter time lags, close to zero. Listeners cannot discriminate between bilabial stop consonants with VOT values of −100 or −20 ms, perceiving both stimuli as /ba/. Stimuli with +20- and +40-ms VOT are both identified as /pa/ and subjects fail to discriminate between the +20- and +40-ms stimuli. They discriminate between and assign different labels to stimuli with VOT values of −10 and +10 ms. These stimuli are from different phonetic categories (/b/ versus /p/). Listeners can only discriminate between those sounds on the VOT continuum to which they can assign unique labels. For this reason, the perception of a difference in VOT is said to be categorical (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967).

Address correspondence and reprint requests to Virginie Laguitton, Laboratoire de Neurophysiologie et Neuropsychologie, Inserm E 9926, UFR de Médecine, Université de la Méditerranée, 27, boulevard Jean Moulin, 13385 Marseille cedex 5, France. Fax: 33 (0)4 91 78 99 74. E-mail: virginie.laguitton@medecine.univ-mrs.fr.

0093-934X/00 $35.00 Copyright © 2000 by Academic Press. All rights of reproduction in any form reserved.
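The categorical pattern described above, in which two stimuli are discriminated only when they fall on opposite sides of the boundary, can be sketched as a toy model. This is an illustration only, not the authors' procedure, and the boundary value is a hypothetical placeholder:

```python
# Toy model of categorical VOT perception (illustrative only).
BOUNDARY_MS = 0.0  # hypothetical boundary for a French-like bilabial continuum

def identify(vot_ms: float) -> str:
    """Label a stimulus from VOT alone: voiced below the boundary,
    voiceless at or above it."""
    return "/ba/" if vot_ms < BOUNDARY_MS else "/pa/"

def discriminable(vot_a: float, vot_b: float) -> bool:
    """Two stimuli are discriminated only when they receive different labels."""
    return identify(vot_a) != identify(vot_b)

print(discriminable(-100, -20))  # both /ba/ -> False
print(discriminable(20, 40))     # both /pa/ -> False
print(discriminable(-10, 10))    # /ba/ vs /pa/ -> True
```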

Given that VOT is an important cue in speech perception and that speech is thought to be primarily under the control of the left hemisphere, it has been assumed that the perception of the VOT is similarly under the control of left-hemisphere mechanisms (Liberman, Delattre, & Cooper, 1952; Studdert-Kennedy, 1976; Pastore, Ahroon, Baffuto, Friedman, Puleo, & Fink, 1977; Stevens, 1981; Miller, 1977; Kuhl & Padden, 1983; Lane, 1965; Fujisaki & Kawashima, 1971; Ades, 1977; Macmillan, 1987). Several studies with brain-damaged patients, however, have reported right-hemisphere-related effects of VOT (Blumstein, Baker, & Goodglass, 1977; Miceli, Caltagirone, Gainotti, & Payer-Rigo, 1978; Oscar-Berman, Zurif, & Blumstein, 1975), whereas others seem to indicate either a slight left-hemisphere advantage or an ability of both hemispheres to discriminate VOT categorically (Basso, Casati, & Vignolo, 1977; Blumstein & Cooper, 1977). Similarly, electrophysiological data suggest that the perception of the VOT is controlled by several cortical processes—some of which are restricted to the right hemisphere and others of which are common to both hemispheres (Molfese, 1978).

Recent studies, conducted with normal adults and with language-impaired children, suggested that it may be the rate of change of acoustic cues rather than the linguistic nature of speech stimuli per se that underlies the left-hemisphere superiority for speech processing: the left hemisphere is superior in the processing of rapidly changing acoustical information (Schwartz & Tallal, 1980; Tallal & Piercy, 1973, 1974; Tallal, Stark, & Mellits, 1985). It might be that the variability in the results concerning the lateralization of VOT is related to the variability in the duration of the VOT in the different studies. Indeed, for a phoneme to be correctly perceived as voiced or voiceless, the time at which one event occurs (i.e., the onset of voicing) must be judged with respect to the temporal attributes of other events (i.e., the release from closure). These events are ordered in time.

The aim of this study was to explore the degree to which the duration of VOT contributes to the respective implication of the two hemispheres. To this end, we measured reaction times (RTs) in a task involving the identification of monaurally presented natural speech sounds with varying VOT durations. In a previous study (Laguitton, Cazals, & Liégeois-Chauvel, 1998), we created three natural speech continua (/ba/–/pa/, /da/–/ta/, and /ga/–


/ka/) in which VOT varied from negative to positive values with the help of speech-analysis software that allowed precise manipulation of the VOT of naturally produced French plosive consonants. We were, thus, able to obtain precise values of the phonetic category boundaries of stop consonants. The stimuli used in the present study were selected from these continua.

FIG. 1. Selection (a) and removal (b) of the VOT performed with the help of the spectrogram of a prevoiced stop (/ba/). The software allows us to remove the low frequencies in a precise manner without touching the higher frequencies before as well as after the release burst.

METHODS

Stimuli

Original stimuli consisted of three natural-speech voiced stop-consonant/vowel syllables pronounced by a French speaker in a sound-attenuated room and recorded with a Beyer M69 TG microphone at 44.1 kHz. To create the continua, we used speech-analysis software ("Oedipe") which allowed for the precise control of the VOT of natural-speech plosive consonants. The duration of voicing was shortened in progressive steps of 5 ms by removing the low frequencies (under 500 Hz) (see Fig. 1). In this way, voicing could also be removed after the explosion, keeping the higher frequencies intact.

The resulting three continua (/ba/–/pa/, /da/–/ta/, and /ga/–/ka/) had VOT values varying from negative to positive. For each continuum, 11 VOT values were selected: the lowest negative value (i.e., the initial voiced stop consonant), 9 other values near the release burst (−25, −20, −15, −10, −5, 0, +10, +15, and +20 ms), and the highest positive VOT value (i.e., a voiceless stop consonant).
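A minimal sketch of how the 11 VOT steps of one continuum could be assembled; the endpoint values passed in below are hypothetical, and the "Oedipe" waveform editing itself is not modeled:

```python
# The nine fixed steps near the release burst, as listed in the Methods (ms).
FIXED_STEPS_MS = [-25, -20, -15, -10, -5, 0, 10, 15, 20]

def continuum_vots(lowest_ms: int, highest_ms: int) -> list:
    """Endpoints (the original voiced and voiceless tokens) plus the nine
    fixed steps near the release burst, in ascending order."""
    return [lowest_ms] + FIXED_STEPS_MS + [highest_ms]

vots = continuum_vots(-120, 60)  # hypothetical /ba/-/pa/ endpoints
print(len(vots))  # 11
```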

Speech syllables were presented monaurally, using a Sennheiser HD250 headset. Each speech stimulus was preceded (during 1 s) and followed (during 520 ms) by a white noise delivered at an intensity of 30 dB.¹ Through the contralateral channel, the white noise was delivered at the same intensity, simultaneously and continuously (during 1900 ms). This contralateral noise was intended to mask combination tones (Plomp, 1976).

¹ In fact, the white noise always ended 110 ms before the explosion. So, for the syllables from which no prevoicing was removed, the white noise was immediately followed by the syllable. However, for the other syllables it was followed by a silence whose duration corresponded to the duration of the removed prevoicing.


Subjects

Twenty-two right-handed subjects (9 men and 13 women, ages 20–50 years) participated in the study. They were native French speakers and had no known hearing loss.

Procedure

Subjects were tested individually in a sound-attenuated room in a single session. For each continuum, the 11 selected VOT values were presented 10 times in a pseudorandom sequence with a standard identification paradigm, 5 times to the right and 5 times to the left ear. Response and RT data were collected using a push-button system with two labeled keys. Subjects positioned the index finger and thumb of the dominant hand on the upper and lower keys, respectively. They were instructed to identify the stimuli as rapidly as possible by pressing one of the two keys. Each trial started 1300 ms after the end of the white noise from the previous trial.
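One way the trial list implied by this procedure could be built is sketched below; the paper does not specify the actual randomization scheme, so the shuffling here is an assumption:

```python
import random

def build_trials(vot_values, reps_per_ear=5, seed=0):
    """Each VOT value appears reps_per_ear times per ear; the full list is
    then shuffled into a pseudorandom presentation order."""
    trials = [(v, ear)
              for v in vot_values
              for ear in ("right", "left")
              for _ in range(reps_per_ear)]
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trials(range(11))
print(len(trials))  # 110 trials: 11 values x 10 presentations
```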

Data Analysis

RTs were recorded from the beginning of the speech sound (prevoicing or explosion). The phonetic boundary (corresponding to the VOT value for which stimuli were identified as voiced or voiceless 50% of the time) was determined on the basis of the linear regression line through the points in the slope. The intracategorical variability corresponds to the average, over all VOT values, of the distance between the actual identification percentage and the ceiling value (i.e., 100% identification as voiced for VOT values at the left side of the phonetic boundary and 0% at the right side) (Klose, 1994).
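Under one plausible reading of this paragraph, the two quantities can be computed as follows; the identification percentages below are invented for illustration, not the study's data:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def phonetic_boundary(slope_vots, pct_voiced):
    """VOT at which the fitted identification function crosses 50%."""
    a, b = linfit(slope_vots, pct_voiced)
    return (50 - b) / a

def intracategorical_variability(vots, pct_voiced, boundary):
    """Mean distance from ceiling: 100% voiced left of the boundary,
    0% voiced right of it."""
    devs = [abs((100 if v < boundary else 0) - p)
            for v, p in zip(vots, pct_voiced)]
    return sum(devs) / len(devs)

# Invented identification percentages over the slope region:
vots = [-20, -15, -10, -5, 0]
pct = [95, 80, 50, 20, 5]
b = phonetic_boundary(vots, pct)
print(round(b, 1))  # -10.0 for this symmetric example
print(intracategorical_variability(vots, pct, b))
```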

RESULTS

Identification functions for each ear for each continuum are shown in Fig. 2. Each point represents the mean of the five responses averaged over all subjects. The solid circles and the open triangles show the percentages of voiced and voiceless responses to each of the 11 stimuli in the continuum. It can be seen that perception is categorical for both ears.

Table 1 shows the mean VOT values of the phonetic boundary for the three continua. In no case is the phonetic boundary significantly different between the left and right ears.

Table 2 shows the intracategorical variability for the three continua. Intracategorical variability was calculated for the identification function for each of the three continua and for the two ears. Although the variability appeared slightly higher on left- than on right-ear trials, statistical testing (Student's t test) showed no significant difference for this parameter between the two ears.

These results together show that for both ears of presentation, a stop-consonant can be accurately classified as voiced or voiceless.

Figure 3 shows the mean identification RTs for each of the 11 VOT values in a continuum. Results for stimuli presented to the right ear are represented by solid circles; those obtained for the left ear by open circles. One observes that, for most of the values of VOT, the RTs are similar for both ears of presentation. An analysis of variance applied to the global data showed no


FIG. 2. Mean percentage of identification as a function of VOT duration for the right and the left ear and for the three continua (labial /ba,pa/; dental /da,ta/; and velar /ga,ka/). Open triangles indicate percentages of voiceless consonant responses; solid circles indicate percentages of voiced consonant responses.

TABLE 1
VOT Values [Means (Standard Deviation) in Milliseconds] Corresponding to the Phonetic Boundaries for Each of the Three Continua for Right- and Left-Ear Presentations and Results of Statistical Testing (Student's t Test) of the Difference between the Two Ears(a)

                 Phonetic boundary              Difference between right and left ears
              Right ear        Left ear           t      df      p
/ba/–/pa/   −10.57 (3.79)   −11.54 (3.92)       1.37    13     .19
/da/–/ta/    −1.76 (2.61)    −2.09 (6.98)       0.09     3     .90
/ga/–/ka/   +15.14 (6.38)   +12.05 (3.99)       1.44     3     .24

(a) Note that for none of the three continua does the phonetic boundary differ significantly between the two ears.


TABLE 2
Intracategorical Variability (in Percentages) Calculated for the Identification Function for Each of the Three Continua and for the Two Ears(a)

              Intracategorical variability     Difference between right and left ears
              Right ear       Left ear           t      df      p
/ba/–/pa/       3.89            6.35           −1.76    13     .10
/da/–/ta/       6.8             7.2            −0.39     3     .72
/ga/–/ka/       4.99            7.2            −1.12     3     .34

(a) Statistical testing (Student's t test) showed no significant difference between the two ears.

FIG. 3. Mean reaction time of identification as a function of VOT duration for the right ear (solid circles) and left ear (open circles) and for the three continua. Note the difference in reaction time between the two ears for VOT values close to the phonetic boundaries found in Fig. 2.


significant difference between the left and right ear [for /ba/–/pa/: F(10, 13) = 0.84; for /da/–/ta/: F(10, 3) = 0.95; for /ga/–/ka/: F(10, 3) = 0.22]. However, for VOT values close to the phonetic boundary, the time needed to identify a speech sound is significantly shorter for right- than for left-ear trials. RTs were shorter on right-ear trials at the VOT value of −10 ms (t = −2.9, df = 13; p < .01) for the /ba/–/pa/ continuum and at VOT values of −5 and 0 ms (respectively t = −8.1, df = 3; p < .003, and t = −4.1, df = 3; p < .02) for the /da/–/ta/ continuum. For the /ga/–/ka/ continuum, the time needed to identify a speech sound appeared shorter for right- than for left-ear trials at VOT values of 10 and 15 ms, but the difference failed to reach statistical significance.
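The per-VOT ear comparison reported here can be sketched as a paired t statistic across subjects; the RT values below are invented for illustration, not the study's measurements:

```python
import math

def paired_t(a, b):
    """Paired-samples t statistic and degrees of freedom (n - 1)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

# Invented RTs (ms) at one near-boundary VOT value, one pair per subject:
right = [480, 510, 455, 500, 470]
left = [530, 560, 505, 540, 520]
t, df = paired_t(right, left)
print(df)      # 4
print(t < 0)   # True: right-ear RTs shorter, so the t statistic is negative
```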

DISCUSSION

In the present study, we have investigated the degree to which the duration of acoustic cues contributes to the respective implication of the two hemispheres by recording the RT needed to identify monaurally presented speech sounds with varying VOT durations. Two main results were obtained. First, the identification RTs were shorter on right-ear than on left-ear trials only for VOT values near the phonetic boundary, and this difference was significant for the labial and dental continua. Second, a stop consonant can be accurately classified as voiced or voiceless irrespective of the ear of entry. Previous studies have reported increases in identification RTs near the category boundary between two speech items (Pisoni & Tash, 1974; Repp, 1981), but these studies did not distinguish the ear of presentation. Pisoni and Tash (1974) concluded that RT is a positive function of uncertainty, increasing at the phonetic boundary where identification is least consistent and decreasing where identification is more consistent. Our data support this idea. But more importantly, they specify this phenomenon: the increase in identification RTs near the category boundary exists only when stimuli are presented to the left ear.

Several recent studies suggest that it may be the specificity of acoustic cues rather than the linguistic nature of speech stimuli per se that underlies the left-hemisphere superiority for speech processing. Studies using dichotic listening procedures, for example, have suggested a more pronounced right-ear advantage (REA) for right-handed subjects with speech stimuli characterized by rapidly changing formant transitions than with speech stimuli without such transitions (Studdert-Kennedy & Shankweiler, 1970; Dwyer, Blumstein, & Ryalls, 1982). Other studies have shown that readily codable nonverbal stimuli containing changing frequency information also yield an REA (Papoun, Krashen, Terbeek, Remington, & Harshman, 1974; Halperin, Nachshon, & Carmon, 1973; Blechner, 1976). Such evidence has led to the postulate that an enhanced capacity to analyze and encode successive temporal changes in the acoustic signal may underlie the left hemisphere's contribution to speech processing (Tallal & Piercy, 1973; Tallal & Newcombe, 1978).

Tallal and colleagues (Merzenich, Jenkins, Johnston, Schreider, Miller, & Tallal, 1996; Tallal, Miller, Bedi, Byma, Wang, Nagarajan, Schrenier, Jenkins, & Merzenich, 1996) have suggested that a deficit in the processing of the rate of change of acoustic cues (such as formant transitions), rather than a deficit in the linguistic nature of the stimuli, could explain some of the phonological deficits observed in language-learning disorders. Tallal, Miller, and Holly Fitch (1993) also put forward the hypothesis that left-hemispheric dominance in speech processing may arise out of a specialization in the processing and production of rapidly changing (sensorial and motor) events. This hypothesis is based, in part, on studies with language-impaired children who show difficulties in the processing of rapidly changing sensory information, both for the perception of nonverbal (Tallal & Piercy, 1973) and verbal sounds (Tallal & Piercy, 1974) and for visual or tactile stimuli (Tallal, Stark, & Mellits, 1985). With regard to the verbal stimuli, the same authors showed that the lengthening of the rapidly occurring formant transitions in consonants (from 40 to 80 ms) led to an enhancement in speech perception for a group of language-impaired children (Tallal & Newcombe, 1978), suggesting that certain difficulties observed in language-learning disorders may be linked to a deficit in the processing of rapidly changing information. And since the results of many studies show the implication of the left hemisphere in the processing of formant transitions (Oscar-Berman, Zurif, & Blumstein, 1975; Blumstein, Baker, & Goodglass, 1977; Miceli, Caltagirone, Gainotti, & Payer-Rigo, 1978; Schwartz & Tallal, 1980), this processing of rapidly changing (acoustical) information seems to be localized in the left hemisphere.

Our present results, although obtained with another acoustic cue, are consistent with this hypothesis. We found that the increase in identification RTs near the category boundary exists only when stimuli are presented to the left ear. This REA in the perception of stop consonants at VOT values near the phonetic boundary is significant for the labial and the dental continua; these two continua are characterized by phonetic boundary VOT values below 20 ms for every subject. This means that identification of these stimuli requires the processing of temporally close acoustic events, which yields an REA. In contrast, we did not find this for the velar continuum. However, one should note that the categorical perception of this latter continuum is characterized by a rather late phonetic boundary, which varies according to listeners (i.e., close to 20 ms after the release burst). This means that when the identification of two successive acoustical events does not require rapid information processing, we do not observe an REA. These two results support the hypothesis that, in right-handed subjects, the left hemisphere plays a primary role in the analysis of rapidly changing acoustical information.

In conclusion, we found that the increase in identification RTs for voiced and voiceless speech sounds near the category boundary exists when stimuli are presented to the left ear and not when presented to the right ear. This result is consistent with the hypothesis that the left-hemisphere speech-processing mechanism may be particularly specialized for the processing of rapidly changing acoustic events.

REFERENCES

Abramson, A., & Lisker, L. (1970). Discriminability along the voicing continuum: Cross language tests. In Proceedings of the Sixth International Congress of Phonetic Sciences, Prague, 1967 (pp. 569–573). Prague: Academic Press.

Ades, A. E. (1977). Vowels, consonants, speech and nonspeech. Psychological Review, 84, 524–530.

Basso, A., Casati, G., & Vignolo, L. A. (1977). Phonemic identification defect in aphasia. Cortex, 13, 85–95.

Blumstein, S. E., Baker, E., & Goodglass, H. (1977). Phonological factors in auditory comprehension in aphasia. Neuropsychologia, 15, 19–30.

Blumstein, S. E., & Cooper, W. (1972). Identification versus discrimination of distinctive features in speech perception. Quarterly Journal of Experimental Psychology, 24, 207–214.

Dwyer, J., Blumstein, S. E., & Ryalls, J. (1982). The role of duration and rapid temporal processing on the lateral perception of consonants and vowels. Brain and Language, 17, 272–286.

Fujisaki, H., & Kawashima, T. (1971). A model for the mechanisms of speech perception: Quantitative analysis of categorical effects in discrimination. Annual Report of the Engineering Research Institute, 30, 59–68.

Halperin, T., Nachshon, I., & Carmon, A. (1973). Shift of ear superiority in dichotic listening to temporally patterned nonverbal stimuli. Journal of the Acoustical Society of America, 53, 46–50.

Klose, K. (1994). Perception catégorielle et troubles du langage écrit. Travaux neuchâtelois de linguistique, 21, 143–154.

Kuhl, P. K., & Padden, D. M. (1983). Enhanced discriminability at phonetic boundaries for the place feature in macaques. Journal of the Acoustical Society of America, 73, 1003– 1010.

Laguitton, V., Cazals, Y., & Liégeois-Chauvel, C. (1998). Construction et validation d'un continuum voisé/non-voisé utilisant des sons de parole naturelle. In Carilang (pp. 97–127). Editions du Céralec.

Lane, H. (1965). Motor theory of speech perception: A critical review. Psychological Review, 72, 275–309.

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431–461.

Liberman, A. M., Delattre, P. C., & Cooper, F. S. (1952). The role of selected stimulus variables in the perception of the unvoiced stop consonants. American Journal of Psychology, 65, 497–516.

Macmillan, N. A. (1987). A psychophysical approach to processing modes. In S. Harnad (Ed.), Categorical perception (pp. 53–85). Cambridge, UK: Cambridge Univ. Press.

Merzenich, M. M., Jenkins, W. M., Johnston, P., Schreider, C., Miller, S. T., & Tallal, P. (1996). Temporal processing deficits of language-learning impaired children ameliorated by training. Science, 271, 77–81.

(11)

162 LAGUITTON ET AL.

Miceli, G., Caltagirone, C., Gainotti, G., & Payer-Rigo, P. (1978). Discrimination of voice versus place contrasts in aphasia. Brain and Language, 6, 47–51.

Miller, J. L. (1977). Nonindependence of feature processing in initial consonant. Journal of Speech and Hearing Research, 20, 519–528.

Molfese, D. L. (1978). Left and right hemisphere involvement in speech perception: Electrophysiological correlates. Perception and Psychophysics, 23, 237–243.

Oscar-Berman, M., Zurif, E. B., & Blumstein, S. (1975). Effects of unilateral brain damage on the processing of speech sounds. Brain and Language, 2, 345–355.

Papoun, G., Krashen, S., Terbeek, D., Remington, R., & Harshman, R. (1974). Is the left hemisphere specialized for speech, language and/or something else? Journal of the Acoustical Society of America, 55, 319–327.

Pastore, R. E., Ahroon, W. A., Baffuto, K. J., Friedman, C., Puleo, J. S., & Fink, E. A. (1977). Common-factor model of categorical perception. Journal of Experimental Psychology: Human Perception and Performance, 3, 686–696.

Pisoni, D. B., & Tash, J. (1974). Reaction times to comparisons within and across phonetic categories. Perception and Psychophysics, 15, 285–290.

Plomp, R. (1976). Aspects of tone sensation. New York: Academic Press.

Repp, B. H. (1981). Perceptual equivalence of two kinds of ambiguous speech stimuli. Bulletin of the Psychonomic Society, 18, 12–14.

Schwartz, L., & Tallal, P. (1980). Rate of acoustical change may underlie hemispheric specialization for speech perception. Science, 207, 1380–1381.

Stevens, K. N. (1981). Constraints imposed by the auditory system on the properties used to classify speech sounds: Data from phonology, acoustics and psychoacoustics. In T. F. Myers, J. Lavers, & J. Anderson (Eds.), The cognitive representation of speech. Amsterdam: North-Holland.

Studdert-Kennedy, M. (1976). Speech perception. In N. J. Lass (Ed.), Contemporary issues in experimental phonetics (pp. 243–293). New York: Academic Press.

Studdert-Kennedy, M., & Shankweiler, D. (1970). Hemispheric specialisation for speech perception. Journal of the Acoustical Society of America, 48(2), 579–594.

Tallal, P., & Newcombe, F. (1978). Impairment of auditory perception and language comprehension in dysphasia. Brain and Language, 5, 13–24.

Tallal, P., & Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature, 241, 468–469.

Tallal, P., & Piercy, M. (1974). Developmental aphasia: Rate of auditory processing and selec-tive impairment of consonant perception. Neuropsychologia, 12, 83–93.

Tallal, P., Stark, R., & Mellits, E. D. (1985). Identification of language-impaired children on the basis of rapid perception and production skills. Brain and Language, 25, 314–322.

Tallal, P., Miller, S., & Holly Fitch, R. (1993). Neurobiological basis of speech: A case for the preeminence of temporal processing. In P. Tallal, A. M. Galaburda, R. R. Llinás, & C. von Euler (Eds.), Temporal Information Processing in the Nervous System: Special Reference to Dyslexia and Dysphasia (pp. 27–47). New York: New York Academy of Sciences.

Tallal, P., Miller, S., Bedi, G., Byma, G., Wang, X., Nagarajan, S. S., Schrenier, C., Jenkins, W. M., & Merzenich, M. M. (1996). Language comprehension in language-learning impaired children improved with acoustically modified speech. Science, 271, 81–84.

