5. Discussion

5.3 Issues concerning the specific results

5.3.2 Nonverbal reasoning skills (Hypothesis 2)

For methodological reasons it was important to establish a baseline of the nonverbal reasoning abilities of the children in this study. However, considering the substantial body of research reporting that children with intellectual or learning difficulties, as well as CLD children, receive lower scores on static tests than on dynamic measures (Güthke & Al-Zoubi, 1987, in Hessels, 1997), we assumed that the differences between the two groups would be nonexistent if measured by means of a dynamic measure of nonverbal reasoning (DNVR). Conversely, we expected some minor differences in their respective scores on the static CPM.

This was confirmed by our findings: there was no difference between the two groups’ scores on the DNVR, which indicates that this instrument avoids underestimating the CLD children’s nonverbal reasoning abilities. Furthermore, the small differences (effect sizes) observed between the two groups’ scores on the CPM further support the idea that static instruments may underestimate CLD children’s skills because they tap factors beyond the target construct.

In other words, our findings confirm previous research supporting the usefulness of DA for measuring the reasoning abilities of CLD children (Hessels, 1997).

5.3.3 The relationship between PDSS static vocabulary tests and Dynamic Vocabulary performance (Hypothesis 3)

Our study confirmed the assumption that static vocabulary performance, as measured through the PDSS subtest, would correlate moderately with dynamic vocabulary performance. The moderate correlation indicates that both tests measure a similar construct, i.e., lexical ability, albeit in different formats (static vs. dynamic), which explains the magnitude of the observed relationships. The lack of a stronger association between the static vocabulary test and caseload performance on the last subtest of the dynamic vocabulary test (i.e., Receptive Retention) is surprising, because both tests target receptive vocabulary.

A closer inspection of these results by means of a scatterplot revealed that different patterns could be distinguished. Although in many cases a low Receptive Retention score was accompanied by a low static vocabulary score, there was a cluster of cases (N = 6) of “medium” Receptive Retention scorers (score: 3 out of 6) that did not follow this “rule” in their static vocabulary scores. In other words, these “medium” Receptive Retention scorers did not earn correspondingly “medium” scores on the static vocabulary test. More specifically, three of the children in this cluster were among the youngest children, who generally underperformed on the static test but responded relatively well to the (dynamic) Receptive Retention test. On the other hand, there were a couple of “high scorers” on the static test who only managed to retain half of the presented items at the Receptive Retention task. In this case, their performance on the Retention test might be the result of the order of presentation of this task, coupled with intrinsic factors: the Receptive Retention task is the last of the battery, and these specific children were reported to be quite shy and to have difficulty attending to several other tasks through to the end.

Finally, another reason for this discrepant score might be the different types of linguistic difficulty: our sample included both children with phonological or articulation difficulties and children with additional or exclusively language difficulties. Given the considerable variability of underlying difficulties, some of the children might have responded differently to each task; for some, the receptive static task might have been easier to start with while retaining a word form over a longer period was harder, whereas for others the reverse was true.

Ideally, we would have wanted a clearer diagnosis for all children; however, the children’s bilingual background and young age complicated this. For instance, many children did not yet have enough language to complete a full phonological or linguistic evaluation. In addition, many children presented with a mixed profile (phonological and linguistic difficulties), which further complicated diagnosis.

5.3.4 The differentiating power of Lise-DaZ (Research Questions 4a, 4b)

With regard to the Lise-DaZ subtests, our results clearly indicate that the two groups were not sufficiently differentiated on most test items.

An exception was the performance on a) the productive subtest “Conjunctions” and b) the subtest “Sentence Assembly”. The subtest Conjunctions provided some differentiation, albeit non-significant, with the control group children performing better on average than the caseload. Mastery of conjunctions is paramount for a child’s ability to produce subordinate clauses.

In other words, it appears that the ability to produce subordinate clauses was the sole clearly differentiating measure of this instrument in our study. This result adds to the existing evidence (section 2.3.6.1) that mastery of subordinate clauses, as included in the “left sentence bracket”, as well as the whole sentence bracket/assembly system, is considered especially hard for LI children (Schulz & Tracy, 2011).

Based on these findings, it is not surprising that performance on the ordinal subtest Sentence Assembly provided some differentiation, albeit non-significant, between the abilities of the two groups, as indicated by their mastered level and type of sentence structure (the lack of significance was quite likely due to the small number of participants with verbal responses appropriate for this type of analysis). More specifically, as reflected by the higher median performance of the control group, the difference in sentence type structure is substantial from a qualitative point of view. As presented in section 3.5, Level II, the median performance obtained by the caseload group, corresponds to two-word combinations. Level III, obtained by many children of the control group, typically appears later, is marked by the use of verbs in the V2 position, and is considered a main sentence or clause.

However, overall performance on many of the other Lise-DaZ subtests did not confirm our initial assumption, as it did not differentiate the two groups. The lack of any significant difference between the groups, in addition to their very low raw scores, was especially notable in the case of the subtests “Focal parts” and “Prepositions”. Moreover, when comparing these scores with those in the Lise-DaZ manual (derived from groups of different age ranges and with different lengths of exposure to German), they correspond to the lowest reported values and to the youngest group with the minimal possible exposure.

This type of low performance is very likely due to a floor effect, which might be due to the different type and frequency of linguistic exposure of the participants of this project compared to the original Lise-DaZ sample (Schulz & Tracy, 2011, p. 85). This factor is directly linked to the particularity of our sample: apart from their significant linguistic diversity (also present in the original Lise-DaZ standardization sample), the children who participated in this study had to face an additional linguistic “difficulty”, namely the informal “bilingualism” that is often present in Swiss-German educational settings, whereby the Swiss-German dialect, employed in everyday situations, is often mixed with the High German used at school and in other formal settings. This “dual” linguistic condition was controlled for as much as possible during the administration process, as the test examiners used the child’s preferred variety. Nonetheless, this factor might have affected several children in this study, as many of them likely have to face a “dual L2” of sorts in their everyday lives, which might hinder their L2 acquisition compared to children tackling just High German as their L2 (as was the case with the Lise-DaZ sample).

Furthermore, the level of family education might have been another factor contributing to our sample’s generally low expressive scores on Lise-DaZ. More specifically, the Lise-DaZ manual reports that only 13% of the CLD children’s mothers did not finish school, whereas in the present study this percentage was 27% (even though this figure only covers 77% of our sample, it is still indicative of a trend). In addition, the Lise-DaZ manual makes no reference to any need to translate the parental questionnaire that was used to gather parental information. This means that quite likely most parents in the Lise-DaZ sample had a better understanding of German than the parents in our study, for whom a translation was necessary on many occasions.

Lastly, it is important to remember that in several cases children were reported to be shy and to resort to the shortest possible answers. Although the examiners were advised to use breaks and adopt a playful manner, the children’s very young age might have negatively affected their attitude towards a test that requires answering a long series of questions in a rather formal format. In that sense, the use of Lise-DaZ has not been very informative as a means of comparison for this specific group of children. Nonetheless, it remains, to our knowledge, the only test that has been validated with CLD children of the age group in question, which justifies its selection as a comparative measure.

In the future, other types of measures might be necessary to validate this kind of research. For instance, informal measurement of the children’s performance in the same areas targeted by the DA, taken at a later point in time, might be a good way to examine its predictive validity.

5.3.5 Dynamic vocabulary Scores and Lise-DaZ subtests Comprehension of verbs and Production of modal verbs (Hypothesis 4).

Our hypothesis of positive, low to moderate correlations between the different Lise-DaZ subtests and the (dynamic) vocabulary tasks, namely between Comprehension of verbs and the (dynamic) receptive vocabulary tasks, and between Production of modal verbs and the expressive dynamic vocabulary tasks, was overall confirmed both at the whole-group and at the between-group level as regards Immediate Recall and Expressive Retention. There is a similar pattern between the whole-group and the two groups’ correlations per subtest, despite the frequent lack of significance in the caseload group results, which may be partly explained by the smaller sample size.

The generally lower and less often significant between-group correlations between Mean Mediation and Lise-DaZ performance might be attributed to the different nature of these tasks: the Mediation task was dynamic, whereas the Lise-DaZ task was static. Conversely, the other vocabulary tasks (Immediate Recall, Expressive Retention) entail few dynamic elements and share a closer link in terms of the examined construct (the Modal verbs subtest, as well as Immediate Recall and Expressive Retention, all examine expressive language). Probably the most noticeable issue is the lack of any association between the Comprehension task and Receptive Retention in the control group.

This might be attributed to the relatively high mean score on the Receptive Retention task in the presence of lower Immediate Recall and Expressive Retention scores, even though the children’s respective Lise-DaZ Comprehension scores were not equally “high”.

Upon closer inspection of this specific relationship, it becomes evident that in several cases children who earned some of the lowest scores on the Lise-DaZ task Comprehension of verbs earned high vocabulary scores on the Receptive Retention task. In other words, for those children it was easier to identify the previously unknown words (nouns) than to complete the Lise-DaZ task, which included verbs. The opposite pattern was also observed, as in some other cases high performance on the Lise-DaZ task was coupled with relatively low receptive retention.

The fact that the two tasks examined slightly different linguistic categories (verbs versus nouns) might partly explain these varied scores in this group, but apparently this was not so much the case in the caseload group, nor did it affect the expressive tasks (Immediate Recall and Expressive Retention).

It is possible that the said linguistic difference (nouns vs. verbs) did not affect the performance of the caseload because both categories were equally difficult for them, thus producing less of a discrepancy between the two tasks. When it comes to the expressive parts of the vocabulary test, the fact that both involve expressive mechanisms might explain the relatively “balanced” associations among the different tasks across groups. The moderate association between the expressive tasks and the Comprehension task of Lise-DaZ could be attributed to the fact that both groups’ responses followed a more predictable pathway: children who performed less well on the expressive tasks (Immediate Recall and Expressive Retention) in many cases did so on the Lise-DaZ subtest, too.

Another possible explanation for this lack of correlation is that for some children the sequence of presentation might have affected their performance. The Lise-DaZ Comprehension task was the first task completed during the third session, whereas the Receptive Retention task was the last of a series of similar tasks during the initial session. For some of the younger children, or those with slight attentional issues, this might have affected their performance positively and negatively, respectively, on each task.

Lastly, it is also probable that the Receptive Retention task had a stronger differentiating effect than the Lise-DaZ Comprehension task, which is static in nature, as evidenced by the respective differences in effect size. This might have also contributed to the lack of observed correlations between the two tasks.

5.3.6 Dynamic vocabulary Scores and Lise-DaZ subtests Conjunctions, Sentence Assembly and Prepositions (Hypothesis 5).

Our hypothesis that there would be a positive relationship between the Dynamic (Vocabulary) Scores and the expressive Lise-DaZ subtests Conjunctions, Sentence Assembly and Prepositions was confirmed for the control group. As regards the caseload group, although in some cases, such as performance on the Receptive Retention task, there were some moderate significant correlations, overall this was not the case with the other vocabulary subtests. This refers not only to the lack of significant relationships, which might well be put down to the small sample size, but mainly to the fact that the correlations were also quite low.

An important reason for this might be the variability in the nature of the linguistic difficulties within our group, namely the fact that our sample included both children with phonological difficulties and children with semantic or morphological deficits. It is natural to expect that the latter group would show less homogeneous performance on these tasks than the former. The said Lise-DaZ subtests mainly examine morphological and linguistic performance. This might be an area of significant underperformance for the part of the caseload group with primarily morphological and/or phonological difficulties, but not so much for those with primarily semantic deficits, if we follow the classification of LI proposed by Leonard (2000).

5.3.7 AldeQ questionnaire and NWR tasks (Research question and Hypothesis 6)

Regarding the specific results of the AldeQ questionnaire, the parents’ responses differentiated the two groups well. Interestingly, the mean of the control group was the same as the overall normative mean reported in the study of Paradis, Emmerzael & Duncan (2010), whereas the caseload mean corresponds to a score two standard deviations below that mean. As explained in the AldeQ manual, such a performance is consistent with the profile of children showing language impairment/delay.
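The cutoff logic behind this interpretation can be sketched as follows. This is a minimal illustration with purely hypothetical numbers (not the study’s actual AldeQ data), and the function names are our own:

```python
# Illustrative sketch: a raw questionnaire total is standardized against a
# normative mean and SD, and scores at or below -2 SD are flagged as
# consistent with a language-impairment profile. All values are hypothetical.

def z_score(score, norm_mean, norm_sd):
    """Standardize a raw score against a normative mean and SD."""
    return (score - norm_mean) / norm_sd

def flag_li_profile(score, norm_mean, norm_sd, cutoff=-2.0):
    """Flag a score falling at or below the cutoff (in SD units)."""
    return z_score(score, norm_mean, norm_sd) <= cutoff

# Hypothetical normative values and group means (integers avoid float noise):
norm_mean, norm_sd = 70, 10
control_mean, caseload_mean = 70, 50

print(z_score(control_mean, norm_mean, norm_sd))           # 0.0 (at the norm)
print(flag_li_profile(caseload_mean, norm_mean, norm_sd))  # True (2 SD below)
```

In this sketch the control group sits exactly at the normative mean (z = 0), while the caseload group falls two standard deviations below it and is therefore flagged, mirroring the pattern reported above.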

The children’s ability to repeat strings of nonsense words in the NWR task also differentiated the two groups quite well and confirms previous findings indicating that this specific instrument (the Mottier test) is an effective differential diagnostic tool for language impairment in bilingual children (Kiese-Himmel & Risse, 2008). NWR tasks also generally prove useful in the differential diagnosis of LI.

In relation to the expected low to moderate, positive relationships between these instruments and the dynamic scores, our results confirmed our initial hypothesis (6), especially regarding NWR performance and less so for the AldeQ questionnaire. More specifically, NWR was moderately associated with almost all dynamic tasks, covering both Vocabulary and Phonology. These results are in line with findings of previous studies (Sections 1.11.1, 1.11.2) on the positive relationship between NWR tasks and vocabulary acquisition and language comprehension in English- as well as German-speaking monolingual children (Gathercole & Baddeley, 1990; Hasselhorn & Körner, 1997).

Furthermore, as NWR evaluates, at least partially, phonological memory and speech output, the positive correlations with the dynamic phonological measures indeed confirm our initial assumptions (Windsor, Kohnert, Lobitz, & Pham, 2010). An unexpected finding was the lack of any strong, positive relationship between NWR and the two last Phonology measures, i.e., Inconsistency and Stimulability.

These relationships would need to be further investigated with larger samples both with and without LI.

To our knowledge, there is very little research that specifically examines the relationship between stimulability and NWR. One of the few studies to examine the relationships between NWR, stimulability and intra-word production variability (which is largely the same as our Inconsistency measure) in children with Speech Sound Disorders corroborates the results of our study (Macrae, 2009).

That study showed that the relationship between NWR and stimulability tasks was negligible; unfortunately, the specific relationship between NWR and Inconsistency was not examined. The main explanation for the lack of a relationship is that Stimulability and NWR tasks probably tap slightly different domains of the underlying phonological representations: difficulties with stimulability tasks might reveal a deficit in the correctness of the phonological representations, whereas NWR difficulties may reflect difficulties concerning the distinctness of these representations.

The described whole-group correlations between the dynamic measures and both AldeQ and NWR are in line with most of the correlations reported in the Bianco (2015) study. Unfortunately, that study did not examine children’s inconsistency rate and stimulability (Phonology) as such, so these cannot be directly compared.

Finally, as regards the between-group relationships, the fact that the control group’s vocabulary performance was more frequently significantly correlated with NWR performance than the caseload’s could be partially explained by the difference in group sizes (as the absolute correlations do not differ very much). Also, the relatively stronger correlations between Dynamic Vocabulary scores and NWR, compared to Dynamic Phonology performance, might be attributed to the fact that NWR tasks have been argued to tap not only phonological abilities but also general linguistic knowledge (Windsor, Kohnert, Lobitz, & Pham, 2010). For instance, measures of receptive lexical knowledge have been found to be significant predictors of NWR performance in children both with and without phonological difficulties, indicating that mastery of adult-like phonological representations is affected by progress in the lexical domain (Munson, Edwards & Beckmann, 2005).

The lack of positive, significant associations between the vocabulary scores and the AldeQ responses might be the result of within group variability (particularly for the LI children). This variability could be caused by both “bilingual” as well as “monolingual” factors, as presented earlier. On the other hand, phonology (pronunciation) errors might be easier to recognise and report for parents, compared to any
