3.6 Data collection

3.6.2 The online survey

The question that presents itself upon consideration of all these difficulties is why an electronic format would suit the purposes of the study better than a paper-based questionnaire. This is indeed an important issue since the advantages and limitations of the format chosen might influence the quality of the resulting data. Therefore, the questionnaire used in the study was initially developed in both forms and the online version was selected as the more promising after careful planning and testing.

Undoubtedly, the electronic format made a number of changes possible while it necessitated others. However, the choice was partly inspired by pilot participants’ comments on the paper questionnaire, since many of their suggestions pointed toward the use of a more modern, digital interface. The most frequent requests were to omit repetitive wording in items of the same scale and to provide access to certain clarifications when needed. The former was met by grouping items of the same scale together, as in Figure 3.1, where six items on direct contact are shown on the same page. The latter was achieved through footnotes to some of the items, such as the one defining mother tongues as “the language(s) that you learned as a child and that you still understand” (Appendix 1). Neither feature could have been implemented in a paper-based questionnaire as seamlessly as the web interface allowed.

Figure 3.1 Excerpt from the online questionnaire. Screenshot of the first six items on direct contact

Moreover, the digital format increased the potential of the questionnaire layout, making it more efficient and attractive, which, as Dörnyei points out, is “half the battle in motivating respondents to produce reliable and valid data” (2007). Once students opened the link in a browser, they were greeted by a title page marked with the logo of the Faculty of Letters, which featured a short welcome message and description of the study and instrument. At this stage, participants were informed that the project, as part of a PhD study, aimed at mapping language learning attitudes among UNIGE students. Their attention was also drawn to the fact that whether they studied any languages at the time of enquiry or not, their opinion mattered and there were no right or wrong answers to any of the questions. Our contact details and credentials were listed on this page as well, along with our sincerest thanks for students’ participation.

Specific instructions are key to the success of any questionnaire, and the technical tools made it possible to tailor these individually, not only to each part of the questionnaire but also to some of the items. Therefore, although the general instructions remained the same in each part of the questionnaire, further specifications could be added to certain items to indicate, for instance, the number of options allowed. However, as with the clarification notes, these were kept to a minimum and worded in a neutral, non-leading way so as not to influence students’ responses (cf. Appendix 1).
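To make this layering concrete, a minimal sketch of the structure is given below, assuming a simple two-level data model; the field names and example wording are hypothetical, except for the mother-tongue footnote quoted earlier.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative data model (hypothetical field names): general
# instructions attach to each questionnaire part, while optional,
# neutrally worded precisions attach to individual items.
@dataclass
class Item:
    text: str
    precision: Optional[str] = None   # e.g. number of options allowed
    footnote: Optional[str] = None    # clarification shown on demand

@dataclass
class Part:
    title: str
    instructions: str                 # shared by every item in the part
    items: List[Item] = field(default_factory=list)

part = Part(
    title="Language background",
    instructions="Please answer each question about yourself.",
    items=[
        Item(
            text="What is/are your mother tongue(s)?",
            precision="You may select more than one option.",
            footnote=("The language(s) that you learned as a child "
                      "and that you still understand."),
        )
    ],
)
print(part.instructions)
```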

Another important issue to revisit here is the method of item sequencing applied in the electronic questionnaire. While the initial paper version preserved the traditional layout of a randomized item list in each section, which maximizes the benefits of multi-item scales (cf. Dörnyei, 2007), the online questionnaire used a different format. As pilot participants and colleagues often pointed out, a mixed list might produce an effect of repetitiveness, partly due to the structure of multi-item scales, which aims to eliminate idiosyncrasies by making use of several differently worded items targeting the same meaning. The other reason is that questions in a multi-item scale frequently make repeated use of the same expressions or structures, which only serve as a framework for the more relevant part of the question. A good example of this is the scale on direct contact, where each item starts with the words How often do you…? Although focusing on different aspects of the same concept, such items tend to lose their finer nuances in isolation, giving respondents the impression that the same question is recurring several times. By clustering questions of the same scale together, the online questionnaire reduced some of the weight that the wording of each item carries. At the same time, this method also decreased repetitiveness considerably by cutting the number of items with the same focus and by presenting questions of the same structure in tables, such as the one seen in Figure 3.1.
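A short sketch may help contrast the two sequencing methods. The items below are hypothetical stand-ins for multi-item scale questions, not taken from the actual questionnaire; the code merely illustrates a randomized mixed list versus the grouped-by-scale layout used online.

```python
import random

# Hypothetical items from two multi-item scales; the wording below is
# illustrative, not the study's actual questionnaire items.
items = [
    ("direct_contact", "How often do you speak English with visitors?"),
    ("direct_contact", "How often do you write emails in English?"),
    ("attitudes", "Learning languages is important to me."),
    ("attitudes", "I enjoy studying foreign languages."),
]

def randomized_order(items, seed=0):
    """Traditional paper layout: items from all scales mixed together."""
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    return shuffled

def grouped_order(items):
    """Online layout: items clustered by scale (stable sort), so items
    sharing a stem such as "How often do you...?" sit in one table."""
    return sorted(items, key=lambda item: item[0])

for scale, text in grouped_order(items):
    print(f"[{scale}] {text}")
```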

Similarly, the decision to open the questionnaire with items on demographic information was driven by pragmatic concerns and the goal of the research. Traditionally, factual questions would be included in the last section of the questionnaire. However, since this information was crucial to defining the sample and constituted the basis of the contrastive analysis, it was imperative that all participants complete these items. Consequently, placing them at the end of the questionnaire would have greatly increased the risk of losing participants who had completed all previous items. Nonetheless, questions in this part of the survey remained as non-intrusive as possible and strictly observed the governing principle of anonymity. Similar concerns inspired the decision to make all questions compulsory.

The final pages of the questionnaire were dedicated to the item offering participants the option to voice their comments and the concluding thank you message. The latter included a link to the website where further information about the research and its results would be published.

Overall, completion took approximately 20 minutes and was therefore within the range of acceptable length (Dörnyei, 2007).

Preserving participants’ anonymity was a deciding factor in designing the instrument as well as in choosing the appropriate format. Therefore, data collection took place exclusively through the LimeSurvey platform of the University of Geneva, where each respondent was assigned an automatic identification number. The IP address of each computer logging in was recorded in the dataset; however, once double entries had been checked for, these addresses were removed completely. From then on, participants were identified solely by the code assigned to them upon entering the system. Finally, although respondents were given the option of indicating their email address for further contact, none of them chose to do so.
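As an illustration of this procedure, the following minimal sketch flags potential double entries by IP address and then removes the addresses altogether, assuming a toy response table; the column names and values are assumptions for the example, not LimeSurvey’s actual export schema.

```python
import pandas as pd

# Toy response table standing in for the raw export; "resp_id" plays
# the role of the code assigned automatically upon entering the system.
raw = pd.DataFrame({
    "resp_id": [101, 102, 103, 104],
    "ip_address": ["1.2.3.4", "5.6.7.8", "1.2.3.4", "9.9.9.9"],
    "q1": [3, 5, 3, 4],
})

# Flag potential double entries: several submissions from the same IP.
double_entries = raw[raw.duplicated("ip_address", keep=False)]
print(double_entries[["resp_id", "ip_address"]])

# Once double entries have been resolved, drop the IPs entirely so that
# respondents are identified only by their automatic code.
anonymized = raw.drop(columns=["ip_address"])
```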

3.7 Participants

This section provides an overview of the various characteristics of the participants, against the backdrop of the general population of UNIGE students at the time of enquiry. Therefore, the results of the descriptive statistics obtained from the dataset are contrasted with the figures published by the University’s Bureau des Statistiques for the year 2013-2014 (Peila, 2014). On the one hand, this approach lends a framework to the analysis, placing the findings of the study in a broader perspective. On the other hand, it helps address questions of sample size and offers the necessary basis for gauging the representativeness of the sample. These issues are examined at the end of the section, with conclusions regarding the generalizability of the results.
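The kind of contrast described here can be sketched as follows, with placeholder figures standing in for the sample’s descriptive statistics and the published population shares; none of the numbers below are the study’s data or the Bureau des Statistiques’ actual figures (Peila, 2014).

```python
import pandas as pd

# Placeholder population shares and sample counts by faculty;
# the labels and values are illustrative assumptions only.
population_share = pd.Series(
    {"Lettres": 0.12, "Sciences": 0.18, "Droit": 0.10, "Autres": 0.60}
)
sample_counts = pd.Series(
    {"Lettres": 45, "Sciences": 50, "Droit": 20, "Autres": 135}
)

# Convert sample counts to proportions and line them up against the
# population shares to gauge over- or under-representation.
sample_share = sample_counts / sample_counts.sum()
comparison = pd.DataFrame(
    {"sample": sample_share, "population": population_share}
)
comparison["difference"] = comparison["sample"] - comparison["population"]
print(comparison.round(3))
```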