
5. Skills in a digital world

5.4 Towards a new framework and tool for assessing digital literacy skills of youth and adults (Indicator 4.4.2)⁶⁸

This section discusses the approach and methodological challenges in an ongoing desk research project that aims to advise the UIS in designing an instrument to assess digital literacy skills in the context of collecting data on Indicator 4.4.2.

SDG Target 4.4 has three associated indicators (UIS, 2018c):

• 4.4.1 Proportion of youth and adults with information and communications technology (ICT) skills, by type of skill.

• 4.4.2 Percentage of youth/adults who have achieved at least a minimum level of proficiency in digital literacy skills.

• 4.4.3 Youth/adult educational attainment rates by age group, economic activity status, levels of education and programme orientation.

The UIS is responsible for the development and validation of new methodologies for indicators under SDG Target 4.4. While Indicators 4.4.1 and 4.4.3 were already included in reporting for 2017, Indicator 4.4.2 is still under development (UIS, 2018c). Although many countries have been collecting data on the digital skills or ICT literacy of their citizens for various purposes, there is no common agreement on what constitutes a minimum or basic level of proficiency in digital literacy that would allow aggregation of national data at the global level.

68 Written by Mart Laanpere, Senior Researcher, Centre for Educational Technology, Tallinn University.

As a result, there is a serious knowledge gap about the global state of digital literacy skills of youth and adults, even though these skills play an increasingly important role in achieving SDG 4.

There have been some supra-national initiatives in this field, but those have focused on international assessments covering only a few countries (e.g. ICILS or ICDL). All of these initiatives can inform the UIS in designing a global instrument for collecting reliable and valid data on the digital literacy target, but none of them was specifically designed to inform Indicator 4.4.2.

The UIS should also monitor the development of supra-national policy indicators on digital literacy.

The EC has defined a new standard on a digital competence framework for citizens (DigComp, see Section 5.2), which has already been used for various purposes in several European countries (Carretero et al., 2017). DG Connect and Eurostat used DigComp to redesign their digital skills indicator in 2015. Their survey asks respondents about digital activities carried out within the previous three months, assuming that “persons having realised certain activities have the corresponding skills” (European Commission, 2016). The indicator distinguishes three levels of proficiency: below basic, basic and above basic. However, there is no common European instrument for performance-based assessment of citizens' digital competence based on DigComp.
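A minimal sketch of how such an activity-based indicator might be computed is shown below. The domain names, activity lists and counting thresholds are illustrative assumptions, not the official Eurostat specification; the sketch only illustrates the general idea of inferring a proficiency level from counts of reported activities per domain.

```python
# Illustrative sketch of an activity-based digital skills indicator in the
# spirit of the DG Connect/Eurostat approach described above. Domain names,
# activity lists and thresholds are assumptions for illustration only.

from typing import Dict, List

# Hypothetical mapping of self-reported activities to skill domains
DOMAINS: Dict[str, List[str]] = {
    "information": ["copied_files", "searched_online", "stored_in_cloud"],
    "communication": ["sent_email", "video_call", "social_network"],
    "problem_solving": ["installed_software", "changed_settings", "online_purchase"],
    "content_creation": ["used_word_processor", "used_spreadsheet", "wrote_code"],
}

def domain_level(reported: set, activities: List[str]) -> str:
    """Score one domain from the number of reported activities."""
    n = len(reported.intersection(activities))
    if n == 0:
        return "none"
    return "basic" if n == 1 else "above basic"

def overall_level(reported: set) -> str:
    """Aggregate domain scores into below basic / basic / above basic."""
    levels = [domain_level(reported, acts) for acts in DOMAINS.values()]
    if "none" in levels:
        return "below basic"          # no activity in at least one domain
    if all(level == "above basic" for level in levels):
        return "above basic"
    return "basic"

# Example respondent who reported these activities in the previous three months
print(overall_level({"searched_online", "sent_email", "video_call",
                     "installed_software", "used_word_processor"}))
# -> "basic"
```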

As a major milestone in the process of developing its framework for digital literacy, the UIS commissioned a report, A Global Framework of Reference on Digital Literacy Skills for Indicator 4.4.2 (UIS, 2018c). This report reviews digital literacy assessment frameworks used in 47 countries and summarises consultations with a number of experts, resulting in the suggestion to use the European DigComp framework as the foundation for the UIS Digital Literacy Global Framework (DLGF), while expanding it with five additional competences and two additional competence areas. The report raises three challenges. First, the need to map existing instruments for digital skills assessment to the DLGF, pointing out that “...there is not a one-size-fits-all assessment of digital competence that can serve all purposes and contexts”. Second, it calls for cost-effective cross-national R&D programmes to develop and validate “context-sensitive and fit-for-purpose digital literacy indicators and assessment instruments”. Third, the report points out the discrepancy between the proficiency levels and related measurement scales of the SDG indicator and those of DigComp: while Indicator 4.4.2 focuses on a single minimum level of proficiency, DigComp distinguishes eight proficiency levels.
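To make the third challenge concrete, the sketch below shows one way graded, DigComp-style proficiency levels per competence could be collapsed into the binary judgement required by Indicator 4.4.2. The cut-off level and the aggregation rule are purely illustrative assumptions; the actual definition of a “minimum level of proficiency” is precisely what still has to be agreed.

```python
# Illustrative sketch of the level-mapping problem: collapsing DigComp-style
# graded proficiency (eight levels, here simply 1-8) per competence into the
# single binary judgement required by Indicator 4.4.2. The threshold and the
# aggregation rule are assumptions for illustration only.

from typing import Dict

MIN_LEVEL = 3          # assumed cut-off on the 1-8 scale
MIN_SHARE = 0.8        # assumed share of competences that must reach it

def meets_minimum(competence_levels: Dict[str, int]) -> bool:
    """Return True if enough competences reach the assumed minimum level."""
    reached = sum(1 for level in competence_levels.values() if level >= MIN_LEVEL)
    return reached / len(competence_levels) >= MIN_SHARE

# Example: per-competence levels for one respondent (competence ids are dummies)
respondent = {"1.1": 4, "1.2": 3, "2.1": 2, "2.2": 5, "3.1": 3}
print(meets_minimum(respondent))   # -> True (4 of 5 competences at level >= 3)
```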

These three challenges raised by the authors of the report are addressed by ongoing desk research that has three objectives:

• Mapping existing digital literacy assessments to the DLGF;

• Evaluating the advantages and disadvantages of selected assessments that cover a large part of the DLGF, with emphasis on their cost-effectiveness for rollout on a population scale; and

• Recommending next steps for developing an assessment tool suitable for Indicator 4.4.2.

5.4.1 Methodological challenges in the assessment of digital literacy

Digital literacy is a relatively new concept that joins competing concepts such as ICT literacy, media literacy, information literacy and computer literacy (or competence). Ferrari (2013) was among the first authors to try to settle the relationship between these existing labels and the newcomers (digital literacy/competence), in a manner similar to the definition suggested by the authors of the 2018 UIS report: “Digital literacy is the ability to access, manage, understand, integrate, communicate, evaluate and create information safely and appropriately through digital technologies for employment, decent jobs and entrepreneurship. It includes competences that are variously referred to as computer literacy, ICT literacy, information literacy and media literacy”.

This definition builds on previous practices by incorporating vocabulary from its predecessors (e.g. the information, media and ICT literacy frameworks), resulting in a list of 26 competences grouped into seven competence areas. As experience with the EC's DigComp has demonstrated, such a competence framework can be used for various pragmatic purposes: redesigning outdated curricula and professional development programmes, developing policy indicators, professional accreditation, recruitment and (to a lesser extent) research.

As an alternative to this pragmatic approach, recent psychometric approaches to measuring digital literacy have been guided by Multidimensional Item Response Theory (MIRT), which treats Computer and Information Literacy (Fraillon et al., 2014) or Digital Information Literacy (Sparks et al., 2016) as a latent trait that cannot be directly observed in test situations and thus must be inferred indirectly through statistical analysis of test results. Like any mathematical model, MIRT rests on assumptions that need to be fulfilled in order to make valid inferences on the basis of test results.

For instance, the monotonicity assumption requires that the probability of answering an item correctly does not decrease as the level of the underlying trait increases (Chenery and Srinivasan, eds., 1988). The assumption of local independence means that performance on one item in a test does not influence performance on other items. While such assumptions are relatively easy to satisfy in knowledge-based multiple-choice tests, they can be quite difficult to meet in authentic, performance-based assessments.
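The latent-trait idea can be illustrated with a deliberately simplified, unidimensional one-parameter (Rasch) model rather than a full MIRT model. In the sketch below, item difficulties are assumed known and the respondent's ability is estimated from the observed response pattern; this is a sketch of the general approach, not the scaling procedure used in ICILS or any other specific assessment.

```python
# Minimal sketch of the latent-trait idea behind (M)IRT, using a
# one-parameter (Rasch) model instead of a full multidimensional model.
# Item difficulties are assumed known; the respondent's ability (theta)
# is inferred from the observed right/wrong pattern by maximum likelihood.

import math

def p_correct(theta: float, difficulty: float) -> float:
    """Rasch model: probability of a correct response; monotonically
    increasing in theta, which is exactly the monotonicity assumption."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_theta(responses, difficulties, iterations=25):
    """Newton-Raphson maximum-likelihood estimate of the latent ability."""
    theta = 0.0
    for _ in range(iterations):
        probs = [p_correct(theta, b) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        curvature = -sum(p * (1 - p) for p in probs)
        theta -= gradient / curvature
    return theta

# Example: five items of increasing difficulty; the respondent solves the
# three easiest ones (1 = correct, 0 = incorrect).
difficulties = [-1.5, -0.5, 0.0, 0.8, 1.6]
responses = [1, 1, 1, 0, 0]
print(round(estimate_theta(responses, difficulties), 2))
```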

The two approaches to digital literacy assessment described above illustrate the tension between internal and external validity in the context of educational assessments. Validity in general is understood as the degree to which test results can be interpreted and used according to the stated purposes of the assessment (AERA, 2014). Internal validity refers to the methodological correctness and coherence of a research instrument, while external validity can be interpreted as its re-usability, i.e. its relevance or usefulness for a wider audience.

The pragmatic approach to defining and measuring digital literacy tends to result in lower internal validity but higher external validity of the assessment instrument, as it is better understood and accepted by various stakeholders (most of whom may not have a background in mathematical statistics or psychometrics). The psychometric approach, on the other hand, guarantees higher internal validity, often at the expense of reduced external validity.

The UIS report (2018) recommends using a pathway mapping methodology for operationalising the DLGF, focusing on users' perceptions of digital literacy in various contexts and thus on the external validity of the assessment. Eventually, a digital literacy assessment based on the DLGF will have to address the challenge of balancing internal and external validity, both through methodological considerations and through the design of the assessment instrument.

5.4.2 Existing instruments for assessing digital literacy

Carretero et al. (2017) have reviewed 22 existing instruments that are used to assess digital competence in line with the DigComp framework in various European countries. They grouped these instruments into three major categories based on the data collection approach:

• Performance assessment, where individuals are monitored by a human observer or by software while engaged in solving authentic, real-life problems using common software tools (e.g. a browser, word processor or spreadsheet).

• Knowledge-based assessment, where individuals respond to carefully designed test items that measure both declarative and procedural knowledge.

• Self-assessment, where individuals are asked to evaluate their own knowledge and skills by means of questionnaires that might range from structured scales to free-form reflection.

These approaches can be strengthened by secondary data-gathering and analysis (e.g. by providing an e-portfolio that contains creative works, certificates and other documentary evidence). Performance assessment and analysis of secondary data are unlikely to be cost-effective approaches for a global assessment of digital literacy in the context of the SDGs. Self-assessment would be the easiest and most cost-effective to implement, but would likely suffer from low reliability and validity. However, it should be possible to combine self-assessment with knowledge-based or performance assessment. For instance, Põldoja et al. (2014) designed and validated an instrument called DigiMina that combined self-assessment of teachers' digital competence with peer-assessment, knowledge-based tests and an e-portfolio containing teachers' reflections and creative work. Within the DigCompEdu project, the JRC tried to balance internal and external validity in assessing a school's digital capability through the design of the SELFIE tool: schools can expand the scientifically validated core instrument with additional items from a pre-designed, publicly available pool, or even design their own additional items that seem relevant to them (Joint Research Centre, European Commission, 2018). The future instrument designed by the UIS for digital literacy assessment might also benefit from a similar balancing of the need for global standardisation (contributing to internal validity) against local context (contributing to external validity).
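The sketch below illustrates this balancing idea with a toy data structure in which a standardised core item pool is combined with optional, locally added items. The item format and the weighting rule are assumptions for illustration and do not describe SELFIE, DigiMina or any planned UIS instrument.

```python
# Illustrative sketch of combining a standardised, globally comparable core
# (contributing to internal validity) with an optional, locally chosen item
# pool (contributing to external validity). Data structures and weights are
# assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    competence: str        # DLGF competence the item is mapped to (dummy ids)
    score: float           # normalised score in [0, 1]
    core: bool = True      # True = standardised core, False = local extension

@dataclass
class Assessment:
    items: List[Item] = field(default_factory=list)

    def composite(self, core_weight: float = 0.8) -> float:
        """Weighted mean favouring the standardised core items."""
        core = [i.score for i in self.items if i.core]
        local = [i.score for i in self.items if not i.core]
        core_mean = sum(core) / len(core) if core else 0.0
        local_mean = sum(local) / len(local) if local else core_mean
        return core_weight * core_mean + (1 - core_weight) * local_mean

# Example: three core items plus one locally added item
a = Assessment([Item("1.1", 0.9), Item("2.2", 0.6), Item("4.1", 0.8),
                Item("6.1", 0.4, core=False)])
print(round(a.composite(), 2))   # -> 0.69 with the assumed 0.8 core weight
```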

The ongoing study uses the three categories of instruments for digital literacy assessment described by Carretero et al. (2017) to identify existing practices and evaluate their applicability in the context of data collection for Indicator 4.4.2. The applicability analysis focuses mainly on the cost-effectiveness of a given instrument, but also considers its reliability and validity, following the discussion above.

The existing digital literacy assessment practices and instruments will be sought from three types of sources:

• Scientific research publications;

• Policy documents in the education and employment domains; and

• Professional certification frameworks and related technical documents.

The current study will map the existing assessments to the DLGF and address the methodological challenges described in this chapter, resulting in recommendations to the UIS regarding the next steps in developing a new instrument for assessing Indicator 4.4.2 that is cost-effective, reliable and valid (both internally and externally).
