VI. QUALITY IN THE SOLO CASE: INTER-ORGANIZATIONAL

6.3. Micro-level foundations [of quality]: towards SOLO quality practice

6.3.3. Developing a shared understanding of quality

To get the programme ‘off the ground’ and to run it successfully, SOLO staff had to agree upon some common principles and baseline standards. Initially, it was important to develop a shared core idea of the programme and to understand the various profiles of the PIs and their contribution to the JP. In addition, a common understanding of the role of research in teaching, of organizational aspects, and of administrative support, e.g. the establishment of a coordinator’s position and his/her role at the PIs, was found to be important. Some common consortium standards, rules, procedures, and general guidelines emerged step by step through interaction, discussions, collaboration, staff mobility, the granting of awards, and the understanding of differences.

A standard of transparency was established by carrying out evaluation activities covering teaching and organisational aspects (see 6.1.4), discussing their results at consortium meetings, benchmarking various practices, and contributing to external reporting. Scheduled bi-annual SOLO meetings are always held during the Winter and Summer Schools, which respondents identified as the main venue for direct interaction, discussion, and exchange of ideas. Academic staff members observe that the Winter and Summer Schools are “the elements holding together or creating some kind of Erasmus Mundus spirit among students and teaching staff” (e.g., Academic, University C, interview, 27 January, 2016). Meetings of all PIs are used to build co-understanding. At times, power play comes into action when “talking about what is done in one place and then trying to convince the others to use those instruments” (Academic, University B, interview, 21 October, 2015). For instance, baseline standards for student workload, common quality denominators of a research project, and certain methodological and practical aspects related to the thesis were discussed. Based on a common practice of scholarly work, staff shared their insights about what makes a good thesis and introduced a best thesis award. Theses are nominated by each PI and read by academic staff members from each institution. However, an academic from University E shared that she assessed the theses from her point of view, whereas colleagues from other PIs:

were reading this from another point of view so it was very difficult to agree. It was a matter of academic practice, a number of people read the dissertation, then we voted and we ranked them and the one that got more votes won the prize.

(interview, 25 January, 2017)

While this practice might show where academic staff position themselves and what they regard as academic excellence, it also illustrates that academic staff chose not to formulate explicit common criteria for assessing thesis work.

The benchmarking practice among PIs, as one academic staff member put it, functions “more as a convention.., not as written out set of rule” (University E, interview, 24 January, 2017). Nonetheless, the findings indicate that benchmarking enables reflexivity about current organizational practices and aids transparency and change towards “more uniformity”:

we needed to enlighten one another on what that entails but also to, perhaps, adjust our practices so they conform better with one another and just describing what we did in a way which would enable to present it in a table in parallel with what they did elsewhere. It kind of prompted you to rethink what is that we do, what is that we offer. Should we, perhaps, do it slightly differently to ensure more uniformity. I think that has, sort of, notched everybody a bit and facilitated quite a bit of change. (Academic, University A, 2 June, 2016)

External evaluations of the programme, as well as accountability to the EU structures of the EM, are seen by SOLO staff as occasions not only to describe and share how things work at each institution, but also to deliver jointly developed reports. The consortium head explains how, in his view, common understanding emerges:

I guess, on the one hand... we had three or four times evaluations, so you have to prepare jointly to answer questions from outsiders and I guess that is the first way to create this atmosphere of the common understanding what the quality is.

Second, we are confronted by students who travel, so their report to us that last semester it was different in [University A], and therefore we insist here on the same changes, so I guess we are invited both from above and from below, so it's to permanently reflect on what is possible to improve (interview, 27 October, 2015).

External monitoring and evaluation, conducted via regular reporting to the EACEA, while benefiting organizational actors in the development of shared understanding, also show elements of symbolic legitimacy through which JP staff maintain particular aspects of EM quality. An academic coordinator from University A also notes that “what comes out in such contracts [EM] of the EU is what they want to hear” (interview, 7 June, 2016). Organizational actors speak of the perceived importance of the language used when talking and reporting about the existence and practice of QA. For example, central offices, knowing the discourse, check the terminology and language of JP reports prepared for EM structures and external evaluators. The increased professionalism and the phenomenon of ‘quality workers’ are among the unintended outcomes that the practice of quality brings. This has been observed in other studies on quality in HE (cf. Stensaker, 2008) and is becoming an accepted way of dealing with quality demands.

Shared meanings and understandings about quality are the groundwork for the normative and cultural-cognitive elements of JP quality practice. JP quality practices are based upon, created, and modified through the behaviour of actors (here, the everyday work of SOLO staff). In the course of collaboration, SOLO staff have developed a shared understanding that a quality programme needs to be based on a core concept, baseline processes, transparency, and respect for diversity of perspectives and practices. One notable variation from JP quality practice found in SOLO’s approach is how differences in academic cultures across PIs are treated. Rather than focusing on commonalities, SOLO chooses to respect, accept, and in most instances celebrate diversity (of curriculum and academic cultures). Another variation relates to the spread of a culture of quality in which evaluation activities and the formalization of quality processes are at the core. The SOLO staff note that there was a trend not only of introducing more QA instruments, especially student evaluation activities, but also of formalizing, systematizing and describing all QA processes in a so-called ‘Compendium’. The consortium staff claim to have chosen a more ‘organic’ approach to the observance of quality, focusing on what needs to be done rather than describing how QA works in quality manuals. These specificities and variations from the emerging JP quality discourse and practice show how the practical-evaluative agency of individual actors is exploited to create a particular approach to the JP and its quality. They are best observable from the everyday work perspective and from staff interpretations of quality practices. The following section describes in greater detail what quality means to SOLO staff, how it played out in their daily work, and what contributes to quality enhancement.