
Chapter 5: The influence of evaluation on regional policies

4. Evaluation, a complex action system: analysis of the repercussions of

This part is devoted to exploring the complex nature of evaluation. Evaluation is indeed much more than a complicated intervention: it constitutes, in itself, a complex action system.

Evaluation, a complicated intervention

Evaluation is a complicated intervention because it is broken down into a series of activities carried out at different points in the process by the main actors in the evaluation.

The framing of the commission, which is the responsibility of the commissioning body, requires the issues and objectives of the evaluation to be brought to light; it is at this point that discrepancies emerge between the actors' representations of the policy targeted by the evaluation.

The time devoted to data collection and analysis is the responsibility of the evaluator; the adoption of a shared frame of reference and the evaluator's ability to adjust the work and reference frameworks to the needs of the commissioning body are the crucial issues at this stage.

Finally, the reporting of results, whether or not accompanied by a process of dissemination and debate, is the final stage, under the responsibility of the commissioning body.

The boundaries between each of these phases, marked by alternating responsibilities, are porous.

Evaluation, a complex action system

However, evaluation is not limited to a series of tasks and pieces of work. It brings into contact and interaction a diversity of actors carrying varied interests, representations, values, logics and expectations, a diversity that can generate disagreements or even conflicts. Moreover, because it takes place over time, evaluation is subject to the vagaries of the environment in which it unfolds. The occurrence of unforeseen social, political or economic events, and changes in the trajectory or commitment of the actors in charge, are factors that weaken the progress of the evaluation and make its future unpredictable.

To understand the phenomenon of evaluation use, it is necessary to apprehend evaluation as a system, from a dual perspective:

- holistic, that is, as an indivisible whole encompassing the components of the evaluation, the actors involved, the environment in which it takes place and the effects it generates;

- dynamic, by analysing the interactions between the evaluation process, the actors and the context, while taking into account the constraints linked to paradoxical situations, divergences, uncertainty and the time scale.

We applied the theoretical model we constructed to analyse five PRSP evaluations, taking this dual perspective into account.

Contributions and limits of evaluation in reviewing health policies: in-depth analysis in five French regions

Françoise Jabot¹,²*, Jean Turgeon³, François Alla¹,⁴

¹ EA 4360 Apemac, Faculté de médecine, Université de Lorraine, France
² EHESP Rennes, Sorbonne Paris Cité, France
³ Ecole nationale d'administration publique, Canada
⁴ Inserm, CIC-EC, Centre hospitalier universitaire, France

*Corresponding author: Francoise.Jabot@ehesp.fr, Département Sciences humaines, sociales et des comportements de santé, Ecole des hautes études en santé publique, Avenue du Pr Léon Bernard, 35043 Rennes, France

Abstract

This article is the second in a series of two papers. The first set out to apprehend evaluation as a complex intervention and presented an appropriate conceptual framework using, as explanatory concepts, the context, the characteristics of individuals likely to make or cause use, and the evaluation itself, in order to analyse evaluation use in the decision-making process. This article presents an application of this framework to the evaluation of regional health policies in France. The research strategy combines qualitative research based on five case studies with a multicriteria analysis. Owing to a major reform of the health system, contextual factors are the most influential on evaluation use. Notwithstanding this, the motivation of high-ranking decision-makers, the major role of users and the emergence of a leader are critical points. The evaluation itself is less influential. Despite some limits, the multicriteria approach provides a complementary interpretation and enriches the qualitative analysis.

Keywords: context, evaluation process, decision-makers, evaluation use, health policy

The abundant literature on evaluation use bears witness to the interest of researchers in understanding the paradox of evaluation: if use is the raison d’être of evaluation, can an evaluation that is not used be described as such? (Hojlund, 2014). It is difficult to predict, much less guarantee, the outcome of an evaluation. Indeed, the use of evaluation is subject to many different factors (Cousins and Leithwood, 1986; Johnson et al., 2009), and the long, entangled paths taken (Mark and Henry, 2004) do not always successfully lead to actual use.

This article is the second in a series of two papers. The first explored the issue of evaluation use and then set out to apprehend evaluation itself as a complex intervention (Shiell et al., 2008). It also presented an appropriate conceptual framework for this perspective in order to analyse evaluation use in the decision-making process. This article presents an application of this framework to the evaluation of regional health policies in France.

The regionalisation of health policies in France

Since the 1990s, a process of health system reform has been under way to develop health policies in local territories (Castaing, 2012). The administrative region has gradually become established as the locus for developing and conducting health policy. There have been a number of reforms during the course of this shift. Since 1996³⁰, French regions³¹ have been allowed to identify their own public health priorities and have had a framework for regulating the healthcare offer. The decentralisation laws transferred a large number of powers and, in doing so, opened up the way for local government to take responsibility for health. The gradually increasing involvement of local government has altered the relationships between the State (still the ‘lead partner’ in health matters) and its institutional partners, establishing more cross-cutting governance of public action (Jabot and Loncle-Moriceau, 2010). A further step was taken with the adoption of a Public Health Policy Act in 2004. Faced with fragmented institutional powers, multiple actors and a plethora of interventions (Chambaud, 2008), the law gave the regional level a new instrument – the regional public health plan (plan régional de santé publique, PRSP) – aimed at developing a more coherent public health policy. This plan brought together all health prevention and promotion interventions.

30 Order no 96-345 dated April 24, 1996 on the medical control of healthcare expenditure

31 France is currently divided into twenty-seven administrative regions (with between 300,000 and 12,000,000 inhabitants)


The regional public health group (groupement régional de santé publique, GRSP) was a body bringing together the institutions engaged in public health under the joint leadership of the State and state health insurance representatives, with the remit of implementing and monitoring these plans. Five years later, a new law continued this drawing together with the establishment of the regional health agencies (agences régionales de santé, ARS), which were to bring the fields of prevention, treatment and medico-social care together under a single authority in each region. This new law enshrined a ‘long-term administrative strategy’ (Pierru, 2011) which had been widely accepted since the beginning of the decade.

The PRSPs were intended to run for five years, ending in 2008 or 2009. The act establishing them also provided for them to be evaluated before coming to an end. To assist regional stakeholders with these evaluations, France’s health minister, aided by regional representatives, drafted a guide and organised a national debate on the scope of these evaluations with a view to the renewal of the PRSPs. All the regions in France proceeded with this exercise. Those that began late did so with the imminent reform looming: the GRSPs were to be discontinued and the PRSPs replaced by regional health projects (projet régional de santé, PRS), new instruments for the overall health policy of a region. By October 2009, the ‘forerunners’ (the future heads of ARSs) had been appointed in the regions to organise the merger of staff originating from a range of statuses and cultures (Pierru, 2011), with the creation of the ARSs scheduled for April 1, 2010. The upshot of all this was that the reorganisation of the health system raised the question of the raison d’être and outcomes of these evaluations.

This study relates to the analysis of five regional PRSP evaluations in France with a view to appraising the use made of those evaluations in the review of regional health policy, the factors that influenced that use, and the interactions between these factors. The notion of ‘use’ is here understood in the broad sense of the term, to include instrumental dimensions (concrete decisions), conceptual dimensions (progress in knowledge and representations, changes in practice) and symbolic and legitimising dimensions (justification of policy actions) (Weiss, 1998).

After briefly listing the components of the conceptual framework, we shall describe the method used to test the hypotheses, the forms of use identified, the factors that influenced them, and the dynamics between these factors, before turning to a discussion of the contributions and limits of this framework.

Method

The conceptual framework for analysis of the decision-making process

The suggested conceptual framework apprehends evaluation as a whole, and from a dynamic perspective: as an event that interacts with the components of the environment it is a part of (Jabot et al). Based on the literature, three categories of determinants of use have been selected to construct the model (figure 1): the context, which facilitates or impedes change; the characteristics of the individuals likely to make or cause use (users); and the evaluation, insofar as it produces knowledge that is useful, credible, and can be appropriated. These three categories are broken down into a number of variables. The context-based variables relate to the intervention, the institutional and organisational environment of the evaluation, and the contingent factors interfering with the decision. The user-related variables concern users’ personal characteristics and their ability to influence and interact with other stakeholders. The variables relating to the evaluation activity itself concern the mandate, the evaluation process and the data produced. The evaluation, dealt with as a system, constantly involves many different interactions between the three groups of determinants. The socio-political and organisational context within the institution shapes the attributes of the evaluation, and models the representations, interests and expectations of users – and by extension, the practices of evaluators. In return, the evaluation alters the context and characteristics of decision-makers, who may in turn influence changes to the system. We have assumed that interactions may be envisaged between the three categories of determinants and that use or non-use of the evaluation may also have a retroactive effect on the determinants.

Figure 1. Conceptual framework

The research strategy combines qualitative research, based on multiple case studies with several levels of analysis (Yin, 2009), with a multicriteria analysis.

Qualitative research based on multiple case studies

The case study method is a research strategy that is appropriate for the purposes of exploring a phenomenon in its context and analysing the interactions of this phenomenon with a number of elements relevant to the research (Yin, 2009).

The cases here are the five regional evaluations. They were selected according to two criteria: the diversity of the regions (their size and their geographic, administrative and socio-economic characteristics), and the capacity for in-depth documentation of each case. For each of them, the authors were involved in the evaluation work, with different roles and at different times (before and after the ARSs were set up). These roles were: evaluator (one region), member of the monitoring committee (one region), support worker for the preparation of the regional health project evaluation (three regions). This special ‘observer-participant’ status (Malinowski, 1989) provided detailed knowledge of the context and facilitated understanding of the key issues.

Individual case study. Each evaluation constituted a unique case, the progress of which was studied along with the evaluation outcome, the factors influencing use, and their interactions, as well as the mechanisms underlying the decisions and the transformations observed.

A number of sources of information were used to document each case: collection of documents relating to the evaluation of the PRSPs and regional health project planning, and individual interviews with potential users (managing directors of GRSPs and ARSs, heads of departments, PRSP and/or evaluation managers, and representatives of consultative forums). A total of fifty individual interviews were conducted (between 7 and 14 for each case). An interview guide was produced in order to explore the variables in our model. The interviews were recorded, transcribed, categorised and entered into a table based on the questions used: post-evaluation changes, the reasons given, and perceptions of the evaluation (contributions, how the evaluation was perceived within the institution, and predictive factors as to the success of a given evaluation). Information collected at work meetings in three regions and during two training sessions on evaluation with all the individuals in charge of the PRS evaluation comprised an additional source. An analysis grid was drawn up to apprehend the changes caused by the evaluation, explore the hypotheses for each category of determinants, and identify the mechanisms operating at each of the levels.

Evaluation use was assessed through: a) in the short term, the decisions that altered the implementation of the PRSP (review of strategies, procedures, relations with project bearers); b) in the medium term (following the establishment of the ARSs), the drawing up and governance of the PRS; c) in the longer term (4 to 5 years after the evaluation), the extent of progress in the knowledge, representations, opinions and attitudes of users, transforming their practice.

The data was analysed using the model. A monograph was written for each region, offering an interpretation of the conditions influencing the evaluation use.

Comparative case study. A cross-cutting analysis of the five cases was conducted to compare the changes introduced by the evaluation and the influence of the three chosen categories of determinants. This comparison made it possible both to highlight the factors influencing use, on the basis of the accumulated data and the recurring elements, and to select the most discriminating variables.

Multicriteria analysis. To deepen understanding of the interactions between the variables, a multicriteria analysis was adopted. Our use of ‘Multicriteria Decision Analysis’ (MCDA) methods involved a process of triangulation to retrospectively analyse the role of the variables in evaluation use in the different regions, as well as the relationships between the variables themselves. The results of this analysis were compared with the results of the case study analysis.


MCDAs are regularly used for decision support (Belton and Stewart, 2002) and to analyse complex situations involving conflicting information (Roué-LeGall et al., 2005). They also make it possible to consider a set of criteria of different natures – both quantitative and qualitative – within a single tool. There are several types of decision problem: choice (selecting the best option); ranking (ordering actions from the best to the least good); and classification (assigning potential actions to predefined categories). Our approach falls into the last type, since it involves differentiating evaluation situations on the basis of the potential for the evaluation to be used. We used the Visual PROMETHEE software, which is based on the PROMETHEE and GAIA methods developed by Mareschal et al. (1986). The PROMETHEE approach (Preference Ranking Organisation METHod for Enrichment Evaluations) proceeds by comparing pairs of actions: this results in a ranking of ‘actions’ according to the level of stakeholder preference, based on predetermined ‘criteria’. The ‘preference’ function expresses the difference in performance between two actions on a given criterion and may be defined for each criterion. Each criterion may be weighted. The levels of preference are combined according to the weights of the different criteria to form a multicriteria preference ranking. In addition, the GAIA analysis (Geometrical Analysis for Interactive Aid) makes it possible to visualise the main characteristics of the decision-making problem by showing conflicts and synergies between criteria.
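Although the article does not spell out the formulas, the PROMETHEE aggregation it relies on is conventionally written as follows, with $P_j(a,b)$ the preference of action $a$ over action $b$ on criterion $j$, $w_j$ the weight of criterion $j$ and $n$ the number of actions:

$$\pi(a,b)=\frac{\sum_j w_j\,P_j(a,b)}{\sum_j w_j},\qquad \phi(a)=\frac{1}{n-1}\sum_{b \neq a}\bigl[\pi(a,b)-\pi(b,a)\bigr].$$

Actions are then ranked by decreasing net flow $\phi(a)$.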

In our study, the ‘actions’ are the ‘cases’ and the ‘criteria’ are the ‘variables’ that explain the evaluation use. The study did not seek to predict use but to observe, as a whole, how the variables explain the result observed in the regions, which used (or did not use) the evaluation work.

The method involved, firstly, filling in the table available in Visual PROMETHEE using empirical data. Each case was assessed against a list of twenty variables. Each of them was scored in one of two ways: either on a five-point scale (very poor, poor, average, good, very good) or on a binary basis (yes/no). For the ‘preference’ aspect, each variable was qualified according to its ability to influence use. For example, we worked on the assumption that the more cohesion there was between the institutional partners engaged in regional policy (C1), the more likely it was that the evaluation would be used. Table 1 summarises the variables chosen, the scoring descriptors, and the scoring system used. For the qualitative data, the usual preference function was chosen; this means that there is no indifference threshold and that the highest score is the best. Initially, the same weight was assigned to all the variables.
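To make the scoring and ranking mechanics concrete, the sketch below illustrates in Python how such a case-by-variable table could be encoded and ranked with a basic PROMETHEE II calculation under the assumptions stated above (the ‘usual’ preference function and equal weights). It is a simplified illustration only, not the Visual PROMETHEE software, and the scores shown are invented placeholders rather than the study’s data.

```python
from itertools import permutations

def usual_preference(diff: float) -> float:
    # 'Usual' preference function: full preference for any strictly positive difference.
    return 1.0 if diff > 0 else 0.0

def promethee_net_flows(scores, weights):
    # scores: {case: [score per variable]}; weights: one weight per variable.
    # Returns the PROMETHEE II net flow (phi) of each case.
    actions = list(scores)
    n = len(actions)
    total_weight = sum(weights)
    phi = {a: 0.0 for a in actions}
    for a, b in permutations(actions, 2):
        # Aggregated preference of a over b across all variables.
        pi_ab = sum(w * usual_preference(sa - sb)
                    for w, sa, sb in zip(weights, scores[a], scores[b])) / total_weight
        phi[a] += pi_ab / (n - 1)   # contributes to the positive (leaving) flow of a
        phi[b] -= pi_ab / (n - 1)   # contributes to the negative (entering) flow of b
    return phi

# Hypothetical scores for three variables only (e.g. two 5-point variables coded 0-4
# and one yes/no variable coded 0/1); not the study's data.
cases = {
    "c1": [2, 1, 1],
    "c2": [4, 3, 0],
    "c3": [3, 4, 1],
}
flows = promethee_net_flows(cases, weights=[1, 1, 1])
ranking = sorted(flows, key=flows.get, reverse=True)
print(ranking, flows)  # cases from most to least 'preferred' (most inclined to use the evaluation)
```

Coding five-point ratings as 0–4 and yes/no as 0/1 keeps ‘higher is better’ throughout, which is what the usual preference function with no indifference threshold assumes.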


Table 1. Matrix variables

PROMETHEE-GAIA delivers both an overall analysis and a relative analysis for each case. The PROMETHEE approach generates a ranking of cases from the most preferred to the least preferred, the most preferred being the case deemed potentially the most inclined to use the evaluation. It also supplies the profile of each case, showing its performance on each variable. The calculation of a stability interval indicates the limits within which the weight of a variable may be altered without any impact on the overall ranking. The GAIA approach makes it possible to identify the variables that contribute most to preference, the positioning of the cases with respect to these variables, and similarities between cases.
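As an aside on how such a plane is obtained (a standard reading of GAIA, not a description of the Visual PROMETHEE implementation, and using the same invented placeholder scores as above): GAIA is essentially a principal-component projection of the per-criterion (‘unicriterion’) net flows, in which cases appear as points and variables as direction vectors. A minimal numpy sketch:

```python
import numpy as np

def unicriterion_net_flows(scores: np.ndarray) -> np.ndarray:
    # scores: matrix of shape (n_cases, n_variables).
    # Returns, per case and variable, the single-criterion net flow used by GAIA.
    n, m = scores.shape
    flows = np.zeros_like(scores, dtype=float)
    for j in range(m):
        col = scores[:, j]
        # 'Usual' preference: 1 when a case is strictly better than another on this variable.
        pref = (col[:, None] > col[None, :]).astype(float)
        flows[:, j] = (pref.sum(axis=1) - pref.sum(axis=0)) / (n - 1)
    return flows

scores = np.array([[2, 1, 1], [4, 3, 0], [3, 4, 1]], dtype=float)  # placeholder data
flows = unicriterion_net_flows(scores)
centered = flows - flows.mean(axis=0)
# The GAIA plane corresponds to the first two principal axes of the centered flow matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
case_points = centered @ vt[:2].T   # cases plotted as points
variable_axes = vt[:2].T            # variables plotted as direction vectors
print(case_points)
print(variable_axes)
```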

Results

After a brief presentation of the five evaluations, the results of the case study analysis and MCDA analysis are set out below.

Summary presentation of the five evaluation cases

Case 1 (c1). The evaluation was launched to meet the regulatory requirement. In this small region, evaluation is not a routine activity. Resources were nonetheless dedicated to it: training the person in charge, official appointment of the evaluation committee members, and

Variables | Descriptors | Score
C1 Cohesion | Importance ascribed to the PRSP and partner consensus | 5 pts
C2 Propensity to evaluate | Culture and dynamic of evaluation | 5 pts
C3 External pressure | Influence of the consultative body (CRS) | 3 pts
C4 Continuity of tier 1 decision-makers (directors) | None in ARS, 1 out of 2, both present | 3 pts
C5 Continuity of tier 1 decision-makers (managers) | Dispersal, partial regrouping, most, head of department | 5 pts
C6 Presence of a leader | Individual serving as interface, displaying strong leadership | Y/N
D1 Motivation of PRSP for evaluation | Interest and engagement | 5 pts
D2 Favourable view of evaluation | Representations of usefulness, positive experiences | 5 pts
D3 Agreement with findings | Congruence with representations | Y/N
M1 Motivation for PRSP evaluation | Interest and engagement, involvement | 5 pts
M2 Favourable view of evaluation | Representations of usefulness, positive experiences | 5 pts
M3 Agreement with findings | Congruence with representations | Y/N
M4 Collective competency | Presence of trained individuals and/or individuals with expertise | 5 pts
E1 Envisaged goals | Explicit goals, use envisaged | Y/N
E2 Status of the evaluation committee chair | Internal or external with respect to the mandating institution | Y/N
E3 Working framework | Participatory, discussion at all stages, debate about findings | 5 pts
E4 Consensus on evaluative questions | Development with stakeholders | 5 pts
E5 Credibility of the evaluation | Credibility of the findings and of the evaluator | Y/N
E6 Timeliness of the evaluation | Prior to or after 2008 | Y/N