
Choosing the appropriate evaluator is an important concern, closely related to the designated purpose of any evaluation. A team of people from relevant sectors who have an interest in planning and managing the progress of the evaluation is frequently designated to determine the subject, scope and time frame of the evaluation. Evaluators should possess the skills and knowledge necessary to conduct an evaluation; one of the purposes of this publication is to help provide those skills.

Internal and/or external

It is important to consider the position of the evaluators and their relationship to the programme being evaluated (12). Internal evaluators are affiliated with the organization being evaluated and external evaluators are not, and each has its strengths and weaknesses.

Internal evaluations are useful because the analysis is likely to contribute directly to the information needs of an organization. Moreover, data collection is more straightforward and less expensive. The credibility of internal evaluations, however, is often challenged because the results may have only limited use outside the organization, the potential subjects for evaluation are narrow and the evaluators are perceived as being biased.

External evaluations are more effective when addressing broad governmental issues and programmes that cut across different sectors of government. Results may be more widely used because external evaluators have greater access to decision-making processes and are perceived as having more authority and credibility. In comparison with internal evaluators, professional evaluators (consulting firms, auditing offices, academic institutions) may be more experienced or possess a broader range of evaluation skills. Lack of familiarity with the programmes to be evaluated may, however, limit external evaluations, and managers may be less willing to accept an outsider’s view.

Often, especially in larger evaluations, both internal and external reviewers are employed, which helps to prevent conflicts that arise from the perceptions that outside evaluators do not fully appreciate a programme and that internal evaluators are not objective. The best combination of internal and external evaluators will depend on the programme or service to be evaluated.

Timing and frequency

The frequency of evaluations is an important decision. Evaluations should take place often enough to produce meaningful information, but not so often as to impede the progress of the work at hand. It is natural for smaller-scale evaluations to occur more frequently than large-scale evaluations. For example, it is not feasible to conduct large-scale evaluations on a weekly or even a monthly basis, but it is also undesirable for them to occur only at the end of a project or intervention.

The monitoring and information systems should be able to provide information on a regular basis, thus allowing full-scale evaluations to remain infrequent. Midterm evaluations may be a viable solution to the issue of frequency. The optimum time frame should be derived from a consensus among the advocates and designers of an evaluation.

It is important to remember that evaluation should reflect the continuing nature of the management process. Sometimes evaluations are seen as discrete events that occur only at the end of a project, service or intervention. Although it is recommended that projects always be evaluated at their conclusion, this is clearly not the only time for an evaluation; less formal information and monitoring systems should identify problems as they arise throughout a programme’s implementation. The essential task is to pass on correct information in time to make the necessary decisions. Thus, the evaluation manager needs to strike a balance among such competing factors as time, cost and personnel constraints in order to determine the optimal time frame and optimal format (formal or informal) for a major evaluation of the information collected.

Box 2 summarizes the key recommendations of this chapter.

Box 2. Key recommendations from Chapter 1

Recognize evaluation as a part of the ongoing cyclical management process

Identify interested and affected parties as soon as possible; develop political and community support by bringing them into the decision-making process

Keep evaluations open to new circumstances, new facts, new opinions, new ideas and new stakeholders

Develop and follow a written plan for evaluation; disseminate it to the stakeholders at the beginning of the project (see Box 1)

Prevent the perception that evaluation is a judgemental activity; stress its educational and constructive nature

Narrow the scope of the evaluation according to its intended purpose

Strive to develop a clear monitoring and information protocol that delivers important information to decision-makers but limits unimportant information

Include a mixture of internal accounting of programme operations and external reviews of them

Distinguish between outcomes that result from the services themselves and those that result from uncertainty or chance

Avoid oversimplified conclusions

Be both flexible and action-oriented in order to guide future planning and decision-making

Promote issues of equity, intersectoral collaboration and community participation


REFERENCES

1. Health programme evaluation. Guiding principles for its application in the managerial process for national health development. Geneva, World Health Organization, 1981 (“Health for All” Series, No. 6).

2. Health systems research in action. Case studies from Botswana, Colombia, Egypt, Indonesia, Malaysia, the Netherlands, Norway and the United States of America. Geneva, World Health Organization, 1988 (document WHO/SHS/HSR/88.1).

3. REINKE, W., ED. Health planning for effective management. New York, Oxford University Press, 1988.

4. HEALTH21. The health for all policy framework for the WHO European Region. Copenhagen, WHO Regional Office for Europe, 1999 (European Health for All Series, No. 6).

5. Health in Europe 1997. Report on the third evaluation of progress towards health for all in the European Region of WHO (1996–1997). Copenhagen, WHO Regional Office for Europe, 1998 (WHO Regional Publications, European Series, No. 83).

6. TIJSSEN, I. & ELSINGA, E. Evaluatie-onderzoek op het terrein van de gezondheidszorg [Evaluation research in health care]. In: Maarse, J. & Mur-Veeman, I., ed. Beleid en beheer in de gezondheidszorg [Health care policy and management]. Assen, Van Gorcum, 1990.

7. ROSSI, P. ET AL. Evaluation: a systematic approach, 6th ed. Thousand Oaks, Sage Publications, 1999.

8. UNITED NATIONS ECONOMIC COMMISSION FOR EUROPE, COMMITTEE ON ENVIRONMENTAL POLICY. Convention on access to information, public participation in decision-making and access to justice in environmental matters. Report on the Fourth Ministerial Conference “Environment for Europe”, Århus, Denmark, June 23–25, 1998. New York, United Nations, 1998 (document ECE/CEP/43).

9. Managerial process for national health development. Guiding principles. Geneva, World Health Organization, 1981 (“Health for All” Series, No. 5).

10. Terminology for the European Health Policy Conference. A glossary with equivalents in French, German and Russian. Copenhagen, WHO Regional Office for Europe, 1994.

11. Our planet, our health. Report of the WHO Commission on Health and Environment. Geneva, World Health Organization, 1992.

12. Improving evaluation practices: best practice guidelines for evaluation and background paper. Paris, Organisation for Economic Co-operation and Development, 1999 (report no. PUMA/PAC(99)1).