
ECA/PHSD/HRP/89/12/5.1(a)


United Nations

Economic Commission for Africa

Public Administration, Human Resources and Social Development Division

EDUCATIONAL TRAINING MANUAL NO. 4 (CURRICULUM DEVELOPMENT COURSE MANUAL)

EDUCATION STAFF TRAINING DEVELOPMENT PROGRAMME

PROGRAMME EVALUATION

April 1989


CONTENTS

OBJECTIVES

I. Introduction

II. Analysis and Definition of Concepts
    A. What is Programme Evaluation
    B. What is a Programme
    C. Evaluation Concepts
        (a) Characteristics of Evaluation
        (b) Types and Models of Evaluation
            (i) Types of Evaluation
                - Formative Evaluation
                - Summative Evaluation
            (ii) Models of Evaluation
                - Discrepancy Evaluation Model
                - The CIPP Evaluation Model
                - Goal Free Model

III. Nature, Scope and Focus of Programme Evaluation
    A. Programme Components
    B. Programme Evaluation Process
    C. Setting up an Evaluation System
        (1) At Project/Programme Level
        (2) At Sectoral Level

IV. Synthesized Approach to Programme Evaluation
    A. Planning the Evaluation
        (a) Needs Assessment
        (b) Evaluability Assessment
        (c) Formulating Questions and Standards
    B. Administrative/Management Agreement/Procedures
        (a) Developing Administrative Agreement and Establishing Schedules
        (b) Selecting Designs and Sampling Procedures
            (1) Evaluation Designs
            (2) Sampling
        (c) Assigning Staff and Monitoring their Activities
        (d) Budgeting
    C. Conducting Programme Evaluation
        (a) Measurement and Collecting of Information
        (b) Use of a Particular Research Design
        (c) Analysing Information/Data
        (d) Reporting the Information
    D. Evaluating the Evaluation

V. Finale - Guidelines for Programme Evaluation

Bibliography


OBJECTIVES

After lectures, discussions and assignments based on this manual, participants should be able to:

- define and analyse programme evaluation concepts,

- describe characteristics, types and models of evaluation,

- define and describe programme components,

- understand the programme evaluation process,

- plan, manage and conduct a programme evaluation by being able to:

  : conduct a needs and evaluability assessment,
  : formulate evaluation questions and establish schedules,
  : select designs and sampling procedures,
  : budget for an evaluation,
  : develop evaluation instruments and use evaluation methods and techniques,
  : collect, analyse and interpret data,
  : write an evaluation report and publicize it.

PROGRAMME EVALUATION

Focus on:

1. Analysis and definition of concepts:

   - What programme evaluation is,
   - Evaluation concepts and characteristics of evaluation,
   - Types and models of evaluation;

2. Nature, scope and focus of programme evaluation:

   - Programme components,
   - Programme evaluation process,
   - Setting up an evaluation system;

3. Synthesized approach to programme evaluation:

   - Planning an evaluation,
   - Managing programme evaluation,
   - Conducting programme evaluation;

4. Evaluating the evaluation;

5. Guidelines for programme evaluation.


PROGRAMME EVALUATION

I. Introduction

This paper is part of a series of education training manuals for use in the curriculum development course as well as educational planning courses in African countries. The general objectives of the Curriculum Development course are:

(a) to acquaint participants with the concepts and techniques of curriculum development and implementation;

(b) to give participants practical orientation in the processes and techniques of curriculum development; and,

(c) to stimulate professional interest in and inculcate the habit of self-development in curriculum development and evaluation.

A number of manuals have been written aimed primarily at enhancing the fulfilment of the first two of these general objectives. The present manual on Programme Evaluation is intended to assist participants fulfil some of the aspects of the third objective.

It does not therefore purport to be an academic treatise but a practical manual to assist persons directly or indirectly involved in the evaluation of programmes: education, curriculum development, non-formal education etc. However, it can also be of assistance to professionals and academics as quick reference notes when one is preoccupied with other matters.

For this reason, the manual focuses attention on:

- Clarification and definition of concepts, by making a distinction between a programme and a project, and between evaluation and programme evaluation; and by analysing programme components, key evaluation questions, and some of the problems and obstacles of evaluation;

- Programme evaluation process;

- An approach to evaluation;

- Planning and management of evaluation, including the designing of an evaluation and selecting samples;

- And finally, some guidelines which one may use in planning an evaluation.

The rest of this manual is devoted to the treatment of these issues as indicated above.


II. Analysis and Definition of Concepts

Many issues get confused for lack of clarity and clear definition of concepts. Indeed, in discussing this topic on programme evaluation, it is important to understand the difference between a programme and a project, and the concept of evaluation as generally applied and when applied specifically to a programme.

A. What is Programme Evaluation

There are a variety of interpretations when defining the term "programme evaluation" in relation to its purpose, scope, design strategy and methodology to be used. In essence, however, programme evaluation entails the use of scientific methods to measure the implementation and outcomes of programmes for decision-making purposes.

In defining programme evaluation, a number of concepts ought to be clarified further, i.e. how is this different from project evaluation or appraisal, or from social or evaluation research. It is important therefore that other concepts should be defined.

B. What is a Programme

A programme is an intervention or a set of activities designed to achieve external objectives, i.e. to meet some recognized social need or to solve some identified problem. From another point of view, a programme is the embodiment of ideas about the means of achieving desired objectives. In other words, a programme is an organized set of ideas, activities, projects, processes or services which is oriented towards the attainment of specific objectives.

For instance, a country can develop a literacy programme which aims at increasing the number of literates among its population while at the same time the programme aims at equipping nationals with knowledge and skills to enhance their productive capabilities and thus meeting some of the national needs.

This definition of a programme may be contrasted with that of a project in that a project is a planned undertaking which is a set of inter-related and co-ordinated activities designed to achieve certain specific objectives within a given budget and period of time, e.g. a World Bank three-year project on education for five million dollars.

Projects are generally part of a programme; and several programmes in turn form part of a plan (say a five/ten year education plan). However all projects and programmes are activities organized for achieving specific objectives - the difference between the two is usually one of scope, magnitude and diversity.


A project is therefore seen as a small unit of a programme, i.e. a project is an undertaking carried out under a single management which is designed to achieve specific objectives within a given period and budget. On the other hand, a programme is an organized set of activities, often with several management units or organizations, directed towards the attainment of specific, but mostly longer-term, objectives.

The purpose of all projects and programmes is to convert a set of resources into desired results (objectives) through a set of activities or processes. Resources devoted to a programme or project are inputs, and the results emanating from these activities are divided into three broad categories: outputs, effects and impact. Objectives may be short-term, intermediate or long-term. (This is treated in detail under programme components.)

C. Evaluation Concepts

Evaluation is the process for determining systematically and objectively the relevance, efficiency, effectiveness and impact of activities in the light of their objectives. In other words, it is a set of procedures to appraise a programme's merit by providing information about its goals, expectations, activities, outcomes, impact, effects and costs. At the same time, evaluation is an organizational process for improving activities still in progress and for aiding and assisting management in future planning, programming and decision-making.

From another point of view, evaluation is actually the process of analysis and control designed to determine the relevance, worth, effectiveness, significance and impact of specific activities and the degree of efficiency with which they are carried out. In this regard, it should be undertaken with reference to the immediate objectives for which the activities were designed and planned and within the wider frame of reference of the more comprehensive, longer-term objectives of the programme.

Unlike social research, which produces information and contributes to a body of knowledge, evaluation research data are useful to programme developers, managers, sponsors and future consumers for decision-making. It must, however, be emphasized that evaluation can also contribute to a store of knowledge about innovative programmes, because evaluation is an element of socio-economic research designed to improve methods in and approaches to activities relating to social and economic change and development. In this regard, evaluation is an important basis for international and national exchange of experience in development methodology.

It is important to draw a distinction between evaluation and social research. The principal objective of evaluation is to identify the relative efficiency and effectiveness of alternative approaches to development and the level of significance of specific activities in the process of change. In other words, the main concern of evaluation is the systematic effort to achieve certain expectations and change behaviour, attitudes and thought. It is concerned with providing data about a programme's merit or worth. Its clientele are often limited - say, the programme developers, sponsors and decision-makers - whereas those of social research are varied, and as such social research findings get published and receive a wider circulation.

Social research is mainly concerned with analysing and finding solutions to problems and issues or testing theories, ideas etc. As such, social research is often subjected to scientific and technical rigours which may not be the case with evaluation, because of time constraints, financial restrictions and the inaccessibility of control groups.

This distinction, however, can be very thin because the strategies and methodologies employed by both social research and evaluation are the same or similar. They vary in the degree of intensity and magnitude.

Indeed, there are variations in the definitions and specifications of evaluation. For instance, is the concept of on-going evaluation the same as formative evaluation? Is context evaluation the same as needs assessment? In view of these difficulties, an effort is made here to discuss the issue further in relation to programme evaluation by examining some of the characteristics of evaluation on which most of us have agreed, and an analysis of some of the types of evaluation applicable to a programme or project.

(a) Characteristics of Evaluation

While these characteristics are not necessarily universal, they do apply to most uses of programme evaluation viz:

(1) The primary purpose of evaluating any programme (education programme or otherwise) is to provide information for decisions about that programme. Thus an evaluation study should be planned with relevant alternative decisions in mind.

(2) Evaluation results should be useful for programme improvement decisions and not just for decisions about continuation or termination of a programme.

(3) Evaluation data and information should be provided in time to be useful for such programme decisions; otherwise such information or reports will be left on shelves unused.

(4) Evaluation is a human judgemental process applied to the results of programme examination, bearing in mind that judgemental processes and the value systems which influence them are themselves subject to systematic examination. It has to be added here that measurement is not evaluation, but it can provide useful data for evaluation.

(5) Evaluation efforts should take into account the short- and long-term objectives of the programme. It is also desirable for evaluators to be alert to any unintended effects that a programme might have i.e. the focus of goal free evaluation.

(6) It is equally important to consider the effects that a

programme was not necessarily designed to foster as well as to delineate the events (other than the programme) which might have produced any effects that are discerned during evaluation. In this regard the evaluator should not lose sight of the possible effects of the processes designed to collect data for the evaluation.

(7) A useful model of evaluation of education programmes should

be multivariate in nature because human behaviour is complex and variously determined, particularly when one is evaluating the behaviours, attitudes, feelings and interests of individuals in a programme, who often feel differently;

and

(8) The processes of obtaining data and information for evaluation should meet the appropriate criteria of objectivity, reliability, validity, practicability, utility and ethical responsibility. While data may be collected on individuals, the focus of evaluation efforts is on the programme. Therefore, the measurement and interpretation systems involved in evaluation have different requirements from systems designed for selection, placement or guidance.

Beyond these broad principles, evaluators may conceptualize their tasks and roles quite differently and still produce useful data and judgements. No matter what type of evaluation or model of evaluation is adopted, the basic evaluation processes are the same, regardless of what is being evaluated. What differs is what is being evaluated, how the evaluation process is applied, and the types of decisions made.

(b) Types and Models of Evaluation

Evaluation research has often been categorized into two types - formative and summative - bearing in mind that there are always sub-divisions and variations of the two. Since evaluation is a continuous process throughout the life or duration of a project or programme, it is not only what you do at the end that matters, but at the beginning, in the middle, at the end and after the end. Every stage of a project or programme (from planning through implementation to outputs) is subject to evaluation. In this regard, we can talk about on-going evaluation, terminal evaluation and ex-post evaluation, all within the two broad types of formative and summative evaluation.

By types of evaluation we refer to the different processes, products and persons subject to evaluation - students, curricula, schools, education systems, projects, programmes and personnel.

In all these the basic evaluation process is the same regardless of what is being evaluated. On the other hand, models of evaluation refer to the approach one adopts in collecting evaluation data depending on the type of questions one wants to be answered. In this regard, one should select a model which suits the evaluation

objectives.

In the next few pages, an attempt will be made to define some

common models and types of evaluation.

(i) Types of Evaluation

Three of these are examined: formative (on-going) evaluation, and summative evaluation, which may be subdivided into terminal and ex-post evaluation.

(1) Formative (on-going) Evaluation

Formative evaluation is essentially concerned with helping the developers of programmes or products through the use of empirical research methodology. It is an intervention or assessment of the efficiency, effectiveness, applicability and expansion capacity of the project/programme. Formative evaluation therefore provides information to the decision-makers about any needed adjustments of objectives, policies and implementation strategies, and about future planning.

In effect, it is designed to test whether the objectives, hypotheses and assumptions set during the formulation and planning stages are still valid, and whether the objectives can still be fulfilled.

Formative evaluation is intended to sharpen the focus of

objectives in the light of experiences gained while the programme

is in progress.

(2) Summative Evaluation

This can be subdivided into two:

(i) Terminal Evaluation, often undertaken say six to twelve months after the completion of the project/programme, either as a substitute for ex-post evaluation of a project/programme with a short gestation period or before initiating the follow-up phase of the project/programme.

(ii) Ex-Post Evaluation is often undertaken after full project or programme implementation and development, i.e. some years after the completion of the project/programme, when the full benefits and impact are expected to have been realized.

The purpose of summative evaluation (i.e. terminal or ex-post evaluation) is twofold:

- to assess the achievement of the overall results of the programme in terms of efficiency, outputs, effects and impact; and

- to learn lessons for future planning, i.e. the design, formulation, appraisal, implementation and monitoring of activities.

Evaluation is thus viewed as a learning process which provides information for planners and decision makers for use in future planning activities and to avoid making mistakes.

(ii) Models of Evaluation

There are many evaluation models one can come up with, such as the discrepancy evaluation model; the individually prescribed instruction (IPI) model; the Stake model; the CIPP model; and the goal-free evaluation model. This paper describes only three of these - the Discrepancy, CIPP and Goal-Free models. Time and space do not allow us to examine other models (see Annex I).

(1) Discrepancy Evaluation Model

Discrepancy evaluation refers to the search for differences between two or more elements or variables of an education programme or any other programme, that, according to logical, rational or statistical criteria, should be in agreement or correspondence.

It is therefore primarily a comparison of programme performance with expected or designed programme outputs, and secondarily, among other things, a comparison of client performance with expected client outcomes. In other words, this evaluation involves a comparison of programmes in real-life situations. It also involves a comparison between intended or planned outcomes and actual measures of student performance.

The discrepancies noted in such an evaluation serve as feedback for improving the education programme under review or for planning similar programmes in the future. Usually discrepancy evaluation


focuses attention on a wide variety of programme variables such

as:

(a) Discrepancy between programme plans or intentions and

actual programme operations.

(b) Discrepancies between predicted and obtained programme outcomes. In this case the focus stems from the question: "Do the students change in the direction and amount that they were expected to change?" Decisions about the programme will depend on the answers to this question.

(c) Discrepancies between student status and the desired standards of competency. Sometimes this discrepancy goes by the name of needs assessment and often provides the stimulus for the development of new, improved training programmes.

(d) Goal discrepancy, often applied to studies of consistencies in the goal values held by different parties to an educational or training programme, or between the educators and the public.

(e) Discrepancies between hypothetically interchangeable parts of an educational programme; for example, between two classes of the same form, A and B, as regards the emphasis of topics in their mathematics.

(f) System inconsistencies, for example, by asking whether

there are inconsistencies in the logic or organization

of the programme among programme objectives, instructional

procedures and measures used to assess student progress.

The discrepancy evaluation model involves a comparison between reality and some standard: i.e. the design of the programme against a set of design criteria; the actual programme operations against the input and process sections of the programme; the degree to which interim products are achieved against the hypothesized relationship between process and product; the achievement of terminal products against the specifications of these products in the programme design; and the cost of the programme against the cost of programmes having similar goals.
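To make the comparison concrete, the short Python sketch below compares planned programme outputs against actual outputs and reports the discrepancies. It is an illustration only; the component names and figures are assumptions invented for the example, not data from this manual.

    # Illustrative sketch of discrepancy evaluation: compare planned
    # programme outputs against actual outputs. All figures are invented.
    planned = {"teachers trained": 500, "manuals printed": 2000, "schools reached": 40}
    actual = {"teachers trained": 430, "manuals printed": 2100, "schools reached": 31}

    for component, target in planned.items():
        achieved = actual[component]
        share = 100.0 * achieved / target
        print(f"{component}: planned {target}, actual {achieved} "
              f"({share:.0f}% of plan, discrepancy {achieved - target:+d})")

Each discrepancy noted in this way can serve as the feedback for programme improvement described above.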

(2) The CIPP Evaluation Model

The CIPP model involves four types of evaluation for four major types of decisions in education, viz: Context evaluation, Input evaluation, Process evaluation and Product evaluation (CIPP).

(a) Context Evaluation: In the planning of on-going educational programmes and activities, context evaluation, which is diagnostic in nature, helps to discover any discrepancies between programme goals and objectives and the actual impact of educational programmes, and then allows for planning decisions to match the intended and actual outcomes. It thus involves the identification of needs, the statement of programme objectives, and the development and selection of criterion measures through interviews or expert opinion research and surveys.

(b) Input Evaluation

Since we are concerned with making operational the educational programme goals which have been identified and clarified by context evaluation, there is a need to assess the optimal utilization of resources in relation to the results. Input evaluation helps us reach those decisions, and some of the important issues which demand our concern are: the feasibility of accomplishing the goals and expected outcomes; the availability of strategies for accomplishing the goals and objectives; the potential costs of the various strategies employed to achieve the goals and objectives; and the optimal utilization of staff and other resources. Thus input evaluation focuses on the examination of various input strategies by evaluating their strengths and weaknesses and then selecting the best strategy for achieving the programme's goals.

(c) Process Evaluation: This is geared to monitoring the change process so as to detect defects and thereby institute corrective measures for the success of the project. It provides feedback to the managers, administrators and educators of an educational programme. The focus of such evaluation includes the assessment of interpersonal relationships, teaching logistics, and the adequacy of staff performance. It is used to make decisions during the course of the programme.

(d) Product Evaluation: This involves an assessment of programme/project outcomes as they relate to the objectives, context, inputs and processes of the programme. Actually, in assessing the extent to which the anticipated outcomes have been achieved, we are engaged in product evaluation, or one may say summative evaluation, as opposed to process evaluation, which may be similar to or the same as formative evaluation. Product evaluation helps us to decide whether to continue or terminate a programme based on the results of our findings.

If the gap between the anticipated goals/outcomes and the actual outcomes is too big, it will be necessary to adjust and make changes to the programme; whereas if the gap is small, the tendency will be to keep the programme and pursue the same objectives.
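As a rough Python sketch of this decision rule (the 20 per cent tolerance is an assumed figure chosen for the example, not a standard given in this manual):

    # Hedged sketch of the product-evaluation decision: a large gap between
    # anticipated and actual outcomes suggests adjusting the programme; a
    # small gap suggests keeping it. The tolerance is an arbitrary example.
    def product_decision(anticipated, actual, tolerance=0.20):
        gap = abs(anticipated - actual) / anticipated
        if gap > tolerance:
            return f"gap {gap:.0%}: adjust the programme's objectives or strategy"
        return f"gap {gap:.0%}: keep the programme and pursue the same objectives"

    print(product_decision(anticipated=1000, actual=700))  # big gap -> adjust
    print(product_decision(anticipated=1000, actual=950))  # small gap -> keep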

In all these evaluations, the evaluator has two responsibilities.


He must describe the antecedents, i.e. the entry behaviours of the students/participants before entering into a programme; the transactions, i.e. the educational processes which occur during the programme; and the outcomes, i.e. the abilities, interests, attitudes and achievements resulting from participation in the programme.

He must judge the appropriateness or merit of the three categories of data, i.e. antecedents, transactions and outcomes.

(3) Goal Free Model

Goal-free evaluation is an approach used as a means of ensuring that evaluators take into account the actual effects, and not just the intended effects, of education and training programmes.

Some programmes achieve their goals in an exemplary manner but are terminated because of serious side effects, while other programmes make little or no progress towards intended outcomes but continue to be implemented because of important unintended gains. In this regard, it is no use making a distinction between intended and unintended outcomes, because the final appraisal should focus on the importance and value, and not on the intentions, of a programme.

Goal-free evaluation focuses on actual effects against a profile of demonstrated needs in the area of the programme being evaluated. The evaluator should collect data bearing on a broad array of actual effects and should evaluate the importance of these effects in meeting educational needs. The demonstrated effects should help us see whether the major goals of the programme were fulfilled.

Having described three of the evaluation models, it may be necessary also to review and examine other popular models in terms of their contributions, usefulness and limitations. These are summarized in Annex I. Most of these models are known by the proponents or educators who championed their cause.

III. The Nature, Scope and Focus of Programme Evaluation

At the very beginning of this analysis, it was pointed out that a programme is the embodiment of ideas about the means of achieving the desired goals and objectives; and that the implementation of those ideas and their impact on a target population are the concerns of programme evaluation.

It remains to be emphasized in this section that programme evaluation is basically concerned with different questions depending upon the immediate and development objectives towards which the programme was designed. For instance, quantitative objectives will require questions which produce quantitative answers.


If the objectives are both social and economic, evaluation questions will seek to establish selected social and economic effects of the programme; and often these go beyond the immediate pre-occupation of the programme and beyond its quantitative effects.

As the objectives of a programme tend to become wider and more general, evaluation questions tend to develop into a kind of comprehensive social and economic research.

Some of the key evaluation questions are:

- How is the programme being carried out?

- Is the programme being implemented in the prescribed manner?

- Is the target population being reached?

- What are the outcomes and effects of the programme?

- Were these the intended outcomes and likely effects?

- How can the programme activities be delivered more effectively

to the client population? etc.

The list of questions can go on, but the above questions are indicative of the questions one may ask when embarking on an

evaluation.

Programme evaluation focuses on programme structure. It draws attention to the significant structural elements of the programme i.e. to the programme components. We will examine these components.

A. Programme Components

To undertake a programme evaluation it is necessary to examine components which are suitable units for evaluation. These components include: objectives, outputs, effects, impact, inputs, monitoring and the other control components of inspection and auditing. The last two, though not key components, are crucial in the overall management of the programme/project.

(a) Objectives are the desired results of development in a project/programme. They can be hierarchically arranged from short-term, intermediate or long-term objectives, or from lower to higher levels of goals. In this case one may talk about proximate, mediate and ultimate goals.

For instance, the short-term objective of a literacy programme will be to fight illiteracy and increase a country's literacy rate; with the literacy skills acquired, to improve the productive capability of those literates in their various activities (intermediate objective), i.e. the effects; and, in the long run (long-term objective), to have the increase in productivity contribute to higher incomes and the well-being of individuals (impact).

Usually, objectives are stated ends towards which resources and activities are directed. Objectives should be clearly stated and capable of attainment, otherwise it would be difficult to evaluate them.

(b) Inputs

Programme inputs can be of three kinds. First, there are material inputs, ranging from physical facilities and instructional facilities to tools, equipment and consumables used in the execution of the project. Second, there are financial inputs - funds to run the project, buy equipment and pay the project staff. Third, there are human resource inputs, which range from participants, students and resource persons to managers, supervisors and those executing the project. An evaluation looks at the timeliness of delivery of inputs; the quality and quantity of material inputs; the quality, qualifications, capability and quantity of human resource inputs; and the effective utilization of financial resources. All these inputs are subject to evaluation as they can greatly affect the fulfilment of objectives and outputs.

(c) Outputs

Programme outputs are often specific products expected to be produced from inputs into the programme so as to achieve the objectives of the programme. Outputs can be intermediate, serving as an input into something else leading to the production of a final product, or they can be final. Evaluation can look at the quantity and quality of output produced from a programme.

(d) Effects

Programme effects are the intended or unintended consequences of the programme components. They can be actual or observed effects of the use of the project or programme outputs. Effects usually begin to emerge during implementation of the programme, but often they take time to emerge, perhaps until the full development of the project.

An evaluation can be used to examine the effects in order to find out how pervasive they were, and to what extent the programme is responsible for their occurrence.

(e) Impact is an expression of results actually produced at

the level of broader long-term objectives. It is the ultimate change in the living conditions of beneficiaries resulting from the implementation of the programme or project.


In effect, impact is the outcome or ultimate expression of programme effects. The impact of a programme usually takes time to be felt, and as such it is usually assessed in the ex-post evaluation of the programme. It must be pointed out that the distinction between the outputs, effects and impact of a programme depends upon the scope, nature, size and specific objectives of the programme.

(f) Monitoring is a key element in the implementation of any project or programme. It is the continuous review and surveillance by programme management, at every level of the hierarchy of implementation of an activity, to ensure that input deliveries, work schedules, targeted outputs and other required actions are proceeding according to plan.

Monitoring helps to achieve efficient and effective project performance by producing feedback to the programme management at all levels. It enables the management to improve on the operational plans and take timely corrective measures in the case of constraints and shortfalls. It is therefore part of the project management information system, although it is an internal activity and needs to be conducted by those who are responsible for programme implementation at every level of the management hierarchy.

Monitoring is therefore complementary to the other control functions of programme implementation, i.e. to evaluation, auditing and inspection.

(g) Evaluation is a component of programme management which starts at the initial preparation and appraisal stages of a project. Evaluation should be a built-in component of a programme, spanning the entire life of a project from inception to ex-post evaluation, to allow for refinement of objectives, assumptions, activities and control mechanisms, and for both formative and summative evaluation to take place.

Evaluation should not be considered as separate from programme formulation, planning, design, implementation and management. It is intricately bound to all these stages of a project, and as such it is an integral part of a programme.

(h) Inspection and Audit should not be confused with monitoring and evaluation, since the former are forms of organizational review carried out for checking and control by higher levels of management, external staff or independent bodies, to investigate to what extent a process or performance conforms to established procedures and then to report the extent of conformity or irregularity. Inspection and audit are therefore management functions used for higher management and control purposes. They are components of a programme which are subject to evaluation and are a useful source of data and information.
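As a way of visualizing how these components hang together, the sketch below models a programme as a simple Python data structure. The field names and the literacy-programme entries are illustrative assumptions, not definitions from this manual.

    # Illustrative data structure linking the programme components discussed
    # above: inputs are converted through activities into outputs, effects
    # and impact. The literacy-programme values are invented examples.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Programme:
        objectives: List[str]  # short-term to long-term desired results
        inputs: List[str]      # material, financial and human resources
        outputs: List[str]     # specific products of programme activities
        effects: List[str]     # consequences of using the outputs
        impact: str            # ultimate change in beneficiaries' conditions

    literacy = Programme(
        objectives=["raise the literacy rate", "improve productive capability"],
        inputs=["instructors", "primers and consumables", "operating funds"],
        outputs=["adults completing literacy classes"],
        effects=["literacy skills applied in everyday work"],
        impact="higher incomes and improved well-being of participants",
    )
    print(literacy.impact)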


B. Programme Evaluation Process

Evaluation of programmes is a management tool conceived as an input into the decision-making process relating to the conception and implementation of social projects and programmes. It is also an element of social research which aims to increase knowledge of the techniques and constraints of social development.

A key issue of concern in evaluation is to ask questions which

help us understand the evaluation process:

- What is it we want to do and why should it be done? In this case we should be clear what we mean by evaluation and why we want to do it.

- When should it be done?

- How should it be done and in what order?

- For whom should it be done and by whom?

- How are we to report findings?

In answering all these questions, we come up with a systematic

array of activities constituting an evaluation process. This process

then involves:

(a) Defining the Primary Beneficiaries

The first step is to define the primary beneficiaries of the evaluation, i.e. the client. Usually interested parties include legislators, policy and decision-makers, donors, funding agencies, the interested public, managers and programme owners. Evaluation has to be conducted with these beneficiaries in mind.

(b) Determining the Purpose of Evaluation

A second consideration is to determine the purpose of the evaluation. Broadly, there are three purposes of evaluation:

(i) Meaningful accountability as regards the worth or value of a programme, so that continued support can be given;

(ii) Improved programme delivery as regards timeliness of delivery of activities, cost effectiveness and efficient

management;

(iii) Adding to the knowledge of the social sciences and thus laying foundations for programme innovations in future

planning.


(c) ...

(d) ...

(e) Developing instruments for collecting information.

(f) Scheduling and timing the evaluation.

(g) Conducting the evaluation by collecting and analysing information ...

C. Setting up an Evaluation System

Evaluation should allow for the refinement of objectives, assumptions and activities, and the cost of evaluation should be contained by building evaluation in during the life of the project. An evaluation system should be organized at different levels.

(1) At Project/Programme Level

At this level, there are seven important tasks to be performed:

(a) Analysing the project/programme design so as to identify the hierarchy of objectives, both explicit and implicit, with a view to establishing their relationship to national goals and the intended beneficiaries. At the same time, the analysis allows for establishing linkages between the various components and the project's critical stages of activities, inputs, processes, outputs and delivery schedules.

(b) Determining information needs and choosing indicators.

Evaluation calls for the collection and analysis of data, which should be reduced to an essential minimum, and for choosing relevant, meaningful and objective data. Indicators chosen should be used for both formative and summative evaluation, bearing in mind the following questions:

- Who needs information?

- Of what kind and how often?

- For what purpose?

To ensure that these questions are properly answered, it is important to meet clients or beneficiaries when designing an evaluation proposal.

(c) Reviewing the existing management system. This is done to make maximum use of data already available and to ensure that there is no duplication. In this regard, evaluators should always examine the contents of data and the indicators used, and the format and frequency of existing reports and for what use they were intended.

(d) Surveying secondary sources of information, and also examining how current the data are, whether the data meet the needs of the evaluators and whether available data are consistent with programme requirements. Data are often collected from national statistical offices, from relevant ministries, central planning offices and from research and educational institutions.

(e) Surveying the primary sources of data from functional ministries, agencies, planning offices/departments etc., and ensuring that staff will have access to those sources.

(f) Analysing the data on input and output flows for decision-making. Such data may include, among other things, physical facilities and infrastructure; institutional aspects, e.g. staff training, turnover and recruitment; delivery systems; volume of inputs; efficiency; results achieved; outputs and effects.


(g) Communicating findings and recommendations in written form on which decisions about the programme will be made.

(2) At the Sectoral Level

The principal tasks at the sectoral level of the department or ministry are:

(i) For Formative Evaluation

During programme implementation the purpose of evaluation would be:

- to assess the overall performance in programme implementation,

- to check whether the objectives are being achieved,

- to assess the validity of assumptions.

(ii) For Summative Evaluation

During summative evaluation at sectoral level, evaluate with regard to:

- performance according to programme design and plan;

- impact in terms of economic, social and environmental objectives;

- institutional development as regards programme management, delivery of services, procedures etc.

The purpose of these tasks would be to keep track of the overall progress in the implementation of the programme; to assess results in terms of outputs and impact; and to learn lessons for future planning so as to ensure better implementation, formulation, monitoring and evaluation of future projects.

It needs to be emphasized here that formative evaluation at the project level is concerned with in-depth analysis of persistent constraints, say in the delivery systems, while at the sectoral level the issues are mainly of a policy nature and are generally qualitative, e.g. how effectively the delivery systems cover the target groups.

IV. Synthesized Approach to Programme Evaluation

This section attempts to synthesize what has been discussed in the preceding pages into some kind of scheme of programme evaluation. The activities described are not necessarily sequential, because there are instances where some can be undertaken almost


simultaneously. In all, however, they constitute an aggregation of activities considered necessary in programme evaluation.

Usually, an evaluation of any programme/project encompasses the following major activities:

(a) Planning the evaluation, which includes:

(i) Needs assessment,

(ii) Evaluability assessment,

(iii) Formulating questions and standards.

(b) Developing administrative/management agreement and procedures:

(i) Establishing an administrative agreement and evaluation schedules;

(ii) Selecting designs and sampling procedures;

(iii) Assigning staff and monitoring their activities;

(iv) Budgeting.

(c) Conducting programme evaluation which entails the following tasks:

(i) Measurement or collecting of information,

(ii) Use of a particular research design,

(iii) Analysing information/data,

(iv) Reporting findings/information.

(d) Evaluating the evaluation which in this case includes:

(i) Proof (analytical, impartial and objective evidence)

that a programme works/does not work,

(ii) Acceptance/rejection of the evaluation findings.

We will now analyse and examine these activities and their implications for evaluation.

A. Planning the Evaluation

Programme evaluation requires careful planning to ensure relevance and credibility. In this regard, three activities should be taken into account when planning for the evaluation of a programme:


(a) Needs Assessment

The purpose of a needs assessment is to establish goals for which a programme should strive. But needs assessment is technically part of programme planning rather than programme appraisal.

Needs assessment is a process by which needs are identified and priorities are set. A need may be defined as a condition in which there is a discrepancy between an acceptable state of affairs and an observable state of affairs.

To conduct a needs assessment, five steps are necessary (a short sketch of the underlying gap calculation follows the list):

(i) Identify potential objectives of the programme being planned;

(ii) Decide which objectives are more important for the programme;

(iii) Assess the services of current or available activities;

(iv) Collect information on existing activities; and,

(v) Select final objectives.
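The discrepancy idea behind needs assessment can be sketched in a few lines of Python; the conditions, the acceptable levels and the observed levels below are invented purely for illustration, and a real assessment would normalize the different units before ranking.

    # Hedged sketch of needs assessment as a discrepancy between an
    # acceptable and an observable state of affairs. Figures are invented.
    conditions = {
        "adult literacy rate (%)":     {"acceptable": 80, "observed": 55},
        "trained teachers per school": {"acceptable": 10, "observed": 7},
    }

    # Rank the needs by the size of the gap, largest first.
    ranked = sorted(conditions.items(),
                    key=lambda c: c[1]["acceptable"] - c[1]["observed"],
                    reverse=True)
    for name, level in ranked:
        gap = level["acceptable"] - level["observed"]
        print(f"{name}: gap {gap} "
              f"(acceptable {level['acceptable']}, observed {level['observed']})")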

(b) Evaluability Assessment

Evaluability assessment is the front-end analysis used to determine the manner and extent to which a programme can be evaluated. This evaluability assessment focuses on programme structure and examines such questions as:

- Is the programme to be evaluated well defined?

- Are objectives and effects clearly defined?

- Are the objectives plausible? etc.

A second consideration in evaluability assessment is to decide on the type of methodology most suited to achieving the purpose of the evaluation. Such consideration requires the identification of the methodology most appropriate to the purpose; the feasibility of implementing that methodology; the availability of funds and data; the time frame; and constraints such as legal, political, ethical and administrative constraints.

When an evaluability assessment has been completed, it helps to refine the terms of reference for an evaluation study, and focuses on the objectives, the questions to be addressed, the sources of information, the research design, the time frame, and the resources available.


(c) Formulating questions and standards

These define the needs of beneficiaries and consumers of the evaluation, and they also set boundaries for the study. For instance:

- How well did the programme achieve its goals?

- Were the programme activities implemented as planned?

- For which groups of people was the programme most or least successful?

- What social and political effects did the programme have? and

- What did the programme cost?

Evaluation questions may be mandated by law or may come from the clientele who want the evaluation done. Such questions should not originate from the evaluator, but the evaluator should be familiar with the programme, its sponsors and its participants.

Evaluation standards entail deciding the kind of information which provides convincing evidence of a programme's success.

Standards can be set by measuring improvement and by using established practice, provided that, in each programme, due and adequate care is given to considering the variations in each case. All these issues have to be considered during the planning stages of the evaluation.

B. Administrative/Management Agreement/Procedures

The management of evaluation studies should begin before the evaluation is implemented and should continue until the evaluation is completed. It needs to be emphasized that evaluation must be accomplished within strict budgetary, time and personnel constraints. To effect all these, evaluators must be conversant with the management functions of establishing an administrative agreement and evaluation schedules; selecting designs and sampling procedures; assigning staff and monitoring their activities; and budgeting.

(a) Developing Administrative Agreement and Establishing Schedules

During this stage, it is important to have a broad agreement

between the sponsors of the programme evaluation or beneficiaries and the evaluators as to what kind of evaluation study is expected.

This requires a legal or administrative mandate often given as the

terms of reference for the study in setting out the scope of the

evaluation; the responsibilities of the programme staff in carrying

out the evaluation tasks; the controls to ensure adherence to


the evaluation plan; the limitations, expectations and recommendations (if any) of the study; and the publicity of the report.

As regards scheduling, it is important to note that evaluation has to be completed within a given time and therefore, activities have to be scheduled so as to meet certain deadlines. To ensure that this is done, attention should be paid to the evaluation activities themselves; to the deadline for completing each activity and the amount of time to be given for completing each activity;

for example:

- familiarizing oneself with programme goals and objectives;

- formulating evaluation questions, preparing evaluation design strategies and sampling procedures;

- collecting, analysing and interpreting data; and

- preparing the evaluation report.
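A minimal scheduling aid along these lines could look as follows in Python; the activities follow the list above, but the dates and the start date are assumptions made up for the example.

    # Illustrative evaluation schedule: list each activity with a deadline
    # and report the working window between consecutive deadlines.
    from datetime import date

    schedule = [
        ("familiarization with programme goals and objectives", date(1989, 5, 15)),
        ("formulating questions, designs and sampling procedures", date(1989, 6, 30)),
        ("collecting, analysing and interpreting data", date(1989, 9, 30)),
        ("preparing the evaluation report", date(1989, 10, 31)),
    ]

    previous = date(1989, 5, 1)  # assumed start of the evaluation
    for activity, deadline in schedule:
        print(f"{activity}: due {deadline}, {(deadline - previous).days} days allotted")
        previous = deadline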

(b) Selecting Designs and Sampling Procedures:

(1) Evaluation Designs

A design strategy describes how people will be grouped to answer evaluation questions and how the evaluator will manipulate or control variables involved in the answering of evaluation questions.

Sometimes, a single design can be used to answer all questions in an evaluation, but sometimes, several designs may be used to ensure

the validity, reliability and applicability of an evaluation.

In choosing a design, an evaluator is guided by internal and external validity, i.e. how accurately the design being chosen will answer the evaluation questions. In this regard, internal validity distinguishes between changes caused by the programme being evaluated and changes resulting from other causes. On the other hand, external validity measures whether the evaluators' findings would hold true for other people in other places or programmes. All designs, if they are any good, must have internal validity, but external validity is important if the decisions based on the evaluation will influence decisions regarding future participants and programmes.

There may be threats to internal validity, such as the following:

- historical: changes in the environment which occur at the same time as the programme; these may influence the results of the evaluation;

- maturation: changes within the individual arising from natural biological or physiological growth;

- testing: the effects of pre-testing on subsequent tests, which may influence the evaluation; and

- instrumentation: changes in the administration or scoring of a chosen instrument from one group or time to the next, which will influence or affect the results.

Threats to external validity often are:

- the reactive effects of testing, i.e. how the results of a pre-measure will make participants more sensitive to the aims of the programme and thus influence the evaluation study;

- the interactive effects of selection bias, i.e. how generalizable the evaluation findings are to other subjects, programmes and settings; and

- the reactive effects of innovation, i.e. changes that occur in subject performance arising from the participants' excitement at taking part in an experimental or evaluation programme.

To these should be added multiple programme interference, which may be defined as the difficulty caused in isolating the effects of an experimental programme because the participants are currently also participating in other complementary activities or programmes.

In programme evaluation, three designs appear to be most commonly used, and these are:

(i) The Case Design, which is used to examine a single cohesive group. Evaluators use case designs to answer questions which ask for a description of the participants of a programme, its goals, activities and results. Also, questions about new programmes or demonstration projects for which there are no comparisons often require case design strategies.

Case designs are sometimes known as pre-experimental designs because they are often used to establish the existence of certain factors which, if confirmed, can be studied in more controlled situations.

(ii) The Time Series Design, which involves collecting information about the same group or groups of people over several periods of time, so as to compare a group's current performance against its past performance and in so doing check whether a programme has lasting effects. However, using time series over a period of time causes problems in keeping track of people, including those who may have dropped out of the study for one reason or another. Time series designs are sometimes considered quasi-experimental because they provide only partial control over the threats to internal validity.

(iii) The Comparison Group Design, which is a strategy often recommended to answer evaluation questions more appropriately. In this strategy people are divided into two or more groups, one of which is experimental while the other is a control group. The groups can also be tested over a period of time; in this regard, the comparison group design can also employ the time series design when evaluating a programme.

A true experimental design involves assigning individuals to groups at random, without taking into account various considerations.
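A minimal sketch of the arithmetic behind a comparison group design with pre- and post-measures is given below; the test scores are fabricated for illustration, and a real evaluation would add a test of statistical significance.

    # Hedged sketch of a comparison group design: the programme effect is
    # estimated as the mean gain of the experimental group minus the mean
    # gain of the control group. All scores are invented.
    from statistics import mean

    experimental_pre, experimental_post = [52, 48, 60, 55, 47], [68, 63, 74, 70, 61]
    control_pre, control_post = [51, 50, 58, 54, 49], [55, 53, 61, 57, 52]

    gain_exp = mean(experimental_post) - mean(experimental_pre)
    gain_ctl = mean(control_post) - mean(control_pre)
    print(f"experimental gain: {gain_exp:.1f}")
    print(f"control gain: {gain_ctl:.1f}")
    print(f"estimated programme effect: {gain_exp - gain_ctl:.1f}")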

Figure 1 shows a checklist of these designs and when they can be used for programme evaluation.


Figure 1: Evaluation Design Selection Checklist

A. How many groups are being compared?
B. How many times is each measure being administered?
C. Are the groups being compared equivalent at the beginning of the evaluation?
D. Design strategy to use.

1. One group, measured one time only (equivalence does not apply):
   use the case design.

2. One group, measured two or more times (equivalence does not apply):
   use the time series design.

3. Two or more groups, each measured just one time:
   (i) groups not equivalent: use the quasi-experimental comparison group design;
   (ii) groups equivalent: use the true experimental comparison group design.

4. Two or more groups, each measured two or more times:
   (i) groups not equivalent: use the quasi-experimental comparison group and time series design;
   (ii) groups equivalent: use the true experimental comparison group and time series design.
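The checklist in Figure 1 can be read as a small decision procedure. The Python sketch below encodes it directly; the function name is ours, but the branching follows the figure.

    # Sketch of the Figure 1 checklist: choose a design strategy from the
    # number of groups, the number of measurements, and whether the groups
    # are equivalent at the beginning of the evaluation.
    def select_design(groups, measurements, equivalent=False):
        if groups == 1:
            return "case design" if measurements == 1 else "time series design"
        prefix = "true experimental" if equivalent else "quasi-experimental"
        suffix = "" if measurements == 1 else " and time series"
        return f"{prefix} comparison group{suffix} design"

    print(select_design(groups=1, measurements=1))  # case design
    print(select_design(groups=2, measurements=1))  # quasi-experimental comparison group design
    print(select_design(groups=3, measurements=4, equivalent=True))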


(2) Sampling

The object of sampling is not to beat the odds but to fix them as high as possible in one's favour and to know what those odds are.

Sampling should be avoided when:

- it is easier and cheaper to test everyone in the programme;

- trained personnel for proper sampling are unavailable;

- money and time are not available for proper sampling methods to be used;

- information wanted would make sampling methods questionable i.e. based on approximation rather than on mathematical analysis;

- and the information wanted and collected through sampling methods would not be very useful to the sponsors of the evaluation.

A sample is often considered as a miniature of the population for which the findings of the evaluation study will apply. As such, a number of methods are often used in sampling:

(i) Simple Random Sampling, in which a subset of "n" individuals is chosen at random from a population of "N" individuals. The subset "n" chosen at random is then subject to the evaluation study.

(ii) Stratified Random Sampling, in which the population "N" is subdivided into groups or strata, and then a given number of individuals are selected at random from each stratum to get a sample of size "n".

(iii) Simple Random Cluster Sampling, in which a subset of "n" groups is chosen at random from a population of "N" groups for the evaluation study.

(iv) Paired-Selection Cluster Sampling, in which the population of "N" individuals is divided into clusters. The clusters are then arranged into strata, and from each stratum two clusters are selected at random for the evaluation.

(v) Simple Random Item Sampling, in which a number "k" of items is randomly selected from a population of "K" items to get a sample of size "k", administered to all "N" examinees.

(vi) Simple Random Item-Examinee Sampling, in which a number "k" of items is randomly selected from the population of "K" items and given to "n" examinees selected at random from a population of "N" examinees.

(vii) Multiple Matrix Sampling, in which more than one combination of randomly selected items and individuals is formed and the results of each combination are merged. Further descriptions are given in Figure 2.


Figure 2: A Summary of Sampling Methods

A. Simple Random Sampling - a subset of "n" individuals is chosen at random from a population of "N" individuals.
   Who gets sampled: individuals.
   Advantages: (a) simplest of all sampling methods; (b) many computers have built-in programmes for drawing random samples.
   Disadvantages: (a) produces greater standard errors than other sampling methods; (b) cannot be used when you need to sample by subgroups, e.g. 60% female, 40% male.

B. Stratified Random Sampling - the population "N" is subdivided into subgroups or strata, and then a given number of individuals are selected at random from each stratum to get a sample of size "n".
   Who gets sampled: individuals.
   Advantages: (a) can be more precise than simple random sampling; (b) permits the evaluator to choose a sample that represents the various groups in the desired population.
   Disadvantages: (a) requires more effort than simple random sampling; (b) often needs a larger sample size than a simple random sample to produce statistically meaningful results.

C. Simple Random Cluster Sampling - a subset of "n" groups is chosen at random from a population of "N" groups.
   Who gets sampled: groups.
   Advantages: (a) can be used when it is inconvenient or unethical to randomly select individual subjects; (b) administratively simple since no identification of individuals is necessary.
   Disadvantages: (a) not mathematically efficient.

D. Paired-Selection Cluster Sampling - the population of individuals is divided into clusters; the clusters are arranged into strata and from each stratum two clusters are selected at random.
   Who gets sampled: groups.
   Advantages: (a) same as for simple random cluster sampling, but also allows you to study groups in some desired proportion.
   Disadvantages: (a) same as for simple random cluster sampling, but an even larger sample is needed for statistically meaningful results.

E. Simple Random Item Sampling - "k" items are randomly selected from a population of "K" items to get a sample of size "k", administered to all examinees.
   Who gets sampled: items.
   Advantages: (a) can save time; (b) reduces the testing burden on examinees.
   Disadvantages: (a) requires a large and validated item pool.

F. Simple Random Item-Examinee Sampling - "k" items are randomly selected from "K" items and given to "n" examinees selected at random from a population of "N" examinees.
   Who gets sampled: items and individuals.
   Advantages: (a) permits sampling on two fronts.
   Disadvantages: (a) administratively and mathematically cumbersome.

G. Multiple Matrix Sampling - more than one combination of randomly selected items and individuals is formed and the results of each combination are merged.
   Who gets sampled: items and individuals.
   Advantages: (a) reduces the testing burden on examinees; (b) allows comprehensive testing; (c) may produce the most precise estimates among all sampling methods.
   Disadvantages: (a) extremely complex to administer; (b) requires the creation of many valid items and sub-tests; (c) cannot be used to assess individuals; (d) norms associated with standardized tests are not useful.

(34)
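To make techniques (i) and (ii) concrete, the short sketch below shows one way a simple random sample and a stratified random sample might be drawn. It is a minimal illustration in Python, not part of the original manual; the population of teachers, the strata and the sample sizes are invented for the example.

    import random

    def simple_random_sample(population, n):
        # (i) Choose a subset of "n" individuals at random from "N" individuals.
        return random.sample(population, n)

    def stratified_random_sample(strata, n_per_stratum):
        # (ii) Select a given number of individuals at random from each stratum.
        sample = []
        for name, members in strata.items():
            sample.extend(random.sample(members, n_per_stratum[name]))
        return sample

    # Hypothetical population of "N" = 100 teachers, stratified by location.
    strata = {
        "urban": [f"urban-teacher-{i}" for i in range(60)],
        "rural": [f"rural-teacher-{i}" for i in range(40)],
    }
    population = strata["urban"] + strata["rural"]

    print(simple_random_sample(population, 10))
    print(stratified_random_sample(strata, {"urban": 6, "rural": 4}))

Sampling by stratum guarantees that each subgroup appears in the sample in a chosen proportion, which, as Figure 2 notes, a simple random sample cannot do.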

(c) Assigning Staff and Monitoring their Activities

The evaluation team leader should know what skills it takes to perform each activity before assigning staff to be responsible for executing those activities, such as instrument development, information collection and analysis, and reporting. At the same time, a point should be made of holding regular consultations with the staff assigned to the activities; this also ensures the monitoring of evaluation activities.

(d) Budgeting

Evaluation is often conducted within a given budget and for a specified period. Evaluators should therefore critically examine what needs to be done within the money allotted for the evaluation. Within this budget there will be direct costs, covering the salaries of evaluation staff and non-staff costs related to time spent directly on the evaluation. There will also be indirect costs not directly related to the particular evaluation but which contribute to its overall success, e.g. secretaries who keep the evaluators' offices running while the evaluation is going on, guides who take their time to show the evaluators sources of information, or those who take their time answering evaluation questions and questionnaires. These indirect costs are often not included in the budget.
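The distinction between direct and indirect costs can be made explicit when the budget is drawn up. The sketch below is a minimal illustration, not from the manual; every cost line and figure is invented.

    # Hypothetical evaluation budget separating direct from indirect costs.
    direct_costs = {
        "evaluator salaries": 12000,
        "travel and subsistence": 4500,
        "printing of instruments": 800,
    }
    # Indirect costs contribute to the evaluation's success but, as noted
    # above, are often left out of the formal budget.
    indirect_costs = {
        "secretarial support": 1500,
        "time of guides and respondents": 2000,
    }

    budgeted = sum(direct_costs.values())
    full_cost = budgeted + sum(indirect_costs.values())
    print("Budgeted (direct) cost:", budgeted)
    print("Full cost including indirect items:", full_cost)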

C. CONDUCTING PROGRAMME EVALUATION

Conducting an evaluation calls for four major tasks to be undertaken:

(a) Measurement or Collecting of Information

Within this major task, a number of activities will need to be

considered:

(i) Consideration of the amount and type of information required to answer the evaluation questions. In this regard, four categories of information will be necessary:

- information on the programme;
- on objectives and effects;
- on antecedent conditions; and
- on intervening conditions.

(ii) Collecting of information is a set of tasks which includes deciding what is to be measured (the dependent variables); selecting, adapting or developing a strategy or instruments for measurement; administering those measurements; and scoring and interpreting the results.
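One way to keep these tasks tied together is to record, for each evaluation question, what is to be measured, with which instrument, and under which of the four information categories it falls. The sketch below is a minimal illustration, not from the manual; the fields and the sample entry are invented.

    from dataclasses import dataclass

    @dataclass
    class MeasurementPlan:
        question: str    # the evaluation question being answered
        variable: str    # what is to be measured (the dependent variable)
        instrument: str  # instrument selected, adapted or developed
        category: str    # programme / objectives and effects /
                         # antecedent conditions / intervening conditions

    plan = [
        MeasurementPlan(
            question="Did trainees' classroom practice improve?",
            variable="observed teaching behaviour",
            instrument="classroom observation checklist",
            category="objectives and effects",
        ),
    ]
    for entry in plan:
        print(entry)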


(b) Use of a Particular Research Design

This often addresses two central concerns: what brought about the observed effects, i.e. whether they can be attributed to the programme itself, and generalizability, by asking to what extent the results of the evaluation are considered relevant to the situation, places, clients and circumstances.

(c) Analysing Information/Data

(d) Reporting the Information

The evaluation report should give information on the scope of the evaluation, covering:

- the information collection techniques and instruments and their limitations;

- how data were collected and how confidentiality was observed during the evaluation;

- methods used to analyse the information, their limitations and the results of each analysis;

- the answers to each evaluation question, including an interpretation of the findings and a list of the recommendations; and

- administrative details, staff assignments and costs.

All these and other administrative details should be reported either in the main body of the report or as part of the annexes.

D. Evaluating the Evaluation

Evaluating an evaluation can be done on two fronts: that of the evaluators themselves and that of the sponsors or beneficiaries. On the part of the evaluators, they have to critically assess their final report as to whether it can be believed and is reliable, objective and practicable. Sponsors review the report to see whether it conforms to the terms of reference and whether the findings are correct, reliable, objective, valid and believable. In either case, what is required is proof (analytical, impartial and objective evidence) of the workability or non-workability of a programme. To effect this, certain steps should be taken to see whether a programme and its evaluation measure up to rigorous standards.

First, evaluators will examine the evidence before them related to the objectives of the programme, using valid and reliable measures. On the other hand, sponsors and policy-makers will critically examine the evidence described in the report, which should be directly and logically related to the objectives and purpose of the programme.

Secondly, the sponsors and beneficiaries examine whether the effects reported in the study were statistically significant and whether they were based on evidence from small or large samples. If the results are based on a small sample, it may be difficult to generalize the findings, and the report may thus not be acceptable.

Thirdly, evaluators as well as sponsors examine the reported effects as to whether they were programmatically significant in terms of coverage and the cost of the evaluation.

Fourthly, both evaluators and sponsors usually want to determine whether the observed effects or findings resulted from the intervention of the programme and not from something else. It is possible that factors other than those of the programme could have brought about the changes and effects. The findings should show this distinction between programme and non-programme effects very clearly.

Fifthly, for the evaluators' evidence to be believable and interpretable, they need to present a complete description of their methodology and findings, together with valid and reliable information and the instruments used for collecting that information.

Finally, sponsors and beneficiaries of an evaluation examine and assess whether a programme which has been evaluated can be implemented in other locations with reasonable expectations of comparable impact, and whether the success or failure described in the report is not simply unique to one area because of the particular circumstances, staff and sheer opportunities of the programme that was evaluated.

What is being done in considering the foregoing issues is an evaluation of an evaluation. In one case, it may be said that the evaluators are doing a self-evaluation of their own evaluation. In the other case, the sponsors, beneficiaries, policy-makers and end-users are actually evaluating the evaluation, i.e. determining whether the evaluation report has lived up to the terms of reference.
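The six steps above can be condensed into a simple checklist that a sponsor or evaluator works through when judging a report. The sketch below is a minimal illustration, not from the manual; the wording of the criteria paraphrases the steps and the verdicts are invented.

    # Hypothetical meta-evaluation checklist drawn from the six steps above.
    criteria = {
        "evidence tied to programme objectives via valid, reliable measures": True,
        "effects statistically significant, sample large enough to generalize": True,
        "effects programmatically significant in coverage and cost": True,
        "effects attributable to the programme, not to outside factors": False,
        "methodology, instruments and findings fully described": True,
        "results plausibly replicable in other locations": True,
    }

    failed = [item for item, passed in criteria.items() if not passed]
    if failed:
        print("Report falls short on:", "; ".join(failed))
    else:
        print("Report measures up to the terms of reference.")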

V. Finale: Guidelines for Programme Evaluation

This finale purports to give some broad measures which should be taken into account when designing an evaluation which describes

a programme in terms of what it does, what it is, and how well it

does it. It is therefore a summary of guidelines for planning,

designing and managing an evaluation, viz:

(1) Every evaluation should ask questions about the outcomes and effects of a programme being evaluated. Answers to

such questions should be used to determine the extent to

which the outcomes and objectives of the programme were

fulfilled.

(2) An evaluation must ask specific questions and should be able to test hypotheses about a programme by stating them in advance. This enables the scope of the evaluation study to be limited within the terms of reference, and also helps the clients and beneficiaries to understand what kind of information can be expected from the report.

(3) Evaluation questions should be limited to those which will

provide useful, reliable, valid and interpretable information

for the clients who expect to act on the report.
