
Chapter 1 Preamble

Quality in health care: a social construct

In this chapter we show that quality in health care is a social construct. Consequently, quality improvement initiatives are a social process.

Dissemination:

Text presented at the epidemiology seminar 'Méthodes en santé publique' of the Institute of Tropical Medicine, Antwerp:

Quality in health care: a social construct

Introduction

Today, stating one’s dedication to producing high-quality care is not enough. Such quality now

has to be demonstrated, hence a global quest for a fair, reliable and comprehensive measurement

of quality of health care and services. This prevails in industrialised countries (resulting in an

unprecedented growth of institutions promoting and ascertaining quality in health care) as well as

in low and middle income countries. Policy makers in the latter are also increasingly aware that

scaling up the coverage of health services or reaching higher numbers of the target population of

disease control programmes does not suffice. The content and the process of care need

improvement for the health system to contribute effectively to the well-being of individuals and

populations.

For years, in developing countries public health services and disease control programmes

have been collecting large amounts of data in order to monitor their activities. However, a major

gap remains in transforming these data into meaningful information for quality improvement. An

assessment of quality is indeed required to monitor services, to compare programmes or health

services outcomes across the system, for accountability purposes or, more down to earth, to set

up reward systems for good performance. Politicians ask health professionals to be accountable

and to show that the quality of care lives up to their claims. Decision makers request evidence on the

impact of their decisions on the quality of care or services delivered. Public health researchers

and economists need a variable to include quality in their models. Programme managers need a

comprehensive outcome measure. System managers want to compare quality over time and

across the health system facilities. All managers want to check the impact of interventions on

quality.

Therefore the case for measuring quality is made: we need an answer to the question of how much quality there is in what we are doing. It appears straightforward. We need first to

define quality, then identify its determinants or components and finally combine them in a valid

and comprehensive framework that can produce the synthetic quality index that so many

stakeholders request. Let us embark on this seemingly easy pathway to quality assessment.

Quality in health care

Defining health care quality: several perspectives

As early as 1933, Lee & Jones, quoted by Donabedian, defined quality as the kind of

medicine practised and taught by recognised leaders of the medical professions: a professional

perspective (Donabedian 1968; Health Services Research Group 1992). Berwick & Enthoven

consider that offering a quality service is primarily meeting the clients’ needs: a customer

perspective (Berwick et al. 1992). Gray defines quality of a health service as the degree to which it conforms to preset standards of care: a managerial perspective (Gray 1997; Ovretveit 1998). Still

the three perspectives offered by these definitions are not sufficient, apparently, to embrace the

whole concept of quality. Ovretveit includes cost in his definition: an economic perspective

(Ovretveit 1998). Lohr’s definition adds a focus on results by considering the ‘desired’ outcome (Lohr & Harris-Wehling 1991). Steffen adds non-medical outcomes, swinging the definition back to the

patient perspective (Health Services Research Group 1992; Steffen 1988).

The above enumeration shows that none of these definitions is deemed to be

comprehensive enough to generate a universal consensus. Indeed, Lohr (1991) identified more

than a hundred definitions! Moreover, none of them actually tells us precisely what is to be

measured. Let us therefore move on to the next step and identify quality determinants or

components to see if this is a better starting point for measuring quality.

Analysing health care quality: components

Effectiveness is probably the first determinant that comes to mind. Effectiveness is obviously

related to the technical competence of the care providers and to the safety of their decisions. However,

care is not delivered in a vacuum. The patient perspective is important and thus the quality of

interpersonal relations must also be considered. Donabedian identifies three components of health

care delivery: technical care (a professional perspective); interpersonal relationships (a patient

perspective); and the amenities (Donabedian 1980). Likewise, Campbell considers that measuring

quality can be reduced to two questions: (1) does an individual have access to the care s/he needs,

and (2) when s/he gets it, is it effective both biomedically and in terms of the individual caring

relationship? (Campbell et al. 2000). Care must be relevant not only to the objective but also to the

perceived needs of the patient. Another element is cost. Cost-effectiveness and cost-efficiency are

important issues. When it comes to costs, the issue of affordability and financial accessibility for

all – and thus equity – is relevant.

We could go on and refine these components or identify more of them. However, as with

the definitions of quality, none of them is comprehensive enough to be an adequate ‘proxy’

indicator for quality of care. Obviously, we need to combine them in a valid quality measurement

framework.

Putting them back together: frameworks for quality of care evaluation

In 1966, Donabedian popularised the Structure - Process - Outcome framework

(Donabedian 1966). He then suggested exploring these three levels in the three dimensions of his

definition of quality: technical care, interpersonal relations and amenities (Figure 2). Ovretveit’s

framework (Figure 2) for health programme evaluation, building on Donabedian's model, spells

out three slightly different dimensions: patient, professional and managerial perspectives

(Donabedian 1980; Ovretveit 1998). Campbell’s framework (Figure 2) is reduced to the two

dimensions of his definition: access and effectiveness (Campbell et al. 2000). He considers the

health care system as a structure and patient-centred care as the central process in general

practice, and views outcome from a broader perspective than health alone. DiPrete Brown’s

(1994) framework (Figure 2) keeps Ovretveit’s three perspectives (patient, professional,

managerial) and adds continuity and safety to the determinants already proposed by Donabedian

(technical, interpersonal and amenities) and by Campbell (access and effectiveness). The Institute

of Tropical Medicine of Antwerp proposes the IPOPOC (Input-Process-Output-Outcome)

model for evaluation (Beghin et al. 1991; Tellier et al. 1991). This model makes a distinction

between outcome (impact on health), usually difficult to relate to a particular action, and output

(immediate, measurable product of an intervention: for example, immunisation coverage). The

overarching dimensions of quality emphasised by their model are continuity and patient-centredness of integrated care. In addition, care, for them, must be both effective and efficient and enhance the autonomy of individuals and populations. The US Institute of Medicine (IOM) offers another framework in its publication ‘Crossing the Quality Chasm’ (Institute of Medicine 2001a), which is composed of six aims (safety, effectiveness, patient-centredness, timeliness,

efficiency and equity), to be brought about by applying ten cross-cutting rules or principles.

Donabedian: technical care, interpersonal relations and amenities, each examined across input, process and outcome.

Ovretveit: patient quality, professional quality and management quality, each examined across input, process and outcome.

Campbell: accessibility and effectiveness; structure (the health care system), process (patient-centred care) and outcome (the consequences of the care).

DiPrete Brown: patient, provider and manager perspectives, covering interpersonal relations, efficiency, effectiveness, technical quality, access, continuity, safety and amenities.

Figure 2: Four frameworks for quality of care evaluation
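Read together, these frameworks share a common shape: a set of quality dimensions crossed with a set of evaluation levels. The sketch below (hypothetical Python written for this text, not taken from any of the cited authors) represents three of the frameworks in that form and enumerates the cells an evaluator would still have to populate with indicators.

```python
# Minimal sketch (illustrative only): each framework as a grid of
# quality dimensions crossed with evaluation levels.
from itertools import product

FRAMEWORKS = {
    "Donabedian": {
        "dimensions": ["technical care", "interpersonal relations", "amenities"],
        "levels": ["input", "process", "outcome"],
    },
    "Ovretveit": {
        "dimensions": ["patient quality", "professional quality", "management quality"],
        "levels": ["input", "process", "outcome"],
    },
    "Campbell": {
        "dimensions": ["access", "effectiveness"],
        "levels": ["structure", "process", "outcome"],
    },
}

def assessment_cells(framework: str) -> list[tuple[str, str]]:
    """List every (dimension, level) cell that still needs indicators."""
    spec = FRAMEWORKS[framework]
    return list(product(spec["dimensions"], spec["levels"]))

for name in FRAMEWORKS:
    print(f"{name}: {len(assessment_cells(name))} cells to populate with indicators")
```

Even in this simplified form, the number of cells to be filled with indicators grows quickly, which hints at the measurement burden discussed below.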

All these frameworks help to identify the areas that are worth attention and, more

importantly, to put them in perspective. However, using these frameworks to compute the value

of each dimension and combine them in one weighted synthetic indicator (as do the balanced score

cards popular in commercial firms (Kaplan & Norton 1992), or the league tables in health care)

raises a lot of controversies (Appleby & Thomas 2000; McKee et al. 1997; Nutley & Smith 1998).

First, there is an inherent tendency to overemphasise what is quantifiable at the expense of the more

intangible (Casalino 1999). Second, it is questionable practice to compute and add up scores for

‘things’ as different as the number of operating tables or the number of post-operative infections,

and the patient-centredness score of a consultation or the satisfaction score of a client. In the

quality measurement domain, researchers assess the validity of one or another combination of

indicators. However, there seem to be as many indicators and frameworks as there are authors, and they are many. Moreover, a combination of indicators relevant to one context is not

necessarily relevant to another. Finally, the technical difficulties in measuring several dimensions

(satisfaction, quality of life, adequacy of clinical decisions…) and how best to combine them are indeed far from resolved. The diversity speaks for itself: there is no agreement as to what could

be a scientific, universally validated framework.
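To make the objection concrete, the following sketch (hypothetical Python; the indicator names, normalisation bounds and weights are invented for illustration and drawn from none of the cited frameworks) shows what computing a weighted synthetic index involves: every heterogeneous indicator is forced onto a common 0-1 scale and then weighted, and both steps rest on value judgments rather than on measurement alone.

```python
# Naive weighted composite "quality index" (illustrative sketch only).
# Both the normalisation bounds and the weights are arbitrary value
# judgments, which is precisely the point of the critique above.

INDICATORS = {
    # name: (raw value, worst plausible value, best plausible value, weight)
    "operating_tables_per_10k_population": (1.2, 0.0, 3.0, 0.2),
    "post_operative_infection_rate": (0.08, 0.25, 0.0, 0.3),   # lower is better
    "patient_centredness_score": (6.5, 0.0, 10.0, 0.3),
    "client_satisfaction_score": (72.0, 0.0, 100.0, 0.2),
}

def normalise(value: float, worst: float, best: float) -> float:
    """Map a raw value onto [0, 1], where 1 means 'best'."""
    span = best - worst
    return max(0.0, min(1.0, (value - worst) / span))

def composite_index(indicators: dict) -> float:
    """Weighted average of the normalised indicators."""
    total_weight = sum(w for *_, w in indicators.values())
    return sum(
        w * normalise(value, worst, best)
        for value, worst, best, w in indicators.values()
    ) / total_weight

print(f"Synthetic quality index: {composite_index(INDICATORS):.2f}")
```

Changing any of the arbitrary bounds or weights changes the resulting index, and potentially the ranking of facilities built on it, which is precisely why such aggregation attracts controversy.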

Quality measurement: what’s all the fuss?

So what can we say to those who need to measure quality because they are urged to manage or to demonstrate the quality of their services? What can we say to evaluators, programme and service managers, who need an indication of the ‘quality’ level of their programmes or interventions? Is quality properly measurable at all?

One way out might be to stop further examining sophisticated measurement models and to

try to understand how we got into this blind alley (paraphrasing Anderson (1992), who discusses quality jargon, we may ask ‘Quality measurement, why all the fuss?’). We believe three perspectives are

useful to reach such an understanding: a historical perspective, a semantic perspective, and a

human relations perspective.

Quality management in health care: a concept transferred from industry

Let us first recall the origin of the quality movement in health care. Before it gained

momentum in the service industry and, later, in health care, quality management in its various

forms was first developed in the manufacturing industry. Market forces put pressure on the

quality of industrialised products, and a mere quantitative increase in output was no longer enough: products had to be reliable and variation reflected a quality problem. Quality management in the

manufacturing industry is indeed essentially about conformity to preset norms, and

standardisation of processes and output through process re-engineering (Berwick 1989).

Moreover, subcontracting part of the production to external suppliers requires standardisation,

thus the need to specify product properties. The ISO system derives from that. Standardisation

and process re-engineering, emphasised in industrial quality management, were then adapted to

the service industry in order to reduce waste, time, cost, administration, and to bring the

customer closer to the service provider. From commercial services, it gained momentum in

public services and health care.

Variation reduction and standardisation are also relevant for health care services: a normative

approach goes along with the concept of Evidence Based Medicine (EBM), which seeks to align

clinical and therapeutic decisions with the best available evidence (Sackett et al. 1996). It is indeed relevant to consider variation in medical practice as a problem in medical care as well.

Furthermore, standardisation is welcome to coordinate processes provided by interdependent

teams of professionals, sometimes far apart in space and in terms of technical competence and

specialisation. Likewise, process re-engineering is obviously relevant to many health care activities

like, for instance, laboratory procedures, blood products management, sterilisation and infection

control procedures, etc. After all, a surgical procedure is not that fundamentally different from

assembling a car, infection control from cleaning a hotel room, or admitting a patient from

handling a client at a hotel reception. And, as is the case in the service industry, the voice of the

patients (clients) is increasingly getting attention, with more emphasis on customer satisfaction in

health care as well.

However, delivering health care is obviously not the same as assembling products or offering

commercial services. Variation can actually be good in health care (Thompson 2002), for care

refers to a scientifically grounded process implemented through a human relation and embedded

in a societal context. A patient-centred attitude, an important quality dimension, implies a level of

participation in the medical decision, which obviously varies from patient to patient (Fehrsen &

Henbest 1993; Mead & Bower 2000). The ‘best-buy’ decision coming out of a clinical trial carried

out according to the EBM state of the art does not necessarily fit the singular illness experience

of a particular patient seeing his family physician. Evidence based medicine has also been the object of heated epistemological debates (Gupta 2003; Miles et al. 2002). Moreover, the components

of quality we identified earlier are beset with potential conflicts of interest: what is effective might

not be affordable or perceived as a need, given a specific patient’s profile or expectations. In addition,

ethical concerns have always carried great weight in health care, adding even more

intangible components valued through a very complex social process.

Quality is not a ‘thing’: the semantic confusion

A second aspect, which obfuscates the concept of quality in health care, is semantic in

nature. Loughlin, an epistemology philosopher, explains that there is a language category mistake

in assuming that the function of the term ‘quality’ in language is to describe a ‘thing’, possessing

defined attributes, while in fact it is to (re)commend some thing as a good instance of its kind

(Loughlin 2002). In other words, the word ‘quality’ is put in the wrong logical category. The

former refers to a descriptive function, while the latter corresponds to a commendatory function.

Placing the word in the wrong category gives rise to the indefinite search for the specific

properties recognized as ‘quality’, externally defined in absolute, value-free terms, and applicable

to anything from car to care. Another consequence is that by mixing the commendatory function

with the descriptive function, it becomes impossible to reject or discuss what has been stamped as

‘standard quality care’ under supposedly neutral scientific supervision. As a result, failure to

adhere to quality management is looked upon as reactionary resistance to be forcefully overcome.

Non-believers are to be converted.

Measuring quality: the stakes of power control

Quality managers, in their relations with health professionals, often unknowingly perpetuate

this confusion. This leads to a third explanation for the difficulty of measuring quality in health

care. The whole process of quality management is perceived as the struggle of managers to

control professionals, triggering in turn the struggle of health professionals to regain control in

the name of professional autonomy (Edwards et al. 2003; Leung 2000; Sutherland & Dawson

1998). This power struggle reveals a shift from implicit, internalized control by the professionals

towards explicit externalized control by the management; hence the precedence taken by the

tangible, the measurable and the quantifiable, all found in the current quality management

paradigm.

The manager-professional struggle reflects the tension between rationalization of resource

use and processes on one hand, and responsiveness to expectations on the other. There is indeed

a profound rift between the managerial paradigm and the professional paradigm. The former

refers to the application of a homo economicus rationality to human resources management (human behaviour responds to incentives) and to the application of a cost-effectiveness rationality to

resource management. The latter refers to the need for independence and discretion in decision

making, justified by the specificity and the complexity of the medical task (claimed to be an art),

and by the singular relation with the client whose interest is supposedly placed above self-interest

(Freidson 2001).

Quality in health care: a socially constructed concept

In the above sections we have moved forward in our exploration, but we are still left with

neither a universal definition nor a validated measurement instrument. Does this mean quality

does not exist? In our view, the way out is to step away from the descriptive function of the

word and to accept that we have to live with its commendatory function. First we must accept

that quality is multifaceted and that there is neither a universal definition nor a single synthetic

quality index or universal measurement framework. Second, we must accept that quality is

incommensurable: it cannot be fully reduced to quantitative evaluation. Not all its dimensions and attributes are equally measurable, and, moreover, measurement is biased towards what is

easily quantifiable (Casalino 1999). Third, we must accept that quality of care is a socially

constructed and thus a dynamic concept, continuously evolving over time and across groups of

people.

From this new starting point, we can reconsider the issue of measuring and managing quality

in health care. We do not actually measure quality. What we measure is the degree of presence of

selected attributes, which ‘we’ consider relevant to recognize a service as being of quality. We

refer here to the commendatory sense of the word. As there is no universal definition of quality

in health care, it is precisely through this process of selecting attributes that ‘we’ define which attributes we consider valuable enough to praise a service as (re)commendable. We thereby recognize that this process is shaped by the context, by the actors’ strategies and by the goal pursued. Just as we cannot define quality in absolute terms, we do not ‘manage quality’. What we manage are

resources and services. As a result, when we speak about ‘quality management’, it means that we

manage services and resources to achieve results which ‘we’ agreed to be sufficiently valuable.

As a consequence, the validity of a quality framework only holds for those who built it or

adopt it (the 'we' mentioned above). This means that the more participatory the process, the more

validity the framework has for a given group of people. A quality of care framework cannot be

isolated from the social process which gave it birth, reflecting a balanced set of values and

societal goals. Making this process and the underlying values explicit should therefore be at the

heart and start of quality assessment and improvement.

As a result, because quality is a socially constructed concept, measuring quality is more a

social process than a technical procedure.

The fields of social tension in quality management: not just a study theme for sociologists but an integral part of quality management

Applying this vision has consequences for the measurement of quality of care and for the

design of the most appropriate measurement frameworks. The issue is no longer to extract from

the literature a scientifically validated framework: it does not exist. The way forward then is to

build a framework together with the users of the measurement, or to select an existing one together,

which ‘we’ agree to adopt. The quality of the participatory process itself will determine the

‘quality’ and the ‘validity’ of the framework in and for a given context. Practically, this means that

the start of the quality process is to clarify who is involved, why we measure and what we intend

to do with the results. The decision of what to measure and how then only comes in a second

stage, the reverse of what usually happens.
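As a purely hypothetical illustration of this reversed order (the class, field names and example values below are ours, not the author's), a measurement plan can be recorded so that stakeholders, purpose and intended use of the results have to be made explicit before any indicator is chosen.

```python
# Hypothetical sketch of the proposed sequence: agree on who, why and
# what the results are for before deciding what to measure and how.
from dataclasses import dataclass, field

@dataclass
class MeasurementPlan:
    stakeholders: list[str]          # who is involved: the "we"
    purpose: str                     # why we measure
    intended_use: str                # what we will do with the results
    indicators: list[str] = field(default_factory=list)  # chosen only afterwards

    def add_indicator(self, indicator: str) -> None:
        # Indicators may only be chosen once the social groundwork is explicit.
        if not (self.stakeholders and self.purpose and self.intended_use):
            raise ValueError("Clarify who is involved, why we measure and "
                             "what the results are for before choosing what to measure.")
        self.indicators.append(indicator)

# Usage example (hypothetical district team):
plan = MeasurementPlan(
    stakeholders=["district health team", "facility nurses", "patient committee"],
    purpose="improve antenatal care in the district",
    intended_use="joint review meetings, not individual sanctions",
)
plan.add_indicator("proportion of antenatal clients counselled on danger signs")
```

The guard in add_indicator simply encodes the proposed sequence: the social groundwork comes first, the choice of what to measure and how comes second.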

Applying this vision to quality improvement also has consequences for the management of

the organisation: ensuring quality must include the management of the social process at work.

This is implicitly acknowledged by the current quality movement, which often emphasises the

overlap between people management and quality management. However, the issue is not to

develop a strategy to ‘enlighten’, mobilise, and train people to adopt a supposedly universal and

irrefutable approach to quality, but rather to help people build and own a shared and dynamic