
Ann Taket

I am going to talk about how we use research and evidence in moving ahead in implementation, in achieving change and in action.

As my starting point, I note that it is a recurring theme in HEALTH21 that policy, service development, organisational arrangements, managerial decision-making and professional practice should all be built on the strongest possible knowledge base. So we are looking at evidence-based health policy, evidence-based health services and evidence-based health practice. The problem is that a lot of work has been done on evidence-based medicine, and there are some differences between evidence-based medicine and the evidence bases I would argue we need to adopt for HEALTH21.

Dynamic situations require flexible and adaptive resources. The environment for policy implementation in countries is never static at any level; regular monitoring is needed to compare policy practice in action with the original intent, and to assess the implications of any divergence. So what we need is reflective practice. Foresight and monitoring can be used to detect weak signals of any change in the operating environment.

There are a number of different challenges that we have to face if we are to create evidence-based health policy, health services and health practice. These can be thought of as requiring response at various levels: at the policy level, at the strategic level and at the practice level and also, in terms of geography, at the international, national and local levels.

These are challenges that also require responses by at least three overlapping and interacting groups of actors, defined by their relationship to research and evidence. There are, first, those who commission research, i.e. the generation of evidence. Second there are the researchers themselves, who generate that evidence, and thirdly those who use the research. There are a number of different user groups: policy-makers, decision-makers, health professionals, health service users and health service non-users. Looking towards the future, we hope they will all be users of research and evidence, perhaps by soft computing or in other ways.

Each of these challenges has to be met in a different way by each of those different groups.

First there is the challenge of valuing appropriately different types of evidence. This applies at all levels and to all actors. Evidence can be scientific, in that it can result from the natural, biomedical or social sciences. It is particularly important that the social scientific evidence – the evidence resulting from policy sciences and the broad range of social sciences, including that gleaned from appropriate case studies – is not neglected. Our major challenge is that when we meet a term such as “science-based” or “scientific evidence”, it is taken to refer to the natural and biomedical sciences and not to the policy or social sciences.

Thus a challenge for the future is to ensure that the criterion of appropriate evidence is applied. And corresponding to each of those sciences we have different criteria for validity that must be called into play. Alongside that is a recognition that different types of science require quite different types of research methodology. In particular, we have to put more value on qualitative methodologies if we are to answer some of the questions that face us.

This brings me to the challenge of responding to gaps in research and the evidence we have.

Again, this applies at all levels and particularly requires a response by commissioners of research and by the broad community of researchers. There are significant gaps in our knowledge in a number of areas: how we organise health promotion, primary health care and public health; and how we develop and implement a “healthy public policy” and multisectoral action.

We are particularly short of good demonstration and dissemination studies. The gaps are strongly related to problems arising from the way research is funded, which values biomedically-based research over the health services research and the social science research that we badly need at the moment. Every time I sit in a research funding committee and I hear the discussion, I notice how my colleagues are valuing different types of research quite differently and, I would argue, often quite inappropriately.

The next challenge is that of creating appropriate, accessible and quality-assured evidence bases or research databases. These need to be quite different in order to respond to the needs at the different levels and of the different actors. Here we need the soft computing databases that individuals in the community can consult to help them decide what health action to take or not to take. We also need to provide databases for health professionals to aid them in their work.

I think that here we have a challenge of making those evidence bases action-oriented, particularly for the practice level. An article appeared recently in the Health Service Journal on the new United Kingdom National Electronic Library for Health, which is to be the United Kingdom evidence base. It presented an example of what a clinician might see when typing in a query about management of diabetes for a 45-year-old man. The first item was, “Shared care works provided you have a good information system”. Now, that to me is content-less. What exactly is a good information system? Without further interrogation of that database, I would argue that that item of evidence is really useless.

The same article suggested that the appropriate retrieval time, if clinicians are actually to use the information, is 15 seconds. In other words, if usable information does not come back from the system within 15 seconds, health professionals will not use it; they will not have sufficient time to do the job well and still consult that evidence base.
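Purely as an illustration of that constraint, the sketch below shows how a client might enforce such a retrieval-time budget. The endpoint, query parameter and response shape are assumptions made for the sketch only; they do not describe the National Electronic Library for Health or any real system.

```python
import time

import requests  # third-party HTTP client (pip install requests)

RETRIEVAL_BUDGET_SECONDS = 15  # the retrieval-time budget suggested in the article


def query_evidence_base(base_url, query):
    """Return a list of evidence items, or None if the budget is exceeded.

    The URL, query parameter and JSON response shape are hypothetical;
    a real evidence base would define its own interface.
    """
    start = time.monotonic()
    try:
        response = requests.get(
            base_url,
            params={"q": query},
            timeout=RETRIEVAL_BUDGET_SECONDS,  # abort the request if the server is too slow
        )
        response.raise_for_status()
    except requests.RequestException:
        return None  # too slow or unavailable: in practice the clinician moves on

    if time.monotonic() - start > RETRIEVAL_BUDGET_SECONDS:
        return None  # round trip exceeded the budget even without an error

    return response.json().get("items", [])


# Hypothetical usage:
# items = query_evidence_base("https://evidence.example.org/search",
#                             "management of diabetes, 45-year-old man")
```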

My final challenge is that of building capacity, particularly of the third group of actors, the users of research or evidence. You will recall that I defined those as extending from the community through health service professionals to policy-makers at all levels. How far do we want health professionals to become generators of research, i.e. researchers as well as the users of research?

All health professionals have an enormous opportunity to generate evidence by reflecting on their everyday working practice. Do we want all health professionals to be researchers? The challenge of building capacity in those who use research and evidence means achieving and sustaining a significant cultural shift. It requires a major change in attitude towards health services research and development. It requires a major change in the notion of what constitutes scientific evidence.

Ron Zimmern

Let me say that I agree with the argument Ann Taket has made in her presentation of this field.

In response, the first point I would make is that we should not forget basic science. Some of the really major findings that are of use to public health come from basic science and do not at first sight appear to have any sort of application.

“Effectiveness” is one of these values that we bandy about, and when one talks about effectiveness to medical students they immediately think this is the one thing that is not a value – this is about something concrete, something absolutely objective.

What is effectiveness? By definition something is effective when it achieves the objectives for which it was designed. You cannot say whether it is effective or not until and unless you have defined the objectives. Of course, the definition of objectives is in itself very laden with values.

A cancer drug may be totally ineffective if you define your objective as cure, but highly effective if you define your objective as palliation for six months. So there are these issues of terminology, which are very important in this field of evidence and research.

We should not forget that medicine and caring are an art as well as a science, and that is very much taken account of in the definition of public health. I wonder if that is not one of the reasons why in the United States more consultations are made in the field of complementary than orthodox medicine. Has it something to do with art and science? I think we too often, in this day of evidence-based medicine, forget about the art of medicine or the art of public health, which may be seen as population medicine.

Very often, in making policy decisions, we do not distinguish between knowledge and judgement. I think this is probably what has gone wrong with some government policy, in that this differentiation was not explicitly made. The evidence, knowledge and confidence intervals concerning BSE are there for everybody to see. It is just the judgements that people make about it that are different. One is in the field of art and value and judgement, while the other is actually some degree of scientific fact. When you confuse this you get all sorts of nonsense about science not being science.

My final point is about motives, and it responds to Ann Taket’s question, “what are you using the science for?” How should we understand the recent establishment in the United Kingdom of the National Institute for Clinical Excellence? What is the intention behind it? Should we accept that this is to be something that is totally value-free, or should we be slightly cynical about why it was set up, what it accepts as strict evidence, and what statements are then made about it? To put it another way: do we not sometimes distinguish issues of effectiveness from issues of affordability?

Cristina Puentes-Markides

I welcome Ann Taket’s approach to what we actually know or do about evidence-based policy and action. A meeting on “Translating evidence into practice”, organised in Washington, DC by the American College of Preventive Medicine, highlighted current experiences as well as the benefits and possible risks of using this approach.

In spite of the recognised value of the approach, the risks are particularly pertinent given that testing absolutely everything is impossible, and that we can never be certain that the design of a trial is perfect. The flaws may lead to wrong decision-making.

It was pointed out by the participants in the meeting that the population in which randomised controlled trials are applied is significant; reactions to drugs can be very different in different population groups. In fact, the inclusion of women in randomised controlled trials, for example, is a recent phenomenon, and there are very few trials based on minority populations; most drugs had been tested only on white men. The participants also mentioned the problems that may arise for a physician who uses evidence-based medicine on patients with several concurrent conditions, including the possibility that the doctor may not be up to date on the literature or that he or she may not have the same commitment to using evidence-based medicine correctly.

Sandra Dawson

We have been working with a group of people in Cambridge on evidence-based medicine and evidence-based practice. Part of our work is at the highly qualitative end, on subjective evidence of how clinicians account for their practice. These accounts are all in terms of experience and judgement rather than formal notions of evidence. This prompted me to write something about different worlds. If we take evidence-based medicine, we are looking at the world of natural scientific research, but we could also look at the other world of guideline production for different patient groups or carers. There has been very little connection between those worlds, and that’s just in the realm of clinical decision-making. How much more divided, blinkered and closed-in are the different worlds in the broader scenario that we have been talking about?

Jorma Rantanen

Ann Taket emphasised the new approach of qualitative studies. I acknowledge that this is the way to understand something rather than merely to know it, and it is therefore very important.

Many of the studies are excellent, but it seems to me that the quality of qualitative research is not always very high. I would argue that we need to set some standards for qualitative research; otherwise we might be in a mess in using those data. We positivistic biomedical people have difficulties in judging what is good qualitative research and what is not. In order to use the research as a resource, we need quality assurance for qualitative research.

Ann Taket

I think standards for quality of research do exist, in the same way that we now have a lot of guidance about how to critically appraise “quantitative research”. A lot of effort has gone into producing a comparable guide on how to appraise qualitative research, so both sets exist and can be used. Among the main sources are the following.

BLAXTER, M. Criteria for the evaluation of qualitative research papers. Medical sociology news, 22: 68–71 (1996).

POPAY, J. ET AL. Rationale and standards for the systematic review of qualitative literature in health services research. Qualitative health research, 8: 341–351 (1998).

I am in a position to see some extremely badly designed quantitative research. The examples that Cristina Puentes-Markides gave about problems with randomised controlled trials are very salutary. We should resist the temptation to take them as a gold standard; when we delve beneath the surface, we find that they are far less convincing than we had previously thought.

Good guidance can be found in, for example, the following.

SACKETT, D.L. ET AL. Evidence-based medicine: how to practice and teach EBM, 2nd ed. London, Churchill Livingstone, 2000.

Undertaking systematic reviews of research on effectiveness. CRD’s guidance for those carrying out or commissioning reviews. York, NHS Centre for Reviews and Dissemination, University of York, 2001 (CRD Report No. 4, 2nd ed.) (http://www.york.ac.uk/inst/crd/report4.htm).

Cristina Puentes-Markides

My point was that, in some contexts, there can be problems with evidence-based medicine. I very much like the idea of using the results of experience and social experimentation to feed into decision-making.

Ilona Kickbusch

If evidence – whatever that is – replaces judgement, how does that relate to the political risk that elected officials as policy-makers are supposed to take in terms of making judgements? I personally consider it very dangerous to think that evidence can replace judgement, but it is also related to the current focus on issue politics. Some people think it positive that there are no longer any ideologies, but to me there is something very seriously wrong in the policy process. There is this notion of the “technocrats’ republic”, which I think would be a very negative utopia.

John Wyn Owen

The Canadian government is undertaking experiments to engage the community in debates about policy, particularly policy that involves risk. Normally, and certainly in the United Kingdom, these are handled within government and the documents are not made public. The Canadians have embarked on a much more open development of policy. I was quite intrigued to see how much risk the politicians were actually going to take in developing that process.

Jorma Rantanen

I think we should ask about applying knowledge at the policy level, or rather the lack of action even when the evidence is there. Why do so many people find themselves excluded from services?

In my own field, in spite of very evident need, 70 million European workers do not have any occupational health service. That makes discussion on quality, priorities and effectiveness irrelevant for them. You must first have the service before you can discuss quality and effectiveness.

Richard Walsh

I want to address the issue of applying knowledge and judgement in policy-making, because there is more to it than that. There is something that in the United Kingdom civil service is called “handling”. It is included in all submissions to ministers and it means presenting the policy and anticipating the reaction. Take an issue such as BSE. Ministers will have the evidence before them from the expert group. They have to make a judgement, and their judgement is based on that evidence, but they are also thinking about how they are going to present that evidence to the media and the public and how the public and the media are going to react. Their judgement needs to take account of all those things.

Sandra Dawson

I have no doubt that the United Kingdom government has a very clear commitment to act on issues and to bring together interdepartmental, interdisciplinary groups to work on those issues. But as Richard Walsh has illustrated, the fear of failure, partly exaggerated by the media, is so great that the issues become narrower and narrower. What we have identified at this meeting is the need to get away from that narrowness and work on the bigger picture.

We could usefully look at the Prince of Wales International Business Leaders’ Forum. It has done some excellent work on partnerships between civil society, the public sector and business in terms of concepts, strategy and educating people about partnership-building.

Ann Taket

No matter how much evidence you have, it will never remove the need for judgement. The observation I would make is that we also need to ask who participates in this judgement. That brings us back to our other issue of changes in notions of governance and changes in the role of civil society. There have been debates around the role of citizens’ juries in priority-setting. Those are efforts to put some participation into the process of making judgements, and I think more needs to be done there.

I think the United Kingdom civil service’s notion of “handling” that Richard Walsh referred to is part of an old model, whereby the policy decision is made with a view to how the public will react but not with their participation. I think we are beginning to see signs of recognition that that model ought to change, but I do not think we have yet considered what the new model might be and how it might actually be enacted.

Further reflections

Sandra Dawson

So far we have only talked about the “above-ground partners”, the actors who are trying to do good. We also need to look at the global “black health market”: trade in illicit drugs, human organs, sex, unsafe medicine, tobacco and ill health in general. This could even be said of some of the health consulting that exports managed care to places where it is definitely not appropriate. There is a lot of money in those systems, and information technology and globalisation support them, such as in the way the Internet is used for marketing. We need to look much more systematically at those kinds of actor, and make
