
Some Basic Principles

Divya Srivastava, Martin McKee

their needs as demand. It was assumed that the health-care provider would act in good faith to provide care that was effective, but no one ever checked.

The obvious weaknesses in this approach probably did not matter very much when health care had little to offer. This was the case until at least the end of the nineteenth century, when the introduction of safe anaesthesia and asepsis made surgery (especially that involving the opening of body cavities) more likely to cure than to kill. Until then, few people in a population would voluntarily subject themselves to surgery, and health-care’s impact on population health was correspondingly limited. By the 1960s, the situation had changed out of all recognition. New pharmaceutical products were becoming available that made it possible to treat large numbers of people with common chronic disorders, in particular hypertension, as well as less common conditions such as cancers. Sulphonamides and penicillins were joined by new classes of antibiotics that could treat many common infections, at least for a time.

Modern health-care’s scope to improve population health was greater than it had ever been (Nolte and McKee 2004).

Yet evidence slowly emerged that health care was not achieving its full potential.

Even in systems providing universal access to care, many of those who might benefit from treatment were not doing so. At the same time, it was being realized that some of the health care provided was ineffective (Cochrane 1972).

Of course, these were concerns for the individuals affected but, increasingly, they were also seen as a concern for society. It is true that society has always taken an interest in whether some people receive effective health care. City authorities in many countries had put in place mechanisms to look after those with infectious diseases (such as tuberculosis) or mental disorders, especially where those affected were potentially violent or disruptive. Although these policies may have owed something to altruism, society’s main reason to act was to protect its members from either contagion or physical attack. However, in some places, society began to take an interest in populations by asking whether the health system it had put in place was delivering what was intended.

A similar evolution has taken place in the broader determinants of health.

Again, it has always been accepted that society should take action on health in some circumstances, classically infectious disease and mental health.

Fundamentally, this was to protect the interests of elites. Infectious diseases posed a personal risk to those in power, especially once mass urbanization began in the nineteenth century. The rich were no longer able to maintain large physical distances from the poor but were forced to breathe the same air and drink the same water, thus rendering themselves susceptible to the same diseases. The state also had an interest in promoting health as a means of protecting itself from attack. Thus, in countries such as France and the United Kingdom, major public health initiatives arose in response to the realization that there were not enough young men to supply the armies; those who were available were not sufficiently healthy to do anything useful. In due course, this concern moved beyond the need for a large and effective military force to a concern about the entire workforce’s ability to contribute to the economic development of a country. This instigated a series of policies that, in Europe, led progressively to universal education, health care and/or health insurance and the widespread provision of social housing. More recently, these ideas have been developed within a growing literature demonstrating how better population health feeds into economic growth (Suhrcke et al. 2006) – healthier people are more likely to participate in, and remain in, the workforce and to achieve higher levels of productivity.

Yet, even when it is accepted that the state has a legitimate role in promoting the health of its population, it is still important to ask where the limits of its responsibility lie. When is the state justified in acting to constrain or otherwise influence individuals’ choices? This is an extremely complex question that goes far beyond what can be covered in this book (McKee and Colagiuri 2007).

However, it is important to consider some key principles.

First, many unhealthy choices are not made freely. In many countries, people on low incomes simply cannot eat healthily, primarily because they cannot afford more expensive foods such as fresh fruit and vegetables. Their situation is often exacerbated because large retailers build their stores in wealthier locations, recognizing the limited scope for sales in deprived areas. Poor people therefore either incur high transport costs or must depend on small convenience stores selling mainly preserved food at comparatively high prices. Similarly, people face problems with access even when a health service (such as a screening programme) is made freely available. Those on low incomes in particular are constrained by distance and travel costs or, in the case of migrants, because information is not available in a language they understand.

Second, there is a consensus that it is appropriate for the state to intervene when one person’s action causes harm to others. This was conceded by John Stuart Mill, whose writings provide the intellectual basis for libertarianism.

…the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right. (Mill 2003).

Yet even Mill conceded that there were certain limits to an individual’s right to organize his or her own life, such as the need for a prohibition on selling oneself into slavery. However, beyond the obvious (e.g. a state law enforcement agency’s right to take direct action to stop someone opening fire on the public), in what circumstances is the risk to others sufficient to provoke action?

Ultimately, almost every action undertaken by anyone will have (good or bad) consequences for someone else. Norms vary between countries and change over time. In the nineteenth century, the United Kingdom was willing to go to war against China to secure the right to export opium; in the twentieth century the British navy patrolled the Caribbean to prevent Colombian cocaine dealers engaging in the same activity. In some cases norms change as the result of new evidence. Thus, the bans on smoking in public places that are now spreading rapidly in many parts of the world are due, at least in part, to the knowledge that second-hand smoke (particularly from cigarettes smouldering in ashtrays) is far more toxic than was previously thought. This fact had long been known to the tobacco industry but was actively concealed by it (Diethelm et al. 2005).

In other cases, shifts are a result of ideology. During the twentieth century there were concerns about injuries and deaths among motorcycle riders and the subsequent costs incurred by society and the families of those injured. These stimulated legislators in many parts of the world to enact laws requiring the wearing of crash helmets, but the prevailing libertarian ideology in some American states led them to revoke existing legislation in the late 1990s. It is assumed that the legislators felt that the significant increases in fatalities (Houston and Richardson 2007) were a cost worth bearing for the right to ride at speed without the encumbrance of a helmet.

Third, there is now an unprecedented amount of evidence on the contributions that different diseases and risk factors make to the overall burden of disease, most obviously as a result of the work of the Global Burden of Disease programme (Ezzati et al. 2004). This highlights the cost of failing to address this avoidable loss of human life. It has also highlighted the inconsistencies in policies that focus on phenomena that cause small numbers of deaths while ignoring some of the main killers. This is exemplified by the events in New York and Washington on 11 September 2001 that caused almost 3000 deaths and gave rise to an unprecedented political response, including the American-led invasion of Afghanistan and the suspension of fundamental legal principles such as habeas corpus. In contrast, firearms are used in the murder of over 9000 American citizens every year but this has not produced legislation to control access to guns. Equally, the avoidable deaths of many thousands of American citizens caused by inadequate access to health care have failed to bring about universal access (Nolte and McKee 2008).

Fourth, there is a growing acceptance that when the state does act, it should ensure that its actions have a prospect of working – policies should be based on evidence of effectiveness. Unfortunately, this is often not the case, especially where the media press politicians to respond to each day’s “crisis”. Often, this leads to a stream of poorly thought-through initiatives based on superficial analyses, although their populist nature has the advantage of attracting favourable newspaper headlines. The state should also take measures to implement its policies, as action is needed to enforce legislation. This is shown by experience in areas as diverse as under-age drinking, seat-belt wearing and the use of mobile phones when driving. Equally, it is important to ensure that measures have popular acceptance, often by confronting powerful vested interests that seek to persuade the population to act in ways that endanger their health. In all countries, advertising by junk food manufacturers far exceeds efforts by health promotion agencies. For many years skilful (and highly paid) lobbyists for the tobacco industry were able to persuade people that second-hand smoke was simply an irritation rather than a potentially lethal combination of toxins. In the United Kingdom, debate about the respective roles of the individual and the state has been distorted by frequent disparaging use of the term “nanny state” by those who oppose government intervention in any aspect of life. Failure to resolve this debate means that those charged with implementing policies must first sort out the philosophical confusion (McKee and Raine 2005).

In summary, even the most individualistic countries have always shown a degree of acceptance that the state is justified in intervening to improve health. There is now a growing understanding of the importance of doing so in a coherent way that is underpinned by an explicit philosophy; recognizes the challenges that many people face when making healthy choices; is based on evidence of effectiveness; and is accompanied by measures that will allow it to work.

At the same time, it should not be assumed that everyone will welcome a health strategy, much less one that incorporates targets. A proposal to develop a health strategy assumes that society as a whole has an interest in the health of the individuals that comprise it. As noted above, even today this view is only partially accepted in many countries where responsibility for obtaining health care is seen largely as a matter for the individual. Thus, what most Europeans would see as a promise of government-funded health care for all is seen as a threat by many American politicians (even if limited to children, as apparent in the failure to pass legislation on the State Children’s Health Insurance Program in 2007). Even where there is acceptance of societal responsibility to pay for health care, there is often a view that the system should simply respond to those who seek care rather than actively seeking them out, interfering in the care they receive, or taking action to prevent them needing care in the first place. If someone is unaware that they have a condition that might benefit from treatment, it is for them to educate themselves and to determine what package of care, from the many on offer, best meets their needs. It is assumed that (with the help of an Internet search engine) they can acquire the expertise needed to ascertain the most effective treatment, disregarding the often enticing promises of quacks and charlatans.

A fifth factor contributes to growing societal interest in health and health care – increasing concern about upward pressure on the cost of care. If society pays collectively for health care, it is legitimate to ensure that the care provided is effective and not wasted and that, where possible, measures have been taken to avoid the illness arising in the first place. This is the major consideration underlying the Wanless Report (Wanless 2001). This detailed analysis commissioned by HM Treasury in the United Kingdom revealed that the best way to make the cost of future health care sustainable was to act to reduce the likelihood of becoming ill. Of course, this should be self-evident, but the fact that it needed to be demonstrated at all is a sad reflection of the disconnect between health care and health promotion in many politicians’ minds. The corollary is that if people want to obtain care that has not been proven effective (such as homeopathy) then they can always pay for it themselves.

Taken together, these considerations have led governments in many countries to explore a number of mechanisms that are linked to health policies. They include proactive assessment of health needs; development of guidance on what forms of care are effective, and in what circumstances; and monitoring health outcomes to ensure that what is theoretically possible is being delivered in practice. In some places these developments have been brought together in the form of programmes based on health targets. In such programmes, an organization (whose precise role and configuration will depend on the health system within which it exists) will state what it wants to achieve for the health of a given population and when it expects it to be achieved.

This chapter continues with a summary of the historical development of health targets and then looks briefly at two key issues – who should be responsible and what information do they need?

The emergence of health targets

Health targets draw on the concept of management by objectives, which emerged in the 1970s from the literature on new public management. This included closer scrutiny of areas that had been the preserve of professionals through the use of performance measurement, target setting and linkage between resource allocation and measurable outputs. One of its disciples, Peter Drucker (Drucker 1954), identified five key elements of this approach:

1. cascading of organizational goals and objectives
2. specific objectives for each member
3. participative decision-making
4. use of explicit time periods
5. performance evaluation and feedback.

A large body of literature has grown up and provides increasing insights into how these elements can be employed. For example, there has been much discussion about the relative merits of goals that are process or outcome oriented even though, in reality, this is a false dichotomy. While there is little debate about the ultimate importance of achieving changes in outcome, it is also recognized that this may take a long time. For example, an intervention that is extremely effective in preventing young people from starting to smoke now will reduce the subsequent rate of lung cancer, but that change may not be detectable for 30 years. In such a case it is clearly more useful to measure success in terms of reduction in smoking initiation.

Other debates have focused on the extent to which targets should set a general direction of travel or should be detailed road maps, indicating every point along the way. This has been addressed by the separation of aspirational, managerial and technical targets, ranked in terms of the extent to which they prescribe what should be achieved and how (van Herten and Gunning-Schepers 2000).

Similarly, much has been written about the optimal characteristics of targets. At the risk of simplification, this literature has been reduced to the SMART mnemonic described in the introduction. In many ways the target-based approach was simply a formalization of processes undertaken in the commercial sector for many decades, symbolized by the sales chart on the chief executive’s wall. The introduction of the concept into the health sector is often traced to the publication, in 1981, of WHO’s Health for All strategy (WHO 1985). This represented a radical departure from the prevailing approach. It suggested that the progress of nations might be measured not just by their economic progress, using conventional measures such as gross national product, but by the health of their populations. Paradoxically, this represented a return to an earlier time. Before the modern system of national accounts was established in the post-Second World War era, the strength of a nation was often assessed in terms of its population, although the focus was on numbers rather than health.

Recognizing the extent of global diversity, each of WHO’s regional offices was charged with developing its own set of targets appropriate to the challenges facing its Member States. The most comprehensive approach was taken by the WHO Regional Office for Europe. This initiated a major consultation process that led to the creation of 38 separate targets covering many aspects of health (WHO Regional Office for Europe 1985). The intention was that they should be achieved by 2000, consistent with the stated aspiration that that date would see the achievement of health for all. The targets varied greatly – some were ambitious, aspirational and probably unachievable; others were highly specific and required only a small improvement in existing trends. Some focused on structures and processes, others on outcomes. Inevitably, only some were achieved and often only in some countries. By the mid-1990s, when it was clear that complete success was not forthcoming (and it was unlikely that it ever would have been) (WHO Regional Office for Europe 1998), it was decided to renew the process. Health21, a health strategy for the twenty-first century, retained health targets but reduced their number to reflect the name of the strategy (WHO Regional Office for Europe 1999). It built on some of the lasting legacies of the previous strategy, in particular the substantial strengthening of information systems exemplified by the widely used Health for All database.

Initial development of the Health for All targets involved over 250 experts from across Europe and inevitably impacted on policies within national governments, as did developments elsewhere. For example, the American Government’s Healthy People programme, launched in 1979 (Surgeon General 1979), was renewed in 1990 as Healthy People 2000 and incorporated a number of specific health targets. The Health for All process had itself stimulated many governments to develop strategies to improve health, which (as already noted) often produced a new way of thinking about the role of government. A growing number of these strategies included some form of targets (McKee and Berman 2000).

Who is responsible?

Health is everyone’s business – this statement has become almost a cliché. It has the advantage of reminding us that we all play a part in improving our own
