
3.5 The "Human Factor" Approach

3.5.3 Failures happen where humans are - risk and safety in clinical practice

As mentioned, Risk Management considers not only errors and failures but also adverse events.

“In spite of increased attention to quality, errors and adverse outcomes are still frequent in clinical practice”, (Leape L., 1994).

A recent American observational study delivers insight into some figures. They “found that 45% of patients experienced some medical mismanagement and 17% suffered events that led to a longer hospital stay or more serious problems”, (Andrews, Stocking, Krizek, Gottlieb, Krizek, & Vargish, 1997).

Leape emphasised that safer practice can only come from acknowledging the potential for error and building in error reduction strategies at every stage of clinical practice (Leape L., 1994). Reason pointed out that “human factors problems are a product of a chain of causes in which the individual psychological factors (that is, momentary inattention, forgetting, etc.) are the last and least manageable links…”, (Reason J., 1995). This means that not all errors lead to serious harm. In fact, it usually requires a string of errors to result in harm to patients. Yet it is important to consider these human factors (Streimelweger, Wac, & Seiringer, 2015).

It is not enough to practice Risk Management or to try everything to improve safety; it is also necessary to monitor and control the results. As Arrow said, “the problem of control is defined as that of choosing operating rules for members of an organization and enforcement rules for the operating rules so to maximize the organization's objective function” (Arrow, 1964). Arrow sketched the control problem for three characteristic types of large organizations: large corporations, governments in their budgetary aspects, and economic systems as a whole (Arrow, 1964).

Arrow divided the problem of organizational control itself into two parts: the choice of operating rules instructing the members of the organization how to act, and the choice of enforcement rules to persuade or compel them to act in accordance with the operating rules. “A widespread usage is to refer to the operating rules as control-in-the-large and the enforcement rules as control-in-the-small. It should be noted that enforcement, here as elsewhere, includes both the detection and the punishment of deviations from the operating rules”, (Arrow, 1964). In the literature, various other terms for these two problems are in use.
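Arrow's distinction between operating rules (control-in-the-large) and enforcement rules (control-in-the-small) can be made concrete with a small sketch. The following Python fragment is a hypothetical illustration, not part of Arrow's work: the operating rule states how members should act, while enforcement consists of detecting deviations and applying a sanction. All names in the sketch are assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    """An action taken by a member of the organization (hypothetical example)."""
    member: str
    description: str
    compliant: bool  # whether the action follows the operating rule


# Operating rule ("control-in-the-large"): instructs members how to act.
def operating_rule(action: Action) -> bool:
    """Returns True if the action conforms to the prescribed behaviour."""
    return action.compliant


# Enforcement rule ("control-in-the-small"): detection plus sanction of deviations.
def enforce(actions: List[Action],
            rule: Callable[[Action], bool],
            sanction: Callable[[Action], None]) -> List[Action]:
    """Detects deviations from the operating rule and applies a sanction to each."""
    deviations = [a for a in actions if not rule(a)]  # detection
    for a in deviations:
        sanction(a)                                   # punishment
    return deviations


if __name__ == "__main__":
    log = [
        Action("nurse A", "double-checked medication dose", compliant=True),
        Action("nurse B", "skipped the surgical checklist", compliant=False),
    ]
    found = enforce(
        log, operating_rule,
        sanction=lambda a: print(f"Deviation by {a.member}: {a.description}"),
    )
    print(f"{len(found)} deviation(s) detected")
```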

Of course, failures might happen wherever humans are involved. The question is how to deal with failures caused by humans. Is it possible to define risk indicators or risk categories, and would it therefore make sense at all to classify human factors?

CIRS – Critical Incident Reporting Systems

“Errors in medicine” are among the ten most common causes of death in healthcare (Brennen, Leape, Laird, Herbert, Localio, et al., 1994), (Com Q of HC in A, 2001), (Kohn, Corrigan, & Donaldson, 2000), (Ennker, Pietrowski, & Kleine, 2007).

To mitigate errors, so-called Incident Reporting Systems (IRS) or Critical Incident Reporting Systems (CIRS) are used. The goal of such systems is to detect potential errors before they occur at all. A WHO guideline56 is available on how to successfully design and implement a CIRS. Reason (Reason J., 1995), for example, concluded on the one hand that “effective Risk Management depends critically on a confidential and preferably anonymous incident monitoring system that records the individual, task, situational, and organizational factors associated with incidents and near misses”. On the other hand, “effective Risk Management means the simultaneous and targeted deployment of limited remedial resources at different levels of the system: the individual or team, the task, the situation, and the organisation as a whole”, (Reason J., Understanding adverse events: human factors, 1995).
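Reason's requirement of a confidential, preferably anonymous incident monitoring system that records individual, task, situational and organisational factors suggests a simple data model. The Python sketch below is purely illustrative; the class and field names are assumptions, not a prescribed CIRS schema, and only the four factor levels are taken from Reason (1995).

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List
import uuid


class FactorLevel(Enum):
    """Factor levels named by Reason (1995) for incident monitoring."""
    INDIVIDUAL = "individual"
    TASK = "task"
    SITUATIONAL = "situational"
    ORGANISATIONAL = "organisational"


@dataclass
class IncidentReport:
    """Anonymous CIRS entry: no reporter identity is stored (hypothetical schema)."""
    report_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    event_date: date = field(default_factory=date.today)
    description: str = ""
    near_miss: bool = True  # True if safety barriers prevented patient harm
    factors: List[FactorLevel] = field(default_factory=list)


# Example: a near miss caused by momentary inattention plus an organisational gap.
report = IncidentReport(
    description="Wrong syringe picked up; noticed before administration.",
    near_miss=True,
    factors=[FactorLevel.INDIVIDUAL, FactorLevel.ORGANISATIONAL],
)
print(report.report_id, [f.value for f in report.factors])
```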

Figure 16 below gives an overview of an Incident Reporting System.

56 WHO: the report is available at http://www.who.int/patientsafety/events/05/Reporting_Guidelines.pdf

Figure 16: Pyramid of safety relevant events in the cycle of an Incident-Reporting-System (source: (Rall, et al., 2006), based on (Dieckmann & Rall, 2004), (Möllemann, Eberlein-Gonska, Doch, & Hübler, 2005), (Rall, Manser, Guggenberger, & Unertl, 2001))

When talking about Incident Reporting Systems, it is necessary to understand the differentiation between errors, incidents and accidents (a minimal classification sketch follows the list below):

 Errors are a deviation from a behaviour regarded as correct, or from a desired result, that could have been performed or achieved by the actor. Errors can lead to incidents and accidents (Hofinger, Horstmann, & Waleczek, 2008).

 An incident is deemed to exist if the safety of the patient was, or could have been, compromised; that is, it could have become an accident, but did not because, for example, various safety mechanisms prevented this (Reason J., 1990), (Perrow C., 1999), (CIRS), (Hofinger, Horstmann, & Waleczek, 2008).
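A minimal way to express the distinction above is a simple classification function. The Python sketch below is a deliberately simplified reading of the definitions given by Hofinger, Horstmann, & Waleczek (2008): every event starts as a deviation (error); if patient safety was or could have been compromised it counts as an incident, and if harm actually reached the patient it counts as an accident. The function and parameter names are hypothetical.

```python
from enum import Enum


class EventClass(Enum):
    ERROR = "error"        # deviation from correct behaviour or a desired result
    INCIDENT = "incident"  # patient safety was or could have been compromised, but barriers held
    ACCIDENT = "accident"  # the deviation actually reached the patient and caused harm


def classify(deviation: bool, safety_compromised: bool, harm_occurred: bool) -> EventClass:
    """Simplified classification following the definitions above (hypothetical logic)."""
    if not deviation:
        raise ValueError("Only deviations from correct behaviour are classified here.")
    if harm_occurred:
        return EventClass.ACCIDENT
    if safety_compromised:
        return EventClass.INCIDENT
    return EventClass.ERROR


# A near miss: the wrong drug was prepared, but the double-check barrier caught it.
print(classify(deviation=True, safety_compromised=True, harm_occurred=False))  # EventClass.INCIDENT
```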

As noted above, not all errors lead to serious harm; it usually requires a string of errors to result in harm to patients. This concept was illustrated by J. Reason through his well-known “Swiss Cheese Model” (Figure 17). In this model, the slices of cheese represent the various system defences between hazards and adverse events, and the holes represent active and latent (system) errors (Reason J., 1990). The slices of cheese are in constant motion. The holes generally do not form a straight line, with at least one slice blocking hazards from reaching patients (upper part of Figure 17).

Most incidents of harm occur when the holes in the slices of cheese (the active and system errors) temporarily align, allowing hazards to reach patients (lower part of Figure 17) (Reason J., 1990), (Reason, 2000), (Vincent, Taylor-Adams, & Stanhope, 1998), (Latino, 2004), (Hofinger, Horstmann, & Waleczek, 2008).
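The intuition of the Swiss Cheese Model can be illustrated with a small Monte Carlo sketch. It assumes, purely for illustration, a fixed number of independent defence layers and an invented probability that any given layer has a hole at the moment a hazard passes; harm occurs only in the rare runs where holes line up in every slice.

```python
import random


def hazard_reaches_patient(layers: int, hole_probability: float) -> bool:
    """A hazard causes harm only if it passes through a hole in every defence layer."""
    return all(random.random() < hole_probability for _ in range(layers))


def simulate(trials: int = 100_000, layers: int = 4, hole_probability: float = 0.1) -> float:
    """Estimates how often the holes align across all slices of cheese."""
    harmful = sum(hazard_reaches_patient(layers, hole_probability) for _ in range(trials))
    return harmful / trials


if __name__ == "__main__":
    random.seed(42)
    rate = simulate()
    # With 4 independent layers and an assumed 10% hole probability each,
    # roughly 0.1**4 = 0.0001 of hazards reach the patient.
    print(f"Estimated probability of harm: {rate:.5f} (expected about {0.1**4:.5f})")
```

The point of the sketch is not the numbers, which are invented, but the structure: adding or strengthening a single independent defence layer multiplicatively reduces how often a chain of errors reaches the patient.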

Reason also “uses the terms intent, actions and consequences, which correlate to latent, human and physical roots”, (Latino, 2004).

According to Reason, “safety significant errors occur at all levels of the system, not just at the sharp end” (Reason J., 1995).

Figure 17: “Swiss Cheese Model” by J. Reason, adapted from (Reason J., 1990)57

“A fundamental culture change is necessary to ensure that innovations introduced to improve patient safety actually achieve their potential”, (Nieva & Sorra, 2003). For example, according to Leape, adverse event reporting systems will not overcome chronic underreporting problems within a punitive culture where acknowledgement of error is not acceptable (Leape L., 1994).

57 Source of the Model: http://www.evidenceintopractice.scot.nhs.uk/patient-safety/what-is-patient-safety.aspx, retrieved 2014-11-25

Looking across borders, countries such as Germany58, Switzerland59, England60, and the United States of America61, as well as a number of other nations, e.g. Ireland62, Sweden63, the Netherlands64, etc., have already established national CIRS systems to report events, especially adverse events or so-called near misses, in healthcare. In some countries such systems are completely voluntary; other nations, on the other hand, have compulsory reporting systems, which results in more accurate and meaningful reports based on the investigated data.