Thinking differently about patient safety

Post date: 16/12/2019 | Time to read article: 7 mins

The information within this article was correct at the time of publishing. Last updated 17/12/2019

An obvious question to ask when people are unintentionally but avoidably harmed is: “why did things go wrong?”  Professor Paul Bowie, Programme Director (Safety & Improvement) at NHS Education for Scotland, looks at how we can think differently about patient safety.

When it comes to the science and practice of patient safety, “why did things go wrong?” is usually the first question asked when unintentional but avoidable harm has occurred.

There is, however, an alternative and equally intriguing question: “why do things go right most of the time?” We might further add: “…especially in our highly complex healthcare systems?”

Like most safety-critical industries worldwide, healthcare is being challenged to think differently about how we view the concept and practice of safety in the 21st century.1 Traditionally, the goals of patient safety are to learn from when things go wrong and to create conditions of care delivery that reduce the risk of patients being harmed as far as is feasibly possible – a risk management principle known as ALARP (As Low As Reasonably Practicable).2

Most of our patient safety efforts tend to focus on highlighting, reporting, quantifying and learning from incidents. When we seek, with hindsight, to learn as care teams or organisations, we frequently try to detect deviations from ‘ideal’ practice and then design improvements to prevent or minimise the risk of future incidents. This is essentially a ‘find and fix’ approach: it aims to isolate specific ‘causal’ events (eg failure to communicate a test result to a patient) and rectify them so that the identified incident trajectory does not recur.

Failing functionality

The assumption is that unreliable technology and fallible clinicians, executives, managers and others should be treated as one and the same – as problematic system elements that either function as we intended (eg behave as expected or follow protocols rigidly) or do not function as we intended (eg break down, deviate from or violate expected healthcare practice).

In these scenarios, ‘error’ is viewed as variability in human performance that we need to contain or eliminate.3 Typically we do this by developing or redesigning a protocol or procedure, firing off warnings and reminders, or suggesting refresher or further training for those involved.

The goal is to increase compliance with evidence-based guidance, organisational protocols or expected professional standards – an approach that tends to over-focus on improving our behaviours to minimise the number of unwanted outcomes. Think about how we carry out and act on improvement recommendations from audits or significant event meetings, for example.

Ultimately, we believe that if all system elements, including us, behave as expected then things will not go wrong. 

A new movement

In recent years this dominant type of thinking in patient safety (known as Safety-I) has come under critical challenge, both for being insufficient for complex healthcare systems and as a significant reason why we have made little progress in making care safer.4

A new approach, known as Safety-II, has gradually emerged.5 This perspective contrasts Safety-I with Safety-II as compelling ways to explain why things sometimes go wrong in complex healthcare systems, but also why they go right in the great majority of cases.

Balancing Safety-I and Safety-II thinking (Box 1)

In orthodox Safety-I thinking, safety is defined almost completely by the absence of something – a state in which as few things as possible go wrong in everyday practice. To reach this reductionist state we examine why things go wrong and attempt to repair them (often with limited success).

Box 1. Comparison of Safety-I and Safety-II thinking3,8

Aspect | Safety-I | Safety-II
Definition of safety | Absence of adverse outcomes; absence of unacceptable levels of risk. | Things going right; presence of resilience abilities.
Safety management principle | Reactive following incidents; risk-based; control of risk through barriers. | Proactive; continuously anticipating changes; achieving success through trade-offs and adaptation.
Learning from experience | Learning from incidents and adverse outcomes; focus on root causes and contributory factors. | Learning from everyday clinical work; focus on understanding work-as-done and trade-offs.
Performance variability | Potentially harmful; constrained through standardisation and procedures. | Inevitable and useful; a source of both success and failure.

In contrast, Safety-II thinking aims to increase safety by maximising the number of events with a successful outcome. Achieving this means going beyond the study of adverse events to understand how things happen – good and not so good – under different conditions in everyday clinical work. We then gain a more sophisticated understanding of the complexity of our work systems, which may better inform efforts to prospectively improve care quality and safety.

The Safety-II philosophy can be difficult to grasp for some with ingrained Safety-I beliefs. In essence it means accepting that the same behaviours and actions that lead to good care can also contribute to things going wrong – the decisions that produce care successes can, even under similar conditions, produce care failures. Our everyday behaviours and actions that sometimes lead to error are variations of the same actions that more often than not produce successful care outcomes. It is only with hindsight that we can see that some of our decisions contributed to failure, while others led to success. Yet traditionally we focus our learning only on failures – and often, more specifically, on the failures of people.

While things going wrong in healthcare are not uncommon (international evidence suggests adverse events are reported in approximately one in ten hospital patients and in 1-2% of primary care consultations6,7), successful clinical outcomes are the norm in the vast majority of care provided. Focusing our improvement efforts on learning how and why this is the case is at the core of Safety-II thinking.

Key concepts

Against this background, some key concepts related to Safety-II are briefly outlined, along with some practical pointers for care teams in thinking differently about patient safety:

Appreciate healthcare is a complex (sociotechnical) system

Healthcare performance is achieved through interactions (successful or otherwise) between the human, technical, social and organisational components of the system, and these interactions are rarely simple or linear. We need to move away from linear, cause-and-effect thinking (ie A + B led to C), because it is largely unsuited to appreciating the complexity of patient care.9

Recognise that outcomes are emergent in complex systems

In complex systems, important outcomes such as patient safety or workforce wellbeing emerge as a result of the interactions described above.10,11 For example, patient safety is not an inherent feature of the system: we cannot state with certainty that a system (eg the warfarin monitoring system) is safe at any one time. It is people who largely create safety, through their dynamic ability to adapt and adjust their performance to the system conditions they face at the time, underpinned by their skill, knowledge, experience and ingenuity, and supported by technology, colleagues and procedures.

Rethink ‘human error’

Despite their widespread use, we should avoid unhelpful terms such as ‘human error’ and its synonyms (eg ‘medical error’). The term is problematic because it is fundamentally inaccurate, ill-defined, ambiguous, misleading and educationally backward, especially when it is viewed as a cause.

To continue to use these terms uncritically is arguably self-defeating and self-harming when it comes to learning, as it just continues to foment the blame and shame culture by focusing on the person rather than the wider system.10,11

Reconcile work-as-done (WAD) and work-as-imagined (WAI)

WAD and WAI are important Safety-II concepts. WAD refers to how everyday work is actually done – how clinicians and others adapt and adjust what they do to keep patients safe and get the job done.

WAI refers to the imagined assumptions of how work is done or should be done by those – often detached from sharp-end reality – who design care processes or guidelines, manage organisations, formulate policies or regulate services.3

As a simple example, think about any clinical protocol – is it used as it should be, and does it really reflect how the work is actually done? Could you work with colleagues to amend it, reconciling WAI and WAD to make it more informative and useful?

Consider local rationality

When looking back with hindsight at the decisions of others at some point in time, seek to understand why those decisions made sense given the system situation and context people faced at the time (known as local rationality).4 People do not go to work to do a bad job; at the time, their decisions made sense to them, otherwise they would not have made them. So why was this, and how can we learn from it?

Efficiency-thoroughness trade-offs (ETTOs)

Again, when looking back with hindsight at decisions and outcomes, consider the ETTOs that people made.5 In complex systems, conditions are dynamic and people adjust what they do, which often involves trading off efficiency (eg signing a pile of prescriptions with a cursory check of each one) against thoroughness (eg carefully checking every single prescription that is signed).

Systems thinking in team-based learning from events

Before trying to understand and answer why something went wrong, first ask: what does successful work-as-done normally look like in this situation? In this way, you can begin to reconcile both perspectives and get a more informed picture of the system of care you are trying to learn about and potentially change.10,11

Some systems thinking pointers

  • Start by understanding and describing current systems.
  • What does work-as-done look like?
  • How does everyday work usually lead to success?
  • Consider the whole system: are there key functions that need to be completed in a certain way? If so, this may be an area for checklists or specified criteria.
  • Are there areas where a variety of responses would be beneficial? If so, how can staff be helped to make the correct decision?
  • How can variability be managed? Consider the interactions between staff and with technology – can this be simplified or strengthened to improve co-ordinated working?


REFERENCES

1 Mannion R and Braithwaite J. False dawns and new horizons in patient safety research and practice. Int J Health Policy Manag 2017; 6: 685–689.

2 Health and Safety Executive. http://www.hse.gov.uk/risk/expert.htm [accessed 31 July 2019]

3 Hollnagel E. Resilience engineering: a new understanding of safety. Journal of the Ergonomics Society of Korea 2016; 35(3): 185-191.

4 Braithwaite J, Wears R, Hollnagel E. Resilient health care: turning patient safety on its head. Int J Qual Health Care 2015; 27: 418–420. doi: 10.1093/intqhc/mzv063

5 Hollnagel E. Safety-I and Safety-II: the past and future of safety management. Surrey: Ashgate; 2014.

6 Kohn LT, Corrigan JM, Donaldson MS (eds); Institute of Medicine (US) Committee on Quality of Health Care in America. To Err is Human: Building a Safer Health System. Washington (DC): National Academies Press (US); 2000.

7 Panesar SS, deSilva D, Carson-Stevens A, Cresswell KM, Salvilla SA, Slight SP, Javad S, Netuveli G, Larizgoitia I, Donaldson LJ, Bates DW, Sheikh A. How safe is primary care? A systematic review. BMJ Qual Saf 2016; 25(7): 544-553.

8 Sujan M. A Safety-II perspective on organisational learning in healthcare organisations. Int J Health Policy Manag 2018; doi: 10.15171/ijhpm.2018.16

9 Plsek P and Greenhalgh T. The challenge of complexity in healthcare. Br Med J 2001; 323: 625–628.

10 McNab D, Bowie P, Ross A, Morrison J. Understanding and responding when things go wrong: key principles for primary care educators. Educ Prim Care 2016; 27: 258–266.

11 McNab D, Bowie P, Morrison J, Ross A. Understanding patient safety performance and educational needs using the ‘Safety-II’ approach for complex systems. Educ Prim Care 2016; 27(6): 443-450.
