
Friday, July 31, 2020

Culture in Healthcare: Lessons from When We Do Harm by Danielle Ofri, MD

In her book*, Dr. Ofri takes a hard look at the prevalence of medical errors in the healthcare system.  She reports some familiar statistics** and fixes, but also includes highly detailed case studies where errors large and small cascaded over time and the patients died.  This post summarizes her main observations.  She does not provide a tight summary of a less error-prone healthcare culture but she drops enough crumbs that we can infer its desirable attributes.

Healthcare is provided by a system

The system includes the providers, the supporting infrastructure, and factors in the external environment.  Ofri observes that medical care is exceedingly complicated and some errors are inevitable.  Because errors are inevitable, the system should emphasize error recognition and faster recovery with a goal of harm reduction.

She shares our view that the system permits errors to occur so fixes should focus on the system and not on the individual who made an error.***  System failures will eventually trap the most conscientious provider.  She opines that most medical errors are the result of a cascade of actions that compound one another; we would say the system is tightly coupled.

System “improvements” intended to increase efficiency can actually reduce effectiveness.  For example, electronic medical records can end up dictating providers’ practices, fragmenting thoughts and interfering with the flow of information between doctor and patient.****  Data field defaults and copy and paste shortcuts can create new kinds of errors.  Diagnosis codes driven by insurance company billing requirements can distort the diagnostic process.  In short, patient care becomes subservient to documentation.

Other changes can have unforeseen consequences.  For example, scheduling fewer working hours for interns leads to fewer diagnostic and medication errors but also results in more patient handoffs (where half of adverse medical events are rooted).

Aviation-inspired checklists have limited applicability

Checklists have reduced error rates for certain procedures but can lead to unintended consequences, e.g., mindless check-off of items (to achieve 100% completion in the limited time available) and providers focusing on the checklist while ignoring other things going on, including emergent issues.

Ofri thinks the parallels between healthcare and aviation are limited because of the complexity of human physiology.  While checklists may be helpful for procedures, doctors ascribe limited value to process checklists that guide their thinking.

Malpractice suits do not meaningfully reduce the medical error rate

Doctors fear malpractice suits so they practice defensive medicine, prescribing extra tests and treatments which have their own risks of injury and false positives, and lead to extra cost.  Medical equipment manufacturers also fear lawsuits so they design machines that sound alarms for all matters great and small; alarms are so numerous they are often simply ignored by the staff.

Hospital management culture is concerned about protecting the hospital’s financial interests against threats, including lawsuits.  A Cone of Silence is dropped over anything that could be considered an error and no information is released to the public, including family members of the injured or dead patient.  As a consequence, it is estimated that fewer than 10% of medical errors ever come to light.  There is no national incident reporting system because of the resistance of providers, hospitals, and trial lawyers.

The reality is that a malpractice suit is not practical in the vast majority of cases of possible medical error.  The bar is very high: your doctor must have provided sub-standard care that caused your injury/death and resulted in quantifiable damages.  Cases are very expensive and time-consuming to prepare, and the legal system, like the medical system, is guided by money, so an acceptable risk-reward ratio has to be there for the lawyers.*****

Desirable cultural attributes for reducing medical errors

In Ofri’s view, culture includes hierarchy, communication skills, training traditions, work ethic, egos, socialization, and professional ideals.  The primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done amid a diffusion of responsibility.  This must be taught by example, and individuals must demand comparable behavior from their colleagues.

Providing medical care is a team business

Effective collaboration among team members is key, as is the ability (or duty even) of lower-status members to point out problems and errors without fear of retribution.  Leaders must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true.  Where practical, training should be performed in groups who actually work together to build communication skills.

Doctors and nurses need time and space to think

Doctors need the time to develop a differential diagnosis, to ask and answer “What else could it be?”  The provider’s thought process is the source of most diagnostic error, and it is subject to explicit and implicit biases, emotions, and distraction.  However, stopping to think can cause delays, which can be reported as shortcomings by the tracking system.  The culture must acknowledge uncertainty (fueled by false positives and negatives), address overconfidence, and promote feedback, especially from patients.

Errors and near misses need to be reported without liability or shame

The culture should regard reporting an adverse event as a routine and ordinary task.  This is a big lift for people steeped in the hierarchy of healthcare and the impunity of its highest ranked members.  Another factor to be overcome is the reluctance of doctors to report errors because of their feelings of personal and professional shame.

Ofri speaks favorably of a “just culture” that recognizes that unintentional error is possible, but risky behavior like taking shortcuts requires (system) intervention, and negligence should be disciplined.  In addition, there should not be any bias in how penalties are handed out, e.g., based on status.

In sum, Ofri says healthcare will always be an imperfect system.  Ultimately, what patients want is acknowledgement of errors and apology for them from doctors.

Our Perspective

Ofri’s major contribution is her review of the evidence showing how pervasive medical errors are and how the healthcare industry works overtime to deny and avoid responsibility for them.

Her suggestions for a safer healthcare culture echo what we’ve been saying for years about the attributes of a strong safety culture.  Reducing error rates will be hard for many reasons.  For example, Ofri observes medical training forges a lifelong personal identity and reverence for tradition; in our view, it also builds in resistance to change.  The biases in decision making that she mentions are not trivial.  For one discussion of such biases, see our Dec. 18, 2013 review of Daniel Kahneman’s work.

Bottom line: After you read this, you will be clutching your rosary a little tighter if you have to go to a hospital for a major injury or illness.  You are more responsible for your own care than you think.


*  D. Ofri, When We Do Harm (Boston: Beacon Press, 2020).

**  For example, one study reported that almost 4% of hospitalizations resulted in medical injury, of which 14% were fatal; doctors’ diagnostic accuracy is estimated to be in the range of 90%.

***  It has been suggested that the term “error” be replaced with “adverse medical event” to reduce the implicit focus on individuals.

****  Ofri believes genuine conversation with a patient is the doctor’s single most important diagnostic tool.

***** As an example of the power of money, when Medicare started fining hospitals for shortcomings, the hospitals started cleaning up their problems.

Wednesday, November 6, 2019

National Academies of Sciences, Engineering, and Medicine Systems Model of Medical Clinician Burnout, Including Culture Aspects

We have been posting about preventable harm to health care patients, emphasizing how improved organizational mental models and attention to cultural attributes might reduce the incidence of such harm.  A new National Academies of Sciences, Engineering, and Medicine (NASEM) committee report* looks at one likely contributor to the patient harm problem: clinician burnout.**  The NASEM committee purports to use a systems model to analyze burnout and develop strategies for reducing burnout while fostering professional well-being and enhancing patient care.  This post summarizes the 300+ page report and offers our perspective on it.

The Burnout Problem and the Systems Model 


Clinician burnout is caused by stressors in the work environment; burnout can lead to behavioral and health issues for clinicians, clinicians prematurely leaving the healthcare field, and poorer treatment and outcomes for patients.  This widespread problem requires a “systemic approach to burnout that focuses on the structure, organization, and culture of health care.” (p. 3)

The NASEM committee’s systems model has three levels: frontline care delivery, the health care organization, and the external environment.  Frontline care delivery is the environment in which care is provided.  The health care organization includes the organizational culture, payment and reward systems, processes for managing human capital and human resources, the leadership and management style, and organizational policies. The external environment includes political, market, professional, and societal factors.

All three levels contribute to an individual clinician’s work environment, and ultimately boil down to a set of job demands and job resources for the clinician.
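
To make the committee’s tri-level structure concrete, here is a minimal sketch (our own illustration, not something from the report) of how factors at each level might roll up into one clinician’s job demands and job resources.  The level names come from the report; the specific factor entries are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Level:
    """One level of the NASEM model: frontline care delivery, health care organization, or external environment."""
    name: str
    demands: list = field(default_factory=list)    # stressors this level imposes on the clinician
    resources: list = field(default_factory=list)  # supports this level provides to the clinician

def clinician_work_environment(levels):
    """Roll all three levels up into the set of job demands and job resources one clinician faces."""
    return {
        "job_demands": [d for level in levels for d in level.demands],
        "job_resources": [r for level in levels for r in level.resources],
    }

# Hypothetical factor entries, for illustration only.
model = [
    Level("frontline care delivery", demands=["documentation load"], resources=["team support"]),
    Level("health care organization", demands=["misaligned rewards"], resources=["supportive culture", "adequate staffing"]),
    Level("external environment", demands=["regulatory burden"], resources=["professional societies"]),
]
print(clinician_work_environment(model))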

Recommendations

The report identifies multiple factors that need to be considered when developing interventions, including organizational values and leadership; a work system that provides adequate resources and facilitates teamwork, collaboration, communication, and professionalism; and an implementation approach that builds a learning organization, aligns reward systems with organizational values, nurtures organizational culture, and uses human-centered design processes. (p. 7)

The report presents six recommendations for reducing clinician burnout and fostering professional well-being:

1. Create positive work environments,
2. Create positive learning environments,
3. Reduce administrative burdens,
4. Optimize the use of health information technologies,
5. Provide support to clinicians to prevent and alleviate burnout, and foster professional well-being, and
6. Invest in research on clinician professional well-being.

Our Perspective

We’ll ask and answer a few questions about this report.

Did the committee design an actual and satisfactory systems model?

We have promoted systems thinking since the inception of Safetymatters so we have some clear notions of what should be included in a systems model.  We see both positives and missing pieces in the NASEM committee’s approach.***

On the plus side, the tri-level model provides a useful and clear depiction of the health care system and leads naturally to an image of the work world each clinician faces.  We believe a model should address certain organizational realities—goal conflict, decision making, and compensation—and this model is minimally satisfactory in these areas.  A clinician’s potential goal conflicts, primarily maintaining a patient focus while satisfying the organization’s quality measures, managing limited resources, achieving economic goals, and complying with regulations, are mentioned only once. (p. 54)  Decision making (DM) specifics are discussed in several areas, including evidence-based DM (p. 25), the patient’s role in DM (p. 53), the burnout threat when clinicians lack input to DM (p. 101), the importance of participatory DM (pp. 134, 157, 288), and information technology as a contributor to DM (p. 201).  Compensation, which includes incentives, should align with organizational values (pp. 10, 278, 288) and should not be a stressor on the individual (p. 153).  Non-financial incentives such as awards and recognition are not mentioned.

On the downside, the model is static and two-dimensional.  The interrelationships and dynamics among model components are not discussed at all.  For example, the importance of trust in management is mentioned (p. 132) but the dynamics of trust are not discussed.  In our experience, “trust” is a multivariate function of, among other things, management’s decisions, follow-through, promise keeping, role modeling, and support of subordinates—all integrated over time.  In addition, model components feed back into one another, both positively and negatively.  In the report, the use of feedback is limited to clinicians’ experiences being fed back to the work designers (pp. 6, 82), continuous learning and improvement in the overall system (pp. 30, 47, 51, 157), and individual work performance recognition (pp. 103, 148).  It is the system dynamics that create homeostasis, fluctuations, and all levels of performance from superior to failure.
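
As a toy illustration of the kind of dynamics we mean, the sketch below (ours, not the committee’s) treats trust in management as a weighted accumulation of management behaviors that decays over time.  The behavior names, weights, and decay rate are all assumed values chosen only to show the shape of the feedback, not to measure anything.

# Toy model: trust as management behaviors integrated over time, with gradual decay.
# Weights, decay rate, and the event stream are assumed values for illustration.
WEIGHTS = {
    "sound_decision": 1.0,
    "promise_kept": 1.5,
    "promise_broken": -3.0,      # breaches erode trust faster than good acts build it
    "supported_subordinate": 1.0,
    "scapegoating": -4.0,
}
DECAY = 0.9  # fraction of accumulated trust carried into the next period

def trust_over_time(events_by_period, trust=0.0):
    """Integrate management behaviors period by period into a running trust level."""
    history = []
    for events in events_by_period:
        trust = DECAY * trust + sum(WEIGHTS.get(e, 0.0) for e in events)
        history.append(round(trust, 2))
    return history

print(trust_over_time([
    ["sound_decision", "promise_kept"],   # trust builds...
    ["supported_subordinate"],
    ["promise_broken"],                   # ...and one breach undoes several periods of goodwill
    ["sound_decision"],
]))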

Does culture play an appropriate role in the model and recommendations?

We know that organizational culture affects performance.  And culture is mentioned throughout this report as a system component with the implication that it is an important factor, but it is not defined until a third of the way through the report.****  The NASEM committee apparently assumes everyone knows what culture is, and that’s a problem because groups, even in the same field, often do not share a common definition of culture.

But the lack of a definition doesn’t stop the authors from hanging all sorts of attributes on the culture tree.  For example, the recommendation details include “Nurture (establish and sustain) organizational culture that supports change management, psychological safety, vulnerability, and peer support.” (p. 7)  This is mostly related to getting clinicians to recognize their own burnout and seek help, and removing the social stigma associated with getting help.  There are a lot of moving parts in this recommendation, not the least of which is overcoming the long-held cultural ideal of the physician as a tough, all-knowing, powerful authority figure. 

Teamwork and participatory decision making are promoted (pp. 10, 51) but this can be a major change for organizations that traditionally have strong silos and value adherence to established procedures and protocols. 

There are bromides sprinkled through the report.  For example, “Leadership, policy, culture, and incentives are aligned at all system levels to achieve quality aims and promote integrity, stewardship, and accountability.” (p. 25)  That sounds worthy but is a huge task to specify and implement.  Same with calling for a culture of continuous learning and improvement, or in the committee’s words a “Leadership-instilled culture of learning—is stewarded by leadership committed to a culture of teamwork, collaboration, and adaptability in support of continuous learning as a core aim.” (p. 51)

Are the recommendations useful?

We hope so.  We are not behavioral scientists but the recommendations appear to represent sensible actions.  They may help and probably won’t hurt—unless a health care organization makes promises that it cannot or will not keep.  That said, the recommendations are pretty vanilla and the NASEM committee cannot be accused of going out on any limbs.

Bottom line: Clinician burnout undoubtedly has a negative impact on patient care and outcomes.  Anything that can reduce burnout will improve the performance of the health care system.  However, this report does not appreciate the totality of cultural change required to implement the modest recommendations.


*  National Academies of Sciences, Engineering, and Medicine, “Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being,” (Washington, DC: The National Academies Press, 2019). 


**  “Burnout is a syndrome characterized by high emotional exhaustion, high depersonalization (i.e., cynicism), and a low sense of personal accomplishment from work.” (p. 1)  “Clinician burnout is associated with an increased risk of patient safety incidents . . .” (p. 2)

***  As an aside, the word “systems” is mentioned over 700 times in the report.

****  “Organizational culture is defined by the fundamental artifacts, values, beliefs, and assumptions held by employees of an organization (Schein, 1992). An organization’s culture is manifested in its actions (e.g., decisions, resource allocation) and relayed through organizational structure, focus, mission and value alignment, and leadership behaviors” (p. 99)  This is good but it should have been presented earlier in the report.

Wednesday, October 9, 2019

More on Mental Models in Healthcare

Our August 6, 2019 post discussed the appalling incidence of preventable harm in healthcare settings.  We suggested that a better mental model of healthcare delivery could contribute to reducing the incidence of preventable harm.  It will come as no surprise to Safetymatters readers that we are referring to a systems-oriented model.

We’ll use a 2014 article* by Nancy Leveson and Sidney Dekker to describe how a systems approach can lead to better understanding of why accidents and other negative outcomes occur.  The authors begin by noting that 70-90% of industrial accidents are blamed on individual workers.**  As a consequence, proposed fixes focus on disciplining, firing, or retraining individuals or, for groups, specifying their work practices in ever greater detail (the authors call this “rigidifying” work).  This is the Safety I mental model in a nutshell, limiting its view to the “what” and “who” of incidents.   

In contrast, systems thinking posits the behavior of individuals can only be understood by examining the context in which their behavior occurs.  The context includes management decision-making and priorities, regulatory requirements and deficiencies, and of course, organizational culture, especially safety culture.  Fixes that don’t consider the overall process almost guarantee that similar problems will arise in the future.  “. . . human error is a symptom of a system that needs to be redesigned.”  Systems thinking adds the “why” to incident analysis.

Every system has a designer, although they may not be identified as such and may not even be aware they’re “designing” when they specify work steps or flows, or define support processes, e.g., procurement or quality control.  Importantly, designers deal with an ideal system, not with the actual constructed system.  The actual system may differ from the designer's original specification because of inherent process variances, the need to address unforeseen conditions, or evolution over time.  Official procedures may be incomplete, e.g., they may omit unlikely but possible conditions or assume that certain conditions cannot occur.  However, the people doing the work must deal with the constructed system, however imperfect, and the conditions that actually occur.

The official procedures present a double-edged threat to employees.  If they adapt procedures in the face of unanticipated conditions, and the adaptation turns out to be ineffective or leads to negative outcomes, employees can be blamed for not following the procedures.  On the other hand, if they stick to the procedures when conditions suggest they should be adapted and negative outcomes occur, the employees can be blamed for too rigidly following them.

Personal blame is a major problem in Safety I.  “Blame is the enemy of safety . . . it creates a culture where people are afraid to report mistakes . . . A safety culture that focuses on blame will never be very effective in preventing accidents.”

Our Perspective

How does the above relate to reducing preventable harm in healthcare?  We believe that structural and cultural factors impede the application of systems thinking in the healthcare field.  These factors keep the field stuck in a Safety I worldview no matter how much it pretends otherwise.

The hospital as formal bureaucracy

When we say “healthcare” we are referring to a large organization that provides medical care; a hospital is the smallest unit of analysis.  A hospital is literally a textbook example of what organizational theorists call a formal bureaucracy.  It has specialized departments with an official division of authority among them—silos are deliberately created and maintained.  An administrative hierarchy mediates among the silos and attempts to guide them toward overall goals.  The organization is deliberately impersonal to avoid favoritism, and behavior is prescribed, proscribed, and guided by formal rules and procedures.  It appears hospitals were deliberately designed to promote Safety I thinking and its inherent bias for blaming the individual for negative outcomes.

Employees have two major strategies for avoiding blame: strong occupational associations and plausible deniability. 

Powerful guilds and unions 


Medical personnel are protected by their silo and tribe.  Department heads defend their employees (and their turf) from outsiders.  The doctors effectively belong to a guild that jealously guards their professional authority; the nurses and other technical fields have their unions.  These unofficial and official organizations exist to protect their members and promote their interests.  They do not exist to protect patients, although they certainly tout such interest when they are pushing for increased employee headcounts.  A key cultural value is that members do not rat on other members of their tribe, so problems may be observed but go unreported.

Hiding behind the procedures

In this environment, the actual primary goal is to conform to the rules, not to serve clients.  The safest course for the individual employee is to follow the rules and procedures, independent of the effect this may have on a patient.  The culture espouses a value of patient safety but what gets a higher value is plausible deniability, the ability to avoid personal responsibility, i.e., blame, by hiding behind the established practices and rules when negative outcomes occur.

An enabling environment 


The environment surrounding healthcare allows providers to continue delivering a level of service that literally kills patients.  Data opacity means it’s very difficult to get reliable information on patient outcomes.  Hospitals with high failure rates simply claim they are stuck with or choose to serve the sickest patients.  Weak malpractice laws are promoted by the doctors’ guild and maintained by the politicians they support.  Society in general is overly tolerant of bad medical outcomes.  Some families may make a fuss when a relative dies from inadequate care, but settlements are paid, non-disclosure agreements are signed, and the enterprise moves on.

Bottom line: It will take powerful forces to get the healthcare industry to adopt true systems-oriented thinking and identify the real reasons why preventable harm occurs and what corrective actions could be effective.  Healthcare claims to promote evidence-based medicine; the industry needs to add evidence-based harm reduction strategies.  Industry-wide adoption of the aviation industry’s confidential reporting system for errors would be a big step forward.


*  N. Leveson and S. Dekker, “Get To The Root Of Accidents,” ChemicalProcessing.com (Feb 27, 2014).  Retrieved Oct. 7, 2019.  Leveson is an MIT professor and long-standing champion of systems thinking; Dekker has written extensively on Just Culture and Safety II concepts.  Click on their respective labels to pull up our other posts on their work.

**  The article is tailored for the process industry but the same thinking can be applied to service industries.

Tuesday, August 6, 2019

Safety II Lessons for Healthcare

Rod of Asclepius  Source: Wikipedia
We recently saw a journal article* about the incidence of preventable patient harm in medical care settings.  The rate of occurrence of harm is shocking, at least to someone new to the topic.  We wondered if healthcare providers and researchers being constrained by Safety I thinking could be part of the problem.  Below we provide a summary of the article, followed by our perspective on how Safety II thinking and practices might add value.

Incidence of preventable patient harm

The meta-analysis reviewed 70 studies and over 300,000 patients.  The overall incidence of patient harm (e.g., injury, suffering, disability or death) was 12% and half of that was deemed preventable.**  In other words, “Around one in 20 patients are exposed to preventable harm in medical care.”  12% of the preventable patient harm was severe or led to death.  25% of the preventable incidents were related to drugs and 24% to other treatments.  The authors did not observe any change in the preventable harm rate over the 19 years of data they reviewed.
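
To see how the headline numbers fit together, here is the chained arithmetic using the article’s figures (the chaining and rounding are ours; the article rounds the preventable rate to “around one in 20”):

# Figures from the meta-analysis; chaining the percentages is our own arithmetic.
overall_harm = 0.12        # incidence of any patient harm
preventable_share = 0.50   # half of that harm was deemed preventable
severe_share = 0.12        # share of preventable harm that was severe or led to death

preventable = overall_harm * preventable_share            # = 0.06
print(f"Preventable harm: {preventable:.0%} of patients, roughly 1 in {1 / preventable:.0f}")
print(f"Severe or fatal preventable harm: {preventable * severe_share:.1%} of patients")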

Possible interventions

In fairness, the article’s focus was on calculating the incidence of preventable harm, not on identifying or fixing specific problems.  However, the authors do make several observations about possible ways to reduce the incidence rate.  The article had 11 authors so we assume these observations are not just one person’s to-do list but rather represent the collective thoughts of the author group.

The authors note “Key sources of preventable patient harm could include the actions of healthcare professionals (errors of omission or commission), healthcare system failures, or involve a combination of errors made by individuals, system failures, and patient characteristics.”  They believe occurrences could be avoided “by reasonable adaptation to a process, or adherence to guidelines, . . .” 

The authors suggest “A combination of individual-level measures (eg, educational interventions for practitioners), system-level*** measures (eg, human-centred design of healthcare tasks and work environments), and organisational-level measures (eg, introducing quality monitoring and improvement processes) are likely to be a promising strategy for mitigating preventable patient harm, . . .”

Our Perspective

Let’s get one thing out of the way: no other industry on the planet would be allowed to operate if it unnecessarily harmed people at the rate presented in this article.  As a global society, we accept, or at least tolerate, a surprising incidence of preventable harm to the people the healthcare system is supposed to be trying to serve.

We see a direct connection between this article and our Oct. 29, 2018 post where we reviewed Sidney Dekker’s analysis of patient harm in a health care facility.  Dekker’s report also highlighted the differences between the traditional Safety I approach to safety management and the more current Safety II approach.

As we stated in that post, in Safety I the root cause of imperfect results is the individual, and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In addition, the design of the work is subject to constant refinement (or “continuous improvement”).  In the preventable harm article, the authors’ observations look a lot like Safety I to us, with their emphasis on getting the individual to conform with work as designed, e.g., educational interventions (i.e., training), adherence to guidelines and quality monitoring, and improved design (i.e., specification) of healthcare tasks.

In contrast, in Safety II normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors inevitably occur.  When Dekker’s team reviewed cases with harm vs. cases with good outcomes, they observed that the good outcome cases “had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.”  We don’t see any evidence of this approach in the subject article.

Could Safety II thinking reduce the incidence of preventable harm in healthcare?  Possibly.  But what’s clear is that doing more of the same thing (more training, task specification and monitoring) has not improved the preventable harm rate over 19 years.  Maybe it’s time to think about the problems using a different mental model.

Afterword

In a subsequent interview,**** the lead author of the study said providers and health-care systems need to “train and empower patients to be active partners” in their own care.  This is a significant change in the model of the health care system, from the patient being the client of the system to an active component.  Such empowerment is especially important where the patient’s individual characteristics may make him/her more susceptible to harm.  The author’s advice to patients is tantamount to admitting that current approaches to diagnosing and treating patients are producing sub-standard results.


*  M. Panagioti, K. Khan, R.N. Keers,  A. Abuzour, D. Phipps, E. Kontopantelis et al. “Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis,” BMJ 2019; 366:l4185.  Retrieved July 30, 2019.

**  The goal for patient harm is not zero.  The authors accept that “some harms cannot be avoided in clinical practice.”

***  When the authors say “system” they are not referring to the term as we use it in Safetymatters, i.e., a complex collection of components, feedback loops and environmental interactions.  The authors appear to limit the “system” to the immediate context in which healthcare is provided.  They do offer a hint of a larger system when they comment about the “need to gain better insight about the systemic and cultural circumstances under which preventable patient harm occurs”.

****  M. Jagannathan, “In a review of 337,000 patient cases, this was the No. 1 most common preventable medical error,” MarketWatch (July 28, 2019).  Retrieved July 30, 2019.  This article included a list of specific steps patients can take to be more active, informed, and effective partners in obtaining health care.

Monday, October 29, 2018

Safety Culture: What are the Contributors to “Bad” Outcomes Versus “Good” Outcomes and Why Don’t Some Interventions Lead to Improved Safety Performance?

Sidney Dekker recently revisited* some interesting research he led at a large health care authority.  The authority’s track record was not atypical for health care: 1 out of 13 (7%) patients was hurt in the process of receiving care.  The authority investigated the problem cases and identified a familiar cluster of negative factors, including workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations—the list goes on.  The interventions will also be familiar to you—identify who did what wrong, add more rules, try harder and get rid of bad apples—but were not reducing the adverse event rate.

Dekker’s team took a different perspective and looked at the 93% of patients who were not harmed.  What was going on in their cases?  To their surprise, the team found the same factors: workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations, etc.** 

Dekker uses this research to highlight a key difference between the traditional view of safety management, Safety I, and the more contemporary view, Safety II.  At its heart, Safety I believes the source of problems lies with the individual so interventions focus on ways to make the individual’s work behavior more reliable, i.e., less likely to deviate from the idealized form specified by work designers.  Safety I ignores the fact that the same imperfections exist in work with both successful and problematic outcomes.

In contrast, Safety II sees the source of problems in the system, the dynamic combination of technology, environmental factors, organizational aspects, and individual cognition and choices.  Referencing the work of Diane Vaughan, Dekker says “the interior life of organizations is always messy, only partially well-coordinated and full of adaptations, nuances, sacrifices and work that is done in ways that is quite different from any idealized image of it.”

Revisiting the data revealed that the work with good outcomes was different.  This work had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.  As you know, these are important characteristics of a strong safety culture.

Our Perspective

Dekker’s essay is a good introduction to the differences between Safety I and Safety II thinking, most importantly their differing mental models of the way work is actually performed in organizations.  In Safety I, the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In  Safety II, normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors occur.

If one applies Safety I thinking to a “bad” outcome then the most likely result from an effective intervention is that the exact same problem will not happen again.  This thinking sustains a robust cottage industry in root-cause analysis because new problems will always arise and no changes are made to the system itself.

We like Dekker’s (and Vaughan’s) work and have reported on it several times in Safetymatters (click on the Dekker and Vaughan labels to bring up related posts).  We have been emphasizing some of the same points, especially the need for a systems view, since we started Safetymatters almost ten years ago.

Individual Exercise: Again drawing on Vaughan, Dekker says “there is often no discernable difference between the organization that is about to have an accident or adverse event, and the one that won’t, or the one that just had one.”  Look around your organization and review your career experience; is that true?


*  S. Dekker, “Why Do Things Go Right?,” SafetyDifferently website (Sept. 28, 2018).  Retrieved Oct. 25, 2018.

**  This is actually rational.  People operate on feedback and if the shortcuts, workarounds and disregarding the guidelines did not lead to acceptable (or at least tolerable) results most of the time, folks would stop using them.