Thursday, March 31, 2022

The Criminalization of Safety in Healthcare?

On March 25, 2022, a former nurse at Vanderbilt University Medical Center (VUMC) was convicted of gross neglect of an impaired adult and negligent homicide as a consequence of a fatal drug error in 2017.* 

Criminal prosecutions for medical errors are rare, and healthcare stakeholders are concerned about what this conviction may mean for medical practice going forward.  A major concern is that practitioners will be less likely to self-report errors for fear of incriminating themselves.

We have previously written about the intersection of criminal charges and safety management and practices.  In 2016 Safetymatters’ Bob Cudlin authored a 3-part series on this topic.  (See his May 24, May 31, and June 7 posts.)  Consistent with our historical focus on systems thinking, Bob reviewed examples in different industries and asked “where does culpability really lie - with individuals? culture? the corporation? or the complex socio-technical systems within which individuals act?”

“Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.”

We are confident this is true in hospital nurses’ working environment.  They are often short-staffed, working overtime, and under pressure from their immediate task environments and larger circumstances such as the ongoing COVID pandemic.  The ceaseless evolution of medical technology means they have to adapt to constantly changing equipment, some of which is problematic.  Many/most healthcare professionals believe errors are inevitable.  See our August 6, 2019 and July 31, 2020 posts for more information about the extent, nature, and consequences of healthcare errors.

At VUMC, medicines are dispensed from locked cabinets after a nurse enters various codes.  The hospital had been having technical problems with the cabinets in early 2017 prior to the nurse’s error.  The nurse could not obtain the proper drug because she was searching using its brand name instead of its generic name.  She entered an override that allowed her to access additional medications and selected the wrong one, a powerful paralyzing agent.  The nurse and other medical personnel noted that entering overrides on the cabinets was a common practice.

VUMC’s problems extended well beyond troublesome medicine cabinets.  An investigator said VUMC had “a heavy burden of responsibility in this matter.”  VUMC did not report the medication error as required by law and told the local medical examiner’s office that the patient died of “natural” causes.  VUMC avoided criminal charges because prosecutors didn’t think they could prove gross negligence. 

Our Perspective

As Bob observed in 2016, “The reality is that criminalization is at its core a ‘disincentive.’  To be effective it would have to deter actions or decisions that are not consistent with safety but not create a minefield of culpability. . . .  Its best use is probably as an ultimate boundary, to deter intentional misconduct but not be an unintended trap for bad judgment or inadequate performance.”

In the instant case, the nurse did not intend to cause harm but her conduct definitely reflected bad judgment and unacceptable performance.  She probably sealed her own fate when she told law enforcement she “probably just killed a patient” and the licensing board that she had been “complacent” and “distracted.”   

But we see plenty of faults in the larger system, mainly that VUMC used cabinets that held dangerous substances and had a history of technical glitches but allowed users to routinely override cabinet controls to obtain needed medicines.  As far as we can tell, VUMC did not implement any compensating safety measures, such as requiring double checking by a colleague or a supervisor’s presence when overrides were performed or “dangerous” medications were withdrawn.

In addition, VUMC’s organizational culture was on full display with their inadequate and misleading reporting of the patient’s death.  VUMC has made no comment on the nurse’s case.  In our view, their overall strategy was to circle the wagons, seal off the wound, and dispose of the bad apple.  Nothing to see here, folks.

Going forward, the remaining VUMC nurses will be on high alert for a while but their day-to-day task demands will eventually force them to employ risky behaviors in an environment that requires such behavior to accomplish the mission but lacks defense in depth to catch errors before they have drastic consequences.  The nurses will/should be demanding a safer work environment.

Bottom line: Will this event mark a significant moment for accountability in healthcare akin to the George Floyd incident’s impact on U.S. police practices?  You be the judge.

For additional Safetymatters insights click the healthcare label below.


*  All discussion of the VUMC incident is based on reporting by National Public Radio (NPR).  See B. Kelman, “As a nurse faces prison for a deadly error, her colleagues worry: Could I be next?” NPR, March 22, 2022; “In Nurse’s Trial, Investigator Says Hospital Bears ‘Heavy’ Responsibility for Patient Death,” NPR, March 24, 2022; “Former nurse found guilty in accidental injection death of 75-year-old patient,” NPR, March 25, 2022.

Wednesday, February 2, 2022

A Massive Mental Model: Lessons from Principles for Dealing with the Changing World Order by Ray Dalio

At Safetymatters, we have emphasized several themes over the years, including the importance of developing complete and realistic mental models of systems, often large, complicated, socio-technical organizations, to facilitate their analysis.  A mental model includes the significant factors that comprise the system, their interrelationships, system dynamics (how the system functions over time), and system outputs and their associated metrics.

This post outlines an ambitious and grand mental model: the recurring historical arc exhibited by all the world’s great empires as described in Ray Dalio’s new book.* Dalio examined empires from ancient China through the 20th century United States.  He identified 18 factors that establish and demonstrate a great society’s rise and fall: 3 “Big Cycles,” 8 different types of power an empire can exhibit, and 7 other determinants.

Three Big Cycles 

The big cycles have a natural progression and are influenced by human innovation, technological development, and acts of nature.  They occur over an empire’s 250-year lifetime of emergence, rise, topping out, decline, and replacement by a new dominant power.

The financial cycle initially supports prosperity, but debt builds over time; governments accommodate it by printing more money,** which eventually leads to currency devaluation and debt restructuring (including defaults), and the cycle starts over.  These cycles typically last about 50 to 100 years, so they can occur repeatedly over an empire’s lifetime.

The political cycle starts with a new order and leadership, then resource allocation systems are built, productivity and prosperity grow, but lead to excessive spending and widening wealth gaps, then bad financial conditions (e.g., depressions), civil war or revolution, and the cycle starts over.

The international cycle is dominated by raw power dynamics.  Empires build power and, over time, have conflicts with other countries over trade, technology, geopolitics, and finances.  Some conflicts lead to wars.  Eventually, the competition becomes too costly, the empire weakens, and the cycle starts over.

Dimensions and measures of power

An empire can develop and exercise power in many ways; these are manifestations and measures of the empire’s competitive advantages relative to other countries.  The 8 areas are education, cost competitiveness, innovation and technology, economic output, share of world trade, military strength, financial center strength, and reserve currency status.

Other determinants

These include natural attributes and events, internal financial/political/legal practices, and measures of social success and satisfaction.  Specific dimensions are geology, resource allocation efficiency, acts of nature, infrastructure and investment, character/civility/determination, governance/rule of law, gaps in wealth, opportunity and cultural values.

The 18 factors interact with each other, typically positively reinforcing each other, with some leading others, e.g., a society must establish a strong education base to support innovation and technology development.  Existing conditions and determinants propel changes that create new conditions and determinants.

System dynamics

Evolution is the macro driving force that creates the system dynamic over time.  In Dalio’s view “Evolution is the biggest and only permanent force in the universe . . .” (p. 27)  He also considers other factors that shape an empire’s performance.  The most important of these are self-interest, the drive for wealth and power, the ability to learn from history, multi-generational differences, time frames for decision making, and human inventiveness.  Others include culture, leadership competence, and class relationships.  Each of these factors can wax and/or wane over the course of an empire’s lifetime, leading to changes in system performance.

Dalio uses his model to describe (and share) his version of the economic-political history of the world, and the never-ending struggles of civilizations over the accumulation and distribution of wealth and power.  Importantly, he also uses it to inform his worldwide investment strategies.  His archetype models are converted into algorithms to monitor conditions and inform investment decisions.  He believes all financial markets are driven by growth, inflation, risk premiums (e.g., to compensate for the risk of devaluation), and discount rates.

Our Perspective

Dalio’s model is ambitious, extensive, and complicated.  We offer it up as an extreme example of mental modeling, i.e., identifying all the important factors in a system of interest and defining how they work together to produce something.  Your scope of interest may be more limited – a power plant, a hospital, a major corporation – but the concept is the same.

Dalio is the billionaire founder of hedge fund Bridgewater Associates.  He has no shortage of ego or self-confidence.  He name-drops prominent politicians and thinkers from around the world to add weight to his beliefs.  We reviewed his 2017 book Principles on April 17, 2018 to show an example of a hard-nosed, high performance business culture. 

He is basically a deterministic thinker who views the world as a large, complex machine.  His modeling emphasizes cause-effect relationships that evolve and repeat over time.  He believes a perfect model would perfectly forecast the future so we assume he views the probabilistic events that occur at network branching nodes as consequences of an incomplete, i.e., imperfect model.  In contrast, we believe that some paths are created by events that are essentially probabilistic (e.g., “surprising acts of nature”) or the result of human choices.  We agree that human adaptation, learning, and inventiveness are keys to productivity improvements and social progress, but we don’t think they can be completely described in mechanical cause-effect terms.  Some system conditions are emergent, i.e., the consequence of a system’s functioning, and other things occur simply by chance. 

This book is over 500 pages, full of data and tables.  Individual chapters detail the history of the Dutch, British, American, and Chinese empires over the last 500 years.  The book has no index so referring back to specific topics is challenging. Dalio is not a scholar and gives scant or no credit to thinkers who used some of the same archetypes long before him.

We offer no opinion on the accuracy or completeness of Dalio’s version of world history, or his prognostications about the future, especially U.S.-China relations.

Bottom line: this is an extensive model of world history, full of data; the analyses of the U.S. and China*** are worth reading.


*  R. Dalio, Principles for Dealing with the Changing World Order (New York: Avid Reader Press) 2021.

**  If the new money and credit goes into economic productivity, it can be good for the society.  But the new supply of money can also cheapen it, i.e., drive its value down, reducing the desire of people to hold it and pushing up asset prices.

***  Dalio summarizes the Chinese political-financial model as “Confucian values with capitalist practices . . .” (p. 364)

Friday, December 10, 2021

Prepping for Threats: Lessons from Risk: A User’s Guide by Gen. Stanley McChrystal

Gen. McChrystal was a U.S. commander in Afghanistan; you may remember he was fired by President Obama for making, and allowing subordinates to make, disparaging comments about then-Vice President Biden.  However, McChrystal was widely respected as a soldier and leader, and his recent book* on strengthening an organization’s “risk immune system” caught our attention.  This post summarizes its key points, focusing on items relevant to formal civilian organizations.

McChrystal describes a system that can detect, assess, respond to, and learn from risks.**  His mental model consists of two major components: (1) ten Risk Control Factors, interrelated dimensions for dealing with risks and (2) eleven Solutions, strategies that can be used to identify and address weaknesses in the different factors.  His overall objective is to create a resilient organization that can successfully respond to challenges and threats. 

Risk Control Factors

These are things under the control of an organization and its leadership, including physical assets, processes, practices, policies, and culture.

Communication – The organization must have the physical ability and willingness to exchange clear, complete, and intelligible information, and identify and deal with propaganda or misinformation.

Narrative – An articulated organizational purpose and mission.  It describes Who we are, What we do, and Why we do it.  The narrative drives (and we’d say is informed by) values, beliefs, and action.

Structure – Organizational design defines decision spaces and communication networks, implies power (both actual and perceived authority), suggests responsibilities, and influences culture.

Technology – This is both the hardware/software and how the organization applies it.  It includes an awareness of how much authority is being transferred to machines, our level of dependence on them, our vulnerability to interruptions, and the unintended consequences of new technologies.

Diversity – Leaders must actively leverage different perspectives and abilities, inoculate the organization against groupthink, i.e., norms of consensus, and encourage productive conflict and a norm of skepticism.  (See our June 29, 2020 post on A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth.)

Bias – Biases are assumptions about the world that affect our outlook and decision making, and cause us to ignore or discount many risks.  In McChrystal’s view “[B]ias is an invisible hand driven by self-interest.” (See our July 1, 2021 and Dec. 18, 2013 posts on Daniel Kahneman’s work on identifying and handling biases.) 

Action – Leaders have to proactively overcome organizational inertia, i.e., a bias against starting something new or changing course.  Inertia manifests in organizational norms that favor the status quo and tolerate internal resistance to change.

Timing – Getting the “when” of action right.  Leaders have to initiate action at the right time with the right speed to yield optimum impact.

Adaptability – Organizations have to respond to changing risks and environments.  Leaders need to develop their organization’s willingness and ability to change.

Leadership – Leaders have to direct and inspire the overall system, and stimulate and coordinate the other Risk Control Factors.  Leaders must communicate the vision and personify the narrative.  In practice, they need to focus on asking the right questions and sense the context of a given situation, embracing the new before necessity is evident. (See our Nov. 9, 2018 post for an example of effective leadership.)


The Solutions are strategies or methods to identify weaknesses in and strengthen the risk control factors.  In McChrystal’s view, each Solution is particularly applicable to certain factors, as shown in Table 1.

Assumptions check – Assessment of the reasonableness and relative importance of assumptions that underlie decisions.  It’s the qualitative and quantitative analyses of strengths and weaknesses of supporting arguments, modified by the judgment of thoughtful people.

Risk review – Assessment of when hazards may arrive and the adequacy of the organization’s preparations.

Risk alignment check – Leaders should recognize that different perspectives on risks exist and should be considered in the overall response.

Gap analysis – Identify the space between current actions and desired goals.

Snap assessment – Short-term, limited scope analyses of immediate hazards.  What’s happening?  How well are we responding?

Communications check – Ensure processes and physical systems are in place and working.

Tabletop exercise – A limited duration simulation that tests specific aspects of the organization’s risk response.

War game (functional exercise) – A pressure test in real time to show how the organization comprehensively reacts to a competitor’s action or unforeseen event.

Red teaming – Exercises involving third parties to identify organizational vulnerabilities and blind spots.

Pre-mortem – A discussion focusing on the things most likely to go wrong during the execution of a plan. 

After-action review – A self-assessment that identifies things that went well and areas for improvement.


Table 1  Created by Safetymatters


Our Perspective

McChrystal did not invent any of his Risk Control Factors and we have discussed many of these topics over the years.***  His value-add is organizing them as a system and recognizing their interrelatedness.  The entire system has to perform to identify, prepare for, and respond to risks, i.e., threats that can jeopardize the organization’s mission success.

This review emphasizes McChrystal’s overall risk management model.  The book also includes many examples of risks confronted, ignored, or misunderstood in the military, government, and commercial arenas.  Some, like Blockbuster’s failure to acquire Netflix when it had the opportunity, had poor outcomes; others, like the Cuban missile crisis or Apollo 13, worked out better.

The book appears aimed at senior leaders but all managers from department heads on up can benefit from thinking more systematically about how their organizations respond to threats from, or changes in, the external environment. 

There are hundreds of endnotes to document the text but the references are more Psychology Today than the primary sources we favor.

Bottom line: This is an easy-to-read example of the “management cookbook” genre.  It has a lot of familiar information in one place.


*  S. McChrystal and A. Butrico, Risk: A User’s Guide (New York: Portfolio) 2021.  Butrico is McChrystal’s speechwriter.

**  Risk to McChrystal is a combination of a threat and one’s vulnerability to the threat.  Threats are usually external to the organization while vulnerabilities exist because of internal aspects.

***  For example, click on the Management or Decision Making labels to pull up posts in related areas.

Thursday, July 1, 2021

Making Better Decisions: Lessons from Noise by Daniel Kahneman, Oliver Sibony, and Cass R. Sunstein

The authors of Noise: A Flaw in Human Judgment* examine the random variations that occur in judgmental decisions and recommend ways to make more consistent judgments.  Variability is observed when two or more qualified decision makers review the same data or face the same situation and come to different judgments or conclusions.  (Variability can also occur when the same decision maker revisits a previous decision situation and arrives at a different judgment.)  The decision makers may be doctors making diagnoses, engineers designing structures, judges sentencing convicted criminals, or any other situation involving professional judgment.**  Judgments can vary because of two factors: bias and noise.

Bias is systematic, a consistent source of error in judgments.  It creates an observable average difference between actual judgments and theoretical judgments that would reflect a system’s actual or espoused goals and values.  Bias may be exhibited by an individual or a group, e.g., when the criminal justice system treats members of a certain race or class differently from others.

Noise is random scatter, a separate, independent cause of variability in decisions involving judgment.  It is similar to the residual error in a statistical equation, i.e., noise may have a zero average (because higher judgments are balanced by lower ones) but noise can create large variability in individual judgments.  Such inconsistency damages the credibility of the system.  Noise has three components: level, pattern, and occasion. 

Level refers to the difference in the average judgment made by different individuals, e.g., a magistrate may be tough or lenient. 

Pattern refers to the idiosyncrasies of individual judges, e.g., one magistrate may be severe with drunk drivers but easy on minor traffic offenses.  These idiosyncrasies include the internal values, principles, memories, and rules a judge brings to every case, consciously or not. 

Occasion refers to a random instability, e.g., where a fingerprint examiner looking at the same prints finds a match one day and no match on another day.  Occasion noise can be influenced by many factors including a judge’s mood, fatigue, and recent experience with other cases. 

Based on a review of the available literature and their own research, the authors suggest that noise can be a larger contributor to judgment variability than bias, with stable pattern noise larger than level noise or occasion noise.
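The bias/noise decomposition above can be made concrete with a small simulation.  The sketch below uses entirely hypothetical numbers (a “true” sentence of 60 months, judge-to-judge level offsets, day-to-day occasion scatter) and omits pattern noise for brevity; it simply shows how the three quantities the authors discuss can be estimated from a grid of judgments.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 60.0    # hypothetical "correct" sentence, in months
SYSTEM_BIAS = 5.0    # everyone skews high by this much on average (bias)
LEVEL_SD = 8.0       # between-judge severity differences (level noise)
OCCASION_SD = 4.0    # within-judge, day-to-day scatter (occasion noise)

N_JUDGES, N_CASES = 200, 200

# Each judge has a persistent personal offset; each judgment adds occasion noise.
offsets = [random.gauss(SYSTEM_BIAS, LEVEL_SD) for _ in range(N_JUDGES)]
judgments = [[TRUE_VALUE + off + random.gauss(0, OCCASION_SD) for _ in range(N_CASES)]
             for off in offsets]

# Bias: average error across all judges and occasions.
grand_mean = statistics.mean(j for row in judgments for j in row)
bias = grand_mean - TRUE_VALUE

# Level noise: spread of the judges' personal averages.
judge_means = [statistics.mean(row) for row in judgments]
level_noise = statistics.stdev(judge_means)

# Occasion noise: average within-judge scatter.
occasion_noise = statistics.mean(statistics.stdev(row) for row in judgments)

print(f"bias ~ {bias:.1f}, level noise ~ {level_noise:.1f}, "
      f"occasion noise ~ {occasion_noise:.1f}")
```

Note that bias is invisible to any single judge (everyone looks normal relative to colleagues), while noise is invisible in any single judgment; it only appears when many judgments are compared, which is the authors’ point about running a “noise audit.”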

Ways to reduce noise

Noise can be reduced through interventions at the individual or group level. 

For the individual, interventions include training to help people who make judgments realize how different psychological biases can influence decision making.  The long list of psychological biases in Noise builds on Kahneman’s work in Thinking, Fast and Slow which we reviewed on Dec. 18, 2013.  Such biases include overconfidence; denial of ignorance, which means not acknowledging that important relevant data isn’t known; base rate neglect, where outcomes in other similar cases are ignored; availability, which means the first solutions that come to mind are favored, with no further analysis; and anchoring of subsequent values to an initial offer.  Noise reduction techniques include active open-mindedness, which is the search for information that contradicts one’s initial hypothesis, or positing alternative interpretations of the available evidence; and the use of rankings and anchored scales rather than individual ratings based on vague, open-ended criteria.  Shared professional norms can also contribute to more consistent judgments.

At the group level, noise can be reduced through techniques the authors call decision hygiene.  The underlying belief is that obtaining multiple, independent judgments can increase accuracy, i.e., lead to an answer that is closer to the true or best answer.  For example, a complicated decision can be broken down into multiple dimensions, and each dimension assessed individually and independently.  Group members share their judgments for each dimension, then discuss them, and only then combine their findings (and their intuition) into a final decision.  Trained decision observers can be used to watch for signs that familiar biases are affecting someone’s decisions or that group dynamics involving position, power, politics, ambition, and the like are contaminating the decision process and negating actual independence.
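The statistical logic behind aggregating independent judgments can be sketched in a few lines.  The numbers are illustrative (an unbiased judge whose estimates scatter around a true value of 100): averaging nine independent judgments cuts the random scatter by roughly a factor of three (the square root of nine).

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 100.0   # the (unknowable in practice) correct answer
NOISE_SD = 10.0      # scatter in any single professional's judgment

def one_judgment():
    # An unbiased but noisy individual judgment.
    return random.gauss(TRUE_VALUE, NOISE_SD)

def panel_judgment(k=9):
    # Average of k *independent* judgments, combined only at the end.
    return statistics.mean(one_judgment() for _ in range(k))

trials = 2000
solo = [one_judgment() for _ in range(trials)]
panel = [panel_judgment() for _ in range(trials)]

print(f"solo scatter:  {statistics.stdev(solo):.2f}")
print(f"panel scatter: {statistics.stdev(panel):.2f}")
```

Two caveats follow directly from the text: the improvement depends on the judgments being genuinely independent (group dynamics that contaminate independence erase it), and averaging cancels only noise, not a bias that all the judges share.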

Noise can also be reduced or eliminated by the use of rules, guidelines, or standards. 

Rules are inflexible, thus noiseless.  However, rules (or algorithms) may also have biases coded into them or only apply to their original data set.  They may also drive discretion underground, e.g., where decision makers game the process to obtain the results they prefer.

Guidelines, such as sentencing guidelines for convicted criminals or templates for diagnosing common health problems, are less rigid but still reduce noise.  Guidelines decompose complex decisions into easier sub-judgments on predefined dimensions.  However, judges and doctors push back against mandatory guidelines that reduce their ability to deal with the unique factors of individual cases before them.

Standards are the least rigid noise reduction technique; they delegate power to professionals and are inherently qualitative.  Standards generally require that professionals make decisions that are “reasonable” or “prudent” or “feasible.”  They are related to the shared professional norms previously mentioned.  Judgments based on standards can invite controversy, disagreement, confrontation, and lawsuits.

The authors recognize that in some areas, it is infeasible, too costly, or even undesirable to eliminate noise.  One particular fear is a noise-free system might freeze existing values.  Rules and guidelines need to be flexible to adapt to changing social values or new data.

Our Perspective

We have long promoted the view that decision making (the process) and decisions (the artifacts) are crucial components of a socio-technical system, and have a significant two-way influence relationship with the organization’s culture.  Decision making should be guided by an organization’s policies and priorities, and the process should be robust, i.e., different decision makers should arrive at acceptably similar decisions. 

Many organizations examine (and excoriate) bad decisions and the “bad apples” who made them.  Organizations also need to look at “good” decisions to appreciate how much their professionals disagree when making generally acceptable judgments.  Does the process for making judgments develop the answer best supported by the facts, and then adjust it for preferences (e.g., cost) and values (e.g., safety), or do the fingers of the judges go on the scale at earlier steps?

You may be surprised at the amount of noise in your organization’s professional judgments.  On the other hand, is your organization’s decision making too rigid in some areas?  Decisions made using rules can be quicker and cheaper than prolonged analysis, but may lead to costly errors; which approach has the higher cost of errors?  Operators (or nurses or whoever) may follow the rules punctiliously but sometimes the train may go off the tracks. 

Bottom line: This is an important book that provides a powerful mental model for considering the many factors that influence individual professional judgments.

*  D. Kahneman, O. Sibony, and C.R. Sunstein, Noise: A Flaw in Human Judgment (New York: Little, Brown Spark) 2021.

**  “Professional judgment” implies some uncertainty about the answer, and judges may disagree, but there is a limit on how much disagreement is tolerable.

Friday, May 21, 2021

Healthcare Safety Culture and Interventions to Reduce Preventable Medical Errors

We have previously written about the shocking number of preventable errors in healthcare settings that result in injury or death to patients.  We have also discussed the importance of a strong safety culture (SC) in reducing healthcare error rates.  However, after 20 years of efforts, the needle has not significantly moved on overall injuries and deaths.  This post reviews healthcare’s concept of SC and research that ties SC to patient outcomes.  We offer our view on why interventions have not been more effective.

Healthcare’s Model of Safety Culture

Healthcare has a model for SC, shown in the SC primer on the Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Network website.*  The model contains these key cultural features:

  • acknowledgment of the high-risk nature of an organization's activities and the determination to achieve consistently safe operations
  • a blame-free environment** where individuals are able to report errors or near misses without fear of reprimand or punishment
  • encouragement of collaboration across ranks and disciplines to seek solutions to patient safety problems
  • organizational commitment of resources to address safety concerns.

We will critique this model later.

Healthcare Providers Believe Safety Culture is Important

A U.S. Department of Health and Human Services (HHS) report*** affirms healthcare providers’ belief that SC is important and can contribute to fewer errors and improved patient outcomes.

AHRQ administers the Patient Safety Organization (PSO) program which gathers data on patient safety events from healthcare providers.  In 2019, the HHS Office of Inspector General surveyed hospitals and PSOs to identify the PSO program’s value and challenges.  SC was one topic covered in the survey and the results confirm SC’s importance to providers.  “Among hospitals that work with PSOs, 80 percent find that feedback and analysis on patient safety events have helped prevent future events, and 72 percent find that such feedback has helped them understand the causes of events.” (p. 10)  Furthermore, “Nearly all (95 percent) hospitals that work with a PSO found that their PSOs have helped improve the culture of safety at their facilities.  A culture of safety is one that enables individuals to report errors without fear of reprimand and to collaborate on solutions.” (p. 11) 

Healthcare Research Connects SC to Interventions to Reduced Errors

AHRQ publishes the “Making Healthcare Safer” series of reports, which represent summaries of important research on selected patient safety practices (PSPs).  The most recent (2020) edition**** recognizes SC as a cross-cutting practice, i.e., SC impacts the effectiveness of many specific PSPs. 

The section on cross-cutting practices begins by noting that healthcare is trying to learn from the experience of high reliability organizations (HROs).  HROs have many safety-enhancing attributes, including committed leaders, a SC where staff identify and correct all deviations that could lead to unsafe conditions, an environment where adverse events or near misses are reported without fear of blame or recrimination, and practices to identify a problem’s scope, root causes, and appropriate solutions. (p. 17-1) 

The report identified several categories of practices that are used to improve healthcare SC: Leadership WalkRounds, Team Training, Comprehensive Unit-based Safety Programs (CUSP), and interventions that implemented multiple methods. (p. 17-13)

WalkRounds “involves leaders ‘walking around’ to engage in face to face, candid discussions with frontline staff about patient safety incidents or near-misses.” (p. 17-16)  “Team training programs focus on enhancing teamwork skills and communication between healthcare providers . . .” (p. 17-17)  CUSP is a multi-step program to assess, intervene in, and reassess a healthcare unit’s SC. (p. 17-19)

The report also covers 17 specific areas where harm/errors can occur and highlights SC aspects associated with two such areas: developing rapid response teams and dealing with alarm fatigue in hospitals. 

Rapid response teams (RRTs) treat deteriorating hospital patients before adverse events occur. (p. 2-1)  Weak SC and healthcare hierarchies are barriers to successful implementation of RRTs. (p. 2-10)

Alarm fatigue occurs when high exposure to medical device alarms, many of them loud or false, leads to desensitization, missed alarms, or delayed responses. (p. 13-1)  The cultural aspect of the interventions is a focus on all staff members (not just nurses) assuming responsibility for addressing alarms. (p. 13-6) 

Our Perspective

We have three problems with healthcare’s efforts to reduce harm to patients: (1) the quasi-official healthcare mental model of safety culture is incomplete, (2) healthcare’s assumption that it can model itself on HROs ignores a critical systemic difference, and (3) an inadequate overall system model leads to fragmented, incremental improvement projects.

An inadequate model for SC

Healthcare does not have an adequate understanding of the necessary attributes of a strong SC.  

The features listed in the introduction of this post are necessary but not sufficient for a strong SC.  SC is more than good communications; it is part of the overall cultural system.  This system has feedback loops that can reinforce or extinguish attitudes and behaviors.  The attitudes of people in the system are heavily influenced by their trust in management to do the right thing.  Management’s behavior is influenced by their goals, policy constraints, environmental pressures, and incentives, including monetary compensation.

Top-to-bottom decision making in the system needs to be consistent, which means processes, priorities, practices, and rules should be defined and followed.  Goal conflicts must be handled consistently.  Decision makers must be identified to allow accountability.  Problems must be identified (without retribution except for cause), analyzed, and permanently fixed.

Lack of attention to the missing attributes is one reason that healthcare SC has been slow to strengthen and unfavorable patient outcomes are still at unacceptable levels. 

Healthcare is not a traditional HRO

The healthcare system looks to HROs for inspiration on SC but does not recognize one significant difference between a traditional HRO and healthcare.

When we consider other HROs, e.g., nuclear power plants, off-shore drilling operations, or commercial aviation, we understand that they have significant interactions with their respective environments, e.g., regulators, politicians, inspectors, suppliers, customers, activists, etc. 

Healthcare is different because its customers are basically the feedstock for the “factory” and healthcare has to accept those inputs “as is”; in other words, unlike a nuclear power plant, healthcare cannot define and enforce a set of specifications for its inputs.  The inputs (patients) arrive in a wide range of “as is” conditions, from simple injuries to multiple, interacting ailments.  The healthcare system has to accomplish two equally important objectives: (1) correctly identify a patient’s problem(s) and (2) fix them in a robust, cost-effective manner.  SC in the first phase should focus on obtaining the correct diagnosis; SC in the second phase should focus on performing the prescribed corrective actions according to approved procedures, and ensuring that expected results occur. 

Inadequate models lead to piecemeal interventions      

Healthcare’s simplistic mental model for SC is part of an inaccurate mental model for the overall system.  The current system model is fragmented and leads researchers and practitioners to think small (on silos) when they could be thinking big (on the enterprise).  An SC intervention that focuses on tightening process controls in one small area cannot move the needle on system-wide SC or overall patient outcomes.  For more on systems models, systemic challenges, and narrow interventions, see our Oct. 9, 2019 and Nov. 9, 2020 posts.  Click on the healthcare label below to see all of the related posts.

Bottom line: Healthcare SC can have a direct impact on the probabilities that specific harms will occur, and on their severity if they do, but accurate models of culture are essential. 


*  Agency for Healthcare Research and Quality, “Culture of Safety” (Sept. 2019).  Accessed May 4, 2021.  AHRQ is an organization within the U.S. Department of Health and Human Services.  Its mission includes producing evidence to make health care safer.

**  The “blame-free” environment has evolved into a “just culture” where human errors, especially those caused by the task system context, are tolerated but taking shortcuts and reckless behavior are disciplined.  Click on the just culture label for related posts.

***  U.S. Dept. of Health and Human Services Office of Inspector General, “Patient Safety Organizations: Hospital Participation, Value, and Challenges,” OEI-01-17-00420, Sept. 2019.

****  K.K. Hall et al., “Making Healthcare Safer III: A Critical Analysis of Existing and Emerging Patient Safety Practices,” AHRQ Pub. No. 20-0029-EF (Rockville, MD: AHRQ), March 2020.  This is a 1,400-page report, so we are reporting only the relevant highlights.