
Friday, August 4, 2023

Real Systems Pursue Goals

System Model Control Panel
On March 10, 2023 we posted about a medical journal editorial that advocated for incorporating more systems thinking in hospital emergency rooms’ (ERs) diagnostic processes.  Consistent with Safetymatters’ core beliefs, we approved of using systems thinking in complicated decision situations such as those arising in the ER. 

The article prompted a letter to the editor in which the author said the approach described in the original editorial wasn’t a true systems approach because it wasn’t specifically goal-oriented.  We agree with that author’s viewpoint.  We often argue for more systems thinking and describe mental models of systems with components, dynamic relationships among the components, feedback loops, control functions such as rules and culture, and decision maker inputs.  What we haven’t emphasized as much, probably because we tend to take it for granted, is that a bona fide system is teleological, i.e., designed to achieve a goal. 

It’s important to understand what a system’s goal is.  This may be challenging because the system’s goal may contain multiple sub-goals.  For example, a medical clinician may order a certain test.  The lab has a goal: to produce accurate, timely, and reliable results for tests that have been ordered.  But the clinician’s goal is different: to develop a correct diagnosis of a patient’s condition.  The goal of the hospital of which the clinician and lab are components may be something else: to produce generally acceptable patient outcomes, at reasonable cost, without incurring undue legal problems or regulatory oversight.  System components (the clinician and the lab) may have goals which are hopefully supportive of, or at least consistent with, overall system goals.

The top-level system, e.g., a healthcare provider, may not have a single goal; it may have multiple, independent goals that can conflict with one another.  Achieving the best quality may conflict with keeping costs within budgets.  Achieving perfect safety may conflict with the need to make operational decisions under time pressure and with imperfect or incomplete information.  One of the most important responsibilities of top management is defining how the system recognizes and deals with goal conflict.

In addition to goals, we need to discuss two other characteristics of full-fledged systems: a measure of performance and a defined client.* 

The measure of performance shows the system designers, users, managers, and overseers how well the system’s goal(s) are being achieved through the functioning of system components as affected by the system’s decision makers.  Like goals, the measure of performance may have multiple dimensions or sub-measures.  In a well-designed system, the summation of the set of sub-measures should be sufficient to describe overall system performance.  
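To make the measure-of-performance idea concrete, here is a minimal sketch in Python.  All sub-measure names, weights, and scores are invented for illustration; Churchman's framework does not prescribe any particular aggregation formula.

# Illustrative sketch only: the sub-measures, weights, and scores are
# invented.  The point is that overall system performance is a defined
# aggregation of sub-measures, each tied to a system goal.

def overall_performance(sub_measures: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Combine sub-measure scores (0.0 to 1.0) into one overall score."""
    assert set(sub_measures) == set(weights), "every sub-measure needs a weight"
    total_weight = sum(weights.values())
    return sum(score * weights[name]
               for name, score in sub_measures.items()) / total_weight

# Hypothetical hospital lab: timeliness, accuracy, and cost control as
# sub-measures supporting the larger system's goals.
scores = {"timeliness": 0.90, "accuracy": 0.98, "cost_control": 0.70}
weights = {"timeliness": 2.0, "accuracy": 3.0, "cost_control": 1.0}
print(f"overall performance: {overall_performance(scores, weights):.2f}")  # 0.91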

The client is the entity whose interests are served by the system.  Identifying the client can be tricky.  Consider a city’s system for serving its unhoused population.  The basic system consists of a public agency to oversee the services, entities (often nongovernmental organizations, or NGOs) that provide the services, suppliers (e.g., landlords who offer buildings for use as housing), and the unhoused population.  Who is the client of this system, i.e., who benefits from its functioning?  The politicians, running for re-election, who authorize and sustain the public agency?  The public agency bureaucrats angling for bigger budgets and more staff?  The NGOs who are looking for increased funding?  Landlords who want rent increases?  Or the unhoused who may be looking for a private room with a lockable door, or may be resistant to accepting any services because of their mental, behavioral, or social problems?  It’s easy to see that many system participants do better, i.e., get more pie, if the “homeless problem” is never fully resolved.

For another example, look at the average public school district in the U.S.  At first blush, the students are the client.  But what about the elected state commissioner of education and the associated bureaucracy that establish standards and curricula for the districts?  And the elected district directors and district bureaucracy?  And the parents’ rights organizations?  And the teachers’ unions?  All of them claim to be working to further the students’ interests but what do they really care about?  How about political or organizational power, job security, and money?  The students could be more of a secondary consideration.

We could go on.  The point is that we are surrounded by social-legal-political-technical systems, and whom they actually serve may not be whom they purport to serve.

  

*  These system characteristics are taken from the work of a systems pioneer, Prof. C. West Churchman of UC Berkeley.  For more information, see his The Design of Inquiring Systems (New York: Basic Books, 1971).

Thursday, May 25, 2023

The National Academies on Behavioral Economics

A National Academies of Sciences, Engineering, and Medicine (NASEM) committee recently published a report* on the contributions of behavioral economics (BE) to public policy.  BE is “an approach to understanding human behavior and decision making that integrates knowledge from psychology and other behavioral fields with economic analysis.” (p. Summ-1)

The report’s first section summarizes the history and development of the field of behavioral economics.  Classical economics envisions the individual person as a decision maker who has all relevant information available and makes rational decisions that maximize his overall (i.e., short- and long-term) self-interest.  In contrast, BE recognizes that actual people making real decisions have many built-in biases, limitations, and constraints.  The following five principles apply to the decision making processes behavioral economists study:

Limited Attention and Cognition - People pay only limited attention to relevant aspects of their environment and often make cognitive errors.

Inaccurate Beliefs - Individuals can have incorrect perceptions or information about situations, relevant incentives, their own abilities, and the beliefs of others.

Present Bias - People tend to disproportionately focus on issues that are in front of them in the present moment.

Reference Dependence and Framing - Individuals tend to consider how their decision options relate to a particular reference point, e.g., the status quo, rather than considering all available possibilities. People are also sensitive to the way decision problems are framed, i.e., how options are presented, and this affects what comes to their attention and can lead to different perceptions, reactions, and choices.

Social Preferences and Social Norms - Decision makers often consider how their decisions affect others, how they compare with others, and how their decisions imply values and conformance with social norms.

The task of policy makers is to acknowledge these limitations and present decision situations in ways people can comprehend, helping them make decisions that serve their own and society’s interests.  In practice this means decision situations “can be designed to modify the habitual and unconscious ways that people act and make decisions.” (p. Summ-3)

Decision situation designers use various interventions to inform and guide individuals’ decision making.  The NASEM committee mapped 23 possible interventions against the 5 principles.  It’s impractical to list all the interventions here but the more graspable ones include the following (a simple sketch of such a mapping appears after the list):

Defaults – The starting decision option is the designer’s preferred choice; the decision maker must actively choose a different option.

De-biasing – Attempt to correct inaccurate beliefs by presenting salient information related to past performance of the individual decision maker or a relevant reference group.

Mental Models – Update or change the decision maker’s mental representation of how the world works.

Reminders – Use reminders to cut through inattention, highlight desired behavior, and focus the decision maker on a future goal or desired state.

Framing – Focus the decision maker on a specific reference point, e.g., a default option or the negative consequences of inaction (not choosing any option).

Social Comparison and Feedback – Explicitly compare an individual’s performance with a relevant comparison or reference group, e.g., the individual’s professional peers.
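The committee’s mapping can be pictured as a simple data structure.  The sketch below (Python) pairs the six interventions above with the principles we read them as chiefly addressing; the pairings are our paraphrase of the report’s descriptions, not the committee’s official matrix, which covers all 23 interventions.

# Illustrative sketch: which of the five principles each intervention
# chiefly targets.  Pairings paraphrase the report's descriptions.

PRINCIPLES = {
    "attention",   # Limited Attention and Cognition
    "beliefs",     # Inaccurate Beliefs
    "present",     # Present Bias
    "framing",     # Reference Dependence and Framing
    "social",      # Social Preferences and Social Norms
}

INTERVENTIONS = {
    "defaults":          {"attention", "present", "framing"},
    "de-biasing":        {"beliefs"},
    "mental models":     {"beliefs"},
    "reminders":         {"attention", "present"},
    "framing":           {"framing", "attention"},
    "social comparison": {"social", "beliefs"},
}

def interventions_for(principle: str) -> list[str]:
    """List the interventions that address a given principle."""
    assert principle in PRINCIPLES, f"unknown principle: {principle}"
    return sorted(name for name, p in INTERVENTIONS.items() if principle in p)

print(interventions_for("present"))  # ['defaults', 'reminders']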

Interventions can range from “nudges” that alter people’s behavior without forbidding any options to designs that are much stronger than nudges and are, in effect, efforts to enforce conformity.

The bulk of the report describes the theory, research, and application of BE in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice.  The NASEM committee reviewed current research and interventions in each domain and recommended areas for future research activity.  There is too much material to summarize so we’ll provide a single illustrative sample.

Because we have written about culture and safety practices in the healthcare industry, we will recap the report’s discussion of efforts to modify or support medical clinicians’ behavior.  Clinicians often work in busy, sometimes chaotic, settings that place multiple demands on their attention and must make frequent, critical decisions under time pressure.  On occasion, they provide more (or less) health care than a patient’s clinical condition warrants; they also make errors.  Research and interventions to date address present bias and limited attention by changing defaults, and invoke social norms by providing information on an individual’s performance relative to others.  An example of a default intervention is to change mandated checklists from opt-in (the response for each item must be specified) to opt-out (the most likely answer for each item is pre-loaded; the clinician can choose to change it).  An example of using social norms is to provide information on the behavior and performance of peers, e.g., in the quantity and type of prescriptions written.
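As a toy illustration of the opt-in versus opt-out distinction, consider the sketch below (Python; the checklist items and default answers are invented, not from the report).  An opt-out form arrives pre-populated with the most likely answers, so the clinician’s effort shifts from supplying every answer to overriding the exceptional ones.

# Toy illustration of default interventions on a checklist.  Item names
# and default answers are invented; the behavioral point is that an
# opt-out form pre-loads the most likely answer and records overrides.

LIKELY_ANSWERS = {  # hypothetical pre-loaded defaults
    "allergies reviewed": "yes",
    "weight-based dose verified": "yes",
    "anticoagulant in use": "no",
}

def opt_in_checklist(responses: dict[str, str]) -> dict[str, str]:
    """Opt-in: every item must be answered explicitly or it is incomplete."""
    return {item: responses.get(item, "MISSING") for item in LIKELY_ANSWERS}

def opt_out_checklist(overrides: dict[str, str]) -> dict[str, str]:
    """Opt-out: defaults stand unless the clinician actively changes them."""
    return {item: overrides.get(item, default)
            for item, default in LIKELY_ANSWERS.items()}

# The busy clinician answers only one item under each regime:
print(opt_in_checklist({"allergies reviewed": "yes"}))
# -> two items left MISSING
print(opt_out_checklist({"anticoagulant in use": "yes"}))
# -> complete, with one deliberate override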

Overall recommendations

The report’s recommendations are typical for this type of overview: improve the education of future policy makers, apply the key principles in public policy formulation, and fund and emphasize future research.  Such research should include better linkage of behavioral principles and insights to specific intervention and policy goals, and realize the potential for artificial intelligence and machine learning approaches to improve tailoring and targeting of interventions.

Our Perspective

We have written about decision making for years, mostly about how organizational culture (values and norms) affects decision making.  We’ve also reviewed the insights and principles highlighted in the subject report.  For example, our December 18, 2013 post on Daniel Kahneman’s work described people’s built-in decision making biases.  Our June 6, 2022 post on Thaler and Sunstein’s book Nudge discussed the application of behavioral economic principles in the design of ideal (and ethical) decision making processes.  These authors’ works are recognized as seminal in the subject report.

On the subject of ethics, the NASEM committee’s original mission included considering ethical issues related to the use of behavioral economics, but the report’s treatment of ethics amounts to little more than a few cautionary notes.  This is thin gruel for a field that includes many public and private actors deciding what people should do instead of letting them decide for themselves.

As evidenced by the report, the application of behavioral economics is widespread and growing.  It’s easy to see its use being supercharged by artificial intelligence and machine learning.  “Behavioral economics” sounds academic and benign.  Maybe we should start calling it behavioral engineering.

Bottom line: Read this report.  You need to know about this stuff.


*  National Academies of Sciences, Engineering, and Medicine, “Behavioral Economics: Policy Impact and Future Directions,” (Washington, DC: The National Academies Press, 2023).

Friday, March 10, 2023

A Systems Approach to Diagnosis in Healthcare Emergency Departments


A recent op-ed* in JAMA advocated greater use of systems thinking to reduce diagnostic errors in emergency departments (EDs).  The authors describe the current situation – diagnostic errors occur at an estimated 5.7% rate – and offer three insights into why systems thinking may contribute to interventions that reduce this error rate.  We will summarize their observations and then provide our perspective.

First, they point out that diagnostic errors are not limited to the ED; in fact, such errors occur in all specialties and areas of health care.  Diagnosis is often complicated, and practitioners are under time pressure to come up with an answer.  The focus of interventions should be on reducing incorrect diagnoses that result in harm to patients.  Fortunately, studies have shown that “just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms,” which should help narrow the focus for possible interventions.  However, simply doing more of the current approaches, e.g., more “testing,” is not going to be effective.  (We’ll explain why later.)

Second, diagnostic errors are often invisible; if they were visible, they would be recognized and corrected in the moment.  The system needs “practical value-added ways to define and measure diagnostic errors in real time, . . .”

Third, “Because of the perception of personal culpability associated with diagnostic errors, . . . health care professionals have relied on the heroism of individual clinicians . . . to prevent diagnostic errors.”  Because humans are not error-free, the system as it currently exists will inevitably produce some errors.  Possible interventions include checklists, cognitive aids, machine learning, and training modules aimed at the Top 15 problematic clinical conditions.  “The paradigm of how we interpret diagnostic errors must shift from trying to ‘fix’ individual clinicians to creating systems-level solutions to reverse system errors.”

Our Perspective

It will come as no surprise that we endorse the authors’ point of view: healthcare needs to utilize more systems thinking to increase the safety and effectiveness of its myriad diagnostic and treatment processes.  Stakeholders must acknowledge that the current system for delivering healthcare services has error rates consistent with its sub-optimal design.  Because of that, tinkering with incremental changes, e.g., the well-publicized effort to reduce infections from catheters, will yield only incremental improvements in safety.  At best, they will only expose the next stratum of issues that are limiting system performance.

Incremental improvements are based on fragmented mental models of the healthcare system.  Proper systems thinking starts with a complete mental model of a healthcare system and how it operates.  We have described a more complete mental model in other posts so we will only summarize it here.  A model has components, e.g., doctors, nurses, support staff, and facilities.  And the model is dynamic, which means components are not fixed entities but ones whose quality and quantity varies over time.  In addition, the inter-relationships between and among the components can also vary over time.  Component behavior is directed by both relatively visible factors – policies, procedures, and practices – and softer control functions such as the level of trust between individuals, different groups, and hierarchical levels, i.e., bosses and workers.  Importantly, component behavior is also influenced by feedback from other components.  These feedback loops can be positive or negative, i.e., they can reinforce certain behaviors or seek to reduce or eliminate them.  For more on mental models, see our May 21, 2021, Nov. 6, 2019, and Oct. 9, 2019 posts.
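The interplay of reinforcing and balancing feedback loops can be sketched in a few lines of simulation.  The coefficients and variable names below are invented for illustration, not a validated model of any healthcare system: a reinforcing loop amplifies the current error rate while a balancing loop pushes it back toward a target.

# Minimal, purely illustrative sketch of the two loop types described
# above.  "error_rate" is the stock; peer normalization of shortcuts is
# a reinforcing (positive) loop, supervisory correction a balancing
# (negative) loop.  All coefficients are invented.

def simulate(steps: int = 10,
             error_rate: float = 0.05,
             reinforcing_gain: float = 0.20,   # shortcuts normalize more shortcuts
             balancing_gain: float = 0.30,     # oversight pushes toward target
             target: float = 0.01) -> list[float]:
    history = [error_rate]
    for _ in range(steps):
        reinforcement = reinforcing_gain * error_rate          # positive loop
        correction = balancing_gain * (error_rate - target)    # negative loop
        error_rate = max(0.0, error_rate + reinforcement - correction)
        history.append(error_rate)
    return history

# With these gains the system settles at 0.03, not the 0.01 target:
# the reinforcing loop partly defeats the balancing loop.
for step, rate in enumerate(simulate()):
    print(f"step {step:2d}: error rate {rate:.3f}")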

One key control factor is organizational culture, i.e., the values and assumptions about reality shared by members.  In the healthcare environment, the most important subset of culture is safety culture (SC).  Safety should be a primary consideration in all activities in a healthcare organization.  For example, in a strong SC, the reporting of an adverse event such as an error should be regarded as a routine and ordinary task.  The reluctance of doctors to report errors because of their feelings of personal and professional shame, or fear of malpractice allegations or discipline, must be overcome.  For more on SC, see our May 21, 2021 and July 31, 2020 posts.

Organizational structure is another control factor, one that basically defines the upper limit of organizational performance.  Does the existing structure facilitate communication, learning, and performance improvement or do silos create barriers?  Do professional organizations and unions create focal points the system designer can leverage to improve performance or are they separate power structures whose interests and goals may conflict with those of the larger system?  What is the quality of management’s behavior, especially their decision making processes, and how is management influenced by their goals, policy constraints, environmental pressures (e.g., to advance equity and diversity) and compensation scheme?

As noted earlier, the authors observe that EDs depend on individual doctors to arrive at correct diagnoses in spite of inadequate information and time pressure; doctors who can do this well are regarded as heroes.  We note that doctors who are less effective may be shuffled off to the side or, in egregious cases, labeled “bad apples” and tossed out of the organization.  This is an incorrect viewpoint.  Competent, dedicated individuals are necessary, of course, but the system designer should focus on making the system more error tolerant (so any errors cause no or minimal harm) and resilient (so errors are recognized and corrective actions implemented).

Bottom line: more systems thinking is needed in healthcare and articles like this help move the needle in the correct direction.


*  J.A. Edlow and P.J. Pronovost, “Misdiagnosis in the Emergency Department: Time for a System Solution,” JAMA (Journal of the American Medical Association), Vol. 329, No. 8 (Feb. 28, 2023), pp. 631-632.

Thursday, November 17, 2022

A Road Map for Reducing Diagnostic Errors in Healthcare

A recent article* about how to reduce diagnostic errors in healthcare caught our attention, for a couple of reasons.  First, it describes a fairly comprehensive checklist of specific practices to address diagnostic errors, and second, the practices include organizational culture and reflect systems thinking, both subjects dear to us.  The checklist’s purpose is to help an organization rate its current performance and identify areas for improvement.

The authors used a reasonable method to develop the checklist: they convened an anonymous Delphi group, identified and ranked initial lists of practices, shared the information among the group, then collected and organized the updated rankings.  The authors then sent the draft checklist to several hospital managers, i.e., the kind of people who would have to implement the approach, for their input on feasibility and clarity.  The final checklist was then published.
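For readers unfamiliar with the Delphi technique, here is a minimal sketch of one aggregation round (Python; the practice names, panelists, and ranks are invented, and the authors’ actual scoring scheme may differ).  Each panelist ranks the candidate practices, the ranks are averaged, and the ordered result is shared back with the group before the next round.

# Illustrative sketch of one Delphi aggregation round.  Practice names
# and panelist ranks are invented; lower rank = higher priority.

from statistics import mean

round_1_ranks = {  # hypothetical: each panelist ranks three practices
    "panelist A": {"just culture": 1, "feedback loops": 2, "patient input": 3},
    "panelist B": {"just culture": 2, "feedback loops": 1, "patient input": 3},
    "panelist C": {"just culture": 1, "feedback loops": 3, "patient input": 2},
}

def aggregate(ranks_by_panelist: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Average each practice's rank across panelists; sort by priority."""
    practices = next(iter(ranks_by_panelist.values())).keys()
    averaged = [(p, mean(r[p] for r in ranks_by_panelist.values()))
                for p in practices]
    return sorted(averaged, key=lambda item: item[1])

# Results are shared with the group, which then re-ranks in the next round.
for practice, avg_rank in aggregate(round_1_ranks):
    print(f"{practice}: mean rank {avg_rank:.2f}")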

The checklist focuses on diagnostic errors, i.e., missed, delayed, or wrong diagnoses.  It does not address other major types of healthcare errors, e.g., botched procedures, drug mix-ups, or provider hygiene practices.

The authors propose 10 practices, summarized below, to assess current performance and direct interventions with respect to diagnostic errors:

1.    Senior leadership builds a “board-to-bedside” accountability framework to measure and improve diagnostic safety.

2.    Promote a just culture and create a psychologically safe environment that encourages clinicians and staff to share opportunities to improve diagnostic safety without fear of retribution.

3.    Create feedback loops to increase information flow about patients’ diagnostic and treatment-related outcomes after handoffs from one provider/department to another.

4.    Develop multidisciplinary perspectives to understand and address contributory factors in the analysis of diagnostic safety events.

5.    Seek patient and family feedback to identify and understand diagnostic safety concerns.

6.    Encourage patients to review their health records and ask questions about their diagnoses and related treatments.

7.    Prioritize equity in diagnostic safety efforts.

8-10.    Establish standardized systems and processes to (1) encourage direct, collaborative interactions between treating clinical teams and diagnostic specialties; (2) ensure reliable communication of diagnostic information between care providers and with patients and families; and (3) close the loop on communication and follow up on abnormal test results and referrals.

Our Perspective

We support the authors’ recognition that diagnostic errors are difficult to analyze; they can involve clinical uncertainty, the natural evolution of diagnosis as more information becomes available, and cognitive errors, all exacerbated by system vulnerabilities.  Addressing such errors requires a systems approach.

The emphasis on a just culture and establishing feedback loops is good.  We would add the importance of management commitment to fixing and learning from identified problems, and a management compensation plan that includes monetary incentives for doing this.

However, we believe the probability of a healthcare organization establishing dedicated infrastructure to address diagnostic errors is very low.  First, the authors recognize there is no existing business case to address such errors.  In addition, we suspect there is some uncertainty around how often such errors occur.  The authors say these errors affect at least 5% of US adult outpatients annually but that number is based on a single mini-meta study.**

As a consequence, senior management is not currently motivated by either fear (e.g., higher costs, excessive losses to lawsuits, regulatory sanctions or fines, or reputational loss) or greed (e.g., professional recognition or monetary incentives) to take action.  So our recommended first step is to determine which types of medical errors present the greatest threats to an institution and how often they occur, and then determine what can be done to prevent them or minimize their consequences.  (See our July 31, 2020 post on Dr. Danielle Ofri’s book When We Do Harm for more on medical errors.)

Second, the organization has other competing goals demanding attention and resources so management’s inclination will be to minimize costs by simply extending any existing error identification and resolution program to include diagnostic errors.

Third, diagnosis is not a cut-and-dried process, like inserting a catheter, double-checking patients’ names, or hand washing.  The diagnostic process is essentially probabilistic, with different diagnoses possible from the same data, and to some degree, subjective.  Management probably does not want a stand-alone system that second guesses and retrospectively judges doctors’ decisions and opinions.  Such an approach could be perceived as intruding on doctors’ freedom to exercise professional judgment and is bad for morale.

Bottom line: The checklist is well-intentioned but a bit naïve.  It is a good guide for identifying weak spots and hazards in a healthcare organization, and the overall approach is not necessarily limited to diagnostic errors.   


*  Singh, H., Mushtaq, U., Marinez, A., Shahid, U., Huebner, J., McGaffigan, P., and Upadhyay, D.K., “Developing the Safer Dx Checklist of Ten Safety Recommendations for Health Care Organizations to Address Diagnostic Errors,” The Joint Commission Journal on Quality and Patient Safety, Vol. 48 (2022), pp. 581–590.  Published online Aug. 10, 2022.  The Joint Commission is an entity that inspects and accredits healthcare providers, mainly hospitals.

**  Singh, H., Meyer, A.N.D., and Thomas, E.J., “The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations,” BMJ Quality and Safety, Vol. 23, No. 9, April 2014, pp. 727–731.


Thursday, September 22, 2022

Culture in the Healthcare Industry

A couple of articles recognizing the importance of cultural factors in the healthcare space recently caught our attention.  The authors break no new ground but we’re reporting these articles because they appeared in a couple of the U.S.’s most prestigious medical journals.

We begin with an opinion piece in The Journal of the American Medical Association (JAMA).*  The authors’ focus is on clinician burnout (which we discussed on Nov. 6, 2019) but they cite earlier work on the importance of quality and culture in the healthcare workplace, including “the culture changes needed for effective teamwork and optimizing the authentic voice of every team member. . . . [and examining] the consequences of medical hierarchy and inequity.”  

One of the references in the JAMA piece is an earlier article by two of the authors in The New England Journal of Medicine.**  This article discusses how the National Academy of Medicine (NAM) and its predecessor entities have influenced the trajectory of the discussion of healthcare effectiveness, starting by documenting the wide scope of inappropriate care prescribed to patients, i.e., the overuse of ineffective medical practices.  Their seminal 1999 report, “To Err Is Human,” estimated that 44,000 to 98,000 Americans die in hospitals each year because of medical errors.  

Their 2001 report, “Crossing the Quality Chasm,” defined a framework for healthcare quality with six dimensions: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity.  In practice, the quality of healthcare services has improved since then in several specific areas, e.g., reduced rates of acquired infections, but “wholesale, systemic improvement in quality of care has proven difficult to bring to scale.”  We wrote about the lack of progress on Nov. 9, 2020.  One significant ongoing problem is that efforts to increase provider accountability, e.g., ascertaining whether providers are delivering appropriate care, have had a negative impact on clinicians’ morale.  “The United States has yet to find for health care the wisest balance between accountability, which is critical, and supports for a trusting culture of growth and learning, which, as the NAM asserts, is the essential foundation for continual improvement.”

Our Perspective

None of this information is new.  What is worth noting is how cultural aspects have become important topics for discussion at the highest levels of healthcare policy. 

If you have been following our healthcare posts on Safetymatters, you know we have discussed the challenges and the progress, or lack thereof, in reducing errors and increasing effectiveness.  We have emphasized the role of a strong safety culture in the delivery of high quality services.  Click on the healthcare label to see all of our related posts.

Healthcare has a long way to go to catch up with other industries that have integrated high levels of safety and quality into their daily operations.  To illustrate healthcare’s current position, we will repurpose a recent McKinsey article on corporate ESG (Environment, Social, Governance) attributes.***  McKinsey uses a 3-category framework (Minimum, Common, and Next level practices) to describe a business’s ESG character.  For our purposes, we will replace ESG with safety and quality (S&Q), and excerpt and adapt specific attributes that could and should exist in a healthcare organization.

Minimum practices – focus on risk mitigation and do no harm measures

•    React to external social-legal-political trends
•    Address obvious vulnerabilities
•    Meet baseline standards
•    Pledge to minimal commitment levels

Common practices – substantive efforts, more proactive than reactive

•    Track major trends and develop strategies to address them
•    Identify strengths and use them to move toward S&Q goals
•    Comply with voluntary standards and perform above average
•    Engage with stakeholder groups to understand what matters to them

Next level practices – full integration of S&Q into strategy and operations

•    View S&Q as essential components of overall strategy
•    Link clearly articulated leadership areas with S&Q goals
•    Embed S&Q in capital and resource allocation
•    Tie S&Q to employee incentives and evaluations
•    Ensure that S&Q reports cover the entity’s full set of operations

Our judgment is that most healthcare entities, especially hospitals, demonstrate minimum practices and are trying to get ahead of the curve by implementing some common practices.  Some entities may claim to be using next level practices, but these are generally narrow or limited efforts.  The industry’s biggest challenge is getting the entrenched guilds of doctors and nurses, accustomed to working in protective silos, to fully embrace increased accountability.  At the same time, senior management must create, maintain, and manage a non-punitive work environment and a just culture.


*  Rotenstein, L.S., Berwick, D.M., and Cassel, C.K., “Addressing Well-being Throughout the Health Care Workforce: The Next Imperative,” JAMA, Vol. 328, No. 6 (Aug. 9, 2022), pp. 521-22.  Published online July 18, 2022.  JAMA is a peer-reviewed journal published by the American Medical Association.

**  Berwick, D.M., and Cassel, C.K., “The NAM and the Quality of Health Care — Inflecting a Field,” The New England Journal of Medicine, Vol. 383, No. 6 (Aug. 6, 2020), pp. 505-08.

***  Pérez, L., Hunt, V., Samandari, H., Nuttall, R., and Bellone, D., “How to make ESG real,” McKinsey Quarterly (Aug. 2022).


Thursday, March 31, 2022

The Criminalization of Safety in Healthcare?


On March 25, 2022 a former nurse at Vanderbilt University Medical Center (VUMC) was convicted of gross neglect of an impaired adult and negligent homicide as a consequence of a fatal drug error in 2017.* 

Criminal prosecutions for medical errors are rare, and healthcare stakeholders are concerned about what this conviction may mean for medical practice going forward.  A major concern is practitioners will be less likely to self-report errors for fear of incriminating themselves.

We have previously written about the intersection of criminal charges and safety management and practices.  In 2016 Safetymatters’ Bob Cudlin authored a 3-part series on this topic.  (See his May 24, May 31, and June 7 posts.)  Consistent with our historical focus on systems thinking, Bob reviewed examples in different industries and asked “where does culpability really lie - with individuals? culture? the corporation? or the complex socio-technical systems within which individuals act?”

“Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.”

We are confident this is true in hospital nurses’ working environment.  They are often short-staffed, working overtime, and under pressure from their immediate task environments and larger circumstances such as the ongoing COVID pandemic.  The ceaseless evolution of medical technology means they have to adapt to constantly changing equipment, some of which is problematic.  Many/most healthcare professionals believe errors are inevitable.  See our August 6, 2019 and July 31, 2020 posts for more information about the extent, nature, and consequences of healthcare errors.

At VUMC, medicines are dispensed from locked cabinets after a nurse enters various codes.  The hospital had been having technical problems with the cabinets in early 2017 prior to the nurse’s error.  The nurse could not obtain the proper drug because she was searching using its brand name instead of its generic name.  She entered an override that allowed her to access additional medications and selected the wrong one, a powerful paralyzing agent.  The nurse and other medical personnel noted that entering overrides on the cabinets was a common practice.

VUMC’s problems extended well beyond troublesome medicine cabinets.  An investigator said VUMC had “a heavy burden of responsibility in this matter.”  VUMC did not report the medication error as required by law and told the local medical examiner’s office that the patient died of “natural” causes.  VUMC avoided criminal charges because prosecutors didn’t think they could prove gross negligence. 

Our Perspective

As Bob observed in 2016, “The reality is that criminalization is at its core a “disincentive.”  To be effective it would have to deter actions or decisions that are not consistent with safety but not create a minefield of culpability. . . .  Its best use is probably as an ultimate boundary, to deter intentional misconduct but not be an unintended trap for bad judgment or inadequate performance.”

In the instant case, the nurse did not intend to cause harm but her conduct definitely reflected bad judgment and unacceptable performance.  She probably sealed her own fate when she told law enforcement she “probably just killed a patient” and the licensing board that she had been “complacent” and “distracted.”   

But we see plenty of faults in the larger system, mainly that VUMC used cabinets that held dangerous substances and had a history of technical glitches, but allowed users to routinely override cabinet controls to obtain needed medicines.  As far as we can tell, VUMC did not implement any compensating safety measures, such as requiring double checking by a colleague or a supervisor’s presence when overrides were performed or “dangerous” medications were withdrawn.

In addition, VUMC’s organizational culture was on full display with their inadequate and misleading reporting of the patient’s death.  VUMC has made no comment on the nurse’s case.  In our view, their overall strategy was to circle the wagons, seal off the wound, and dispose of the bad apple.  Nothing to see here, folks.

Going forward, the remaining VUMC nurses will be on high alert for a while, but their day-to-day task demands will eventually force them to employ risky behaviors in an environment that requires such behavior to accomplish the mission but lacks defense in depth to catch errors before they have drastic consequences.  The nurses will/should be demanding a safer work environment.

Bottom line: Will this event mark a significant moment for accountability in healthcare akin to the George Floyd incident’s impact on U.S. police practices?  You be the judge.

For additional Safetymatters insights click the healthcare label below.

 

*  All discussion of the VUMC incident is based on reporting by National Public Radio (NPR).  See B. Kelman, “As a nurse faces prison for a deadly error, her colleagues worry: Could I be next?” NPR, March 22, 2022; “In Nurse’s Trial, Investigator Says Hospital Bears ‘Heavy’ Responsibility for Patient Death,” NPR, March 24, 2022; “Former nurse found guilty in accidental injection death of 75-year-old patient,” NPR, March 25, 2022.

Friday, May 21, 2021

Healthcare Safety Culture and Interventions to Reduce Preventable Medical Errors


We have previously written about the shocking number of preventable errors in healthcare settings that result in injury or death to patients.  We have also discussed the importance of a strong safety culture (SC) in reducing healthcare error rates.  However, after 20 years of efforts, the needle has not significantly moved on overall injuries and deaths.  This post reviews healthcare’s concept of SC and research that ties SC to patient outcomes.  We offer our view on why interventions have not been more effective.

Healthcare’s Model of Safety Culture

Healthcare has a model for SC, shown in the SC primer on the Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Network website.*  The model contains these key cultural features:

  • acknowledgment of the high-risk nature of an organization's activities and the determination to achieve consistently safe operations
  • a blame-free environment** where individuals are able to report errors or near misses without fear of reprimand or punishment
  • encouragement of collaboration across ranks and disciplines to seek solutions to patient safety problems
  • organizational commitment of resources to address safety concerns.

We will critique this model later.

Healthcare Providers Believe Safety Culture is Important

A U.S. Department of Health and Human Services (HHS) report*** affirms healthcare providers’ belief that SC is important and can contribute to fewer errors and improved patient outcomes.

AHRQ administers the Patient Safety Organization (PSO) program, which gathers data on patient safety events from healthcare providers.  In 2019, the HHS Office of Inspector General surveyed hospitals and PSOs to identify the PSO program’s value and challenges.  SC was one topic covered in the survey and the results confirm SC’s importance to providers.  “Among hospitals that work with PSOs, 80 percent find that feedback and analysis on patient safety events have helped prevent future events, and 72 percent find that such feedback has helped them understand the causes of events.” (p. 10)  Furthermore, “Nearly all (95 percent) hospitals that work with a PSO found that their PSOs have helped improve the culture of safety at their facilities.  A culture of safety is one that enables individuals to report errors without fear of reprimand and to collaborate on solutions.” (p. 11)

Healthcare Research Connects SC to Interventions to Reduced Errors

AHRQ publishes the “Making Healthcare Safer” series of reports, which represent summaries of important research on selected patient safety practices (PSPs).  The most recent (2020) edition**** recognizes SC as a cross-cutting practice, i.e., SC impacts the effectiveness of many specific PSPs. 

The section on cross-cutting practices begins by noting that healthcare is trying to learn from the experience of high reliability organizations (HROs).  HROs have many safety-enhancing attributes, including committed leaders, a SC where staff identify and correct all deviations that could lead to unsafe conditions, an environment where adverse events or near misses are reported without fear of blame or recrimination, and practices to identify a problem’s scope, root causes, and appropriate solutions. (p. 17-1)

The report identified several categories of practices that are used to improve healthcare SC: Leadership WalkRounds, Team Training, Comprehensive Unit-based Safety Programs (CUSP), and interventions that implemented multiple methods. (p. 17-13)

WalkRounds “involves leaders ‘walking around’ to engage in face to face, candid discussions with frontline staff about patient safety incidents or near-misses.” (p. 17-16)  “Team training programs focus on enhancing teamwork skills and communication between healthcare providers . . .” (p. 17-17)  CUSP is a multi-step program to assess, intervene in, and reassess a healthcare unit’s SC. (p. 17-19)

The report also covers 17 specific areas where harm/errors can occur and highlights SC aspects associated with two such areas: developing rapid response teams and dealing with alarm fatigue in hospitals. 

Rapid response teams (RRTs) treat deteriorating hospital patients before adverse events occur. (p. 2-1)  Weak SC and healthcare hierarchies are barriers to successful implementation of RRTs. (p. 2-10)

Alarm fatigue occurs because of high exposure to medical device alarms, many of which are loud or false, leading to desensitization, missed alarms, or delayed responses. (p. 13-1)  The cultural aspects of interventions focused on all staff members (not just nurses) assuming responsibility for addressing alarms. (p. 13-6)

Our Perspective

We have three problems with healthcare’s efforts to reduce harm to patients: (1) the quasi-official healthcare mental model of safety culture is incomplete, (2) healthcare’s assumption that it can model itself on HROs ignores a critical systemic difference, and (3) an inadequate overall system model leads to fragmented, incremental improvement projects.

An inadequate model for SC

Healthcare does not have an adequate understanding of the necessary attributes of a strong SC.  

The features listed in the introduction of this post are necessary but not sufficient for a strong SC.  SC is more than good communications; it is part of the overall cultural system.  This system has feedback loops that can reinforce or extinguish attitudes and behaviors.  The attitudes of people in the system are heavily influenced by their trust in management to do the right thing.  Management’s behavior is influenced by their goals, policy constraints, environmental pressures, and incentives, including monetary compensation.

Top-to-bottom decision making in the system needs to be consistent, which means processes, priorities, practices, and rules should be defined and followed.  Goal conflicts must be consistently handled.  Decision makers must be identified to allow accountability.   Problems must be identified (without retribution except for cause), analyzed, and permanently fixed.

Lack of attention to the missing attributes is one reason that healthcare SC has been slow to strengthen and unfavorable patient outcomes are still at unacceptable levels. 

Healthcare is not a traditional HRO

The healthcare system looks to HROs for inspiration on SC but does not recognize one significant difference between a traditional HRO and healthcare.

When we consider other HROs, e.g., nuclear power plants, off-shore drilling operations, or commercial aviation, we understand that they have significant interactions with their respective environments, e.g., regulators, politicians, inspectors, suppliers, customers, activists, etc. 

Healthcare is different because its customers are basically the feedstock for the “factory” and healthcare has to accept those inputs “as is”; in other words, unlike a nuclear power plant, healthcare cannot define and enforce a set of specifications for its inputs.  The inputs (patients) arrive in a wide range of “as is” conditions, from simple injuries to multiple, interacting ailments.  The healthcare system has to accomplish two equally important objectives: (1) correctly identify a patient’s problem(s) and (2) fix them in a robust, cost-effective manner.  SC in the first phase should focus on obtaining the correct diagnosis; SC in the second phase should focus on performing the prescribed corrective actions according to approved procedures, and ensuring that expected results occur. 

Inadequate models lead to piecemeal interventions      

Healthcare’s simplistic mental model for SC is part of an inaccurate mental model for the overall system.  The current system model is fragmented and leads researchers and practitioners to think small (on silos) when they could be thinking big (on the enterprise).  An SC intervention that focuses on tightening process controls in one small area cannot move the needle on system-wide SC or overall patient outcomes.  For more on systems models, systemic challenges, and narrow interventions, see our Oct. 9, 2019 and Nov. 9, 2020 posts.  Click on the healthcare label below to see all of the related posts.

Bottom line: Healthcare SC can have a direct impact on the probabilities that specific harms will occur, and on their severity if they do, but accurate models of culture are essential.

 

*  Agency for Healthcare Research and Quality, “Culture of Safety” (Sept. 2019).  Accessed May 4, 2021.  AHRQ is an organization within the U.S. Department of Health and Human Services.  Its mission includes producing evidence to make health care safer.

**  The “blame-free” environment has evolved into a “just culture” where human errors, especially those caused by the task system context, are tolerated but taking shortcuts and reckless behavior are disciplined.  Click on the just culture label for related posts.

***  U.S. Dept. of Health and Human Services Office of Inspector General, “Patient Safety Organizations: Hospital Participation, Value, and Challenges,” OEI-01-17-00420, Sept. 2019.

****  K.K. Hall et al., “Making Healthcare Safer III: A Critical Analysis of Existing and Emerging Patient Safety Practices,” AHRQ Pub. No. 20-0029-EF (Rockville, MD: AHRQ) March 2020.  This is a 1,400-page report so we are only reporting relevant highlights.


Monday, November 9, 2020

Setting the Bar for Healthcare: Patient Care Goals from the Joint Commission

The need for a more effective safety culture (SC) in the field of healthcare is acute: every year tens of thousands of patients are injured or unnecessarily die while in U.S. hospitals. The scope of the problem became widely known with the publication of “To Err is Human: Building a Safer Health System”* in 2000. This report included two key observations: (1) the cause of the injuries and deaths is not bad people in health care, rather the people are working in bad systems that need to be made safer, and (2) legitimate liability concerns discourage the reporting of errors, which means less feedback to the system and less learning from mistakes.

It's 20 years later. Is the healthcare system safer than it was in 2000? Yes. Is safety performance at a satisfactory level? No.

For evidence, we need look no further than a Nov. 18, 2019 blog post** by Dr. Mark Chassin, president and CEO of the Joint Commission (JC), the entity responsible for establishing standards for healthcare functions and patient care, and evaluating, accrediting, and certifying healthcare organizations based on their compliance with the standards.

Dr. Chassin summarized the current situation as follows: “The health care industry has directed a substantial amount of time, effort, and resources at solving the problems, and we have seen some progress. That progress has typically occurred one project at a time, with hard-working quality professionals applying a “one-size-fits-all” best practice to address each problem. The resulting improvements have been pretty modest, difficult to sustain, and even more difficult to spread.”

Going forward, he says the industry can make substantial progress by committing to zero harm, overhauling the organizational culture, and utilizing proven process improvement techniques. He singles out the aviation and nuclear power industries for having similar commitments.

But achieving substantial, sustained improvement is a big lift. To get a feel for how big, let's look at the 2020 goals and strategies the JC has established for patient care in hospitals, in other words, where the performance bar is set today.*** We will try to inform your own judgment about their scope and sufficiency by comparing them with corresponding activities in the nuclear power industry.

1. Identify patients correctly by using at least two ways to identify them.

This is a major challenge in a hospital where many patients are entering and leaving the system every day, being transferred to and from different departments, and being treated by multiple individuals who have different roles and ranks, and are treating patients at different levels of intensity for different periods of time. There is really no analogue in the closed, controlled personnel environment of a power plant.

2. Improve staff communication by getting important test results to the right staff person on time.

This should be a familiar challenge to people in any organization, including a power plant, where functions may exist in different organizational silos with their own procedures, vocabulary, and priorities.

3. Use medicines safely by labeling medicines that are not labeled, taking extra care with patients on blood thinners, and managing patients' medicine records for accuracy, completeness, and possible interactions.

This is similar to requirements to accurately label, control, and manage the use of all chemicals used in an industrial facility.

4. Use alarms safely by ensuring that alarms on medical equipment are heard and responded to on time.

In a hospital, it is a problem when multiple alarms are going off at the same time, with differing degrees of urgency for personnel attention and response. In power plants, operators have been known to turn off alarms that are reporting too many false positives. These situations call out for operating and maintenance standards and practices that ensure all activated alarms are valid and deserving of a response.

5. Prevent infection by adhering to Centers for Disease Control or World Health Organization hand cleaning guidelines.

The aim is to keep bad bugs from circulating. Compare this practice to the myriad procedures, personnel, and equipment dedicated to ensuring nuclear power plant radioactivity is kept in an identified, controlled, and secure environment.

6. Identify patient safety risks by reducing the risk for suicide.

Compare this with the wellness, fitness for duty, and behavioral observation programs at every nuclear power plant.

7. Prevent mistakes in surgery by making sure that the correct surgery is done on the correct patient and at the correct place on the patient’s body, and pausing before the surgery to make sure that a mistake is not being made.

This is similar to tailgate meetings before maintenance activities and using the STAR (Stop-Think-Act-Review) approach before and during work. Think of the potential for error in mirror-image plants; people are bilateral and subject to similar risks.

Our Perspective

The JC's set of goals is thin gruel to show after 20 years. In our view, efforts to date reflect two major shortcomings: a lack of progress in defining and strengthening SC, and a lack of any shared understanding of what the relevant system consists of, how it functions, and how to improve it.

Safety Culture

Our July 31, 2020 post on When We Do Harm by Dr. Danielle Ofri discussed the key attributes for a strong healthcare SC, i.e., one where the probability of errors is much lower than it is today. In Ofri's view, the primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done, even if it's not in their specific job description, amid a diffusion of responsibility in their task environment. Secondly, all members of the organization, regardless of status, should have the ability (or duty even) to point out problems and errors without fear of retribution. The culture should regard reporting an adverse event as a routine and ordinary task. Third, organizational leaders, including but not limited to senior managers, must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true. There should be deference to proven expertise and widely held authority to say “stop” when problems become apparent.

The Healthcare System

The healthcare system includes the providers, the supporting infrastructure, external environmental factors, e.g., regulators and insurance companies, the patients and their families, and all the interrelationships and dynamics between these components. An important dynamic is feedback, where the quality and quantity of output from one component influences performance in other system components. System dynamics create homeostasis, fluctuations, and all levels of performance from superior to failure. Other organizational variables, e.g., management decision-making practices and priorities, and the compensation scheme, provide context for system functioning. For more on system attributes, please see our Oct. 9, 2019 post or click the healthcare label.

Bottom line: Compare the JC's efforts with the vast array of safety and SC-related policies, procedures, practices, activities, and dedicated personnel in your workplace. Healthcare has a long way to go.


* Institute of Medicine (L.T. Kohn et al), “To Err Is Human: Building a Safer Health System” (Washington, D.C.: The National Academies Press) 2000. Retrieved Nov. 5, 2020.

** M. Chassin, “To Err is Human: The Next 20 Years,” blog post (Nov. 18, 2019).  Retrieved Nov. 1, 2020.

*** The Joint Commission, “2020 Hospital National Patient Safety Goals,” simplified version (July 2020). Retrieved Nov. 1, 2020.