
Saturday, March 2, 2024

Boeing’s Safety Culture Under the FAA’s Microscope

The Federal Aviation Administration (FAA) recently released its report* on the safety culture (SC) at Boeing.  The FAA Expert Panel was tasked with reviewing SC after two crashes involving the latest models of Boeing’s 737 MAX airplanes.  The January 2024 door plug blowout happened as the report was nearing completion and reinforces the report’s findings.

737 MAX door plug

The report has been summarized and widely reported in mainstream media and we will not review all its findings and recommendations here.  We want to focus on two parts of the report that address topics we have long promoted as keys to understanding how strong (or weak) an organization’s SC is, viz., its decision-making processes and executive compensation.  In addition, we will discuss a topic that’s new to us: how to ensure the independence of employees whose work includes assessing company work products from the regulator’s perspective.

Decision-making

An organization’s decision-making processes create some of the most visible artifacts of the organization’s culture: a string of decisions (guided by policies, procedures, and priorities) and their consequences.

The report begins with a clear FAA description of decision-making’s important role in a Safety Management System (SMS) and an organization’s overall management.  In part, an “SMS is all about decision-making. Thus it has to be a decision-maker's tool, not a traditional safety program separate and distinct from business and operational decision making.” (p. 10)

However, the panel’s finding on Boeing’s SMS is a mixed bag.  “Boeing provided evidence that it is using its SMS to evaluate product safety decisions and some business decisions. The Expert Panel’s review of Boeing’s SMS documentation revealed detailed procedures on how to use SMS to evaluate product safety decisions, but there are no detailed procedures on how to determine which business decisions affect safety or how they should be evaluated under SMS.” (emphasis added) (p. 35)

The associated recommendation is “Develop detailed procedures to determine which business activities should be evaluated under SMS and how to evaluate those decisions.” (ibid.)  We think the recommendation addresses the specific problem identified in the finding.

One of the major inputs to a decision-making system is an organization’s priorities.  The FAA says safety should always be the top priority but Boeing’s commitment to safety has arguably weakened over time.

“Boeing provided the Expert Panel with a copy of the Boeing Safety Management System Policy, dated April 2022, which states, in part, “… we make safety our top priority.” Boeing revised this policy in August 2023 with . . .  a change to the message “we make safety our top priority” to “safety is our foundation.”” (p. 29)

Lowering the bar did not help.  “The [Expert] panel observed documentation, survey responses, and employee interviews that did not provide objective evidence of a foundational commitment to safety that matched Boeing’s descriptions of that objective.” (p. 22)

Boeing also sowed seeds of confusion for its safety decision makers.  Boeing implemented its SMS to operate alongside (and not replace or integrate with) its existing safety program.

“During interviews, Boeing employees highlighted that SMS implementation was not to disrupt existing safety program or systems.  SMS operating procedure documents spoke of SMS as the overarching safety program but then also provided segregation of SMS-focused activities from legacy safety activities . . .” (p. 24)

Executive compensation

We have long said that if safety performance is important to an organization, then its senior managers’ compensation should include a component tied to safety performance.

Boeing has included safety in its executive financial incentive program.  Safety is one of five factors comprising operational performance which, in turn, is combined with financial performance to determine company-level performance.  Because of the weights used in the incentive model, “The Product Safety measure comprised approximately 4% of the overall 2022 Annual Incentive Award.” (p. 28)
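
To see how safety can end up carrying so little weight, consider a minimal sketch of the two-level structure the report describes.  Only the structure (safety as one of five operational factors, operational performance combined with financial performance) and the approximately 4% result come from the report; the specific weights below are our illustrative assumptions.

```python
# Minimal sketch of a two-level incentive weighting.  The structure comes
# from the FAA report; the specific weights are illustrative assumptions.

operational_weight = 0.25   # assumed share of the award tied to operational performance
financial_weight = 0.75     # assumed share tied to financial performance
safety_share = 1 / 5        # safety as one of five operational factors

assert operational_weight + financial_weight == 1.0  # shares of company-level performance

safety_weight = operational_weight * safety_share
print(f"Safety share of total award: {safety_weight:.0%}")  # -> 5%
```

With these assumed weights, safety works out to 5% of the total award, in the same neighborhood as the report’s 4%.  Boeing’s actual weights evidently differ somewhat, but the dilution mechanism is the same: each layer of aggregation shrinks safety’s influence on the final payout.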

Is 4% enough to influence executive behavior?  You be the judge.

Employee independence from undue management influence   

Boeing’s relationship with the FAA has an aspect that we don’t see in other industries. 

Boeing holds an Organization Designation Authorization (ODA) from the FAA. This allows Boeing to “make findings and issue certificates, i.e., perform discretionary functions in engineering, manufacturing, operations, airworthiness, or maintenance on behalf of the [FAA] Administrator.” (p. 12)

Basically, the FAA delegates some of its authority to Boeing employees, the ODA Unit Members (UMs), who then perform certain assessment and certification tasks.  “When acting as a representative of the Administrator, an individual is required to perform in a manner consistent with the policies, guidelines, and directives of the FAA. When performing a delegated function, an individual is legally distinct from, and must act independent of, the ODA holder.” (ibid.)  These employees are supposed to take the FAA’s view of situations and apply the FAA’s rules even if the FAA’s interests are in conflict with Boeing’s business interests. 

This might work in a perfect world, but in Boeing’s world it has had, and continues to have, problems, primarily: “Boeing’s restructuring of the management of the ODA unit decreased opportunities for interference and retaliation against UMs, and provides effective organizational messaging regarding independence of UMs. However, the restructuring, while better, still allows opportunities for retaliation to occur, particularly with regards to salary and furlough ranking.” (emphasis added) (p. 5)  In addition, “The ability to comply with the ODA’s approved procedures is present; however, the integration of the SMS processes, procedures, and data collection requirements has not been accomplished.” (p. 26)

To an outsider, this looks like bad organizational design and practices. 

The U.S. commercial nuclear industry offers a useful contrast.  The regulator, the Nuclear Regulatory Commission (NRC), expects its licensees to follow established procedures, perform required tests and inspections, and report any problems to the NRC.  Self-reporting is key to an effective relationship built on a base of trust.  However, it’s “trust but verify.”  The NRC has its own full-time employees in all the power plants, performing inspections, monitoring licensee operations, and interacting with licensee personnel.  The inspectors’ findings can lead, and have led, to increased NRC oversight of licensee activities.

Our perspective

It’s obvious that Boeing has emphasized production over safety.  The problems described above are evidence of broad systemic issues which are not amenable to quick fixes.  Integrating SC into everyday decision-making is hard work of the “continuous improvement” variety; it will not happen by management fiat.  Adjusting the compensation plan will require the Board to take safety more seriously.  Reworking the ODA program to eliminate all pressures and goal conflicts may not be possible; this is a big problem because the FAA has effectively deputized 1,000 people to perform FAA functions at Boeing. (p. 25)

The report only covers the most visible SC issues.  Complacency, normalization of deviance, the multitude of biases that can affect decision-making, and other corrosive factors are perennial threats to a strong SC and can contribute to “the natural drift in organizations.” (p. 40)  Such drift may lead to everything from process inefficiencies to tragic safety failures.

Boeing has taken one step: they fired the head of the 737 MAX program.**  Organizations often toss a high-level executive into a volcano to appease the regulatory gods and buy some time.  Boeing’s next challenge is that the FAA has given Boeing 90 days to fix its quality problems highlighted by the door plug blowout.***

Bottom line: Grab your popcorn, the show is just starting.  Boeing is probably too big to fail, but it is definitely going to be put through the wringer. 


*  “Section 103 Organization Designation Authorizations (ODA) for Transport Airplanes Expert Panel Review Report,” Federal Aviation Administration (Feb. 26, 2024). 

**  N. Robertson, “Boeing fires head of 737 Max program,” The Hill (Feb. 21, 2024).

***  D. Shepardson and V. Insinna, “FAA gives Boeing 90 days to develop plan to address quality issues,” Reuters (Feb. 28, 2024).

Friday, March 10, 2023

A Systems Approach to Diagnosis in Healthcare Emergency Departments

JAMA logo

A recent op-ed* in JAMA advocated greater use of systems thinking to reduce diagnostic errors in emergency departments (EDs).  The authors describe the current situation – diagnostic errors occur at an estimated 5.7% rate – and offer three insights into why systems thinking may contribute to interventions that reduce this error rate.  We will summarize their observations and then provide our perspective.

First, they point out that diagnostic errors are not limited to the ED; in fact, such errors occur in all specialties and areas of health care.  Diagnosis is often complicated and practitioners are under time pressure to come up with an answer.  The focus of interventions should be on reducing incorrect diagnoses that result in harm to patients.  Fortunately, studies have shown that “just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms,” which should help narrow the focus for possible interventions.  However, simply doing more of the current approaches, e.g., more “testing,” is not going to be effective.  (We’ll explain why later.)

Second, diagnostic errors are often invisible; if they were visible, they would be recognized and corrected in the moment.  The system needs “practical value-added ways to define and measure diagnostic errors in real time, . . .”

Third, “Because of the perception of personal culpability associated with diagnostic errors, . . . health care professionals have relied on the heroism of individual clinicians . . . to prevent diagnostic errors.”  Because humans are not error-free, the system as it currently exists will inevitably produce some errors.  Possible interventions include checklists, cognitive aids, machine learning, and training modules aimed at the Top 15 problematic clinical conditions. “The paradigm of how we interpret diagnostic errors must shift from trying to “fix” individual clinicians to creating systems-level solutions to reverse system errors.”

Our Perspective

It will come as no surprise that we endorse the authors’ point of view: healthcare needs to utilize more systems thinking to increase the safety and effectiveness of its myriad diagnostic and treatment processes.  Stakeholders must acknowledge that the current system for delivering healthcare services has error rates consistent with its sub-optimal design.  Because of that, tinkering with incremental changes, e.g., the well-publicized effort to reduce infections from catheters, will yield only incremental improvements in safety.  At best, they will only expose the next stratum of issues that are limiting system performance.

Incremental improvements are based on fragmented mental models of the healthcare system.  Proper systems thinking starts with a complete mental model of a healthcare system and how it operates.  We have described a more complete mental model in other posts so we will only summarize it here.  The model has components, e.g., doctors, nurses, support staff, and facilities.  And the model is dynamic, which means components are not fixed entities but ones whose quality and quantity vary over time.  In addition, the inter-relationships between and among the components can also vary over time.  Component behavior is directed by both relatively visible factors – policies, procedures, and practices – and softer control functions such as the level of trust between individuals, different groups, and hierarchical levels, i.e., bosses and workers.  Importantly, component behavior is also influenced by feedback from other components.  These feedback loops can be positive or negative, i.e., they can reinforce certain behaviors or seek to reduce or eliminate them.  For more on mental models, see our May 21, 2021, Nov. 6, 2019, and Oct. 9, 2019 posts.
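
To make the feedback loops concrete, here is a minimal sketch of one such loop, assuming (purely for illustration) that staff trust drives error reporting, reported errors drive learning, and visible improvement feeds back into trust.  The variable names and coefficients are our assumptions, not measurements from any study.

```python
# Minimal sketch of a reinforcing feedback loop in the mental model above:
# trust -> reporting -> learning -> fewer errors -> more trust.
# All coefficients are illustrative assumptions.

def simulate(months=24, trust=0.5, error_rate=0.10):
    history = []
    for _ in range(months):
        reporting = 0.2 + 0.6 * trust                                # trust encourages reporting
        error_rate = max(0.005, error_rate * (1 - 0.3 * reporting))  # reports drive fixes
        trust = min(1.0, trust + 0.5 * (0.10 - error_rate))          # improvement builds trust
        history.append((trust, error_rate))
    return history

for month, (trust, errors) in enumerate(simulate(), start=1):
    if month % 6 == 0:                                               # report twice a year
        print(f"month {month:2d}: trust={trust:.2f}, error rate={errors:.2%}")
```

Run forward, small gains in trust raise reporting, which lowers the error rate, which builds more trust.  Reverse the sign of the trust update (e.g., punish reporters) and the same loop structure extinguishes reporting instead.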

One key control factor is organizational culture, i.e., the values and assumptions about reality shared by members.  In the healthcare environment, the most important subset of culture is safety culture (SC).  Safety should be a primary consideration in all activities in a healthcare organization.  For example, in a strong SC, the reporting of an adverse event such as an error should be regarded as a routine and ordinary task.  The reluctance of doctors to report errors because of their feelings of personal and professional shame, or fear of malpractice allegations or discipline, must be overcome.  For more on SC, see our May 21, 2021 and July 31, 2020 posts.

Organizational structure is another control factor, one that basically defines the upper limit of organizational performance.  Does the existing structure facilitate communication, learning, and performance improvement or do silos create barriers?  Do professional organizations and unions create focal points the system designer can leverage to improve performance or are they separate power structures whose interests and goals may conflict with those of the larger system?  What is the quality of management’s behavior, especially their decision making processes, and how is management influenced by their goals, policy constraints, environmental pressures (e.g., to advance equity and diversity) and compensation scheme?

As noted earlier, the authors observe that EDs depend on individual doctors to arrive at correct diagnoses in spite of inadequate information or time pressure, and doctors who can do this well are regarded as heroes.  We note that doctors who are less effective may be shuffled off to the side or, in egregious cases, labeled “bad apples” and tossed out of the organization.  This viewpoint is incorrect.  Competent, dedicated individuals are necessary, of course, but the system designer should focus on making the system more error tolerant (so any errors cause no or minimal harm) and resilient (so errors are recognized and corrective actions implemented).

Bottom line: more systems thinking is needed in healthcare and articles like this help move the needle in the correct direction.


*  J.A. Edlow and P.J. Pronovost, “Misdiagnosis in the Emergency Department: Time for a System Solution,” JAMA (Journal of the American Medical Association), Vol. 329, No. 8 (Feb. 28, 2023), pp. 631-632.

Thursday, March 31, 2022

The Criminalization of Safety in Healthcare?


On March 25, 2022, a former nurse at Vanderbilt University Medical Center (VUMC) was convicted of gross neglect of an impaired adult and negligent homicide as a consequence of a fatal drug error in 2017.* 

Criminal prosecutions for medical errors are rare, and healthcare stakeholders are concerned about what this conviction may mean for medical practice going forward.  A major concern is practitioners will be less likely to self-report errors for fear of incriminating themselves.

We have previously written about the intersection of criminal charges and safety management and practices.  In 2016 Safetymatters’ Bob Cudlin authored a 3-part series on this topic.  (See his May 24, May 31, and June 7 posts.)  Consistent with our historical focus on systems thinking, Bob reviewed examples in different industries and asked “where does culpability really lie - with individuals? culture? the corporation? or the complex socio-technical systems within which individuals act?”

“Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.”

We are confident this is true in hospital nurses’ working environment.  They are often short-staffed, working overtime, and under pressure from their immediate task environments and larger circumstances such as the ongoing COVID pandemic.  The ceaseless evolution of medical technology means they have to adapt to constantly changing equipment, some of which is problematic.  Many/most healthcare professionals believe errors are inevitable.  See our August 6, 2019 and July 31, 2020 posts for more information about the extent, nature, and consequences of healthcare errors.

At VUMC, medicines are dispensed from locked cabinets after a nurse enters various codes.  The hospital had been having technical problems with the cabinets in early 2017 prior to the nurse’s error.  The nurse could not obtain the proper drug because she was searching using its brand name instead of its generic name.  She entered an override that allowed her to access additional medications and selected the wrong one, a powerful paralyzing agent.  The nurse and other medical personnel noted that entering overrides on the cabinets was a common practice.

VUMC’s problems extended well beyond troublesome medicine cabinets.  An investigator said VUMC had “a heavy burden of responsibility in this matter.”  VUMC did not report the medication error as required by law and told the local medical examiner’s office that the patient died of “natural” causes.  VUMC avoided criminal charges because prosecutors didn’t think they could prove gross negligence. 

Our Perspective

As Bob observed in 2016, “The reality is that criminalization is at its core a “disincentive.”  To be effective it would have to deter actions or decisions that are not consistent with safety but not create a minefield of culpability. . . .  Its best use is probably as an ultimate boundary, to deter intentional misconduct but not be an unintended trap for bad judgment or inadequate performance.”

In the instant case, the nurse did not intend to cause harm but her conduct definitely reflected bad judgment and unacceptable performance.  She probably sealed her own fate when she told law enforcement she “probably just killed a patient” and the licensing board that she had been “complacent” and “distracted.”   

But we see plenty of faults in the larger system, mainly that VUMC used cabinets that held dangerous substances and had a history of technical glitches, but allowed users to routinely override cabinet controls to obtain needed medicines.  As far as we can tell, VUMC did not implement any compensating safety measures, such as requiring double checking by a colleague or a supervisor’s presence when overrides were performed or “dangerous” medications were withdrawn.

In addition, VUMC’s organizational culture was on full display with their inadequate and misleading reporting of the patient’s death.  VUMC has made no comment on the nurse’s case.  In our view, their overall strategy was to circle the wagons, seal off the wound, and dispose of the bad apple.  Nothing to see here, folks.

Going forward, the remaining VUMC nurses will be on high alert for a while, but their day-to-day task demands will eventually force them to employ risky behaviors in an environment that requires such behavior to accomplish the mission but lacks defense in depth to catch errors before they have drastic consequences.  The nurses will/should be demanding a safer work environment.

Bottom line: Will this event mark a significant moment for accountability in healthcare akin to the George Floyd incident’s impact on U.S. police practices?  You be the judge.

For additional Safetymatters insights click the healthcare label below.

 

*  All discussion of the VUMC incident is based on reporting by National Public Radio (NPR).  See B. Kelman, “As a nurse faces prison for a deadly error, her colleagues worry: Could I be next?” NPR, March 22, 2022; “In Nurse’s Trial, Investigator Says Hospital Bears ‘Heavy’ Responsibility for Patient Death,” NPR, March 24, 2022; “Former nurse found guilty in accidental injection death of 75-year-old patient,” NPR, March 25, 2022.

Friday, May 21, 2021

Healthcare Safety Culture and Interventions to Reduce Preventable Medical Errors

HHS OIG report cover

We have previously written about the shocking number of preventable errors in healthcare settings that result in injury or death to patients.  We have also discussed the importance of a strong safety culture (SC) in reducing healthcare error rates.  However, after 20 years of efforts, the needle has not significantly moved on overall injuries and deaths.  This post reviews healthcare’s concept of SC and research that ties SC to patient outcomes.  We offer our view on why interventions have not been more effective.

Healthcare’s Model of Safety Culture

Healthcare has a model for SC, shown in the SC primer on the Agency for Healthcare Research and Quality’s (AHRQ) Patient Safety Network website.*  The model contains these key cultural features:

  • acknowledgment of the high-risk nature of an organization's activities and the determination to achieve consistently safe operations
  • a blame-free environment** where individuals are able to report errors or near misses without fear of reprimand or punishment
  • encouragement of collaboration across ranks and disciplines to seek solutions to patient safety problems
  • organizational commitment of resources to address safety concerns.

We will critique this model later.

Healthcare Providers Believe Safety Culture is Important

A U.S. Department of Health and Human Services (HHS) report*** affirms healthcare providers’ belief that SC is important and can contribute to fewer errors and improved patient outcomes.

AHRQ administers the Patient Safety Organization (PSO) program which gathers data on patient safety events from healthcare providers.  In 2019, the HHS Office of Inspector General surveyed hospitals and PSOs to identify the PSO program’s value and challenges.  SC was one topic covered in the survey and the results confirm SC’s importance to providers.  “Among hospitals that work with PSOs, 80 percent find that feedback and analysis on patient safety events have helped prevent future events, and 72 percent find that such feedback has helped them understand the causes of events.” (p. 10)  Furthermore, “Nearly all (95 percent) hospitals that work with a PSO found that their PSOs have helped improve the culture of safety at their facilities.  A culture of safety is one that enables individuals to report errors without fear of reprimand and to collaborate on solutions.” (p. 11) 

Healthcare Research Connects SC to Interventions That Reduce Errors

AHRQ publishes the “Making Healthcare Safer” series of reports, which represent summaries of important research on selected patient safety practices (PSPs).  The most recent (2020) edition**** recognizes SC as a cross-cutting practice, i.e., SC impacts the effectiveness of many specific PSPs. 

The section on cross-cutting practices begins by noting that healthcare is trying to learn from the experience of high reliability organizations (HROs).  HROs have many safety-enhancing attributes, including committed leaders, a SC where staff identify and correct all deviations that could lead to unsafe conditions, an environment where adverse events or near misses are reported without fear of blame or recrimination, and practices to identify a problem’s scope, root causes, and appropriate solutions. (p. 17-1) 

The report identified several categories of practices that are used to improve healthcare SC: Leadership WalkRounds, Team Training, Comprehensive Unit-based Safety Programs (CUSP), and interventions that implemented multiple methods. (p. 17-13)

WalkRounds “involves leaders “walking around” to engage in face to face, candid discussions with frontline staff about patient safety incidents or near-misses.” (p. 17-16)  “Team training programs focus on enhancing teamwork skills and communication between healthcare providers . . .” (p. 17-17)  CUSP is a multi-step program to assess, intervene in, and reassess a healthcare unit’s SC. (p. 17-19)

The report also covers 17 specific areas where harm/errors can occur and highlights SC aspects associated with two such areas: developing rapid response teams and dealing with alarm fatigue in hospitals. 

Rapid response teams (RRTs) treat deteriorating hospital patients before adverse events occur. (p. 2-1)  Weak SC and healthcare hierarchies are barriers to successful implementation of RRTs. (p. 2-10)

Alarm fatigue occurs because of high exposure to medical device alarms, many of which are loud or false alarms, that lead to desensitization, missed alarms or delayed responses. (p. 13-1)  The cultural aspects of interventions focused on all staff members (not just nurses) assuming responsibility for addressing alarms. (p. 13-6) 

Our Perspective

We have three problems with healthcare’s efforts to reduce harm to patients: (1) the quasi-official healthcare mental model of safety culture is incomplete, (2) healthcare’s assumption that it can model itself on HROs ignores a critical systemic difference, and (3) an inadequate overall system model leads to fragmented, incremental improvement projects.

An inadequate model for SC

Healthcare does not have an adequate understanding of the necessary attributes of a strong SC.  

The features listed in the introduction of this post are necessary but not sufficient for a strong SC.  SC is more than good communications; it is part of the overall cultural system.  This system has feedback loops that can reinforce or extinguish attitudes and behaviors.  The attitudes of people in the system are heavily influenced by their trust in management to do the right thing.  Management’s behavior is influenced by their goals, policy constraints, environmental pressures, and incentives, including monetary compensation.

Top-to-bottom decision making in the system needs to be consistent, which means processes, priorities, practices, and rules should be defined and followed.  Goal conflicts must be consistently handled.  Decision makers must be identified to allow accountability.   Problems must be identified (without retribution except for cause), analyzed, and permanently fixed.

Lack of attention to the missing attributes is one reason that healthcare SC has been slow to strengthen and unfavorable patient outcomes are still at unacceptable levels. 

Healthcare is not a traditional HRO

The healthcare system looks to HROs for inspiration on SC but does not recognize one significant difference between a traditional HRO and healthcare.

When we consider other HROs, e.g., nuclear power plants, off-shore drilling operations, or commercial aviation, we understand that they have significant interactions with their respective environments, e.g., regulators, politicians, inspectors, suppliers, customers, activists, etc. 

Healthcare is different because its customers are basically the feedstock for the “factory” and healthcare has to accept those inputs “as is”; in other words, unlike a nuclear power plant, healthcare cannot define and enforce a set of specifications for its inputs.  The inputs (patients) arrive in a wide range of “as is” conditions, from simple injuries to multiple, interacting ailments.  The healthcare system has to accomplish two equally important objectives: (1) correctly identify a patient’s problem(s) and (2) fix them in a robust, cost-effective manner.  SC in the first phase should focus on obtaining the correct diagnosis; SC in the second phase should focus on performing the prescribed corrective actions according to approved procedures, and ensuring that expected results occur. 

Inadequate models lead to piecemeal interventions      

Healthcare’s simplistic mental model for SC is part of an inaccurate mental model for the overall system.  The current system model is fragmented and leads researchers and practitioners to think small (on silos) when they could be thinking big (on the enterprise).  An SC intervention that focuses on tightening process controls in one small area cannot move the needle on system-wide SC or overall patient outcomes.  For more on systems models, systemic challenges, and narrow interventions, see our Oct. 9, 2019 and Nov. 9, 2020 posts.  Click on the healthcare label below to see all of the related posts.

Bottom line: Healthcare SC can have a direct impact on the probability that specific harms will occur, and on their severity if they do, but accurate models of culture are essential. 

 

*  Agency for Healthcare Research and Quality, “Culture of Safety” (Sept. 2019).  Accessed May 4, 2021.  AHRQ is an organization within the U.S. Department of Health and Human Services.  Its mission includes producing evidence to make health care safer.

**  The “blame-free” environment has evolved into a “just culture” where human errors, especially those caused by the task system context, are tolerated but taking shortcuts and reckless behavior are disciplined.  Click on the just culture label for related posts.

***  U.S. Dept. of Health and Human Services Office of Inspector General, “Patient Safety Organizations: Hospital Participation, Value, and Challenges,” OEI-01-17-00420, Sept. 2019.

****  K.K. Hall et al, “Making Healthcare Safer III: A Critical Analysis of Existing and Emerging Patient Safety Practices,” AHRQ Pub. No. 20-0029-EF.  (Rockville, MD: AHRQ) March 2020.  This is a 1400 page report so we are only reporting relevant highlights.


Monday, November 9, 2020

Setting the Bar for Healthcare: Patient Care Goals from the Joint Commission

Joint Commission HQ
The need for a more effective safety culture (SC) in the field of healthcare is acute: every year tens of thousands of patients are injured or unnecessarily die while in U.S. hospitals. The scope of the problem became widely known with the publication of “To Err is Human: Building a Safer Health System”* in 2000. This report included two key observations: (1) the cause of the injuries and deaths is not bad people in health care, rather the people are working in bad systems that need to be made safer and (2) legitimate liability concerns discourage the reporting of errors, which means less feedback to the system and less learning from mistakes.

It's 20 years later. Is the healthcare system safer than it was in 2000? Yes. Is safety performance at a satisfactory level? No.

For evidence, we need look no further than a Nov. 18, 2019 blog post** by Dr. Mark Chassin, president and CEO of the Joint Commission (JC), the entity responsible for establishing standards for healthcare functions and patient care, and evaluating, accrediting, and certifying healthcare organizations based on their compliance with the standards.

Dr. Chassin summarized the current situation as follows: “The health care industry has directed a substantial amount of time, effort, and resources at solving the problems, and we have seen some progress. That progress has typically occurred one project at a time, with hard-working quality professionals applying a “one-size-fits-all” best practice to address each problem. The resulting improvements have been pretty modest, difficult to sustain, and even more difficult to spread.”

Going forward, he says the industry can make substantial progress by committing to zero harm, overhauling the organizational culture, and utilizing proven process improvement techniques. He singles out the aviation and nuclear power industries for having similar commitments.

But achieving substantial, sustained improvement is a big lift. To get a feel for how big, let's look at the 2020 goals and strategies the JC has established for patient care in hospitals; in other words, where the performance bar is set today.*** We will try to inform your own judgment about their scope and sufficiency by comparing them with corresponding activities in the nuclear power industry.

1. Identify patients correctly by using at least two ways to identify them.

This is a major challenge in a hospital where many patients are entering and leaving the system every day, being transferred to and from different departments, and being treated by multiple individuals who have different roles and ranks, and are treating patients at different levels of intensity for different periods of time. There is really no analogue in the closed, controlled personnel environment of a power plant.

2. Improve staff communication by getting important test results to the right staff person on time.

This should be a familiar challenge to people in any organization, including a power plant, where functions may exist in different organizational silos with their own procedures, vocabulary, and priorities.

3. Use medicines safely by labeling medicines that are not labeled, taking extra care with patients on blood thinners, and managing patients' medicine records for accuracy, completeness, and possible interactions.

This is similar to requirements to accurately label, control, and manage the use of all chemicals used in an industrial facility.

4. Use alarms safely by ensuring that alarms on medical equipment are heard and responded to on time.

In a hospital, it is a problem when multiple alarms are going off at the same time, with differing degrees of urgency for personnel attention and response. In power plants, operators have been known to turn off alarms that are reporting too many false positives. These situations call out for operating and maintenance standards and practices that ensure all activated alarms are valid and deserving of a response.

5. Prevent infection by adhering to Centers for Disease Control or World Health Organization hand cleaning guidelines.

The aim is to keep bad bugs from circulating. Compare this practice to the myriad procedures, personnel, and equipment dedicated to ensuring nuclear power plant radioactivity is kept in an identified, controlled, and secure environment.

6. Identify patient safety risks by reducing the risk for suicide.

Compare this with the wellness, fitness for duty, and behavioral observation programs at every nuclear power plant.

7. Prevent mistakes in surgery by making sure that the correct surgery is done on the correct patient and at the correct place on the patient’s body, and pausing before the surgery to make sure that a mistake is not being made.

This is similar to tailgate meetings before maintenance activities and using the STAR (Stop-Think-Act-Review) approach before and during work. Think of the potential for error in mirror-image plants; people are bilateral but subject to similar risks.

Our Perspective

The JC's set of goals is thin gruel to show after 20 years. In our view, efforts to date reflect two major shortcomings: a lack of progress in defining and strengthening SC, and a lack of any shared understanding of what the relevant system consists of, how it functions, and how to improve it.

Safety Culture

Our July 31, 2020 post on When We Do Harm by Dr. Danielle Ofri discussed the key attributes for a strong healthcare SC, i.e., one where the probability of errors is much lower than it is today. In Ofri's view, the primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done, even if it's not in their specific job description, amid a diffusion of responsibility in their task environment. Secondly, all members of the organization, regardless of status, should have the ability (or duty even) to point out problems and errors without fear of retribution. The culture should regard reporting an adverse event as a routine and ordinary task. Third, organizational leaders, including but not limited to senior managers, must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true. There should be deference to proven expertise and widely held authority to say “stop” when problems become apparent.

The Healthcare System

The healthcare system includes the providers, the supporting infrastructure, external environmental factors, e.g., regulators and insurance companies, the patients and their families, and all the interrelationships and dynamics between these components. An important dynamic is feedback, where the quality and quantity of output from one component influences performance in other system components. System dynamics create homeostasis, fluctuations, and all levels of performance from superior to failure. Other organizational variables, e.g., management decision-making practices and priorities, and the compensation scheme, provide context for system functioning. For more on system attributes, please see our Oct. 9, 2019 post or click the healthcare label.
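
The homeostasis point lends itself to a minimal sketch: a balancing (negative) feedback loop pulls a component’s output back toward a target after each disturbance, and weakening that feedback lets performance drift. The gains and disturbance sizes below are illustrative assumptions, not data.

```python
# Minimal sketch of homeostasis from a balancing feedback loop: other system
# components push a unit's output back toward a target after each disturbance.
# Gains and disturbance sizes are illustrative assumptions.

import random

def run(feedback_gain, steps=12, target=1.0):
    random.seed(0)                                  # identical disturbances for each run
    output = target
    for _ in range(steps):
        disturbance = random.uniform(-0.10, 0.02)   # workload spikes, turnover, etc.
        correction = feedback_gain * (target - output)
        output += disturbance + correction
    return output

print(f"strong feedback: output ends near {run(0.8):.2f}")  # held close to target
print(f"weak feedback:   output ends near {run(0.1):.2f}")  # drifts away from target
```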

Bottom line: Compare the JC's efforts with the vast array of safety and SC-related policies, procedures, practices, activities, and dedicated personnel in your workplace. Healthcare has a long way to go.


* Institute of Medicine (L.T. Kohn et al), “To Err Is Human: Building a Safer Health System” (Washington, D.C.: The National Academies Press) 2000. Retrieved Nov. 5, 2020.

** M. Chassin, “To Err is Human: The Next 20 Years,” blog post (Nov. 18, 2019).  Retrieved Nov. 1, 2020.

***  The Joint Commission, “2020 Hospital National Patient Safety Goals,” simplified version (July 2020). Retrieved Nov. 1, 2020.


Monday, June 15, 2020

IAEA Working Paper on Safety Culture Traits and Attributes

Working paper cover
The International Atomic Energy Agency (IAEA) has released a working paper* that attempts to integrate (“harmonize”) the efforts by several different entities** to identify and describe desirable safety culture (SC) traits and attributes.  The authors have also tried to make the language of SC less nuclear power specific, i.e., more general and thus helpful to other fields that deal with ionizing radiation, such as healthcare.  Below we list the 10 traits and highlight the associated attributes that we believe are most vital for a strong SC.  We also offer our suggestions for enhancing the attributes to broaden and strengthen the associated trait’s presence in the organization.

Individual Responsibility 


All individuals associated with an organization know and adhere to its standards and expectations.  Individuals promote safe behaviors in all situations, collaborate with other individuals and groups to ensure safety, and “accept the value of diverse thinking in optimizing safety.”

We applaud the positive mention of “diverse thinking.”  We also believe each individual should have the duty to report unsafe situations or behavior to the appropriate authority and this duty should be specified in the attributes.

Questioning Attitude 


Individuals watch for anomalies, conditions, behaviors or activities that can adversely impact safety.  They stop when they are uncertain and get advice or help.  They try to avoid complacency.  “They understand that the technologies are complex and may fail in unforeseen ways . . .” and speak up when they believe something is incorrect.

Acknowledging that technology may “fail in unforeseen ways” is important.  Probabilistic Risk Assessments and similar analyses do not identify all the possible ways bad things can happen. 

Communication

Individuals communicate openly and candidly throughout the organization.  Communication with external organizations and the public is accurate.  The reasons for decisions are communicated.  The expectation that safety is emphasized over competing goals is regularly reinforced.

Leader Responsibility

Leaders place safety above competing goals, model desired safety behaviors, frequently visit work areas, involve individuals at all levels in identifying and resolving issues, and ensure that resources are available and adequate.

“Leaders ensure rewards and sanctions encourage attitudes and behaviors that promote safety.”  An organization’s reward system is a hot button issue for us.  Previous SC framework documents have never addressed management compensation and this one doesn’t either.  If SC and safety performance are important then people from top executives to individual workers should be rewarded (by which we mean paid money) for doing it well.

Leaders should also address work backlogs.  Backlogs send a signal to the organization that sub-optimal conditions are tolerated and, if such conditions continue long enough,  are implicitly acceptable.  Backlogs encourage workarounds and lack of attention to detail, which will eventually create challenges to the safety management system.  

Decision-Making

“Individuals use a consistent, systematic approach to evaluate relevant factors, including risk, when making decisions.”  Organizations develop the ability to adapt in anticipation of unforeseen situations where no procedure or plan applies.

We believe the decision making process should be robust, i.e., different individuals or groups facing the same issue should come up with the same or an equally effective solution.  The organization’s approach to decision making (goals, priorities, steps, etc.) should be documented to the extent practical.  Robustness and transparency support efficient, effective communication of the reasons for decisions.

Work Environment 


“Trust and respect permeate the organization. . . . Differing opinions are encouraged, discussed, and thoughtfully considered.”

In addition, senior managers need to be trusted to tell the truth, do the right things, and not sacrifice subordinates to evade the managers’ own responsibilities.

Continuous Learning 


The organization uses multiple approaches to learn including independent and self-assessments, lessons learned from their own experience, and benchmarking other organizations.

Problem Identification and Resolution

“Issues are thoroughly evaluated to determine underlying causes and whether the issue exists in other areas. . . . The effectiveness of the actions is assessed to ensure issues are adequately addressed. . . . Issues are analysed to identify possible patterns and trends. A broad range of information is evaluated to obtain a holistic view of causes and results.”

This is good but could be stronger.  Leaders should ensure the most knowledgeable individuals, regardless of their role or rank, are involved in addressing an issue. Problem solvers should think about the systemic relationships of issues, e.g., is an issue caused by activity in or feedback from some other sub-system, the result of a built-in time delay, or performance drift that exceeded the system’s capacities?  Will the proposed fix permanently address the issue or is it just a band-aid?

Raising Concerns

The organization encourages personnel to raise safety concerns and does not tolerate harassment, intimidation, retaliation or discrimination for raising safety concerns. 

This is the essence of a Safety Conscious Work Environment and is a sine qua non for any high hazard undertaking.

Work Planning 


“Work is planned and conducted such that safety margins are preserved.”

Our Perspective

We have never been shy about criticizing IAEA for some of its feckless efforts to get out in front of the SC parade and pretend to be the drum major.***  However, in this case the agency has been content, so far, to build on the work of others.  It’s difficult for any organization to develop, implement, and maintain a strong, robust SC and the existence of many different SC guidebooks has never been helpful.  This is one step in the right direction.  We’d like to see other high hazard industries, in particular healthcare organizations such as hospitals, take to heart SC lessons learned from the nuclear industry.

Bottom line: This concise paper is worth checking out.


*  IAEA Working Document, “A Harmonized Safety Culture Model” (May 5, 2020).  This document is not an official IAEA publication.

**  Including IAEA, WANO, INPO, and government institutions from the United States, Japan, and Finland.

***  See, for example, our August 1, 2016 post on IAEA’s document describing how to perform safety culture self-assessments.  Click on the IAEA label to see all posts related to IAEA.

Tuesday, November 21, 2017

Any Lessons for Nuclear Safety Culture from VW’s Initiative to Improve Its Compliance Culture?

VW Logo (Source: Wikipedia)
The Wall Street Journal (WSJ) recently published an interview* with the head of the new compliance department in Volkswagen’s U.S. subsidiary.  The new executive outlined the department’s goals and immediate actions related to improving VW’s compliance culture.  They will all look familiar to you, including a new organization (headed by a former consultant) reporting directly to the CEO and with independent access to the board; mandatory compliance training; a new code of conduct; and developing a questioning attitude among employees.  One additional attribute deserves a brief expansion.  VW aims to improve employees’ decision making skills.  We’re not exactly sure what that means but if it includes providing more information about corporate policies and legal, social and regulatory expectations (in other words, the context of decisions) then we approve.

Our Perspective 


These interventions could be from a first generation nuclear safety culture (NSC) handbook on efforts to demonstrate management interest and action when a weak culture is recognized.  Such activities are necessary but definitely not sufficient to strengthen culture.  Some specific shortcomings follow.

First, the lack of reflection.  When asked about the causes of VW’s compliance failures, the executive said “I can’t speculate on the failures . . .”  Well, she should have had something to say on the matter, even party line bromides.  We’re left with the impression she doesn’t know, or care, about the specific and systemic causes of VW’s “Dieselgate” problems that are costing the company tens of billions of dollars.  After all, this interview was in the WSJ, available to millions of critical readers, not some trade rag.

Second, the trust issue.  VW wants employees who can be trusted by the organization, presumably to do “the right thing” as they go about their business.  That’s OK but it’s even more important to have senior managers who can be trusted to do the right thing.  This is especially relevant for VW because it’s pretty clear the cheating problems were tolerated, if not explicitly promoted, by senior management; in other words, there was a top-down issue in addition to lower-level employee malfeasance.

Next, the local nature of the announced interventions.  The new compliance department is for VW-USA only.  The Volkswagen Group of America includes one assembly plant, sales and maintenance support functions, test centers and VW’s consumer finance entity.  It’s probably safe to say that VW’s most important decisions regarding corporate practices and product engineering are made in Wolfsburg, Lower Saxony and not Herndon, Virginia.

Finally, the elephant in the room.  There is no mention of VW’s employee reward and recognition system or the senior management compensation program.  We have long argued that employees focus on actions that will secure their jobs (and perhaps lead to promotions) while senior managers focus on what they’re being paid to accomplish.  For the latter group in the nuclear industry, that’s usually production with safety as a should-do but with little, if any, money attached.  We don’t believe VW is significantly different.

Bottom line: If this WSJ interview is representative of the auto industry’s understanding of culture, then once again nuclear industry thought leaders have a more sophisticated and complete grasp of cultural dynamics and nuances.

We have commented before on the VW imbroglio.  See our Dec. 20, 2015 and May 31, 2016 posts or click on the VW label.


*  B. DiPietro, “Working to Change Compliance Culture at Volkswagen,” Wall Street Journal (Nov. 16, 2017).

Friday, October 6, 2017

WANO and NEA to Cooperate on Nuclear Safety Culture

World Nuclear News Oct. 4, 2017
According to an item* in World Nuclear News, the World Association of Nuclear Operators (WANO) and the Organisation for Economic Co-operation and Development’s Nuclear Energy Agency (NEA) signed a memorandum of understanding to cooperate on "the further development of approaches, practices and methods in order to proactively strengthen global nuclear safety."

One objective is to “enhance the common understanding of nuclear safety culture challenges . . .”  In addition, the parties have identified safety culture (SC) as a "fundamental subject of common interest" and plan to launch a series of "country-specific discussions to explore the influence of national culture on the safety culture".

Our Perspective

As usual, the press release touts all the benefits that are going to flow from the new relationship.  We predict the flow will be at best a trickle based on what we’ve seen from the principals over the years.  Following is our take on the two entities.

WANO is an association of the world's nuclear power operators.  Their objective is to exchange safety knowledge and operating experience among its members.  We have mentioned WANO in several Safetymatters posts, including Jan. 23, 2015, Jan. 7, 2015, Jan. 21, 2014 and May 1, 2010.  Their public contributions are generally shallow and insipid.  WANO may be effective at facilitating information sharing but it has no real authority over operators.  It is, however, an overhead cost for the economically uncompetitive commercial nuclear industry. 

NEA is an intergovernmental agency that facilitates cooperation among countries with nuclear technology infrastructures.  In our March 3, 2016 post we characterized NEA as an “empty suit” that produces cheerleading and blather.  We stand by that assessment.  In Safetymatters’ history, we have come across only one example of NEA adding value—when they published a document that encouraged regulators to take a systems view of SC.  See our Feb. 10, 2016 post for details.

No one should expect this new arrangement to lead to any breakthroughs in SC theory or insights into SC practice.  It will lead to meetings, conferences, workshops and boondoggles.  One hopes it doesn’t indirectly raise the industry’s costs or, more importantly, distract WANO from its core mission of sharing safety information and operating experience across the international nuclear industry. 


*  “WANO, NEA enhance cooperation in nuclear safety,” World Nuclear News (Oct. 4, 2017).

Tuesday, September 26, 2017

“New” IAEA Nuclear Safety Culture Self-Assessment Methodology

IAEA report cover
The International Atomic Energy Agency (IAEA) touted its safety culture (SC) self-assessment methodology at the Regulatory Cooperation Forum held during the recent IAEA 61st General Conference.  Their press release* refers to the methodology as “new” but it’s not exactly fresh from the factory.  We assume the IAEA presentation was based on a publication titled “Performing Safety Culture Self-assessments”** which was published in June 2016 and we reviewed on Aug. 1, 2016.  We encourage you to read our full review; it is too lengthy to reasonably summarize in this post.  Suffice to say the publication includes some worthwhile SC information and descriptions of relevant SC assessment practices but it also exhibits some execrable shortcomings.


*  IAEA, “New IAEA Self-Assessment Methodology and Enhancing SMR Licensing Discussed at Regulatory Cooperation Forum” (Sept. 22, 2017).

**  IAEA, “Performing Safety Culture Self-assessments,” Safety Reports Series no. 83 (Vienna: IAEA, 2016).