Monday, December 3, 2018

Nuclear Safety Culture: Lessons from Factfulness by Hans Rosling

This book* is about biases that prevent us from making fact-based decisions.  It is based on the author’s world-wide work as a doctor and public health researcher.  We saw it on Bill Gates’ 2018 summer reading list.

Rosling discusses ten instincts (or reasons) why our individual worldviews (or mental models) are systematically wrong and prevent us from seeing situations as they truly are and making fact-based decisions about them.

Rosling mostly addresses global issues but the same instincts can affect our approach to work-related decision making from the enterprise level down to the individual.  We briefly discuss each instinct and highlight how it may hinder us from making good decisions during everyday work and one-off investigations.

The gap instinct

This is “that irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between.” (p. 26)  This is reinforced by our “strong dramatic instinct toward binary thinking . . .” (p. 42)  The gap instinct can apply to our thinking about safety, e.g., in the Safety I mental model there is acceptable performance and intolerable performance, with no middle ground and no normal transitions back and forth.  Rosling notes that there is usually no clear cleavage between two groups, even if the averages make it seem so.  We saw this in Dekker's analysis of health provider data (reviewed Oct. 29, 2018), where both favorable and unfavorable patient outcomes exhibited the same negative work process traits.

The negativity instinct

This is “our tendency to notice the bad more than the good.” (p. 51)  We do not perceive improvements that are “too slow, too fragmented, or too small one-by-one to ever qualify as news.” (p. 54)  “There are three things going on here: the misremembering of the past [erroneously glorifying the “good old days”]; selective reporting by journalists and activists; and the feeling that as long as things are bad it’s heartless to say they are getting better.” (p. 70)  Truth be told, we see the opposite instinct inside the nuclear world, where facilities with long-standing cultural problems (i.e., bad) constantly report progress (i.e., getting better) while their cultural conditions remain unacceptable.

The straight line instinct

This is the expectation that a line of data will continue straight into the future.  Most of you have technical training or exposure and know that accurate extrapolations can take many shapes, including straight lines, s-bends, asymptotes, humps, and exponential growth.
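
To see the instinct at work, consider a minimal Python sketch (all numbers invented for illustration) in which two curves nearly agree over the observed range yet diverge sharply when extrapolated:

import numpy as np

def straight(t):
    return 1.5 * t + 0.5  # the "line will continue" assumption

def s_bend(t):
    return 10.0 / (1.0 + np.exp(-0.9 * (t - 3.0)))  # logistic curve, saturates near 10

observed = np.arange(0, 5)        # the range we actually have data for
future = np.array([8, 10, 15])    # where the instinct misleads us

print("observed:", straight(observed).round(1), s_bend(observed).round(1))
print("forecast:", straight(future).round(1), s_bend(future).round(1))
# Over t = 0..4 the two curves track each other (0.5-6.5 vs. 0.6-7.1), but
# by t = 15 the line predicts 23 while the s-bend has flattened near 10.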

The fear instinct

“[F]ears are hardwired deep in our brains for obvious evolutionary reasons.” (p. 105)  “The media cannot resist tapping into our fear instinct. It is such an easy way to grab our attention.” (p. 106)  Rosling observes that hundreds of elderly people who fled Fukushima to escape radiation ended up dying “because of the mental and physical stresses of the evacuation itself or of life in evacuation shelters.” (p. 114)  In other words, they fled something frightening (a perceived risk) and ended up in danger (a real risk).  How often does fear, e.g., fear of bad press, enter into your organization’s decision making?

The size instinct

We overweight things that look big to us.  “It is instinctive to look at a lonely number and misjudge its importance.  It is also instinctive . . . to misjudge the importance of a single instance or an identifiable victim.” (p. 125)  Does the nuclear industry overreact to some single instances?
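
Rosling’s antidotes are to compare and divide: a lone number means little until it is set against a relevant denominator.  A trivial Python illustration, with figures invented for the arithmetic:

events_plant_a = 40        # raw event count; looks "big"
events_plant_b = 12
hours_plant_a = 8000.0     # operating hours over the same period
hours_plant_b = 1500.0

rate_a = events_plant_a / hours_plant_a   # 0.005 events per operating hour
rate_b = events_plant_b / hours_plant_b   # 0.008 events per operating hour

print(f"Plant A: {events_plant_a} events, {rate_a:.4f} per hour")
print(f"Plant B: {events_plant_b} events, {rate_b:.4f} per hour")
# The "big" count (40) belongs to the plant with the lower event rate;
# the lonely number pointed at the wrong plant.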

The generalization instinct

“[T]he generalization instinct makes “us” think of “them” as all the same.” (p. 140)  At the macro level, this is where the bad “isms” exist: racism, sexism, ageism, classism, etc.  But your coworkers may practice generalization on a more subtle, micro level.  How many people do you work with who think the root cause of most incidents is human error?  Or somewhat more generously, human error, inadequate procedures and/or equipment malfunctions—but not the larger socio-technical system?  Do people jump to conclusions based on an inadequate or incorrect categorization of a problem?  Are categories, rather than facts, used as explanations?  Are vivid examples used to over-glamorize alleged progress or over-dramatize poor outcomes?

The destiny instinct

“The destiny instinct is the idea that innate characteristics determine the destinies of people, countries, religions, or cultures.” (p. 158)  Culture includes deep-seated beliefs, where feelings can be disguised as facts.  Does your work culture assume that some people are naturally bad apples?

The single perspective instinct

This is a preference for single causes and single solutions.  It is the fundamental weakness of Safety I, where the underlying attitude is that problems arise from individuals who need to be better controlled.  Rosling advises us to “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (p. 189)  We agree.

The blame instinct

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened. . . . when things go wrong, it must be because of some bad individual with bad intentions. . . . This undermines our ability to solve the problem, or prevent it from happening again, . . . To understand most of the world’s significant problems we have to look beyond a guilty individual and to the system.” (p. 192)  “Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to.  Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation.  Look for systems, not heroes.” (p. 204)  We totally agree with Rosling’s endorsement of a systems approach.

The urgency instinct

“The call to action makes you think less critically, decide more quickly, and act now.” (p. 209)  In a true emergency, people will fall back on their training (if any) and hope for the best.  However, in most situations, you should seek more information.  Beware of data that is relevant but inaccurate, or accurate but irrelevant.  Be wary of predictions that fail to acknowledge that the future is uncertain.

Our Perspective

The series of decisions an organization makes is a visible artifact of its culture, and its decision making process internalizes that culture.  Because of this linkage, we have long been interested in how organizations and individuals can make better decisions, where “better” means fact- and reality-based and consistent with the organization’s mission and espoused values.

We have reviewed many works that deal with decision making.  This book adds value because it is based on the author’s research and observations around the world; it is not based on controlled studies in a laboratory or observations in a single organization.  It uses very good graphics to illustrate various data sets, including changes, e.g., progress, over time.

Rosling believed “it has never been easier or more important for business leaders and employees to act on a fact-based worldview.” (p. 228)   His book is engagingly written and easy to read.  It is Rosling’s swan song; he died in 2017.

Bottom line: Rosling advocates for robust decision making, accurate mental models, and a systems approach.  We like it.


*  H. Rosling, O. Rosling and A.R. Rönnlund, Factfulness, 1st ed. ebook (New York: Flatiron, 2018).

Friday, November 9, 2018

Nuclear Safety Culture: Lessons from Turn the Ship Around! by L. David Marquet

Turn the Ship Around!* was written by a U.S. Navy officer who was assigned to command a submarine with a poor performance history.  He adopted a management approach that was radically different from the traditional top-down, leader-follower, “I say, you do” Navy model for controlling people.  The new captain’s primary method was to push decision making down to the lowest practical organizational levels; he supported his crew’s new authorities (and maintained control of the overall situation) with strategies to increase their competence and provide clarity on the organization’s purpose and goals.

Specific management practices were implemented or enhanced to support the overall approach.  For example, decision making guidelines were developed and disseminated.  Attaining goals was stressed over mindlessly following procedures.  Crew members were instructed to “think out loud” before initiating action; this practice communicated intent and increased organizational resilience because it created opportunities for others to identify potential errors before they could occur and propagate.  Pre-job briefs were changed from the supervisor reciting the procedure to asking participants questions about their roles and preparation.

As a result, several organizational characteristics that we have long promoted became more evident, including deferring to expertise (getting the most informed, capable people involved with a decision), increased trust, and a shared mental model of vision, purpose and organizational functioning.

As you can surmise, his approach worked.  (If it hadn’t, Marquet would have had a foreshortened career and there would be no book.)  All significant operational and personnel metrics improved under his command.  His subordinates and other crew members became highly promotable.  Importantly, the boat’s performance continued at a high level after he completed his tour; in other words, he established a system for success that could live on without his personal involvement.

Our Perspective

This book provides a sharp contrast to nuclear industry folklore that promotes strong, omniscient leadership as the answer to every problem situation.  Marquet did not act out the role of the lone hero; instead, he built a management system that created superior performance while he was in command and after he moved on.  There can be valuable lessons here for nuclear managers, but one has to appreciate the particular requirements for undertaking this type of approach.

The manager’s attitude

You have to be willing to share some (maybe a lot) of your authority with your subordinates, their subordinates and so forth on down the line while still being held to account by your bosses for your unit’s performance.  Not everyone can do this.  It requires faith in the new system and your people and a certain detachment from short-term concerns about your own career.  You also need to have sufficient self-awareness to learn from mistakes as you move forward and recognize when you are failing to walk the talk with your subordinates.

In Marquet’s case, there were two important precursors to his grand experiment.  First, he had seen on previous assignments how demoralizing top-down micromanagement could be vs. how liberating and motivating it was for him (as a subordinate officer) to actually be allowed to make decisions.  Second, he had been training for a year on how to command a sub of a design different from the boat to which he was eventually assigned; he couldn’t go in and micromanage everyone from the get-go because he didn’t have sufficient technical knowledge.

The work environment

Marquet had one tremendous advantage: from a social perspective, a submarine is largely a self-contained world.  He did not have to worry about what people in the department next door were doing; he only had to get his remote boss to go along with his plan.  If you’re a nuclear plant department head and you want to adopt this approach but the rest of the organization runs top-down, it may be rough sledding unless you do lots of prep work to educate your superiors and get them to support you, perhaps for a pilot or trial project.

The book is easy reading, with short chapters, lots of illustrative examples (including some interesting information on how the Navy and nuclear submarines work), sufficient how-to lists, and discussion questions at the end of chapters.  Marquet did not invent his approach or techniques out of thin air.  As an example, some of his ideas and prescriptions, including rejecting the traditional Navy top-down leadership model, setting clear goals, providing principles for guiding decision making, enforcing reflection after making mistakes, giving people tools and advantages but holding them accountable, and culling people who can’t get with the program**, are similar to points in Ray Dalio’s Principles, which we reviewed on April 17, 2018.  This is not surprising.  Effective, self-aware leaders should share some common managerial insights.

Bottom line: Read this book to see a real-world example of how authentic employee empowerment can work.


*  L.D. Marquet, Turn the Ship Around! (New York: Penguin, 2012).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  People have different levels of appetite for empowerment or other forms of participatory management.  Not everyone wants to be fully empowered, highly self-motivated or expected to show lots of initiative.  You may end up with employees who never buy into your new program and, in the worst case, you won’t be allowed to get rid of them.

Monday, October 29, 2018

Safety Culture: What are the Contributors to “Bad” Outcomes Versus “Good” Outcomes and Why Don’t Some Interventions Lead to Improved Safety Performance?

Sidney Dekker recently revisited* some interesting research he led at a large health care authority.  The authority’s track record was not atypical for health care: 1 out of 13 (7%) patients was hurt in the process of receiving care.  The authority investigated the problem cases and identified a familiar cluster of negative factors, including workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations—the list goes on.  The interventions will also be familiar to you—identify who did what wrong, add more rules, try harder and get rid of bad apples—but were not reducing the adverse event rate.

Dekker’s team took a different perspective and looked at the 93% of patients who were not harmed.  What was going on in their cases?  To their surprise, the team found the same factors: workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations, etc.** 

Dekker uses this research to highlight a key difference between the traditional view of safety management, Safety I, and the more contemporary view, Safety II.  At its heart, Safety I believes the source of problems lies with the individual so interventions focus on ways to make the individual’s work behavior more reliable, i.e., less likely to deviate from the idealized form specified by work designers.  Safety I ignores the fact that the same imperfections exist in work with both successful and problematic outcomes.
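
The statistical point is worth making explicit: if a factor (say, workarounds) appears at the same rate in good and bad outcomes, observing it tells you nothing about which outcome will follow.  A back-of-the-envelope Bayes calculation in Python, with the factor’s prevalence invented for illustration:

p_harm = 0.07                 # roughly 1 in 13 patients harmed (Dekker's data)
p_factor_given_harm = 0.80    # hypothetical: workarounds seen in 80% of bad cases
p_factor_given_ok = 0.80      # ...and, per the study, just as often in good cases

p_factor = (p_factor_given_harm * p_harm
            + p_factor_given_ok * (1.0 - p_harm))
p_harm_given_factor = p_factor_given_harm * p_harm / p_factor   # Bayes' rule

print(f"P(harm) = {p_harm:.2f}, P(harm | workarounds) = {p_harm_given_factor:.2f}")
# Both print 0.07: a factor equally common in both groups does not
# discriminate between them, which is exactly Dekker's point.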

In contrast, Safety II sees the source of problems in the system, the dynamic combination of technology, environmental factors, organizational aspects, and individual cognition and choices.  Referencing the work of Diane Vaughan, Dekker says “the interior life of organizations is always messy, only partially well-coordinated and full of adaptations, nuances, sacrifices and work that is done in ways that is quite different from any idealized image of it.”

Revisiting the data revealed that the work with good outcomes was different.  This work had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.  As you know, these are important characteristics of a strong safety culture.

Our Perspective

Dekker’s essay is a good introduction to the differences between Safety I and Safety II thinking, most importantly their differing mental models of the way work is actually performed in organizations.  In Safety I, the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In Safety II, normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors occur.

If one applies Safety I thinking to a “bad” outcome then the most likely result from an effective intervention is that the exact same problem will not happen again.  This thinking sustains a robust cottage industry in root-cause analysis because new problems will always arise and no changes are made to the system itself.

We like Dekker’s (and Vaughan’s) work and have reported on it several times in Safetymatters (click on the Dekker and Vaughan labels to bring up related posts).  We have been emphasizing some of the same points, especially the need for a systems view, since we started Safetymatters almost ten years ago.

Individual Exercise: Again drawing on Vaughan, Dekker says “there is often no discernable difference between the organization that is about to have an accident or adverse event, and the one that won’t, or the one that just had one.”  Look around your organization and review your career experience; is that true?


*  S. Dekker, “Why Do Things Go Right?,” SafetyDifferently website (Sept. 28, 2018).  Retrieved Oct. 25, 2018.

**  This is actually rational.  People operate on feedback, and if the shortcuts, workarounds and disregarded guidelines did not lead to acceptable (or at least tolerable) results most of the time, folks would stop using them.

Friday, July 6, 2018

WANO Publicizes Projects That Promote Safety But Short-Changes Nuclear Safety Culture

The World Association of Nuclear Operators (WANO) recently announced* the completion and delivery of 12 post-Fukushima projects intended to enhance safety in the world’s commercial nuclear power plants.  It appears the projects were accomplished by a combination of WANO and member personnel.  An addendum to the press release describes how WANO has revised its own practices to more effectively deliver its services in the 12 project areas to members.  The projects address emergency preparedness, emergency support plan, severe accident management, early event notification, onsite fuel storage, design safety fundamentals, peer review frequency and equivalency, corporate peer reviews, WANO assessment, transparency and visibility, and WANO internal assessment. 

Our Perspective

We usually don’t waste time with WANO because it has never developed or promoted any insight into the systemic interactions of the management and cultural variables that create ongoing nuclear organizational performance.  And the results they are touting are based on their familiar, inadequate worldview, viz., promoting more leadership development and adding more detail to functional areas.

That said, we recognize that incremental improvements in the project areas might add some modest value and hopefully do not hurt performance.  (Performance may be “hurt” when personnel punctiliously and mindlessly follow policies, rules and procedures without considering if they are actually appropriate for the situation at hand.)

Most of WANO’s claims for improving its own services are typical chest-thumping but a few items perpetuate long-standing industry shortcomings, especially excessive secrecy.  For example, under design safety fundamentals WANO peer reviews assess whether safety-related design features are appropriately managed but “WANO does not make design-change recommendations or evaluate the design of the plant itself.”  WANO assessments of utility/plant performance are confidential to the subject CEOs.  And WANO’s concept of improving transparency means “effectively sharing information and best practices within the membership.”  Looks like WANO’s prime directive is to shield the dues-paying members from any hard questions or external criticism.

Our biggest gripe is WANO’s treatment, or lack thereof, of nuclear safety culture (NSC).  In the press release, culture is mentioned once: Mid-to-senior level “managers at nuclear power plants play a vital part in delivering excellence and a strong nuclear safety culture, due to their positional influence throughout the organisation.”  That’s true, but culture is much more pervasive, systemic and important than that.

We find it surreal that WANO has been busy organizing worldwide resources to polish the bowling ball** and then claiming they have made the industry safer post-Fukushima.  Linking their putative progress to Fukushima ignores a fundamental truth: while weaknesses in various functional areas were causal factors that made a bad situation worse, the root cause of the Fukushima disaster was the deep-seated, value-driven unwillingness of people who knew about the tsunami design inadequacies to speak truth to power.  It was culture that killed the plant.


*  WANO press release, “WANO calls on industry to build on progress after post-Fukushima improvements” (June 26, 2018).  Retrieved July 5, 2018.

**  “polish a bowling ball” - A phrase we use to describe activities that make an existing construct shinier but have no impact on its fundamental nature or effectiveness.

Wednesday, June 20, 2018

Catching Up with Nuclear Safety Culture’s Bad Boys: Entergy and TVA

We haven’t reported in a while on the activities of the two plant operators who dominate the negative news in the Nuclear Safety Culture (NSC) space, viz., Entergy and TVA.  Spoiler alert: there is nothing novel or unexpected to report, only the latest chapters in their respective ongoing sagas.

Entergy

On March 12, 2018 the NRC issued a Confirmatory Order* (CO) to Entergy for violations at the Grand Gulf plant: (1) an examination proctor provided assistance to trainees and (2) nonlicensed operators did not tour all required watch station areas and entered inaccurate information into the operator logs.  The NRC characterized these as willful violations.  As has become customary, Entergy requested Alternative Dispute Resolution (ADR).  Entergy agreed to communicate fleet-wide the company’s intolerance for willful misconduct, evaluate why prior CO-driven corrective actions failed to prevent the current violations, conduct periodic effectiveness reviews of corrective actions, and conduct periodic “organizational health surveys” to identify NSC concerns that could contribute to willful misconduct.

On March 29, 2018 the NRC reported** on Arkansas Nuclear One’s (ANO’s) progress in implementing actions required by a June 17, 2016 Confirmatory Action Letter (CAL).  (We reported at length on ANO’s problems on June 25, 2015 and June 16, 2016.)  A weak NSC has been a major contributor to ANO’s woes.  The NRC inspection team concluded that all but one of the corrective actions were implemented and effective, and closed those items.  The NRC also concluded that actions taken to address two inspection focus areas and two Yellow findings were also satisfactory.

On April 20, 2018 the NRC reported*** on ANO’s actions to address a White inspection finding.  They concluded the actions were satisfactory and noted that ANO’s root cause evaluation had identified nine NSC aspects with weaknesses.  Is that good news because they identified the weaknesses or bad news because they found so many?  You be the judge.

On June 18, 2018 the NRC closed**** ANO's CAL and moved the plant into column 1 of the Reactor Oversight Process Action Matrix.

TVA

The International Atomic Energy Agency (IAEA) conducted an Operational Safety Review Team (OSART) review***** of Sequoyah during August 14-31, 2017.  The team reviewed plant operational safety performance vis-à-vis IAEA safety standards and made appropriate recommendations and suggestions.  Two of the three significant recommendations have an NSC component: (1) “improve the performance of management and staff in challenging inappropriate behaviours” and (2) “improve the effectiveness of event investigation and corrective action implementation . . .” (p. 2)

Focusing on NSC, the team observed: “The procedure for nuclear safety culture self-assessments does not include a sufficiently diverse range of tools necessary to gather all the information required for effective analysis. The previous periodic safety culture self-assessment results were based on surveys but other tools, such as interviews, focus groups and observations, were only used if the survey revealed any gaps.” (p. 60)

On March 14, 2018 the NRC reported^ on Watts Bar’s progress in addressing NRC CO EA-17-022 and Chilling Effect Letter (CEL) EA-16-061, and licensee action to establish and maintain a safety-conscious work environment (SCWE).  (We discussed the CEL on March 25, 2016 and NSC/SCWE problems on Nov. 14, 2016.)  Licensee actions with NSC-related components were noted throughout the report including the discussions on plant communications, training, work processes and independent oversight.  The sections on assessing NSC/SCWE and “Safety Over Production” included inspection team observations (aka opportunities for improvement) which were shared with the licensee. (pp. 10-11, 17, 24-27)  One TVA corrective action was to establish a Fleet Safety Culture Peer Team, which has been done.  The overall good news is the report had no significant NSC-related negative findings.  Focus group participants were generally positive about NSC and SCWE but expressed concern about “falling back into old patterns” and “declaring success too soon.” (p. 27)

Our Perspective

For Entergy, it looks like business as usual, i.e., NSC Whac-A-Mole.  They get caught or self-report an infraction, go to ADR, and promise to do better at the affected site and fleet-wide.  Eventually a new problem arises somewhere else.  The strength of their overall NSC appears to be floating in a performance band below satisfactory but above intolerable.

We are a bit more optimistic with respect to TVA.  It would be good if TVA could replicate some of Sequoyah’s (which has managed to keep its nose generally clean) values and practices at Browns Ferry and Watts Bar.  Perhaps their fleet wide initiative will be a mechanism for making that happen.

We applaud the NRC inspection team for providing specific information to Watts Bar on actions the plant could take to strengthen its NSC.

Bottom line: The Sequoyah OSART report is worth reviewing for its detailed reporting of the team’s observations of unsafe (or at least questionable) employee work behaviors.


*  K.M. Kennedy (NRC) to J.A. Ventosa (Entergy), “Confirmatory Order, NRC Inspection Report 05000416/2017014, and NRC Investigation Reports 4-2016-004 AND 4-2017-021” (Mar. 12, 2018).  ADAMS ML18072A191.

**  N.F. O’Keefe (NRC) to R.L. Anderson (Entergy), “Arkansas Nuclear One – NRC Confirmatory Action Letter (EA-16-124) Follow-up Inspection Report 05000313/2018012 AND 05000368/2018012” (Mar. 29, 2018).  ADAMS ML18092A005.

***  N.F. O’Keefe (NRC) to R.L. Anderson (Entergy), “Arkansas Nuclear One, Unit 2 – NRC Supplemental Inspection Report 05000368/2018040” (Apr. 20, 2018).  ADAMS ML18110A304.

****  K.M. Kennedy (NRC) to R.L. Anderson (Entergy), "Arkansas Nuclear One – NRC Confirmatory Action Letter (EA-16-124) Follow-up Inspection Report 05000313/2018013 AND 05000368/2018013 and Assessment Follow-up Letter" (Jun. 18, 2018)  ADAMS ML18165A206.

*****  IAEA Operational Safety Review Team (OSART), Report of the Mission to the Sequoyah Nuclear Power Plant Aug. 14-31, 2017, IAEA-NSNI/OSART/195/2017.  ADAMS ML18061A036.  The document date in the NRC library is Mar. 2, 2018.

^  A.D. Masters (NRC) to J.W. Shea, “Watts Bar Nuclear Plant – Follow-up for NRC Confirmatory Order EA-17-022 and Chilled Work Environment Letter EA-16-061; NRC INSPECTION REPORT 05000390/2017009, 05000391/2017009” (Mar. 14, 2018).  ADAMS ML18073A202.

Thursday, May 3, 2018

Nuclear Safety Culture and the Hanford Waste Treatment Plant: the Saga Continues

In April 2018 the U.S. Government Accountability Office (GAO) released a report* on shortcomings in the quality assurance (QA) program at the Department of Energy’s (DOE) Waste Treatment Plant (WTP aka the Vit Plant) in Hanford, Washington.  QA problems exist at both Bechtel, the prime contractor since 2000, and the DOE’s Office of River Protection (ORP), the on-site overseer of the WTP project.

The report describes DOE actions to identify and address QA problems at the WTP, and examines the extent to which (a) DOE has ensured that all QA problems have been identified and will not recur and (b) ORP’s organizational structure provides sufficient independence to effectively oversee Bechtel’s QA program.

Why do we care about QA?  The GAO investigation did not target culture and there is only one specific mention of culture in the report.**  However, the entire report reflects the weak nuclear safety culture (NSC) at Hanford.

There is a lot of history here (GAO has been ragging DOE about the need for effective oversight at DOE facilities since 2008) but let’s begin with ORP’s 2012 stop work order to address WTP’s most significant technical challenges.  Then, in 2013, ORP’s QA division issued two Priority One findings with respect to Bechtel’s QA program, viz., both the program and Bechtel’s Corrective Action Program to address QA problems were “not fully effective.” (p. 3)  This was followed by a DOE Office of Enforcement investigation which, in turn, led to a 2015 Consent Order and Bechtel Management Improvement Program (MIP).  The Order specified all corrective actions had to be implemented by April 20, 2016.  Currently, 13 of the 52 corrective measures have not been completed, and some that Bechtel claimed were complete actually are not.  In addition, “. . . in some areas where [Bechtel] has stated that corrective measures are now in place, ORP continues to encounter quality assurance problems similar to those it encountered in the past.” (p. 25)

Why doesn’t ORP stop work again?  Because ORP senior managers plan to evaluate the extent of Bechtel’s implementation of MIP corrective measures over the next year and have allowed work to continue because they believe Bechtel’s QA is “generally adequate.” (p. 22)  We’ll reveal the real reason later.

The shortcomings are not limited to Bechtel.  “ORP’s actions have not ensured that all quality assurance problems have been identified at the WTP, and some previously identified problems are recurring.” (p. 16)  “When and where problems have recurred, ORP has not always required [Bechtel] to determine the extent to which the problems may affect all parts of the WTP.” (p. 25)  Why not?  Here’s a hint: ORP’s “Quality Assurance Division is not fully separate and independent from the upper management of the WTP project, which manages cost and schedule performance.” (p. 22)

Our Perspective

An article*** in the local Hanford newspaper summarizes the report’s contents.  However, the problems described are not new news.  Technical, quality and culture problems have swirled around the WTP for years.  In 2011 we started reporting on WTP issues and the sluggish responses from both DOE and Bechtel.  Click on the Vit Plant label to see our previous posts.

Goal conflict (cost and schedule vs. QA and a strong NSC) has always been the overarching issue at the WTP.  Through fiscal year 2017, DOE spent $11 billion on WTP construction.  It will cost approximately $16.8 billion to complete the first phase of the WTP, which transfers low-level radioactive waste to the low-level vitrification facility.  No one knows how much it will cost to complete the WTP or when it will be functioning.

GAO gives their subjects an opportunity to respond to GAO’s reports and recommendations.  The DOE response is an unsurprising continuation of their traditional rope-a-dope strategy: concur with GAO recommendations, rationalize or minimize the current extent of condition, exaggerate current corrective actions, promise to investigate identified issues and do better in the future, wait for GAO’s attention to turn elsewhere, then continue with business as usual.  What DOE needs to do is issue a stop order for the money train—that would get the attention of everyone, especially Bechtel and ORP managers.

How does your QA department stack up?  Does it add value by identifying and helping to solve real problems?  Is it a distracting irritant, enamored of its own authority and administrivia?  Or is it simply impotent?


*  U.S. Government Accountability Office, “Hanford Waste Treatment Plant: DOE Needs to Take Further Actions to Address Weaknesses in Its Quality Assurance Program,” GAO-18-241 (April, 2018).

**  “One [ORP] quality assurance expert specified that ORP’s culture does not encourage staff to identify quality assurance problems or ineffective corrective measures. This expert said that people who discover problems are not rewarded; rather, their findings are met with resistance, which has created a culture where quality assurance staff are hesitant to identify quality assurance problems or problems with corrective measures.” (p. 24)  This quote exposes the core NSC issue at the WTP.

***  A. Cary, “Feds bash Hanford nuclear waste plant troubles, question DOE priorities,” Tri-City Herald (April 24, 2018).  Retrieved May 1, 2018.

Tuesday, April 17, 2018

Nuclear Safety Culture: Insights from Principles by Ray Dalio

Ray Dalio is the billionaire founder/builder of Bridgewater Associates, an investment management firm.  Principles* catalogs his policies, practices and lessons-learned for understanding reality and making decisions for achieving goals in that reality.  The book appears to cover every possible aspect of managerial and organizational behavior.  Our plan is to focus on two topics near and dear to us—decision making and culture—for ideas that could help strengthen nuclear safety culture (NSC).  We will then briefly summarize some of Dalio’s other thoughts on management.  Key concepts are shown in italics.

Decision Making

We’ll begin with Dalio’s mental model of reality.  Reality is a system of universal cause-effect relationships that repeat and evolve like a perpetual motion machine.  The system dynamic is driven by evolution (“the single greatest force in the universe” (p. 142)) which is the process of adaptation.

Because many situations repeat themselves, principles (policies or rules) advance the goal of making decisions in a systematic, repeatable way.  Any decision situation has two major steps: learning (obtaining and synthesizing data about the current situation) and deciding what to do.  Logic, reason and common sense are the primary decision making mechanisms, supported by applicable existing principles and tools, e.g., expected value calculations or evidence-based decision making tools.  The lessons learned from each decision situation can be incorporated into existing or new principles.  Practicing the principles develops good habits, i.e., automatic, reflexive behavior in the specified situations.  Ultimately, the principles can be converted into algorithms that can be computerized and used to support the human decision makers.

Believability weighting can be applied during the decision making process to obtain data or opinions about solutions.  Believable people can be anyone in the organization but are limited to those “who 1) have repeatedly and successfully accomplished the thing in question, and 2) . . . can logically explain the cause-effect relationships behind their conclusions.” (p. 371)  Believability weighting supplements and challenges responsible decision makers but does not overrule them.  Decision makers can also make use of thoughtful disagreement where they seek out brilliant people who disagree with them to gain a deeper understanding of decision situations.
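
To make the mechanics concrete, here is a minimal Python sketch of one plausible reading of believability weighting.  The names, weights and the simple weighted-average rule are our illustrative assumptions, not Dalio’s actual algorithm:

from dataclasses import dataclass

@dataclass
class Opinion:
    person: str
    estimate: float       # e.g., probability that a proposed fix will work
    believability: float  # track record plus ability to explain cause-effect

def weighted_view(opinions: list[Opinion]) -> float:
    total = sum(o.believability for o in opinions)
    return sum(o.estimate * o.believability for o in opinions) / total

opinions = [
    Opinion("veteran engineer", estimate=0.30, believability=3.0),
    Opinion("new analyst", estimate=0.90, believability=1.0),
]
print(f"believability-weighted estimate: {weighted_view(opinions):.2f}")
# Prints 0.45: the group view leans toward the more believable voice without
# silencing dissent, and it informs (not overrules) the responsible decider.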

The organization needs a process to get beyond disagreement.  After all discussion, the responsible party exercises his/her decision making authority.  Ultimately, those who disagree have to get on board (“get in sync”) and support the decision or leave the organization.

The two biggest barriers to good decision making are ego and blind spots.  Radical open-mindedness recognizes the search for what’s true and the best answer is more important than the need for any specific person, no matter their position in the organization, to be right.

Culture

Organizations and the individuals who populate them should also be viewed as machines.  Both are imperfect but capable of improving. The organization is a machine made up of culture and people that produces outcomes that provide feedback from which learning can occur.  Mistakes are natural but it is unacceptable to not learn from them.  Every problem is an opportunity to improve the machine.  

People are generally imperfect machines.  People are more emotional than logical.   They suffer from ego (subconscious drivers of thoughts) and blind spots (failure to see weaknesses in themselves).  They have different character attributes.  In short, people are all “wired” differently.  A strong culture with clear principles is needed to get and keep everyone in sync with each other and in pursuit of the organization’s goals.

Mutual adjustment takes place when people interact with culture.  Because people are different and the potential to change their wiring is low** it is imperative to select new employees who will embrace the existing culture.  If they can’t or won’t, or lack ability, they have to go.  Even with its stringent hiring practices, about a third of Bridgewater’s new hires are gone by the end of eighteen months.

Human relations are built on meaningful relationships, radical truth and tough love.  Meaningful relationships means people give more consideration to others than themselves and exhibit genuine caring for each other.  Radical truth means you are “transparent with your thoughts and open-mindedly accepting the feedback of others.” (p. 268)  Tough love recognizes that criticism is essential for improvement towards excellence; everyone in the organization is free to criticize any other member, no matter their position in the hierarchy.  People have an obligation to speak up if they disagree. 

“Great cultures bring problems and disagreements to the surface and solve them well . . .” (p. 299)  The culture should support a five-step management approach: Have clear goals, don’t tolerate problems, diagnose problems when they occur, design plans to correct the problems, and do what’s necessary to implement the plans, even if the decisions are unpopular.  The culture strives for excellence so it’s intolerant of folks who aren’t excellent and goal achievement is more important than pleasing others in the organization.

More on Management

Dalio’s vision for Bridgewater is “an idea meritocracy in which meaningful work and meaningful relationships are the goals and radical truth and radical transparency are the ways of achieving them . . .” (p. 539)  An idea meritocracy is “a system that brings together smart, independent thinkers and has them productively disagree to come up with the best possible thinking and resolve their disagreements in a believability-weighted way . . .” (p. 308)  Radical truth means “not filtering one’s thoughts and one’s questions, especially the critical ones.” (ibid.)  Radical transparency means “giving most everyone the ability to see most everything.” (ibid.)

A person is a machine operating within a machine.  One must be one’s own machine designer and manager.  In managing people and oneself, take advantage of strengths and compensate for weaknesses via guardrails and soliciting help from others.  An example of a guardrail is assigning a team member whose strengths balance another member’s weaknesses.  People must learn from their own bad decisions so self-reflection after making a mistake is essential.  Managers must ascertain if mistakes are evidence of a weakness and whether compensatory action is required or, if the weakness is intolerable, termination.  Because values, abilities and skills are the drivers of behavior, management should have a full profile for each employee.

Governance is the system of checks and balances in an organization.  No one is above the system, including the founder-owner.  In other words, senior managers like Dalio can be subject to the same criticism as any other employee.

Leadership in the traditional sense (“I say, you do”) is not so important in an idea meritocracy because the optimal decisions arise from a group process.  Managers are seen as decision makers, system designers and shapers who can visualize a better future and then build it.   Leaders “must be willing to recruit individuals who are willing to do the work that success requires.” (p. 520)

Our Perspective

We recognize international investment management is way different from nuclear power management so some of Dalio’s principles can only be applied to the nuclear industry in a limited way, if at all.  One obvious example of a lack of fit is the area of risk management.  The investing environment is extremely competitive with players evolving rapidly and searching for any edge.  Timely bets (investments) must be made under conditions where the risk of failure is many orders of magnitude greater than what is acceptable in the nuclear industry.  Other examples include the relentless, somewhat ruthless, pursuit of goals and a willingness to jettison people that is foreign to the utility world.

But we shouldn’t throw the baby out with the bathwater.  While Dalio’s approach may be too extreme for wholesale application in your environment, it does provide a comparison (note we don’t say “standard”) for your organization’s performance.  Does your decision making process measure up to Dalio’s in terms of robustness, transparency and the pursuit of truth?  Does your culture really strive for excellence (and eliminate those who don’t share that vision) or is it an effort constrained by hierarchical, policy or political realities?

This is a long book but it’s easy to read and key points are repeated often.  Not all of it is novel; many of the principles are based on observations or techniques that have been around for a while and should be familiar to you.  For example, ideas about how human minds work are drawn, in part, from Daniel Kahneman; an integrated hierarchy of goals looks like Management by Objectives; and a culture that doesn’t automatically punish people for making mistakes or tolerable errors sounds like a “just culture” albeit with some mandatory individual learning attached.

Bottom line: Give this book a quick look.  It can’t hurt and might help you get a clearer picture of how your own organization actually operates.



*  R. Dalio, Principles (New York: Simon & Schuster, 2017).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  A person’s basic values and abilities are relatively fixed, although skills may be improved through training.