Wednesday, March 26, 2014

NRC "National Report" to IAEA

A March 25, 2014 NRC press release* announced that Chairman Macfarlane presented the Sixth National Report for the Convention on Nuclear Safety** to International Atomic Energy Agency (IAEA) member countries.  The report mentions safety culture (SC) several times, as discussed below.  There is no breaking news in a report like this.  We’re posting about it only because it provides an encyclopedic review of NRC activities including a description of how SC fits into their grand scheme of things.  We also tie the report’s contents to related posts on Safetymatters.  The numbers shown below are section numbers in the report.

6.3.11 Public Participation 

This section describes how the NRC engages with stakeholders and the broader public.  As part of such engagement, the NRC says it expects employers to maintain an open environment where workers are free to raise safety concerns. “These expectations are communicated through the NRC’s Safety Culture Policy Statement” and other regulatory directives and tools. (p. 72)  This is pretty straightforward and we have no comment.

8.1.6.2 Human Resources

Section 8 describes the NRC, from its position in the federal government to how it runs its internal activities.  One such activity is the NRC Inspector General’s triennial General Safety Culture and Climate Survey for NRC employees.  Reporting on the most recent (2012) survey, “the NRC scored above both Federal and private sector benchmarks, although in 2012 the agency did not perform as strongly as it had in the past.” (p. 96)  We posted on the internal SC survey back on April 6, 2013; we felt the survey raised a few significant issues.

10.4 Safety Culture

Section 10 covers activities that ensure that safety receives its “due priority” from licensees and the NRC itself.  Sub-section 10.4 provides an in-depth description of the NRC’s SC-related policies and practices so we excerpt from it at length.

The discussion begins with the SC policy statement and the traits of a positive (sic) SC, including Leadership, Problem identification and resolution, Personal accountability, etc.

The most interesting part is 10.4.1 NRC Monitoring of Licensee Safety Culture which covers “the policies, programs, and practices that apply to licensee safety culture.” (p. 118)  It begins with the Reactor Oversight Process (ROP) and its SC-related enhancements.  NRC staff identified 13 components as important to SC, including decision making, resources, work control, etc.  “All 13 safety culture components are applied in selected baseline, event followup, and supplemental IPs [inspection procedures].” (p. 119)

“There are no regulatory requirements for licensees to perform safety culture assessments routinely. However, depending on the extent of deterioration of licensee performance, the NRC has a range of expectations [emphasis added] about regulatory actions and licensee safety culture assessments, . . .” (p. 119)

“In the routine or baseline inspection program, the inspector will develop an inspection finding and then identify whether an aspect of a safety culture component is a significant causal factor of the finding. The NRC communicates the inspection findings to the licensee along with the associated safety culture aspect. 

“When performing the IP that focuses on problem identification and resolution, inspectors have the option to review licensee self-assessments of safety culture. The problem identification and resolution IP also instructs inspectors to be aware of safety culture components when selecting samples.” (p. 119)

“If, over three consecutive assessment periods (i.e., 18 months), a licensee has the same safety culture issue with the same common theme, the NRC may ask [emphasis added] the licensee to conduct a safety culture self-assessment.” (p. 120)

If the licensee performance degrades to Column 3 of the ROP Action Matrix and “the NRC determines that the licensee did not recognize that safety culture components caused or significantly contributed to the risk-significant performance issues, the NRC may request [emphasis added] the licensee to complete an independent assessment of its safety culture.” (p. 120)

For licensees in Column 4 of the ROP “the NRC will expect [emphasis added] the licensee to conduct a third-party independent assessment of its safety culture. The NRC will review the licensee’s assessment and will conduct an independent assessment of the licensee’s safety culture . . .” (p. 120)

ROP SC considerations “provide the NRC staff with (1) better opportunities to consider safety culture weaknesses . . . (2) a process to determine the need to specifically evaluate a licensee’s safety culture . . . and (3) a structured process to evaluate the licensee’s safety culture assessment and to independently conduct a safety culture assessment for a licensee . . . .  By using the existing Reactor Oversight Process framework, the NRC’s safety culture oversight activities are based on a graded approach and remain transparent, understandable, objective, risk-informed, performance-based, and predictable.” (p. 120)

We described this hierarchy of NRC SC-related activities in a post on May 24, 2013.  We called it de facto regulation of SC.  Reading the above only confirms that conclusion.  When the NRC asks, requests or expects the licensee to do something, it’s akin to a military commander’s “wishes,” i.e., they’re the same as orders.

10.4.2 The NRC Safety Culture 


This section covers the NRC’s actions to strengthen its internal SC.  These actions include appointing an SC Program Manager; integrating SC into the NRC’s Strategic Plan; developing training; evaluating the NRC’s problem identification, evaluation and resolution processes; and establishing clear expectations and accountability for maintaining current policies and procedures. 

We would ask how SC affects (and is affected by) the NRC’s decision-making and resource allocation processes, work practices, and integration of operating experience, and how the agency establishes personal accountability for maintaining its SC.  What’s good for the goose (licensee) is good for the gander (regulator).

Institute of Nuclear Power Operations (INPO) 


INPO also provided content for the report.  Interestingly, it is a 39-page Part 3 in the body of the report, not an appendix.  Part 3 covers INPO’s mission, organization, etc. and includes a section on SC.

6. Priority to Safety (Safety Culture)

The industry and INPO have their own definition of SC: “An organization’s values and behaviors—modeled by its leaders and internalized by its members—that serve to make nuclear safety the overriding priority.” (p. 230)

“INPO activities reinforce the primary obligation of the operating organizations’ leadership to establish and foster a healthy safety culture, to periodically assess safety culture, to address shortfalls in an open and candid fashion, and to ensure that everyone from the board room to the shop floor understands his or her role in safety culture.” (p. 231)

We believe our view of SC is broader than INPO’s.  As we said in our July 24, 2013 post “We believe culture, including SC, is an emergent organizational property created by the integration of top-down activities with organizational history, long-serving employees, and strongly held beliefs and values, including the organization's “real” priorities.  In other words, SC is a result of the functioning over time of the socio-technical system.  In our view, a CNO can heavily influence, but not unilaterally define, organizational culture including SC.” 

Conclusion

This 341-page report appears to cover every aspect of the NRC’s operations but, as noted in our introduction, it does not present any new information.  It’s a good reference document to cite if someone asks you what the NRC is or what it does.

We found it a bit odd that the definition of SC in the report is not the definition promulgated in the NRC SC Policy Statement.  Specifically, the report says the NRC uses the 1991 INSAG definition of SC: “that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear safety issues receive the attention warranted by their significance.” (p. 118)

The Policy Statement says “Nuclear safety culture is the core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize safety over competing goals to ensure protection of people and the environment.”***

Of course, both definitions are different from the INPO definition provided above.  We’ll leave it as an exercise for the reader to figure out what this means.


*  NRC Press Release No: 14-021, “NRC Chairman Macfarlane Presents U.S. National Report to IAEA’s Convention on Nuclear Safety” (Mar. 25, 2014).  ADAMS ML14084A303.

**  NRC NUREG-1650 Rev. 5, “The United States of America Sixth National Report for the Convention on Nuclear Safety” (Oct. 2013).  ADAMS ML13303B021. 

***  NUREG/BR 0500 Rev 1, “Safety Culture Policy Statement” (Dec 2012).  ADAMS ML12355A122.  This definition comports with the one published in the Federal Register Vol. 76, No. 114 (June 14, 2011) p. 34777.

Wednesday, March 19, 2014

Safety Culture at Tohoku Electric vs. Tokyo Electric Power Co. (TEPCO)

Fukushima No. 1 (Daiichi)
An op-ed* in the Japan Times asserts that the root cause of the Fukushima No. 1 (Daiichi) plant’s failures following the March 11, 2011 earthquake and tsunami was TEPCO’s weak corporate safety culture (SC).  This post summarizes the op-ed then provides some background information and our perspective.

Op-Ed Summary 

According to the authors, Tohoku Electric had a stronger SC than TEPCO.  Tohoku had a senior manager who strongly advocated safety, company personnel participated in seminars and panel discussions about earthquake and tsunami disaster prevention, and the company had strict disaster response protocols in which all workers were trained.  Although their Onagawa plant was closer to the March 11, 2011 quake epicenter and experienced a higher tsunami, it managed to shut down safely.

SC-related initiatives like Tohoku’s were not part of TEPCO’s culture.  Fukushima No. 1’s problems date back to its original siting and early construction.  TEPCO removed 25 meters from the 35-meter natural seawall of the plant site and built its reactor buildings at a lower elevation of 10 meters (compared to 14.7m for Onagawa).  Over the plant’s life, as research showed that tsunami levels had been underestimated, TEPCO “resorted to delaying tactics, such as presenting alternative scientific studies and lobbying”** rather than implementing countermeasures.

Background and Our Perspective

The op-ed is a condensed version of the authors’ longer paper***, which was adapted from a research paper for an engineering class, presumably written by Ms. Ryu.  The op-ed is basically a student paper based on public materials.  You should read the longer paper, review the references and judge for yourself if the authors have offered conclusions that go beyond the data they present.

I suggest you pay particular attention to the figure that supposedly compares Tohoku and TEPCO using INPO’s ten healthy nuclear SC traits.  Not surprisingly, TEPCO doesn’t fare very well, but note the ratings were based on “the author’s personal interpretations and assumptions.” (p. 26)

Also note that the authors do not mention Fukushima No. 2 (Daini), a four-unit TEPCO plant about 15 km south of Fukushima No. 1.  Fukushima No. 2 also experienced damage and significant challenges after being hit by a 9m tsunami but managed to reach shutdown by March 18, 2011.  What could be inferred from that experience?  Same corporate culture but better luck?

Bottom line, by now it’s generally agreed that TEPCO SC was unacceptably weak so the authors plow no new ground in that area.  However, their description of Tohoku Electric’s behavior is illuminating and useful.


*  A. Ryu and N. Meshkati, “Culture of safety can make or break nuclear power plants,” Japan Times (Mar. 14, 2014).  Retrieved Mar. 19, 2014.

**  Quoted in the op-ed but taken from “The official report of the Fukushima Nuclear Accident Independent Investigation Commission [NAIIC] Executive Summary” (The National Diet of Japan, 2012), p. 28.  The NAIIC report has a longer Fukushima root cause explanation than the op-ed, viz, “the root causes were the organizational and regulatory systems that supported faulty rationales for decisions and actions, . . .” (p. 16) and “The underlying issue is the social structure that results in “regulatory capture,” and the organizational, institutional, and legal framework that allows individuals to justify their own actions, hide them when inconvenient, and leave no records in order to avoid responsibility.” (p. 21)  IMHO, if this were boiled down, there wouldn’t be much SC left in the bottom of the pot.

***  A. Ryu and N. Meshkati, “Why You Haven’t Heard About Onagawa Nuclear Power Station after the Earthquake and Tsunami of March 11, 2011” (Rev. Feb. 26, 2014).

Friday, March 14, 2014

Deficient Safety Culture at Metro-North Railroad

A new Federal Railroad Administration (FRA) report* excoriates the safety performance of the Metro-North Commuter Railroad, which serves New York, Connecticut and New Jersey.  The report highlights problems in the Metro-North safety culture (SC), calling it “poor”, “deficient” and “weak”.  Metro-North’s fundamental problem, which we have seen elsewhere, is putting production ahead of safety.  The report’s conclusion concisely describes the problem: “The findings of Operation Deep Dive demonstrate that Metro-North has emphasized on-time performance to the detriment of safe operations and adequate maintenance of its infrastructure. This led to a deficient safety culture that has manifested itself in increased risk and reduced safety on Metro-North.” (p. 4)

The proposed fixes are likewise familiar: “. . . senior leadership must prioritize safety above all else, and communicate and implement that priority throughout Metro-North. . . . submit to FRA a plan to improve the Safety Department’s mission and effectiveness. . . . [and] submit to FRA a plan to improve the training program.” (p. 4)**

Our Perspective 


This report is typical.  It’s not bad, but it’s incomplete and a bit misguided.

The directive for senior management to establish safety as the highest priority and implement that priority is good but incomplete.  There is no discussion of how safety is or should be appropriately considered in decision-making throughout the agency, from its day-to-day operations to strategic considerations.  More importantly, Metro-North’s recognition, reward and compensation practices (keys to shaping behavior at all organizational levels) are not even mentioned.

The Safety Department discussion is also incomplete and may lead to incorrect inferences.  The report says “Currently, no single department or office, including the Safety Department, proactively advocates for safety, and there is no effort to look for, identify, or take ownership of safety issues across the operating departments. An effective Safety Department working in close communication and collaboration with both management and employees is critical to building and maintaining a good safety culture on any railroad.” (p. 13)  A competent Safety Department is certainly necessary to create a hub for safety-related problems but is not sufficient.  In a strong SC, the “effort to look for, identify, or take ownership of safety issues” is everyone’s responsibility.  In addition, the authors don’t appear to appreciate that SC is part of a loop—the deficiencies described in the report certainly influence SC, but SC provides the context for the decision-making that currently prioritizes on-time performance over safety.

Metro-North training is fragmented across many departments and the associated records system is problematic.  The proposed fix focuses on better organization of the training effort but says nothing about the need for training content to address safety or SC.

Not included in the report (but likely related to it) is that Metro-North’s president retired last January.  His replacement says Metro-North is implementing “aggressive actions to affirm that safety is the most important factor in railroad operations.”***

We have often griped about SC assessments where the recommended corrective actions are limited to more training, closer oversight and selective punishment.  How did the FRA do?   


*  Federal Railroad Administration, “Operation Deep Dive Metro-North Commuter Railroad Safety Assessment” (Mar. 2014).  Retrieved Mar. 14, 2014.  The FRA is an agency in the U.S. Department of Transportation.

**  The report also includes a laundry list of negative findings and required/recommended corrective actions in several specific areas.

***  M. Flegenheimer, “Report Finds Punctuality Trumps Safety at Metro-North,” New York Times (Mar. 14, 2014).  Retrieved Mar. 14, 2014.

Thursday, March 13, 2014

Eliminate the Bad Before Attempting the Good

An article* in the McKinsey Quarterly suggests executives work at rooting out destructive behaviors before attempting to institute best practices.  The reason is simple: “research has found that negative interactions with bosses and coworkers [emphasis added] have five times more impact than positive ones.” (p. 81)  In other words, a relatively small amount of bad behavior can keep good behavior, i.e., improvements, from taking root.**  The authors describe methods for removing bad behavior and warning signs that such behavior exists.  This post focuses on their observations that might be useful for nuclear managers and their organizations.

Methods

Nip Bad Behavior in the Bud — Bosses and coworkers should establish zero tolerance for bad behavior but feedback or criticism should be delivered while treating the target employee with respect.  This is not about creating a climate of fear, it’s about seeing and responding to a “broken window” before others are also broken.  We spoke a bit about the broken window theory here.

Put Mundane Improvements Before Inspirational Ones/Seek Adequacy Before Excellence — Start off with one or more meaningful objectives that the organization can achieve in the short term without transforming itself.  Recognize and reward positive behavior, then build on successes to introduce new values and strategies.  Because people are more than twice as likely to complain about bad customer service as to mention good customer service, management intervention should initially aim at getting the service level high enough to staunch complaints, then work on delighting customers.

Use Well-Respected Staff to Squelch Bad Behavior — Identify the real (as opposed to nominal or official) group leaders and opinion shapers, teach them what bad looks like and recruit them to model good behavior.  Sounds like co-opting (a legitimate management tool) to me.

Warning Signs

Fear of Responsibility — This can be exhibited by employees doing nothing rather than doing the right thing, or their ubiquitous silence.  It is related to bystander behavior, which we posted on here.

Feelings of Injustice or Helplessness — Employees who believe they are getting a raw deal from their boss or employer may act out, in a bad way.  Employees who believe they cannot change anything may shirk responsibility.

Feelings of Anonymity — This basically means employees will do what they want because no one is watching.  This could lead to big problems in nuclear plants because they depend heavily on self-management and self-reporting of problems at all organizational levels.  Most of the time things work well but incidents, e.g., falsification of inspection reports or test results, do occur.

Our Perspective

The McKinsey Quarterly is a forum for McKinsey people and academics whose work has some practical application.  This article is not rocket science but sometimes a simple approach can help us appreciate basic lessons.  The key takeaway is that an overconfident new manager can sometimes reach too far, and end up accomplishing very little.  The thoughtful manager might spend some time figuring out what’s wrong (the “bad” behavior) and develop a strategy for eliminating it and not simply pave over it with a “get better” program that ignores underlying, systemic issues.  Better to hit a few singles and get the bad juju out of the locker room before swinging for the fences.


*  H. Rao and R.I. Sutton, “Bad to great: The path to scaling up excellence,” McKinsey Quarterly, no. 1 (Feb. 2014), pp. 81-91.  Retrieved Mar. 13, 2014.

**  Even Machiavelli recognized the disproportionate impact of negative interactions.  “For injuries should be done all together so that being less tasted they will give less offence.  Benefits should be granted little by little so that they may be better enjoyed.”  The Prince, ch. VIII.

Tuesday, March 4, 2014

Declining Safety Culture at the Waste Isolation Pilot Plant?

DOE WIPP
Here’s another nuclear-related facility you may or may not know about: The Department of Energy’s (DOE) Waste Isolation Pilot Plant (WIPP) located near Carlsbad, NM.  WIPP’s mission is to safely dispose of defense-related transuranic radioactive waste.  “Transuranic” refers to man-made elements that are heavier than uranium; in DOE’s waste the most prominent of these elements is plutonium but the waste also includes others, e.g., americium.*

Recently there have been two incidents at WIPP.  On Feb. 5, 2014 a truck hauling salt underground caught fire.  There was no radiation exposure associated with this incident.  But on Feb. 14, 2014 a radiation alert activated in the area where newly arrived waste was being stored.  Preliminary tests showed thirteen workers suffered some radiation exposure.


It will come as no surprise to folks associated with nuclear power plants that WIPP opponents have amped up after these incidents.  For our purposes, the most interesting quote comes from Don Hancock of the Southwest Research and Information Center: “I’d say the push for expansion is part of the declining safety culture that has resulted in the fire and the radiation release.”  Not surprisingly, WIPP management disputes that view.**


Our Perspective


So, are these incidents an early signal of a nascent safety culture (SC) problem?  After all, SC issues are hardly unknown at DOE facilities.  Or is the SC claim simply the musing of an opportunistic anti?  Who knows.  At this point, there is insufficient information available to say anything about WIPP’s SC.  However, we’ll keep an eye on this situation.  A bellwether event would be if the Defense Nuclear Facilities Safety Board decides to get involved.



*  See the WIPP and Environmental Protection Agency (EPA) websites for project information.  If the WIPP site is judged suitable, the underground storage area is expected to expand to 100 acres.

The EPA and the New Mexico Environmental Department have regulatory authority over WIPP.  The NRC has regulatory authority over the containers used to ship waste.  See National Research Council, “Improving the Characterization Program for Contact-Handled Transuranic Waste Bound for the Waste Isolation Pilot Plant” (Washington, DC: The National Academies Press, 2004), p. 27.


**  J. Clausing, “Nuclear dump leak raises questions about cleanup,” Las Vegas Review-Journal (Mar. 1, 2014).  Retrieved Mar. 3, 2014.

Wednesday, February 12, 2014

Left Brain, Right Stuff: How Leaders Make Winning Decisions by Phil Rosenzweig

In this new book* Rosenzweig extends the work of Kahneman and other scholars to consider real-world decisions.  He examines how the content and context of such decisions is significantly different from controlled experiments in a decision lab.  Note that Rosenzweig’s advice is generally aimed at senior executives, who typically have greater latitude in making decisions and greater responsibility for achieving results than lower-level professionals, but all managers can benefit from his insights.  This review summarizes the book and explores its lessons for nuclear operations and safety culture. 

Real-World Decisions

Decision situations in the real world can be more “complex, consequential and laden with uncertainty” than those described in laboratory experiments. (p. 6)  A combination of rigorous analysis (left brain) and ambition (the right stuff—high confidence and a willingness to take significant risks) is necessary to achieve success. (pp. 16-18)  The executive needs to identify the important characteristics of the decision he is facing.  Specifically,

Can the outcome following the decision be influenced or controlled?

Some real-world decisions cannot be controlled, e.g., the price of Apple stock after you buy 100 shares.  In those situations the traditional advice to decision makers, viz., be rational, detached, analyze the evidence and watch out for biases, is appropriate. (p. 32)

But for many decisions, the executive (or his team) can influence outcomes through high (but not excessive) confidence, positive illusions, calculated risks and direct action.  The knowledgeable executive understands that individuals perceived as good executives exhibit a bias for action and “The essence of management is to exercise control and influence events.” (p. 39)  Therefore, “As a rule of thumb, it's better to err on the side of thinking we can get things done rather than assuming we cannot.  The upside is greater and the downside less.” (p. 43)

Think about your senior managers.  Do they under or over-estimate their ability to influence future performance through their decisions?

Is the performance based on the decision(s) absolute or relative?

Absolute performance is described using some system of measurement, e.g., how many free throws you make in ten attempts or your batting average over a season.  It is not related to what anyone else does. 

But in competition, performance is relative to rivals.  Ten percent growth may not be sufficient if a rival grows fifty percent.**  In addition, payoffs for performance may be highly skewed: in the Olympics, there are three medals and the rest get nothing; in many industries, the top two or three companies make money while the others struggle to survive; in the most extreme case, it's winner take all.  It is essential to take risks to succeed in highly skewed competitive situations.

Absolute and relative performance may be connected.  In some cases, “a small improvement in absolute performance can make an outsize difference in relative performance, . . .” (p. 66)  For example, if a well-performing nuclear plant can pick up a couple percentage points of annual capacity factor (CF), it can make a visible move up the CF rankings thus securing bragging rights (and possibly bonuses) for its senior managers.

For a larger example, remember when the electricity markets deregulated and many utilities rushed to buy or build merchant plants?  Note how many have crawled back under the blanket of regulation where they only have to demonstrate prudence (a type of absolute performance) to collect their guaranteed returns, and not compete with other sellers.  In addition, there is very little skew in the regulated performance curve; even mediocre plants earn enough to carry on their business.  Lack of direct competition also encourages sharing information, e.g., operating experience in the nuclear industry.  If competition is intense, sharing information is irresponsible and possibly dangerous to one's competitive position. (p. 61)

Do your senior managers compare their performance to some absolute scale, to other members of your fleet (if you're in one), to similar plants, to all plants, or to the company's management compensation plan?

Will the decision be repeated, with rapid feedback, or is it a one-off whose results will take a long time to appear? 


Repetitive decisions, e.g., putting at golf, can benefit from deliberate practice, where performance feedback is used to adjust future decisions (action, feedback, adjustment, action).  This is related to the extensive training in the nuclear industry and the familiar do, check and adjust cycle ingrained in all nuclear workers.

However, most strategic decisions are unique or have consequences that will only manifest in the long-term.  In such cases, one has to make the most sound decision possible then take the best shot. 

Executives Make Decisions in a Social Setting

Senior managers depend on others to implement decisions and achieve results.  Leadership (exaggerated confidence, emphasizing certain data and beliefs over others, consistency, fairness and trust) is indispensable to inspire subordinates and shape culture.  Quoting Jack Welch, “As a leader, your job is to steer and inspire.” (p. 146)  “Effective leadership . . . means being sincere to a higher purpose and may call for something less than complete transparency.” (p. 158)

How about your senior managers?  Do they tell the whole truth when they are trying to motivate the organization to achieve performance goals?  If not, how does that impact trust over the long term?  
    
The Role of Confidence and Overconfidence

There is a good discussion of the overuse of the term “overconfidence,” which has multiple meanings but whose meaning in a specific application is often undefined.  For example, overconfidence can refer to being too certain that our judgment is correct, believing we can perform better than warranted by the facts (absolute performance) or believing we can outperform others (relative performance). 

Rosenzweig conducted some internet research on overconfidence.  The most common use in the business press was to explain, after the fact, why something had gone wrong. (p. 85)  “When we charge people with overconfidence, we suggest that they contributed to their own demise.” (p. 87)  This sounds similar to the search for the “bad apple” after an incident occurs at a nuclear plant.

But confidence is required to achieve high performance.  “What's the best level of confidence?  An amount that inspires us to do our best, but not so much that we become complacent, or take success for granted, or otherwise neglect what it takes to achieve high performance.” (p. 95)

Other Useful Nuggets

There is a good extension of the discussion (introduced in Kahneman) of base rates and conditional probabilities including the full calculations from two of the conditional probability examples in Kahneman's Thinking, Fast and Slow (reviewed here).

The discussion on decision models notes that such models can be useful for overcoming common biases, analyzing large amounts of data and predicting elements of the future beyond our influence.  However, if we have direct influence, “Our task isn't to predict what will happen, but to make it happen.” (p. 189)

Other chapters cover decision making in a major corporate acquisition (focusing on bidding strategy) and in start-up businesses (focusing on a series of start-up decisions).

Our Perspective

Rosenzweig acknowledges that he is standing on the shoulders of Kahneman and other students of decision making.  But “An awareness of common errors and cognitive biases is only a start.” (p. 248)  The executive must consider the additional decision dimensions discussed above to properly frame his decision; in other words, he has to decide what he's deciding.

The direct applicability to nuclear safety culture may seem slight but we believe executives' values and beliefs, as expressed in the decisions they make over time, provide a powerful force on the shape and evolution of culture.  In other words, we choose to emphasize the transactional nature of leadership.  In contrast, Rosenzweig emphasizes its transformational nature: “At its core, however, leadership is not a series of discrete decisions, but calls for working through other people over long stretches of time.” (p. 164)  Effective leaders are good at both.

Of course, decision making and influence on culture are not the exclusive province of senior managers.  Think about your organization's middle managers—the department heads, program and project managers, and process owners.  How do they gauge their performance?  How open are they to new ideas and approaches?  How much confidence do they exhibit with respect to their own capabilities and the capabilities of those they influence? 

Bottom line, this is a useful book.  It's very readable, with many clear and engaging examples, and has the scent of academic rigor and insight; I would not be surprised if it achieves commercial success.


*  P. Rosenzweig, Left Brain, Right Stuff: How Leaders Make Winning Decisions (New York: Public Affairs, 2014).

**  In a nod to Lewis Carroll's Through the Looking-Glass, this situation is sometimes called “Red Queen competition [which] means that a company can run faster but fall further behind at the same time.” (p. 57)

Tuesday, January 21, 2014

Lessons Learned from “Lessons Learned”: The Evolution of Nuclear Power Safety after Accidents and Near-Accidents by Blandford and May

This publication appeared on a nuclear safety online discussion board.*  It is a high-level review of significant commercial nuclear industry incidents and the subsequent development and implementation of related lessons learned.  This post summarizes and evaluates the document, then focuses on its treatment of nuclear safety culture (SC). 

The authors cover Three Mile Island (1979), Chernobyl (1986), Le Blayais [France] plant flooding (1999), Davis-Besse (2002), U.S. Northeast Blackout (2003) and Fukushima-Daiichi (2011).  There is a summary of each incident followed by the major lessons learned, usually gleaned from official reports on the incident. 

Some lessons learned led to significant changes in the nuclear industry; others were incompletely implemented or simply ignored.  In the first category, the creation of INPO (Institute of Nuclear Power Operations) after TMI was a major change.**  On the other hand, lessons learned from Chernobyl were incompletely implemented, e.g., WANO (World Association of Nuclear Operators, a putative “global INPO”) was created but has no real authority over operators.  Fukushima lessons learned have focused on design, communication, accident response and regulatory deficiencies; implementation of any changes remains a work in progress.

The authors echo some concerns we have raised elsewhere on this blog.  For example, they note “the likelihood of a rare external event at some site at some time over the lifetime of a reactor is relatively high.” (p. 16)  And “the industry should look at a much higher probability of problems than is implied in the ‘once in a thousand years’ viewpoint.” (p. 26)  Such cautions are consistent with Taleb's and Dédale's warnings that we have discussed here and here.

The authors also say “Lessons can also be learned from successes.” (p. 3)  We agree.  That's why our recommendation that managers conduct periodic in-depth analyses of plant decisions includes decisions that had good outcomes, in addition to those with poor outcomes.

Arguably the most interesting item in the report is a table that shows deaths attributable to different types of electricity generation.  Death rates range from 161 (per TWh) for coal to 0.04 for nuclear.  Data comes from multiple sources and we made no effort to verify the analysis.***

On Safety Culture

The authors say “. . . a culture of safety must be adopted by all operating entities. For this to occur, the tangible benefits of a safety culture must become clear to operators.” (p. 2, repeated on p. 25)  And “The nuclear power industry has from the start been aware of the need for a strong and continued emphasis on the safety culture, . . .” (p. 24)  That's it for the direct mention of SC.

Such treatment gives SC inexcusably short shrift.  There were obvious, major SC issues at many of the plants the authors discuss.  At Chernobyl, the culture permitted, among other things, testing that violated the station's own safety procedures.  At Davis-Besse, the culture prioritized production over safety, a fact the authors note without acknowledging its SC significance.  The combination of TEPCO's management culture, which simply ignored inconvenient facts, and their regulator's “see no evil” culture helped turn a significant plant event at Fukushima into an abject disaster.

Our Perspective


It's not clear who the intended audience is for this document.  It was written by two professors under the aegis of the American Academy of Arts and Sciences, an organization that, among other things, “provides authoritative and nonpartisan policy advice to decision-makers in government, academia, and the private sector.”****  While it is a nice little history paper, I can't see it moving the dial in any public policy discussion.  The scholarship in this article is minimal; it presents scant analysis and no new insights.  Its international public policy suggestions are shallow and do not adequately recognize disparate, even oppositional, national interests.  Perhaps you could give it to non-nuclear folks who express interest in the unfavorable events that have occurred in the nuclear industry. 


*  E.D. Blandford and M.M. May, “Lessons Learned from “Lessons Learned”: The Evolution of Nuclear Power Safety after Accidents and Near-Accidents” (Cambridge, MA: American Academy of Arts and Sciences, 2012).  Thanks to Madalina Tronea for publicizing this article on the LinkedIn Nuclear Safety group discussion board.  Dr. Tronea is the group's founder/moderator.

**  This publication is a valentine for INPO and, to a lesser extent, the U.S. nuclear navy.  INPO is hailed as “extraordinarily effective” (p. 12) and “a well-balanced combination of transparency and privacy; . . .” (p. 25)

***  It is the only content that demonstrates original analysis by the authors.

****  American Academy of Arts and Sciences website (retrieved Jan. 20, 2014).

Thursday, January 9, 2014

Safety Culture Training Labs

Not a SC Training Lab
This post highlights a paper* Carlo Rusconi presented at the American Nuclear Society meeting last November.  He proposes using “training labs” to develop improved safety culture (SC) through team-building exercises (e.g., role play) and table-top simulations.  Team building increases (a) participants' awareness of group dynamics (e.g., feedback loops and how a group develops shared beliefs) and (b) sensitivity to the viewpoints of others, viewpoints that may differ greatly based on individual experience and expectations.  The simulations pose evolving scenarios that participants must analyze and address as a team.  A key rationale for this type of training is that “team interactions, if properly developed and trained, have the capacity to counter-balance individual errors.” (p. 2155)

Rusconi's recognition of goal conflict in organizations, the weakness of traditional methods (e.g., PRA) for anticipating human reactions to emergent issues, the need to recognize different perspectives on the same problem and the value of simulation in training are all familiar themes here at Safetymatters.

Our Perspective

Rusconi's work also reminds us how seldom new approaches for addressing SC concepts, issues, training and management appear in the nuclear industry.  Per Rusconi, “One of the most common causes of incidents and accidents in the industrial sector is the presence of hidden or clear conflicts in the organization. These conflicts can be horizontal, in departments or in working teams, or vertical, between managers and workers.” (p. 2156)  However, we see scant evidence that the nuclear industry is willing to acknowledge and address the influence of goal conflicts.

Rusconi focuses on training to help recognize and overcome conflicts.  This is good, but one needs to clearly identify how training would accomplish this and what its limitations are.  For example, if raising safety issues or advocating conservative responses can hurt one's chances for promotion, is training going to be an effective remedy?  The truth is there are some conflicts which are implicit (but very real) and hard to mitigate.  Such conflicts can arise from corporate goals, resource allocation policies and performance-based executive compensation schemes.  Some of these conflicts originate high in the organization and are not really amenable to training per se.

Both Rusconi's approach and our NuclearSafetySim tool attempt to stimulate discussion of conflicts and develop rules for resolving them.  Creating a measurable framework tied to the actual decisions made by the organization is critical to dealing with conflicts.  Part of this is creating measures for how well decisions embody SC, as done in NuclearSafetySim.

Perhaps this means the only real answer for high risk industries is to have agreement on standards for safety decisions.  This doesn't mean some highly regimented PRA-type approach.  It is more of a peer type process incorporating scales for safety significance, decision quality, etc.  This should be the focus of the site safety review committees and third-party review teams.  And the process should look at samples of all decisions not just those that result in a problem and wind up in the corrective action program (CAP).
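To make the idea of decision scales concrete, here is a minimal sketch of how a review committee might score a sample of decisions against simple rating scales.  The scale names, weights and five-point rating range are our own illustrative assumptions, not the actual NuclearSafetySim design.

```python
# Hypothetical sketch: scoring a sample of operational decisions against
# simple safety-culture scales, in the spirit of the peer-review process
# described above.  Scale names and the 1-5 rating range are illustrative.

from statistics import mean

# Each decision is rated 1 (poor) to 5 (strong) on each scale.
SCALES = ("safety_significance", "decision_quality", "conservatism")

def score_decision(ratings):
    """Average a single decision's ratings across all scales."""
    return mean(ratings[s] for s in SCALES)

def review_sample(decisions):
    """Summarize a sample of decisions -- good outcomes and bad alike."""
    scores = [score_decision(d) for d in decisions]
    return {"n": len(scores), "avg": round(mean(scores), 2), "min": min(scores)}

# A sample would include routine decisions, not just ones that ended up
# in the corrective action program.
sample = [
    {"safety_significance": 4, "decision_quality": 5, "conservatism": 4},
    {"safety_significance": 2, "decision_quality": 3, "conservatism": 2},
]
print(review_sample(sample))
```

The point of even a toy framework like this is that it forces explicit, comparable judgments about decision quality, which is exactly the transparency discussed next.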

Nuclear managers would probably be very reluctant to embrace this much transparency.  A benign view is that they are simply too comfortable believing that the "right" people will do the "right" thing.  A less charitable view is that their lack of interest in recognizing goal conflicts and other systemic issues is a way to deny that such issues exist.

Instead of interest in bigger-picture “Why?” questions we see continued introspective efforts to refine existing methods, e.g., cause analysis.  At its best, cause analysis and any resultant interventions can prevent the same problem from recurring.  At its worst, cause analysis looks for a bad component to redesign or a “bad apple” to blame, train, oversee and/or discipline.

We hate to start the new year wearing our cranky pants but Dr. Rusconi, ourselves and a cadre of other SC analysts are all advocating some of the same things.  Where is any industry support, dialogue, or interaction?  Are these ideas not robust?  Are there better alternatives?  It is difficult to understand the lack of engagement on big-picture questions by the industry and the regulator.


*  C. Rusconi, “Training labs: a way for improving Safety Culture,” Transactions of the American Nuclear Society, Vol. 109, Washington, D.C., Nov. 10–14, 2013, pp. 2155-57.  This paper reflects a continuation of Dr. Rusconi's earlier work which we posted on last June 26, 2013.