Thursday, October 20, 2016

Korean Perspective on Nuclear Safety Culture

We recently read two journal articles that present the Korean perspective on nuclear safety culture (NSC), one from a nuclear research institute and the other from the Korean nuclear regulator.  Selected highlights from each article are presented below, followed by our perspective on the articles’ value.

Warning:  Although the articles are in English, they were obviously translated from Korean, probably by a computer, and the translation is uneven.  However, the topics and references (including IAEA, NRC, J. Reason and Schein) will be familiar to you, so with a little effort you can usually figure out what the authors are saying.

Korean NSC Situation and Issues*

The author is with the Korea Atomic Energy Research Institute.  He begins by describing a challenge facing the nuclear industry: avoiding complacency (because plant performance has been good) when the actual diffusion of NSC attributes among management and workers is unknown and major incidents, e.g., Fukushima, point to deficient NSC as a major contributor.  One consequence of this situation is that increased regulatory intervention in licensee NSC is a clear trend. (pp. 249, 254)

However, different countries have differing positions on how to intervene in or support NSC because (1) the objectification of an essentially qualitative factor is necessarily limited and (2) they fear diluting the licensee’s NSC responsibilities and/or causing unintended consequences. 

The U.S. NRC’s NSC history is summarized, including how NSC is addressed in the Reactor Oversight Process and relevant supplemental inspection procedures.  The author’s perception is “If safety culture vulnerability is judged to seriously affect the safety of a nuclear power plant, NRC orders the suspension of its operation, based on the judgment.” (p. 254)  In addition, the NRC has “developed and has been applying a licensee safety culture oversight program, based on site-stationed inspector's observation and assessment . . .” (ibid.)

The perception that the NRC would shut down a plant over NSC issues is a bit of a stretch.  While the agency is happy to pile on over NSC shortcomings when a plant has technical problems (see our June 16, 2016 post on ANO) it has also wrapped itself in knots to rationalize the acceptability of plant NSC in other cases (see our Jan. 30, 2013 post on Palisades).   

There is a passable discussion of the methods available for assessing NSC, ranging from observing top management leadership behavior to taking advantage of “Big data” approaches.  However, the author cautions against reliance on numeric indicators; they can have undesirable consequences.  He observes that Europe has a minimal number of NSC regulations while the U.S. has none.  He closes with recommendations for the Korean nuclear industry.

Regulatory Oversight of NSC**

The authors are with the Korea Institute of Nuclear Safety, the nuclear regulatory agency.  The article covers their philosophy and methods for regulating NSC.  It begins with a list of challenges associated with NSC regulatory oversight and a brief review of international efforts to date.  Regulatory approaches include monitoring onsite vulnerabilities (U.S.), performing standard reviews of licensee NSC evaluations (Canada, Korea) and using NSC indicators (Germany, Finland) although the authors note such indicators do not directly measure NSC. (pp. 267-68)

In the Korean view, the regulator should perform independent oversight but not directly intervene in licensee activities.  NSC assessment is separate and different from compliance-based inspection, requires effective two-way communications (i.e., a common language) and aims at creating long-term continuous improvement. (pp. 266-67)  Their NSC model uses a value-neutral definition of NSC (as opposed to strong vs. weak); incorporates Schein’s three levels; includes individuals, the organization and leaders; and emphasizes the characteristics shared by organization members.  It includes elements from IAEA GSR Part 2, the NRC, J. Reason's reporting culture, DOE, INPO, just culture and Korea-specific concerns about economics trumping safety. (pp. 268-69)***

In the detailed description of the model, we were pleased to see “Incentives, sanctions, and rewards correspond to safety competency of individuals.”  (p. 270)  An organization’s reward system has always been a hot-button issue for us; all nuclear organizations claim to value NSC, but few are willing to pay for achieving or maintaining it.  Click the “Compensation” label to see all our posts on this topic.

The article presents a summary of an exercise to validate the model, i.e., link model components to actual plant safety performance.  The usual high-level mumbo-jumbo is not helped by the rough spots in the translation.  Inspection results, outage rates, scrams, incidents, unplanned shutdowns and radiation doses were claimed to be appropriately correlated with NSC model components.

There should be no surprise that the model was validated.  Getting a “right” answer is obviously good for the regulator.  We routinely express some skepticism over studies that validate models when we can’t see the actual data and we don’t know if the analysis was independently reviewed by anyone who actually understands or cares about the subject matter.
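The “validation” described above boils down to correlation checks between model components and performance indicators.  A minimal sketch (with entirely invented data; the field names and numbers are ours, not the authors’) of what linking one NSC model component to one indicator looks like:

```python
# Hypothetical illustration of correlation-based model "validation."
# All data below are invented for the example.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented data: per-plant "reporting culture" survey score vs. annual scrams.
culture_score = [3.1, 3.8, 2.5, 4.0, 3.3]
scrams = [4, 2, 6, 1, 3]

# A strongly negative r is the kind of result such studies present as
# evidence that the model component tracks actual safety performance.
print(round(pearson_r(culture_score, scrams), 2))
```

Of course, with a handful of plants and many candidate indicators, some strong correlations will appear by chance, which is exactly why we want to see the underlying data and an independent review.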

During the pilot study, several areas for improvement in Korean NPPs' safety culture were identified.  The approach has not been permanently adopted.

Our Perspective

These articles are worth reading just to get a different, i.e., non-U.S., perspective on regulatory evaluation of (and possible intervention in) licensee SC.  It’s also worthwhile to get a non-U.S. perspective on what they think is going on in U.S. nuclear regulatory space.  Their information sources probably include a June 2015 NRC presentation to Korean regulators referenced in our Aug. 24, 2015 post.  

It’s interesting that Europe has some regulations that focus on ongoing communications with the licensees.  In contrast, the U.S. has no regulations but an approach that can stretch like a cheap blanket to cover all possible licensee situations.


We haven’t posted for a while.  It’s not because we’ve lost interest but because there hasn’t been much worth reporting.  The big nuclear news in the U.S. is not about NSC; rather it’s about plants being scheduled for shutdown because of their economics.  International information sources have not been offering up much either.  For example, the LinkedIn NSC forum has pretty much dried up except for recycled observations and consultants’ self-serving white papers.

*  Y-H Lee, “Current Status and Issues of Nuclear Safety Culture,” Journal of the Ergonomics Society of Korea vol. 35 no. 4 (Aug 2016) 247-261.

**  YS Choi, SJ Jung and YH Chung, “Regulatory Oversight of Nuclear Safety Culture and the Validation Study on the Oversight Model Components,” Journal of the Ergonomics Society of Korea vol. 35 no. 4 (Aug 2016) 263-275.

***  Korea has had problems, mentioned in both articles, caused by deficient NSC.  Also see our Aug. 7, 2013 post for related information.

Monday, August 1, 2016

Nuclear Safety Culture Self-Assessment Guidance from IAEA

The International Atomic Energy Agency (IAEA) recently published guidance on performing safety culture (SC) self-assessments (SCSAs).  This post summarizes the report* and offers our perspective on its usefulness.

The Introduction presents some general background on SC and specific considerations to keep in mind when conducting an SCSA, including a “conscious effort to think in terms of the human system (the complex, dynamic interaction of individuals and teams within an organization) rather than the technological system.” (p. 2)  Importantly, an SCSA is not based on technical skills or nuclear technology, nor is it focused on immediate corrective actions for observed problems.

Section 2 provides additional information on SC, starting with the basics, e.g., culture is one way of explaining why things happen in organizations.  The familiar iceberg model is presented, with the observable artifacts above the surface and the national, ethnic and religious values that underlie culture way below the waterline.  Culture is robust (it cannot be changed rapidly) and complicated (subcultures exist).  So far, so good.

Then things start to go off the rails.  The report reminds us that the IAEA SC framework** has five SC characteristics but then the report introduces, with no transition, a four-element model for envisioning SC; naturally, the model elements are different from the five SC characteristics previously mentioned.  The report continues with a discussion of IAEA’s notion of “shared space,” the boundary area where working relationships develop between the individual and other organizational members.  We won’t mince words: the four-component model and “shared space” are a distraction and zero value-added.

Section 3 explores the characteristics of SCSAs.  Initially, an SCSA focuses on developing an accurate description of the current culture, the “what is.”  It then moves on to evaluating a SC’s strengths and weaknesses by comparing “what is” with “what should be.”  An SCSA is different from a typical audit in numerous ways, including the need for specialized training, a focus on organizational dynamics and an understanding of the complex interplay of multicultural dimensions of the organization.

SCSAs require recognition of the biases present when a culture examines itself.  Coupling this observation with an earlier statement that effective SCSAs require understanding of the relevant social sciences, the report recommends obtaining qualified external support personnel (at least for the initial efforts at conducting SCSAs).  In addition, there are many risks (the report comes up with 17) associated with performing an SCSA that have to be managed.  All of these aspects are important and need to be addressed.

Section 4 describes the steps in performing an SCSA.  The figure that purportedly shows all the steps is unapproachable and unintelligible.  However, the steps themselves—prepare the organization, the team and the SCSA plan; conduct the pre-launch and the SCSA; analyze the results; summarize and communicate the findings; develop actions; capture lessons learned; and conduct a follow-up—are reasonable.

The description of SCSA team composition, competences and responsibilities is also reasonable.  Having a team member with a behavioral science background is highly desirable, but such expertise is probably not available internally except in the largest organizations.

Section 5 covers SCSA methods: document review, questionnaires, observations, focus groups and interviews.  For each method, the intent, limitations and risks, and intended uses are discussed.  Each method requires specific skills.  The purpose is to develop an overall view of the culture.  Because of the limitations of individual methods, multiple (and preferably all) methods should be used.  Overall, this section is a pretty good high-level description of the different investigative methods.

Section 6 describes how to perform an integrated analysis of the information gathered.  This involves working iteratively with parallel information sets.  There is a lengthy discussion of how to develop cultural themes from the different data sources.  Themes are combined into an overall descriptive view of the culture which can then be compared to the IAEA SC framework (a normative view) to identify relative strengths and weaknesses, and improvement opportunities.

Section 7 describes approaches to communicating the findings and transitioning into action.  It covers preparing the SCSA report, communicating the results to management and the larger organization, possible barriers to implementing improvement initiatives and maintaining continuous improvement in an organization’s SC.

The report has an extensive set of appendices that illustrate how an SCSA can be conducted.  Appendix I is a laundry list of potential areas for inquiry.  Appendices II-VIII present a case study using all the SCSA methods in Section 5, followed by some example overall conclusions.  Appendix IX is an outline of an SCSA final report.  The guidance on using the SCSA methods is acceptably complete and clear.

A 28-page Annex (including 8 pages of references) describes the social science underlying the recommended methodology for performing SCSAs.  It covers too much ground to be summarized here.  The writing is uneven, with some topics presented in a fluid style (probably a single voice) while others, especially those referring to many different sources, are more ragged.  Because of the extensive use of in-line references, the reader can easily identify source materials.   

Our Perspective

There’s good news and bad news in this Safety Report.  The good news is that when IAEA collates and organizes the work of others, e.g., academics, SC practitioners or industry best practices, IAEA can create a readable, reasonably complete reference on a subject, in this case, SCSA.

The bad news is that when IAEA tries to add new content with their own concepts, constructs, figures and such, they fail to add any value.  In fact, they detract from the total package.  It seems to never have occurred to the IAEA apparatchiks to circulate their ideas for new content for substantive review and comment.

*  International Atomic Energy Agency, “Performing Safety Culture Self-assessments,” Safety Reports Series no. 83 (Vienna: IAEA, 2016).  Thanks to Madalina Tronea for publicizing this report.  Dr. Tronea is the founder/moderator of the LinkedIn Nuclear Safety Culture discussion group.

**  Interestingly, the IAEA SC framework (SC definition, key characteristics and attributes) is mentioned without much discussion; the reader is referred to other IAEA documents for more details.  That’s OK.  For purposes of SCSA, it’s only important that the organization, including the SCSA team, agree on a SC definition and its associated characteristics and attributes.  This will give everyone involved a shared normative view for linking the SCSA findings to a picture of what the SC should look like.

Thursday, June 16, 2016

Nuclear Safety Culture at ANO—the NRC Weighs In

Arkansas Nuclear One (credit: Edibobb)
On June 25, 2015 we posted about Arkansas Nuclear One’s (ANO) performance problems (a stator drop, inadequate flood protection and unplanned scrams) and the Nuclear Regulatory Commission’s (NRC's) reaction.  The NRC assigned ANO to column 4 of the Action Matrix where it receives the highest level of oversight for an operating plant.  As part of this increased oversight, the NRC conducted a comprehensive inspection of ANO performance, programs and processes.  A lengthy inspection report* was recently issued.

According to the NRC press release** the inspection team identified the following major issues:

“Resource reductions and leadership behaviors were the most significant causes for ANO’s declining performance. . . . ANO management did not reduce workloads through efficiencies or the elimination of unnecessary work, . . . Leaders . . . did not address expanding work backlogs***. . . . An unexpected increase in employee attrition between 2012 and 2014 caused a loss in experienced personnel, . . . Since 2007, the reduced resources created a number of changes that slowly began to impact equipment reliability.  The Entergy fleet reduced preventive maintenance and extended the time between some maintenance activities.”

The press release goes on to list numerous ANO corrective actions and NRC observations that suggest the potential for improved plant performance.

What About ANO’s Safety Culture?

The press release also mentions that the inspection team evaluated the adequacy of a 2015 Third Party Nuclear Safety Culture Assessment (TPNSCA) conducted at ANO.  The press release gives short shrift to the key role a weak safety culture (SC) played in creating ANO’s problems in the first place and the extensive SC questions raised and diagnostics performed by the NRC inspection team.

Last June, based on NRC and ANO meeting presentations, we concluded “the ANO culture endorses a ‘blame the contractor’ attitude, accepts incomplete investigations into actual events and potential problems, and is content to let the NRC point out problems for them.”  These are serious deficiencies.  Do the same or similar problems appear in the inspection report?  To answer that question, we need to dig into the details of the 243-page report.

The Cover Letter

Top-level SC problems are included in the NRC cover letter which says “The inspection team identified what it considered to be missed opportunities for ANO to have promptly initiated performance improvements since being placed in Column 4.  More specifically, ANO: 1) was slow to implement corrective actions to address the findings from the Corrective Action Program cause evaluation and the Third Party Nuclear Safety Culture Assessment; 2) did not perform an evaluation of the causes for safety culture problems; . . .” (letter, p. 2)

Executive Summary

The report's Executive Summary says “The Third Party Nuclear Safety Culture Assessment identified that ANO personnel tolerated, and at times normalized, degraded conditions.”  Expanding on the missed opportunities comment in the cover letter, “the NRC team’s independent safety culture evaluation noted limited improvement in safety culture since the completion of ANO’s independent Third Party Nuclear Safety Culture Assessment.” (report p. 5)  “ANO did not create a specific improvement plan to address the findings of the safety culture assessments, choosing to address selected safety culture attributes that were associated with root cause evaluations rather than treating the findings in the context of a separate problem area.  By not performing a cause evaluation for safety culture, ANO management missed the opportunity to address the full scope of safety culture weaknesses.” (pp. 5-6)

Review of ANO Recovery Plan 

The NRC’s critique of ANO’s Recovery Plan included “The NRC team questioned the recovery team’s decision not to perform causal evaluations of the PAs [Problem Areas].  In response, ANO performed apparent cause evaluations (ACEs) or gap analyses for each PA.  The NRC team questioned the recovery team’s decision not to perform causal evaluations for the safety culture attributes identified in [a 2014] . . . safety culture survey, the TPNSCA, and the RCEs [Root Cause Evaluations].  The team also questioned the recovery team’s decision not to treat safety culture as a separate problem area.” (p. 21)

This is an example where the NRC was still identifying ANO’s overarching problems for the plant staff.

Review of RCEs for Fundamental Problem Areas

“ANO’s Vendor Oversight RCE identified weak implementation of administrative controls and placing undue confidence in vendor services as common cause failures. However, ANO did not assess the underlying safety culture aspects.” (p. 110, emphasis added)

This is not “blame the vendor” but is a different serious problem, viz., an over-reliance on vendor activities to protect the customer.  (This problem is not unique to ANO; it also might exist at the Waste Isolation Pilot Plant.  See our May 3, 2016 post for details.)

Inspection Report Chapter on SC

The NRC team conducted its own assessment of ANO’s SC. The NRC team interviewed personnel at all levels, conducted focus group discussions, performed behavioral observations, reviewed documents and relevant plant programs, and evaluated plant management meetings.  Overall, they assessed all ten SC traits using the full set of SC attributes contained in NRC documentation.  For each trait, the report includes its attributes, inspection team observations and findings, and relevant ANO corrective actions.

The team also reviewed seven RCEs and concluded ANO addressed the major SC attributes identified in each RCE.  However, “The NRC team noted that ANO identified that some safety culture attributes were contributors to several of the RCE problem statements, but ANO did not consider the collective significance.” (p. 184)

ANO took the hint.  “In response to the NRC team’s concerns, ANO performed a common cause analysis of all of the safety culture attributes that were identified in the recovery RCEs in order to assess the collective significance and causes.” (p. 185)  ANO developed a SC Area Action Plan (AAP) and the NRC concluded “The corrective actions identified in the NSC AAP were comprehensive and appropriate to address the causes for safety culture weaknesses.” (p. 186)

“The NRC team’s graded safety culture assessment independently confirmed the results from the TPNSCA.” (p. 188)

“The NRC team was concerned that the SCLT’s [Safety Culture Leadership Team, senior managers] conclusion that ANO’s safety culture was “adequate” in August 2015 did not appropriately reflect the data provided by, or the recommendations from, the NSCMP [Nuclear Safety Culture Monitoring Panel, mid-level personnel].  This SCLT conclusion did not reflect the declining condition with respect to safety culture and indicated a lack of awareness that improvements in safety culture at ANO were needed.”  The SCLT eventually came around and in December 2015 declared that ANO’s SC was not acceptable. (p. 192)

Our Perspective

The NRC is optimistic that ANO has correctly identified the root causes of its performance problems and has undertaken corrective actions that will ultimately prove effective.  We hope so but we’ll go with “trust but verify” on this one.  ANO still exhibits problems with incomplete analyses and leaning on the NRC to identify systemic deficiencies.

The NRC team took a good look at ANO's SC.  Quite frankly, their effort was more comprehensive than we expected.  They used an acceptable methodology for their SC assessment.  The fact that their assessment findings were consistent with the TPNSCA is not surprising.  SC evaluation is a robust social science activity and qualified SC evaluators using similar techniques should obtain generally comparable results.

We believe the NRC’s SC professionals are qualified and competent but probably encouraged to support the overall inspection findings.  The elephant in the room is that SC is a policy, not a regulation.  Would the NRC keep a plant in column 4 based solely on their belief that the plant SC is deficient?  Look at the contortions the agency performed at Palisades as that plant’s SC somehow went from weak, with constant problems, to “improving” and, we inferred, acceptable.  (See our Jan. 30, 2013 post for details.)

There may have been a bit of similar magical thinking at ANO.  In the inspection report, every SC trait had examples of shortcomings but also had “appropriate” corrective actions to improve performance.****  How can this be when ANO (and Entergy) have been so slow to grasp the systemic nature of their SC problems?

Let’s close on a different note.  Earlier this year ANO named a full-time SC manager, a person whose background is in plant security.  On the surface, this is an “unfiltered” choice.  (See our March 10, 2016 post for a discussion of filtering in personnel decisions.)  He may be exactly the type of person ANO needs to make SC improvements happen.  We wish him well.

*  M. L. Dapas (NRC) to J. Browning (ANO), “Arkansas Nuclear One – NRC Supplemental Inspection Report 05000313/2016007 and 05000368/2016007” (June 9, 2016).  ADAMS ML16161B279.

**  V. Dricks, Press Release, “NRC Issues Comprehensive Inspection Report on Arkansas Nuclear One” (June 13, 2016).

***  We have often noted that large backlogs, especially of safety-related work, are an artifact of a weak SC.

****  One trait was judged to have no significant issues so corrective action was not needed.

Tuesday, June 7, 2016

The Criminalization of Safety (Part 3)

Our Perspective

The facts and circumstances of the events described in Table 1 in Part 1 point to a common driver - the collision of business and safety priorities, with safety being compromised.  Culture is inferred as the “cause” in several of the events but with little amplification or specifics.[1]  The compromises in some cases were intentional, others a product of a more complex rationalization.  The events have been accompanied by increased criminal prosecutions with varied success. 

We think it is fair to say that so far, criminalization of safety performance does not appear to be an effective remedy.  Statutory limitations and proof issues are significant obstacles with no easy solution.  The reality is that criminalization is at its core a “disincentive”.  To be effective it would have to deter actions or decisions that are inconsistent with safety without creating a minefield of culpability.  It is also a blunt instrument, requiring rather egregious behavior to rise to the level of criminality.  Its best use is probably as an ultimate boundary, to deter intentional misconduct without becoming an unintended trap for bad judgment or inadequate performance.  In another vein, criminalization would also seem incompatible with the concept of a “just culture” other than for situations involving intentional misconduct or gross negligence.

Whether effective or not, criminalization reflects the urgency felt by government authorities to constrain excessive risk taking, intentional or not, and enhance oversight.  It is increasingly clear that current regulatory approaches are missing the mark.  All of the events catalogued in Table 1 occurred in industries that are subject to detailed safety and environmental regulation.  After-the-fact assessments highlight missed opportunities for more assertive regulatory intervention, and in the Flint cases criminal charges are actually being applied to regulators.  The Fukushima event precipitated a complete overhaul of the nuclear regulatory structure in Japan, still a work in progress.  Post hoc punishments, no matter how severe, are not a substitute.

Nuclear Regulation Initiatives

Looking specifically at nuclear regulation in the U.S., we believe several specific reforms should be considered.  It is always difficult to reform without the impetus of a major safety event, but we see these as actions that would appear obvious in a post-event assessment if there were ever an “O-ring” moment in the nuclear industry.[2]

1. The NRC should include the safety management system in its regulatory activities.

The NRC has effectively constructed a cordon sanitaire around safety management by decreeing that “management” is beyond the scope of regulation.  The NRC relies on the fact that licensees bear the primary responsibility for safety and the NRC should not intrude into that role.  If one contemplates the trend of recent events scrutinizing the performance of regulators following safety events, this legalistic “defense” may not fare well in a situation where more intrusive regulation could have made the difference.

The NRC does monitor “safety culture” and often requires licensees to address weaknesses in culture following performance issues.  In essence safety culture has become an anodyne for avoiding direct confrontation of safety management issues.  Cynically one could say it is the ultimate conspiracy - where regulators and “stakeholders” come together to accept something that is non-contentious and conveniently abstract to prevent a necessary but unwanted (apparently by both sides) intrusion into safety management.

As readers of this blog know, our unyielding focus has been on the role of the complex socio-technical system that functions within a nuclear organization to operate nuclear plants effectively and safely.  This management system includes many drivers, variables, feedbacks, culture, and time delays in its processes, not all of which are explicit or linear.  The outputs of the system are the actions and decisions that ultimately produce tangible outcomes for production and safety.  Thus it is a safety system and a legitimate and necessary area for regulation.

NRC review of safety management need not focus on traditional management issues which would remain the province of the licensee.  So organizational structure, personnel decisions, etc. need not be considered.[3]  But here we should heed the view of Daniel Kahneman where he suggests we think of organizations as “factories for producing decisions” and therefore, think of decisions as a product.  (See our Nov. 4, 2011 post, A Factory for Producing Decisions.)  Decisions are in fact the key product of the safety management system.  Regulatory focus on how the management system functions and the decisions it produces could be an effective and proactive approach.

We suggest two areas of the management system that could be addressed as a first priority: (1) Increased transparency of how the management system produces specific safety decisions including the capture of objective data on each such decision, and (2) review of management compensation plans to minimize the potential for incentives to promote excessive risk taking in operations.

2. The NRC should require greater transparency in licensee management decisions with potential safety impacts.

Managing nuclear operations involves a continuum of decisions balancing a variety of factors including production and safety.  These decisions may occur with individuals or with larger groups in meetings or other forums.  Some may involve multiple reviews and concurrences.  But in general the details of decision making, i.e., how the sausage is made, are rarely captured in detail during the process or preserved for later assessment.[4]  Typically only decisions that happen to yield a bad outcome (e.g., prompt the issuance of an LER or similar) become subject to more intensive review and post mortem, or decisions on actions that require specific, advance regulatory approval via an SER or equivalent.[5]

Transparency is key.  Some say the true test of ethics is what people do when no one is looking.  Well, the converse of that may also be true: do people behave better when they know oversight is or could be occurring?  We think a lot of the NRC’s regulatory scheme is already built on this premise, relying as it does on auditing licensee activities and work products.

Thinking back to the Davis Besse example, the criminal prosecutions of both the corporate entity and individuals were limited to providing false or incomplete information to the NRC.  There was no attempt to charge on the basis of the actual decisions to propose, advocate for, and attempt to justify, that the plant could continue to operate beyond the NRC’s specified date for corrective actions.  The case made by First Energy was questionable as presented to the NRC and simply unjustified when accounting for the real facts behind their vessel head inspections.

Transparency would be served by documenting and preserving the decision process on safety significant issues.  These data might include the safety significance and applicable criteria, the potential impact on business performance (plant output, cost, schedule, etc.), the alternatives considered, the participants and their inputs to the decision-making process, and how a final decision was reached.  These are the specifics that are so hard or impossible to reproduce after the fact.[6]  The not unexpected result: blaming someone or something but not gaining insight into how the management system failed.
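To make the proposal concrete, here is a minimal sketch of what such a decision record might contain.  The field names and the Davis Besse-flavored example values are our invention, not any NRC or licensee format:

```python
# Hypothetical decision-record structure for safety-significant decisions.
# Field names and example content are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class SafetyDecisionRecord:
    issue: str                # the safety-significant issue being decided
    safety_significance: str  # applicable criteria and significance ranking
    business_impact: str      # plant output, cost, schedule implications
    alternatives: List[str]   # options actually on the table
    participants: List[str]   # who weighed in, and in what role
    rationale: str            # how the balance between factors was struck
    decision: str             # what was finally decided

record = SafetyDecisionRecord(
    issue="Continue operation past specified inspection date?",
    safety_significance="Potential vessel head degradation; high significance",
    business_impact="Early outage means replacement power costs",
    alternatives=["Shut down now for inspection", "Operate to scheduled outage"],
    participants=["Plant manager", "Engineering", "Licensing"],
    rationale="Existing inspection data judged adequate for continued operation",
    decision="Operate to scheduled outage",
)
print(record.decision)
```

Even a simple record like this, created at the time of the decision, would give an NRC auditor (or a safety committee) something concrete to review instead of after-the-fact reconstructions.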

This approach would provide an opportunity for the NRC to audit decisions on a routine basis.  Licensee self assessment would also be served through safety committee review and other oversight including INPO.  Knowing that decisions will be subject to such scrutiny also can promote careful balancing of factors in safety decisions and serve to articulate how those balances are achieved and safety is served.  Having such tangible information shared throughout the organization could be the strongest way to reinforce the desired safety culture.

3. As part of its regulation of the safety management system, the NRC should restrict incentive compensation for nuclear management that is based on meeting business goals.

We started this series of posts focusing on criminalization of safety.  One of the arguments for more aggressive criminalization is essentially to offset the powerful pull of business-based incentives with the fear of criminal sanctions.  This has proved elusive.  Similarly, attempting to balance business incentives with safety incentives is problematic.  The Transocean experience illustrates that quite vividly.[7]

Our survey several years ago of nuclear executive compensation indicated that (1) the amounts of compensation are very significant for the top nuclear executives, (2) the compensation is heavily dependent on each year’s performance, and (3) business performance measured by EPS is the key to compensation; safety performance is a minor contributor.  A corollary to the third point might be that in no case we could identify was safety performance a condition precedent or qualification for earning the business-based incentives. (See our July 9, 2010 post, Nuclear Management Compensation (Part 2)).  With 60-70% of total compensation at risk, executives can see their compensation, and that of the entire management team, impacted by as much as several million dollars in a year.  Can this type of compensation structure impact safety?  Intuition says it creates both risk and perception problems.  Virtually every significant safety event in Table 1 has reference to the undue influence of production priorities on safety.  The issue was directly raised in at least one nuclear organization[8] which revised its compensation system to avoid undermining safety culture. 

We believe a more effective approach is to minimize the business pressures in the first place.  There is a need for a regulatory policy that discourages or prohibits licensee organizations from utilizing significant incentives based on financial performance.  Such incentives invariably target production and budget goals, as these are fundamental to business success.  To the extent safety goals are included, they are a small factor or are based on metrics that do not reflect fundamental safety.  Assuring that safety is the highest priority is not subject to easily quantifiable and measurable metrics - it is judgmental and implicit in many actions and decisions taken on a day-to-day basis at all levels of the organization.  Organizations should pay nuclear management competitively and generously and make informed judgments about their overall performance.

Others have recognized the problem and taken similar steps to address it.  For example, in the aftermath of the 2008 financial crisis, the Federal Reserve Board has been doing some arm-twisting with U.S. financial services companies to adjust their executive compensation plans - and those plans are in fact being modified to cap bonuses associated with achieving performance goals. (See our April 25, 2013 post, Inhibiting Excessive Risk Taking by Executives.)

Nassim Taleb (of Black Swan fame) believes that bonuses provide an incentive to take risks.  He states, “The asymmetric nature of the bonus (an incentive for success without a corresponding disincentive for failure) causes hidden risks to accumulate in the financial system and become a catalyst for disaster.”  Now just substitute “nuclear operations” for “the financial system.”

Central to Taleb’s thesis is his belief that management has a large informational advantage over outside regulators and will always know more about the risks being taken within its operation. (See our Nov. 9, 2011 post, Ultimate Bonuses.)  Eliminating the force of incentives and providing greater transparency to safety management decisions could reduce risk and improve everybody’s insight into those risks deemed acceptable.


In industries outside the commercial nuclear space, criminal charges have been brought for bad outcomes that resulted, at least in part, from decisions that did not appropriately consider overall system safety (or, in the worst cases, simply ignored it).  Our suggestions are intended to reduce the probability of such events occurring in the nuclear industry.

[1] This raises the question of whether, any time business priorities trump safety, it is a case of deficient culture.  We have argued in other blog posts that sufficiently high business or political pressure can compromise even a very strong safety culture.  So reflexive resort to safety culture may be easy but not very helpful.
[2] Credit to Adam Steltzner, author of The Right Kind of Crazy, recounting his and other engineers’ roles in the design of the Mars rovers.  His reference is to the failure of the O-ring seals on the space shuttle Challenger.
[3] We do recognize that there are regulatory criteria for general organizational matters such as for the training and qualification of personnel. 
[4] In essence this creates a “safe harbor” for most safety judgments and to which the NRC is effectively blind.
[5] In the Davis Besse case, much of the “proof” relied on in the prosecutions of individuals was based on concurrence chains for key documents and NRC staff recollections of what was said in meetings.  There was no contemporaneous documentation of how First Energy made its threshold decision that postponing the outage was acceptable, who participated, and who made the ultimate decision.  Much was made of the fact that management was putting great pressure on maintaining schedule, but there was no way to establish how that might have directly affected decision making.
[6] Kahneman describes “hindsight bias”: hindsight is 20/20, and it supposedly shows what decision makers could (and should) have known and done instead of making the actual decisions that led to an unfavorable outcome, incident, accident or worse.  But when the past was the present, things may not have been so clear-cut.  See our Dec. 18, 2013 post, Thinking, Fast and Slow by Daniel Kahneman.
[7] Transocean, owner of the Deepwater Horizon oil rig, awarded millions of dollars in bonuses to its executives after “the best year in safety performance in our company’s history,” according to its annual report: “Notwithstanding the tragic loss of life in the Gulf of Mexico, we achieved an exemplary statistical safety record as measured by our total recordable incident rate and total potential severity rate.”  See our April 7, 2011 post for the original citation in Transocean’s annual report and further discussion.
[8] “The reward and recognition system is perceived to be heavily weighted toward production over safety.”  The reward system was revised “to ensure consistent health of NSC.”  See our July 29, 2010 post, NRC Decision on FPL (Part 2).

Tuesday, May 31, 2016

The Criminalization of Safety (Part 2)

Risky Business 

As we illustrated in Part 1 of this post, a new aspect of safety management risk is possible criminal liability for actions, or inactions, associated with events that had, or could have had, safety consequences.  While there has always been the potential for criminal liability, it has generally been directed at the corporate level rather than at individual employees.  Heretofore, “few executives have been on the hook, partly because it is tough for prosecutors to prove an individual had criminal intent in a corporate setting where decision-making is spread among many.” 1,2

The Justice Department has been making a new push to target individuals more frequently to hold them accountable for corporate malfeasance. Much of the criminal liability in recent years has been cropping up in industries other than nuclear, as illustrated in the summary table in Part 1.  The Deepwater Horizon drill rig explosion and the Massey Coal explosion at the Upper Big Branch mine have been leading examples.  More recently, the series of scandals involving automobile manufacturers is adding to the record.  And the Flint water contamination situation is also evolving rapidly.  We’ll discuss the significance of these cases and how they could impact the conduct of individuals responsible for safe nuclear operations and the role of regulation.  In particular, we’ll consider under what circumstances criminal liability may attach and whether the potential to be held criminally liable is an effective force in assuring compliant behaviors and, ultimately, safety. 

Who’s a Criminal?

The various cases are a mix of corporate and individual liability.  All three corporations involved in Deepwater pleaded guilty to various charges and paid very large fines.  In BP’s case, it pleaded guilty to felony manslaughter.  Manslaughter charges against individuals employed by BP were dropped prior to trial.  Individual liability was limited to violations of the Clean Water Act and obstruction of justice (misdemeanors).3

David Uhlmann, a professor at the University of Michigan Law School and former environmental-crimes prosecutor, stated, “The Justice Department always seeks to hold individuals accountable for corporate crime, but doing so in the Gulf oil spill meant charging individuals who had no control over the corporate culture that caused the spill.” 4

Other cases followed a similar pattern until Upper Big Branch.  Mostly lower level individuals were targeted; higher-ups were insulated from knowledge of or direct involvement in the specific event.  With Massey, prosecutors worked their way up the management chain all the way to the CEO.5  However, even where there were significant indications of the CEO driving a “production first” culture, the felonies he faced were based on securities fraud and making false statements.  Ultimately he was convicted of violating safety standards and will serve jail time.6

Fukushima will be another attempt to hold senior management accountable (for something termed “professional negligence”) but, as previously noted, the case is thought to be difficult.  The Attorney General in the Flint water cases promises more indictments and implies higher-ups will be charged.  It remains to be seen whether this targeting of individuals will prove to be a truer preventive measure than other remedies.

Proof of Criminal Behavior is Difficult

Ultimately the prospect of criminal prosecution is fraught with legal and practical obstacles.7  Current law does not provide a realistic platform for prosecution or sentencing.  Statutory provisions are often limited to misdemeanors.  Making applicable statutes “tougher,” as already proposed by a presidential candidate, is also problematic, as it risks over-criminalizing management actions which occur in a complex environment and involve many individuals.  Simple negligence is a problematic ground for criminal liability, which generally requires a showing of intent or recklessness.8  As noted in regard to the VW scandal, “…investigations are ongoing. Whether criminal prosecutions result may be a matter of balancing suspicion of criminal wrongdoing against the standards of proof required - and the track record of recent prosecutions.” 9

All of the recent cases involving corporations ended in guilty pleas - they did not go to trial, so the standard of proof was not tested. In the BP cases, the DOJ made quite a splash with its indictments of individuals but clearly overreached in its charging; the courts and juries quickly dismissed most cases and all felony charges.

Fukushima may be a bit of an oddity as the charges were mandated by a citizens’ panel.   The charge is “professional negligence,” which probably does not have a direct analog in U.S. law.  It does suggest that there will be scrutiny of the actual decisions made by executives that resulted in safety consequences.  In the Flint cases, there will be another attempt to review an actual safety decision.  An engineer of the Michigan Department of Environmental Quality is charged with “misconduct” in authorizing use of the Flint water plant “knowing” it was deficient.  Bears watching.

Competing Priorities and Culture Are Being Cited More Frequently 

Personnel are already in a difficult position when it comes to assuring safety. Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.  Those decisions are rarely obvious, may imply significant benefits or costs, and are subject to ex post critical review with all the benefits of time, hindsight, and no direct decision-making responsibility.  Thus the focus may shift from the decisions themselves to the culture that may have produced or rationalized them.

The Mine Safety and Health Administration report concluded that the [Upper Big Branch] disaster was “entirely preventable,” and was caused in part by a pattern of major safety problems and Massey’s efforts to conceal hazards from government inspectors, all of which “reflected a pervasive culture that valued production over safety.”  The Governor of West Virginia’s independent review also found that Massey had “made life difficult” for miners who tried to address safety and built “a culture in which wrongdoing became acceptable.”

As noted in the media, “the automotive industry is caught up in an emissions rigging scandal that exposes systematic cheating and an apparent culture of corrupt ethics.”  At VW, nine executives have been suspended so far, but blame has been focused on a small group of engineers for the misconduct, and VW contends that members of its management board did not know of the decade-long deception.  The idea that a few engineers are responsible “just doesn’t pass the laugh test,” said John German, a former official at the Environmental Protection Agency.  VW’s “management culture — confident, cutthroat and insular — is coming under scrutiny as potentially enabling the lawbreaking behavior.” 10  Mitsubishi Motors is also implicated, and investigations are being launched into industry peers – including Daimler and Peugeot – to assess the extent of the problem around the world.

Ineffective Regulation is Becoming a Focus 

Last, but perhaps the most intriguing, evolution in these cases is a new emphasis on the responsibility of the regulator when safety is compromised. There was an extensive and ongoing history of violations at the Upper Big Branch mine, many unresolved, which did not lead to more stringent enforcement measures by the Mine Safety and Health Administration (MSHA) - such as a shutdown of mine operations.  State of West Virginia investigators claimed that the U.S. Department of Labor and its MSHA were equally at fault for failing to act decisively after Massey was issued 515 citations for safety violations at the mine in 2009.  “…officials with the MSHA repeatedly defended their agency’s performance. They were quick to point to the fact that the Mine Safety Act places the duty for providing a safe workplace squarely on the shoulders of the employer, insisting that the operator is ultimately responsible for operating a safe mine.” 11

Similar concerns have arisen with regard to Fukushima where safety regulators have been perceived to lack independence from nuclear plant operators. And thinking back to Davis Besse, it seems that the NRC’s actions could have been more intrusive and proactive in determining the condition of the RPV head prior to allowing the inspections to be delayed.

With regard to Flint, we noted above that criminal (felony) charges have been brought against a state engineer for “misconduct in office” for authorizing use of the Flint plant.  In addition, he and a supervisor are also charged with misconduct in office for “willfully and knowingly misleading the federal Environmental Protection Agency…”   An expert in environmental crimes notes, “It’s extremely unusual and maybe unprecedented for state and local officials to be charged with criminal drinking water violations, . . .” 12

Whether these pending actions lead to a robust effort to hold regulators and their staff accountable is hard to know.  It bears watching, particularly the contention by MSHA and other regulatory agencies, including the NRC, that operators are primarily and ultimately responsible. In Part 3 we’ll share some thoughts on what other approaches might be effective.

1 P. Loftus, "Criminal Trials of Former Health-Care Executives Set to Begin," The Wall Street Journal (May 22, 2016).

2 The Davis Besse case is prototypical of the way cases were handled in the past.  The corporation pleaded guilty to making false statements and paid a big fine.  Lower level individuals were found guilty of similar charges.  In the Siemaszko trial, the court was quite ready to attribute to the defendant knowledge of the content of NRC communications, whether directly prepared by him or not, or acquiescence in materials drafted by others that misrepresented the condition of the RPV.  It also dismissed his contention that he lacked proper expertise.  The court found that he knew and had a motive - keeping the plant running.  There was testimony that higher management was the source of the operational pressure, but culpability did not extend beyond the individuals making the actual statements and submittals to the NRC.

3 Transocean Deepwater Inc. also admitted that members of its crew onboard the Deepwater Horizon, acting at the direction of BP’s Well Site Leaders were negligent in failing fully to investigate clear indications that the well was not secure and that oil and gas were flowing into the well.  Halliburton was the supplier of drilling cement to seal the outside of the drilling pipe.  Its guilty plea admitted destroying evidence of instructions to employees to “get rid of” simulation analyses of the event that failed to show that Halliburton’s recommendations to BP would have lowered the risk of a blowout.  [S. Mufson, "Halliburton to Plead Guilty to Destroying Evidence in BP Spill," The Washington Post (July 25, 2013).]  This was an attempt to show that a decision by BP to use fewer pipe centralizers was a serious error contributing to the accident.

4 A. Viswanatha, "U.S. Bid to Prosecute BP Staff in Gulf Oil Spill Falls Flat," The Wall Street Journal (Feb. 27, 2016).

5 Notably the lower level managers pleaded to charges and did not go to trial.  The acquittal of the CEO on felony level charges illustrates the challenges of proving these cases.

6 “Large punitive or compensating settlements, so the argument goes, act as an effective deterrent for mining companies, forcing them to improve their safety systems or face potentially debilitating fines. However, given the revelations about Massey and the several major US mining disasters that have taken place in the last ten years, it's impossible to argue that financial punishment has been a wholly effective scarecrow, especially when companies feel they can game the MSHA system.”  [C. Lo, "Upper Big Branch: the search for justice," (June 20, 2013).]

7 "To this point, research on corporate crime has been, for the most part, overlooked by mainstream criminology. In particular, corporate violations of safety regulations in the coal mining industry have yet to be studied within the field of criminology.”  [C. N. Stickeler,  "A Deadly Way of Doing Business: A Case Study of Corporate Crime in the Coal Mining Industry," University of South Florida (Jan. 2012).]

8 Recklessness has been defined as “carelessness which is in reckless disregard for the safety or lives of others, and is so great it appears to be a conscious violation of other people's rights to safety. It is more than simple inadvertence, but it is just shy of being intentionally evil.”

9 J. Ewing and G. Bowley, "The Engineering of Volkswagen’s Aggressive Ambition," The New York Times (Dec. 13, 2015).

10 Ibid.

11 The quote is from the case study and references the Governor’s investigation - McAteer, J. D., Beall, K., Beck, J. A., Jr., McGinley, P. C., Monforton, C., Roberts, D. C., Spence, B., & Weise, S. (2011). Upper Big Branch: The April 5, 2010, explosion: A Failure of Basic Coal Mine Safety Practices (Report to the Governor).

12 M. Davey and R. Perez-Pena, "Flint Water Crisis Yields First Criminal Charges," The New York Times (April 20, 2016).