Saturday, May 26, 2012

Most of Us Cheat—a Little

A recent Wall Street Journal essay* presented the author’s research into patterns of cheating.  He found that a few people are honest, a few are total liars, and most folks cheat a little.  Why?  “. . . the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society.”

This behavioral tendency can present a challenge to maintaining a strong safety culture.  Fortunately, the author found one type of intervention that decreased the incidence of lying: “. . . reminders of morality—right at the point where people are making a decision—appear to have an outsize effect on behavior.”  In other words, asking subjects to think about the Ten Commandments or the school honor code before starting the research task resulted in less cheating.  So did having people sign their insurance forms at the top, before reporting their annual mileage, rather than the bottom, after the fudging had already been done.  Preaching and teaching about safety culture have a role, but the focus should be on the point where safety-related decisions are made and actions occur.

I don’t want to oversell these findings.  Most of the research involved individual college students, not professionals working in large organizations with defined processes and built-in checks and balances.  But the findings do suggest that zero tolerance for certain behaviors has its place.  As the author concludes: “. . . although it is obviously important to pay attention to flagrant misbehaviors, it is probably even more important to discourage the small and more ubiquitous forms of dishonesty . . . This is especially true given what we know about the contagious nature of cheating and the way that small transgressions can grease the psychological skids to larger ones.”

*  D. Ariely, “Why We Lie,” Wall Street Journal online (May 26, 2012). 

Tuesday, May 22, 2012

The NRC Chairman, Acta Est Fabula

With today’s announcement the drama surrounding the Chairman of the NRC has played out to its foreseeable conclusion.  The merits of the Chairman’s leadership of the agency are beyond the scope of this blog, but there are a few aspects of his tenure that may be relevant to nuclear safety culture in high performing organizations, not to mention in high places.

First, we should note that we have previously blogged about speeches and papers (here, here and here) given by the Chairman wherein he emphasized the importance of safety culture to nuclear safety.  In general we applauded his emphasis on safety culture as being necessary to raise the attention level of the industry.  Over time, as the NRC’s focus became absorbed with the Safety Culture Policy Statement, we became less enamored with the Chairman’s satisfaction with achieving consensus among stakeholders as almost an end in itself.  The resultant policy statement, with its heavy tilt toward attitudes and values, seemed to lack the kind of coherence that a regulatory agency needs to establish inspectable results.  As Commissioner Apostolakis so cogently observed, “...we really care about what people do and maybe not why they do it….”

Continuing with that thought, and if the assertions made by the four other Commissioners are accurate, what the Chairman did as agency head seems to have included intimidation, lack of transparency, manipulation of resources, and other behaviors not on the safety culture list of traits.  It illustrates, again, how easy it is for organizational leaders to mouth the correct words about safety culture yet behave in a contradictory manner.  We strongly suspect that this is another situation where the gravitational force of conflicting priorities - in this case a political agenda - was sufficient to bend the boundary line between strong leadership and self-interest.

Thursday, May 17, 2012

NEI Safety Culture Initiative: A Good Start but Incomplete

The March 2012 NRC Regulatory Information Conference included a session on the NRC’s Safety Culture Policy Statement.  NRC personnel made most of the session presentations but there was one industry report on the NEI’s safety culture initiative.  The NEI presentation* included the figure shown below which we’ll assume represents industry’s current schematic for how a site’s safety culture should be assessed and maintained. 

The good news here is the central role of the site’s corrective action program (CAP).  The CAP is where identified issues get evaluated, prioritized and assigned; it is a major source for changes to the physical plant and plant procedures.  A strong safety culture is reflected in an efficient, effective CAP and vice versa.

Another positive aspect is the highlighted role of site management in responding to safety culture issues by implementing appropriate changes in site policies, programs, training, etc.

We also approve of presentation text that outlined industry’s objective to have “A repeatable, holistic approach for assessing safety culture on a continuing basis” and to use “Frequent evaluations [to] promote sensitivity to faint signals.”  

Opportunities for Improvement

There are some other factors, not shown in the figure or the text, that are also essential for establishing and maintaining a strong safety culture.  One of these is the site’s decision making process, or processes.  Is decision making consistently conservative, transparent, robust and fair?  How is goal conflict handled?  How about differences of opinion?  Are sensors in place to detect risk perception creep or normalization of deviance? 

Management commitment to safety is another factor.  Does management exercise leadership to reinforce safety culture and is management trusted by the organization?

A third set of factors establishes the context for decision making and culture.  What are corporate’s priorities?  What resources are available to the site?  Absent sufficient resources, the CAP and other mechanisms will assign work that can’t be accomplished, backlogs will grow and the organization will begin to wonder just how important safety is.  Finally, what are management’s performance objectives and incentive plan?

One may argue that the above “opportunities” are beyond the scope of the industry safety culture objective.  Well, yes and no.  While they may be beyond the scope of the specific presentation, we believe that nuclear safety culture can only be understood, and possibly influenced, by accepting a complete, dynamic model of ALL the factors that affect, and are affected by, safety culture.  Lack of a system view is like trying to drive a car with some of the controls missing—it will eventually run off the road.

*  J.E. Slider, Nuclear Energy Institute, “Status of the Industry’s Nuclear Safety Culture Initiative,” presented at the NRC Regulatory Information Conference (March 15, 2012).

Monday, May 14, 2012

NEA 2008-2011 Construction Experience Report: Not Much There for Safety Culture Aficionados

This month the Nuclear Energy Agency, a part of the Organisation for Economic Co-operation and Development, published a report on problems identified and lessons learned at nuclear plants during the construction phase.  The report focuses on three plants currently under construction and also includes incidents from a larger population of plants and brief reviews of other related studies.

The report identifies a litany of problems that have occurred during plant construction; it is of interest to us because it frequently mentions safety culture as something that needs to be emphasized to prevent such problems.  Unfortunately, there is not much usable guidance beyond platitudinous statements such as “Safety culture needs to be established prior to the start of authorized activities such as the construction phase, and it is applied to all participants (licensee, vendor, architect engineer, constructors, etc.)”, “Safety culture should be maintained at very high level from the beginning of the project” and, from a U.K. report, “. . . an understanding of nuclear safety culture during construction must be emphasized.”*

These should not be world-shaking insights for regulators (the intended audience for the report) or licensees.  On the other hand, the industry continues to have problems that should have been eliminated after the fiascos that occurred during the initial build-out of the nuclear fleet in the 1960s through 1980s; maybe it does need regular reminding of George Santayana’s aphorism: “Those who cannot remember the past are condemned to repeat it.” 

*  Committee on Nuclear Regulatory Activities, Nuclear Energy Agency, “First Construction Experience Synthesis Report 2008-2011,” NEA/CNRA/R(2012)2 (May 3, 2012), pp. 8, 16 and 41.

Wednesday, May 2, 2012

Conduct of the Science Enterprise and Effective Nuclear Safety Culture – A Reflection (Part 1)

(Ed. note: We have asked Bill Mullins to develop occasional posts for Safetymatters.  His posts will focus on, but not be limited to, the Hanford Waste Treatment Plant aka the Vit Plant.)

In a recent post the question was posed: “Can reality in the nuclear operating environment be similar (to the challenges of production pressures on scientists), or is nuclear somehow unique and different?”

In a prior post a Chief Nuclear Officer is quoted: “. . . the one issue is our corrective action program culture, our -- and it’s a culture that evolved over time. We looked at it more of a work driver, more of a -- you know, it’s a way to manage the system rather than . . . finding and correcting our performance deficiency.”

Another recent post describes the inherently multi-factor and non-linear character of what we’ve come to refer to as “Nuclear Safety Culture.”  Bob Cudlin observed: “We think there are a number of potential causes that are important to ensuring strong safety culture but are not receiving the explicit attention they deserve.  Whatever the true causes we believe that there will be multiple causes acting in a systematic manner - i.e., causes that interact and feedback in complex combinations to either reinforce or erode the safety culture state.”

I’d like to suggest a framework in which these questions and observations can be brought into useful relationship for thinking about the future of the US National Nuclear Energy Enterprise (NNEE).

This week I read yet another report on the Black Swan at Fukushima – this one representing the views of US nuclear industry heavyweights. It is just one of perhaps a dozen reviews, complete or on-going, that are adding to the stew pot of observations, findings, and recommendations about lessons to be learned from those “wreck the plant” events. I was wondering how all this “stuff” comes together in a manner that gives confidence that the net reliability of the US NNEE is increased rather than encumbered.

Were all these various “nuclear safety” reports scientific papers of the type referred to in the recent news story, then we would understand how they are “received” into the shared body of knowledge. Contributions would be examined, validations pursued, implications assessed, and yes, rewards or sanctions for work quality distributed. This system for the conduct of scientific research is very mature and has seemingly responded well to the extraordinary growth in volume and variety of research during the past half-century.

In the case of the Fukushima reports (and I’d suggest as validated by the corresponding pile of Deepwater Horizon reviews) there is no process akin to the publishing standards commonly employed in science or other academic research. In form, industrial catastrophes are typically investigated with some variation of causal analysis; also typically a distinguished panel of “experts” is assembled to conduct the review.

The credentials of those selected experts are relied upon to lend gravity to report results; this is generally in lieu of any peer or independent stakeholder review. An exception to this occurs when legislative hearings are convened to receive testimony from panel members and/or the responsible officials implicated in the events – but these second tier reviews are more often political theater than exercises in “seeking to understand.”

Since the TMI accident this trial by Blue Ribbon Panel methodology has proliferated; often firms such as BP commission such reviews (e.g. the Baker Panel on Texas City) for official stakeholders that are below the level of regulatory or legislative responsibility. In the case of Deepwater Horizon and Fukushima it has been virtually open season for interested parties with any sort of credentialed authority (i.e. academic, professional society, watchdog group, etc.) to offer up a formal assessment of these major events.

And today of course we have the 24 hour news cycle with its voracious maw and indiscriminate headline writers; and let’s not forget the opinionated individuals like me – blogging furiously away with no authentic credentials but personal experience! How, I ask myself, does “sense-making” occur across the NNEE in this flurry of bits and bytes – unencumbered by the benefit of a reasoning tradition such as the world of scientific research? Not very well would be my conclusion.

There would appear to be an unexamined assumption that some mechanisms do exist to vet all the material generated in these investigation reports, but that assumption seems susceptible to the kind of “forest lost for the trees” misperception cited in the Chief Nuclear Officer’s quote regarding corrective action systems becoming “the way we think about managing work.”

I can understand how, for a line manager at a single nuclear plant site that is operating in the main course of its life cycle, a scarce resource pot would lead to treating every improvement opportunity you’d like to address as a “corrective action.”  I would go a step further and say that, given the domination of 10 CFR 50 Appendix B over the hierarchical norms for “quality” and “safety,” managing to a single “list” makes sense – if only to ensure that each potential action is evaluated for its nuclear licensing implications.

At the site level, the CNO has a substantial and carefully groomed basis for establishing the relative significance of each material condition in the plant; in most instances administrative matters are brightly color-coded “nuclear” or “other.” As we move up the risk-reckoning ladder through corporate decision-making and then branching into a covey of regulatory bodies, stockholder perspectives, and public perceptions, the purity of issue descriptions degrades – benchmarks become fuzzy.

The overlap of stakeholder jurisdictions presents multiple perspectives (via diverse lexicons) for what “safety,” “risk,” and “culture” weights are to be assigned to any particular issue. Often the issue as first identified is a muddle of actual facts and supposition which may or may not be pruned upon further study. The potential for dilemmas, predicaments, and double-binding stakeholder expectations goes up dramatically.

I would suggest that responses to the recent spate of high-profile nuclear facility events, beginning with the Davis-Besse Reactor Pressure Vessel Head near-miss, have provoked a serious cleavage in our collective ability to reason prudently about the policy, industrial strategy, and regulatory levels of risk. The consequences of this cleavage are to increase the degree of chaotic programmatic action and to obscure the longer term significance of these large-scale, unanticipated/unwelcome events, i.e., Black Swan vulnerabilities.

In the case of the NNEE I hypothesize that we are victims of our own history – and the presumption of exceptional success in performance improvement that followed the TMI event. With the promulgation of the Reactor Oversight Process in 1999, NRC and the industry appeared to believe that a mature understanding of oversight and self-governance practice existed and that going forward clarity would only increase regarding what factors were important to sustained high reliability across the entire NNEE.

That presumption has proven a premature one, but it does not appear from the Fukushima responses that many in leadership positions recognize this fact. Today, the US NNEE finds itself trapped in a “limits to growth system.” That risk-reckoning system institutionalizes a series of related conclusions about the overall significance of nuclear energy health hazards and their relationship to other forms of risk common to all large industrial sectors.

The NNEE elements of thought leadership appear to act (on the evidence of the many Fukushima reports) as if the rationale of 10 CFR 50 Appendix B regarding “conditions adverse to quality” and the preeminence of “nuclear safety corrective actions” is beyond question. It’s time to do an obsolescence check on what I’ve come to call the Nuclear Fear Cycle.

Quoting Bob Cudlin again: “Whatever the true causes we believe that there will be multiple causes acting in a systematic manner - i.e., causes that interact and feedback in complex combinations to either reinforce or erode the safety culture state.” You are invited to ponder the following system.

 (Mr. Mullins is a Principal at Better Choices Consulting.)