Sunday, April 22, 2012

Science Culture: A Lesson for Nuclear Safety Culture?

An article in the New York Times* earlier this week caught our attention as part of our ongoing consideration of what causes safety culture issues and what makes safety culture effective.  The article itself is about the increasing incidence of misconduct by scientists in their research and publications, particularly in scientific journals.  A variety of factors may be responsible, including the sheer accessibility of journal-published research and the increased likelihood that errors will be spotted.  But the main thrust of the article is that other, more insidious forces may be at work:

“But other forces are more pernicious.  To survive professionally, scientists feel the need to publish as many papers as possible….And sometimes they cut corners or even commit misconduct to get there.”

The article goes on to describe how in the scientific community the ability to publish is key to professional recognition, advancement and award of grant money.  There is enormous pressure to publish first and publish often to overcome “cutthroat competition”.

So how do retractions of scientific papers relate to nuclear safety culture?  In the most general sense, the presence and impact of “pressure” on scientists reminds us of the situation in nuclear generation - now very much a high-stakes business - and the consequent pressure on nuclear managers to meet business goals and, in some cases, personal compensation goals.  Nuclear personnel (engineers, managers, operators, craftsmen, etc.), like the scientists in this article, are highly trained and expected to observe certain cultural norms - in their case, a strong safety culture.  For scientists there is adherence to the scientific method itself and to the standards for integrity of their peer community.  Yet both sets of norms may be compromised when the desire for professional success becomes dominant.

The scientific environment is in most ways much simpler than a nuclear operating organization and this may help shed light on the causes of normative failures.  Nuclear organizations are inherently large and complex.  The consideration of culture often becomes enmeshed in issues such as leadership, communications, expectations, pronouncements regarding safety priorities, perceptions, SCWE, etc.  In the simpler scientific world, scientists are essentially sole proprietors of their careers, even if they work for large entities.  They face challenges to their advancement and viability, they make choices, and sometimes they make compromises.  Can reality in the nuclear operating environment be similar, or is nuclear somehow unique and different?  


*  C. Zimmer, “A Sharp Rise in Retractions Prompts Calls for Reform,” New York Times (Apr. 16, 2012).

Monday, April 16, 2012

The Many Causes of Safety Culture Performance

The promulgation of the NRC’s safety culture policy statement and industry efforts to remain out in front of regulatory scrutiny have led to increasing attention to identifying safety culture issues and achieving a consistently strong safety culture.

The typical scenario for the identification of safety culture problems starts with performance deficiencies of one sort or another, identified by the NRC through the inspection process or internally through various quality processes.  When the circumstances of the deficiencies suggest that safety culture traits, values or behaviors are involved, safety culture may be deemed in need of strengthening and a standard prescription is triggered.  This usually includes the inevitable safety culture assessment, re-iteration of safety priorities, re-training in safety culture principles, etc.  The safety culture surveys generate anecdotal data based on individuals’ perceptions, focusing on whether safety culture traits are well established and on organizational “hot spots,” but they rarely delve into underlying causes or ask “why” the deficiencies exist.

This approach to safety culture seems to us to suffer from several limitations.  One is that the standard prescription does not necessarily yield improved, sustainable results, an indication that symptoms are being treated instead of causes.  And therein lies the other limitation: a lack of explicit consideration of the possible causes that have led to safety culture being deficient.  The standard prescribed fixes rest on an implicit presumption that safety culture issues are the result of inadequate training, insufficient reinforcement of safety culture values, and sometimes the catchall of “leadership” shortcomings.

We think there are a number of potential causes that are important to ensuring strong safety culture but are not receiving the explicit attention they deserve.  Whatever the true causes, we believe there will be multiple causes acting in a systemic manner - i.e., causes that interact and feed back in complex combinations to either reinforce or erode the safety culture state.  For now we want to use this post to highlight the need to think more about the reasons for safety culture problems and whether a “causal chain” exists.  Nuclear safety relies heavily on the concept of root causes as a means to understand the origin of problems and on a belief that “fix the root cause” will “fix the problem.”  But a linear approach may not be effective in understanding or addressing complex organizational dynamics, and concerted efforts in one dimension may lead to emergent issues elsewhere.
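As a purely illustrative aside (our own sketch, with hypothetical parameters, not anything drawn from plant data), here is a toy simulation of how such feedback might play out: production pressure erodes culture, a weaker culture generates more incidents, and the reaction to incidents temporarily relieves pressure.  In a loop like this, strengthening a single link does not necessarily change the long-run behavior.

# Toy feedback sketch of interacting "causes" - hypothetical parameters, illustrative only.
def simulate(steps=60, pressure=0.5, culture=0.8):
    history = []
    for t in range(steps):
        incidents = 0.3 * (1.0 - culture)                     # weaker culture -> more incidents
        culture += 0.05 * (0.9 - culture) - 0.08 * pressure   # training pulls culture up, pressure pulls it down
        pressure += 0.04 - 0.5 * incidents                    # business pressure builds; incidents trigger relief
        culture = min(max(culture, 0.0), 1.0)
        pressure = min(max(pressure, 0.0), 1.0)
        history.append((t, culture, pressure, incidents))
    return history

for t, c, p, i in simulate()[::10]:
    print(f"t={t:2d}  culture={c:.2f}  pressure={p:.2f}  incidents={i:.2f}")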

In upcoming posts we’ll explore specific causes of safety culture performance and elicit readers’ input on their views and experience.

Thursday, April 12, 2012

Fort Calhoun in the Crosshairs

Things have gone from bad to worse for Fort Calhoun.  The plant shut down in April 2011 for refueling, but the shutdown was extended to address various issues, including those associated with Missouri River flooding in summer 2011.  The plant’s issues were sufficiently numerous and significant that the NRC issued a Confirmatory Action Letter (CAL) specifying actions OPPD had to take before restarting.

In addition to these “normal” issues, a fire occurred in June 2011, an incident that has just earned the plant a “Red” finding from the NRC.  Fort Calhoun is currently the only plant in the country under NRC Inspection Manual Chapter 0350, which includes a restart checklist.  As part of the restart qualification, the NRC will review OPPD’s third-party safety culture survey and, if it is not satisfied with the results, will conduct its own safety culture assessment.*

Focusing a little more on Fort Calhoun’s safety culture, one particular item caught our attention: OPPD’s CNO saying, during an NRC-OPPD meeting, that one of their basic problems was their corrective action program culture.  (The following is an unscripted exchange, not prepared testimony.)

“Commissioner Apostolakis: . . . what I would be more interested in is to know, in your opinion, what were the top two or three areas where you feel you went wrong and you ended up in this unhappy situation?

“[OPPD CNO] David Bannister: . . . the one issue is our corrective action program culture, our -- and it’s a culture that evolved over time. We looked at it more of a work driver, more of a -- you know, it’s a way to manage the system rather than . . . finding and correcting our performance deficiency.”**

Note the nexus between the culture and the CAP, with the culture evolving to accept a view of the CAP as a work management system rather than the primary way the plant identifies, analyzes, prioritizes and fixes its issues.  Notwithstanding Fort Calhoun’s culture creep, the mechanics and metrics of an effective CAP are well known to nuclear operators around the world.  It is a failure of management if an organization takes its eye off the ball in this area.

What’s Going to Happen?

I have no special insight into this matter but I will try to read the tea leaves.  Recently, the NRC has been showing both its “good cop” and “bad cop” personas.  The good cop has approved the construction of multiple new nuclear units, thus showing that the agency does not stand in the way of industry extension and expansion.

Meanwhile, the bad cop has his foot on the necks of a few problem plants, including Fort Calhoun.  The plant is an easy target: it is the second-smallest plant in the country and isolated (OPPD has no other nuclear facilities).  The NRC will not kill the plant but may leave it twisting in the wind indefinitely, reminding us of Voltaire’s famous observation in Candide:

“. . . in this country, it is wise to kill an admiral from time to time in order to encourage the others.”


*  Fort Calhoun Station Manual Chapter 0350 Oversight Panel Charter (Jan. 12, 2012) ADAMS ML120120661.

**  NRC Public Meeting Transcript, Briefing on Fort Calhoun (Feb. 22, 2012) p. 62, ADAMS ML120541135.

Monday, April 2, 2012

A Breath of Fresh Air - From a Coal Mine

It may seem odd to find a source of fresh air in the context of the Massey coal mine disaster of 2010, a topic on which we have posted before.  But last week’s news of a former mine supervisor’s guilty plea yielded some very direct observations on the breakdown of safety in the mine.  A Wall Street Journal piece on March 29, 2012 reported:

“Booth Goodwin, the U.S. Attorney in Charleston, W.Va., wrote in the plea agreement that ‘laws were routinely violated’ by Massey because of a belief that ‘following those laws would decrease coal production.’”

Sometimes it takes a lawyer’s bluntness to cut through all the contributing circumstances and symptoms of a safety failure and place a finger directly on the cause.  How often have you seen such unvarnished truth-telling with regard to safety culture issues at nuclear plants?

“[The supervisor] specifically pleaded guilty to tipping off miners underground about inspections, falsifying record books, illegally rewiring a mining machine to operate without a functioning methane monitor and altering the mine's ventilation to trick a federal inspector.”

The above findings are more typical of what one sees in nuclear plant inspection reports, where they are attributed to a lack of strong safety culture.  This in turn triggers the inevitable safety culture assessments, retraining, re-iteration of safety priorities, etc., that appear to be the standard prescription for a safety culture “fever.”  But what - to continue a not-so-good medical analogy - is causing the fever?  And why would one expect that the one-size-fits-all prescription is the right answer?

To us it comes down to something that isn’t receiving enough attention.  What are the root causes of the problems that are typically associated with a finding that safety culture needs to be strengthened?  We will share our thoughts, and ask for yours, in an upcoming post.