Friday, May 7, 2010

Why Nuclear is Better than Oil

Today's New York Times has an article whose title says it all, "Regulator Deferred to Oil Industry on Offshore Rig Safety." It turns out that the Minerals Management Service, an Interior Department agency, is charged both with regulating the oil industry and with collecting royalties from it. The article goes on to say that the trend in other parts of the world is to separate these functions.

We wring our hands on this blog about creeping complacency, normalization of deviance, and failure to sufficiently acknowledge conflicting goals in nuclear plant operations. But we don't have to bang the drum about getting a dedicated safety regulator; the NRC was created in 1974 specifically to separate the government's promotional and regulatory roles in the nuclear industry.

Monday, May 3, 2010

Testing Positive - the NRC on Safety Culture

We have been spending some time with the NRC’s draft statement of policy on safety culture published in the Federal Register on November 6, 2009.*

The initial Commission policy statement on safety culture in 1989 simply referred to “safety culture”. The current statement initially refers to the need for a “strong” safety culture (“It is the Commission’s policy that a strong safety culture is an essential element for individuals . . . .”) but then settles on “positive” safety culture as the fundamental expectation of the policy (“licensees . . . should foster a positive safety culture in their organizations and among individuals . . . .”) (p. 57526) The NRC provides its definition of “safety culture” in the draft policy but does not specifically define “strong” or “positive” safety culture. The NRC states that “certain organizational characteristics and personnel attitudes and behaviors are present in a positive safety culture.” (p. 57528) It also notes that “The INSAG definition emphasizes that in a positive safety culture, the goal of maintaining nuclear safety receives the highest priority in the organization’s and individuals’ decision-making and actions when faced with a conflict with other organizational or individual goals.” (p. 57527)

We found the use of the term “positive” potentially ambiguous and decided to explore its possible meaning(s) as applicable to safety culture and what might be the NRC’s intent in using it to describe safety culture.

First of all, we note that "positive" is used here as an adjective. Complicating matters a bit from the start, the adjective has eight dictionary definitions. A few are readily dismissed, such as those where positive refers to the electric charge associated with a proton or to a higher electric potential. But many remain. The definitions of positive that could apply to nuclear safety culture include:

  • prescribed, or formally laid down and imposed;
  • unconditioned or independent of changing circumstances;
  • real, not fictitious and being effective in a social circumstance;
  • contributing toward or characterized by increase or progression;
  • favorable, having a good effect.

Black's Law Dictionary defines the term positive as:

"Laid down, enacted, or prescribed. Direct, absolute, explicit." That sounds close to the first definition above.

So what did the NRC intend by using "positive" to modify "safety culture"? The short answer is we don't know. The various definitions imply several possible interpretations. Perhaps the most straightforward is the "prescribed" definition: it would require licensees to have an explicit, prescribed policy for safety culture at their facilities, and the policy would need to address the elements of safety culture laid out in the policy statement. This interpretation might also imply that the safety culture is unconditioned and independent of changing circumstances, i.e., enduring. The other possibility is that the NRC intends "positive" in the sense of "favorable" or "contributing toward increase or progression." This meaning has a more dynamic implication and is similar to continuous improvement. The NRC's alternate use of the term "strong" in the introduction to the policy doesn't help much either. Strong can mean the "power to resist or endure," so it could be consistent with the notion that safety culture should be enduring. But it also implies "resistant to change," which could be good or bad, depending on the current state of safety culture and any need to change (improve) it.

Where does this leave us? At a minimum, it would appear necessary for the NRC to amplify its use of these modifiers of safety culture and spell out the specific meanings intended. Better yet, the NRC could drop these terms and simply rely on an old regulatory standby such as "adequate" safety culture.

* Draft Safety Culture Policy Statement: Request for Public Comments NRC-2009-0485-0001

Saturday, May 1, 2010

Why is Nuclear Different?

We saw a very interesting observation in a recent World Nuclear News item describing updates to the World Association of Nuclear Operators' (WANO) structure. The WANO managing director said "Any CEO must ensure their own facilities are safe but also ensure every other facility is safe. [emphasis added] It's part of their commitment to investors to do everything they can to ensure absolute safety and the one CEO that doesn't believe in this concept will risk the investment of every other." As WNN succinctly put it, "These company heads are hostages of one another when it comes to nuclear safety."

I think it's true that nuclear operators are joined at the wallet, but why? In most industries, a problem at one competitor creates opportunities for the others. Why is the nuclear industry so tightly coupled and at constant risk of contagion? Is it the mystery and the associated fears, suspicion and, in some cases, local visibility that attend nuclear power?


Coal mining and oil exploration stand in sharp contrast to nuclear. "Everyone knows" coal mining is dirty and dangerous, but bad things happen only to unfortunate folks in remote locations, with no wide-ranging effects. Oil exploration is somewhat more visible: people will be upset for a while over the recent blow-out in the Gulf of Mexico, offshore drilling will be put on temporary hold, but things will eventually settle down. In the meantime, critics will use BP as a punching bag (again), but there will be no negative spillover to, say, Chevron.

Wednesday, April 28, 2010

Safety Culture: Cause or Context (part 2)

In an earlier post, we discussed how “mental models” of safety culture affect perceptions about how safety culture interacts with other organizational factors and what interventions can be taken if safety culture issues arise. We also described two mental models, the Causal Attitude and Engineered Organization. This post describes a different mental model, one that puts greater emphasis on safety culture as a context for organizational action.

Safety Culture as Emergent and Indeterminate

If the High Reliability Organization model is basically optimistic, the Emergent and Indeterminate model is more skeptical, even pessimistic as some authors believe that accidents are unavoidable in complex, closely linked systems. In this view, “the consequences of safety culture cannot be engineered and only probabilistically predicted.” Further, “safety is understood as an elusive, inspirational asymptote, and more often only one of a number of competing organizational objectives.” (p. 356)* Safety culture is not a cause of action, but provides the context in which action occurs. Efforts to exhaustively model (and thus eventually manage) the organization are doomed to failure because the organization is constantly adapting and evolving.

This model sees that the same processes that produce the ordinary and routine stuff of everyday organizational life also produce the messages of impending problems. But the organization’s necessary cognitive processes tend to normalize and homogenize; the organization can’t very well be expected to treat every input as novel or not previously experienced. In addition, distributed work processes and official security policies can limit the information available to individuals. Troublesome information may be buried or discredited. And finally, “Dangers that are neither spectacular, sudden, nor disastrous, or that do not resonate with symbolic fears, can remain ignored and unattended, . . . . “ (p. 357)

We don’t believe safety significant events are inevitable in nuclear organizations but we do believe that the hubris of organizational designers can lead to specific problems, viz., the tendency to ignore data that does not comport with established categories. In our work, we promote a systems approach, based on system dynamics and probabilistic thinking, but we recognize that any mental or physical model of an actual, evolving organization is just that, a model. And the problem with models is that their representation of reality, their “fit,” can change with time. With ongoing attention and effort, the fit may become better but that is a goal, not a guaranteed outcome.

Lessons Learned

What are the takeaways from this review? First, mental models are important. They provide a framework for understanding the world and its information flows, a framework that the holder may believe to be objective but is actually quite subjective and creates biases that can cause the holder to ignore information that doesn’t fit into the model.

Second, the people who are involved in the safety culture discussion do not share a common mental model of safety culture. They form their models with different assumptions, e.g., some think safety culture is a force that can and does affect the vector of organizational behavior, while others believe it is a context that influences, but does not determine, organizational and individual decisions.

Third, safety culture cannot be extracted from its immediate circumstances and examined in isolation. Safety culture always exists in some larger situation, a world of competing goals and significant uncertainty with respect to key factors that determine the organization’s future.

Fourth, there is a risk of over-reliance on surveys to provide some kind of "truth" about an organization’s safety culture, especially if actual experience is judged or minimized to fit the survey results. Since there is already debate about what surveys measure (safety culture or safety climate?), we advise caution.

Finally, in addition to appropriate models and analyses, training, supervision, and management, a vital component of the safety system is the individual who senses that something is just not right, supported by an organization that allows, rather than vilifies, alternative interpretations of data.


* This post draws on Susan S. Silbey, "Taming Prometheus: Talk of Safety and Culture," Annual Review of Sociology, Volume 35, September 2009, pp. 341-369.

Monday, April 26, 2010

The Massey Mess

A postscript to our prior posts re the Massey coal mine explosion two weeks ago. The fallout from the safety issues at Massey mines is reaching a crescendo as even the President of the United States is quoted as saying that the accident was "a failure, first and foremost, of management."

The full article is embedded in this post. It is clear that Massey will be under the spotlight for some time to come with Federal and state investigations being initiated. One wonders if the CEO will or should survive the scrutiny. For us the takeaway from this, and other examples such as Vermont Yankee, is a reminder not to underestimate the potential consequences of safety culture failures. They point directly at the safety management system including management itself. Once that door is opened, the ripple effects can range far downstream and throughout a business.

“Serious Systemic Safety Problem” at BP

Yes, it's BP in the news again. In case you hadn't noticed, the fire, explosion, and loss of life last week occurred at an offshore drilling rig working for BP. In the words of an OSHA official, BP still has a "serious, systemic safety problem" across the company. Check the embedded web page for the story.

Thursday, April 22, 2010

Classroom in the Sky

The reopening of much of European airspace over the last several days has provided a rare opportunity to observe, in real time, some dynamics of safety decision making. On one hand, the airlines have been contending it is safe to resume flights; on the other, the regulatory authorities had been taking a more conservative stance. The question posed in our prior post was to what extent business pressures were influencing the airlines' position, and to what extent that pressure, perhaps transmuting into political pressure, might influence regulators' decisions. Perhaps most importantly, how could one tell?

We have extracted some interesting quotes from recent media reporting of the issue.

Civil aviation officials said their decision to reopen terminals where thousands of weary travelers had camped out was based on science, not on the undeniable pressure put on them by the airlines. "The only priority that we consider is safety. We were trying to assess the safe operating levels for aircraft engines with ash," said Eamonn Brennan, chief executive of the Irish Aviation Authority. Pressure to restart flights had been intense, and despite officials' protests, the timing of some reopenings seemed dictated by airlines' commercial pressures.

"It's important to realize that we've never experienced in Europe something like this before....We needed the four days of test flights,the empirical data, to put this together and to understand the levels of ash that engines can absorb. Even as airports reopened, a debate swirled about the safety of flying without more extensive analysis of the risks, as it appeared that governments were operating without consistent international guidelines based on solid data. "What's missing is some sort of standard, based on science, that gives an indication of a safe level of volcanic ash..." “Some safety experts said pressure from hard-hit airlines and stranded passengers had prompted regulators to venture into uncharted territory with respect to the ash. In the past, the key was simply to avoid ash plumes.”

How can it be both ways, with regulators not responding to pressure yet acting only on new analysis of data? The answer may lie in our old friend, the theory of normalization of deviance (ref. The Challenger Launch Decision). As we have discussed in prior posts, normalization is a process whereby an organization's safety standards may be reinterpreted (to a lower or more accommodating level) over time due to complex interactions of cultural, organizational, regulatory and environmental factors. The fascinating aspects are that this process is not readily apparent to those involved, and decisions then made in accordance with the reinterpreted standards are not viewed as deviant or inconsistent with, e.g., "safety is our highest priority." Thus events which heretofore were viewed as discrepant no longer are, and are thus "normalized." Aviation authorities believe their decisions are entirely safety-based. Yet to observers outside the organization it very much appears that the bar has been lowered, since what is considered safe today was not yesterday.
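The drift described above can be illustrated with a toy model in the system-dynamics spirit we have advocated in earlier posts. This sketch is our own illustration, not drawn from any referenced source, and the function name and parameter values are purely hypothetical: each period in which tolerated practice falls short of the standard without incident, the organization's effective standard quietly shifts toward that practice.

```python
# Toy sketch (illustrative only) of normalization of deviance:
# when a deviation from the standard passes without incident, the
# organization's effective standard drifts toward the tolerated practice.

def simulate_drift(nominal=1.0, periods=20, deviation=0.15, adjust=0.3):
    """Return the effective safety standard after each period.

    nominal   -- the formally stated standard (arbitrary units)
    deviation -- how far each period's accepted practice sits below
                 the current effective standard
    adjust    -- fraction by which the effective standard moves toward
                 the accepted practice when nothing goes wrong
    """
    effective = nominal
    history = []
    for _ in range(periods):
        # deviance tolerated this period
        accepted_practice = effective - deviation
        # no incident occurs, so the deviance is quietly "normalized"
        effective += adjust * (accepted_practice - effective)
        history.append(effective)
    return history

drift = simulate_drift()
# The effective standard declines steadily even though no single
# decision looked, to insiders, like a lowering of the bar.
```

No individual step in the loop is dramatic, which is the point: the gap between the formal standard and accepted practice only becomes visible to an outside observer comparing today's decisions with yesterday's.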

Monday, April 19, 2010

The View On The Ground

A brief follow-up to the prior post re situational factors that could be in play in reaching a decision about resuming airline flights in Europe.  Fox News has been polling to assess the views of the public and the results of the poll are provided in the adjacent box.  Note that the overwhelming sentiment is that safety should be the priority.  Also note the wording of the choices, where the “yes” option appears to imply that flights should be resumed based on the “other” priorities such as money and passenger pressure, while the “no” option is based on safety being the priority.  Obviously the wording makes the “yes” option appear to be one where safety may be sacrificed.

So the results are hardly surprising.  But what do the results really mean?  For one it reminds us of the importance of the wording of questions in a survey.  It also illustrates how easy it is to get a large positive response that “safety should be the priority”.  Would the responses have been different if the “yes” option made it clear that airlines believed it was safe to fly and had done test flights to verify?  Does the wording of the “no” option create a false impression that an “all clear” (presumably by regulators) would equate to absolute safety, or at least be arrived at without consideration of other factors such as the need to get air travel resumed? 

Note:  Regulatory authorities in Europe agreed late today to allow limited resumption of air travel starting Tuesday.  Is this an “all clear” or a more nuanced determination that it is safe enough?