
Thursday, July 22, 2010

Transocean Safety Culture and Surveys

An article in today’s New York Times, “Workers on Doomed Rig Voiced Concern About Safety,” reports on the safety culture aboard the Deepwater Horizon drilling rig that exploded in the Gulf of Mexico.  The article reveals that a safety culture survey of the rig’s staff had been performed in the weeks prior to the explosion.  The survey was commissioned by Transocean and performed by Lloyd’s Register Group, a maritime and risk-management organization that conducted focus groups and one-on-one interviews with at least 40 Transocean workers.

There are two noteworthy findings from the safety culture survey.  While the headline is that workers voiced safety concerns, the survey results indicate:
“Almost everyone felt they could raise safety concerns and these issues would be acted upon if this was within the immediate control of the rig,” said the report, which also found that more than 97 percent of workers felt encouraged to raise ideas for safety improvements and more than 90 percent felt encouraged to participate in safety-improvement initiatives....But investigators also said, ‘It must be stated at this point, however, that the workforce felt that this level of influence was restricted to issues that could be resolved directly on the rig, and that they had little influence at Divisional or Corporate levels.’ “
This highlights several shortcomings of safety culture surveys.  First, the vast majority of respondents indicated they were comfortable raising safety concerns - yet subsequent events and decisions led to a major safety breakdown.  So, is there a response level that is indicative of how the organization is actually doing business, or do respondents tell the survey takers “what they want to hear”?  And is comfort in raising a safety concern the appropriate standard when the larger corporate environment may not be responsive to such concerns, or may bury them with resource and schedule mandates?  Second, this survey focused on the workers on the rig.  Apparently there was a reasonably good culture in that location, but it did not extend to the larger organization.  Consistent with that perception are preliminary reports that corporate was pushing production over safety, which may have influenced risk taking on the rig.  This is reminiscent of the space shuttle Challenger, where political pressure seeped down into the decision-making process, subtly changing the perception of risk at the operational levels of NASA.  How useful are surveys if they do not capture the dynamics higher in the organization or the insidious ability of exogenous factors to change risk perceptions?
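To make the aggregation problem concrete, here is a minimal Python sketch of how a headline survey score can mask the rig-versus-corporate split the Lloyd’s Register investigators described.  All item scores below are invented for illustration; they are not the actual survey data.

```python
# Hypothetical illustration: a headline survey score can hide a sharp split
# between issues a crew can fix locally and issues that need corporate action.
# All item scores are invented, not taken from the actual survey.

responses = [
    # (scope of the survey item, fraction who felt concerns would be acted on)
    ("rig", 0.97), ("rig", 0.95), ("rig", 0.93),
    ("rig", 0.92), ("rig", 0.90), ("rig", 0.94),
    ("corporate", 0.35), ("corporate", 0.28),
]

def mean(xs):
    return sum(xs) / len(xs)

overall = mean([score for _, score in responses])

by_scope = {}
for scope, score in responses:
    by_scope.setdefault(scope, []).append(score)

print(f"headline score: {overall:.0%}")            # looks respectable in aggregate
for scope, scores in sorted(by_scope.items()):
    print(f"  {scope:<10} {mean(scores):.0%}")     # the split tells another story
```

If most survey items ask about locally resolvable issues, the aggregate number will look healthy no matter how little influence the workforce feels it has at the corporate level.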
The other noteworthy aspect came not from the survey results but from Transocean’s rationalization of its safety performance.  The company “noted that the Deepwater Horizon had seven consecutive years without a single lost-time incident or major environmental event.”  This highlights two fallacies.  One, that the absence of a major accident demonstrates that safety performance is meeting its goals.  Two, that industrial accident rates correlate to safety culture and prudent safety management.  They don’t.  Also, recall our recent posts regarding nuclear compensation, where we noted that the most common metric for determining safety performance incentives in the nuclear industry is industrial accident rate.
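The arithmetic behind the first fallacy is worth a quick sketch.  The 5 percent annual probability below is an invented number, not an estimate of any actual rig’s risk; the point is only that a clean multi-year record is weak evidence that major-accident risk is low.

```python
# A minimal sketch of the rare-event arithmetic behind the "seven clean years"
# fallacy. The 5% annual probability is invented for illustration only.

annual_major_accident_prob = 0.05
years_observed = 7

# Probability of observing zero major accidents over the whole period,
# assuming independent years.
p_clean_streak = (1 - annual_major_accident_prob) ** years_observed
print(f"P(seven clean years despite 5% annual risk) = {p_clean_streak:.0%}")
# ~70%: a spotless record is entirely consistent with substantial latent risk,
# which is why the absence of accidents says little about safety performance.
```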

The NY Times article may be found at http://www.nytimes.com/2010/07/23/us/23hearing.html.

Monday, May 17, 2010

How Do We Know? Dave Collins, Millstone, Dominion and the NRC

This week the issues being raised by former Dominion engineer David Collins* regarding safety culture at Millstone are receiving increased attention. It appears David is raising two general issues: (1) is Dominion's safety culture being eroded by cost and competitive pressures, and (2) is the NRC being effective in its regulatory oversight of safety culture?

So far the responses of the various players are along standard lines. Dominion contends it is simply harvesting cost efficiencies in its organization without compromising safety and that specific problems are isolated events. The NRC has referred the issues to its Office of Investigations, thus limiting transparency. INPO will not comment on its confidential assessments of nuclear owners.

What is one to make of this? First, we have no special insight into the bases for the issues being raised by Collins. We have interacted with him in the past on safety culture issues; he is clearly a dedicated and knowledgeable individual with a strong commitment to nuclear safety culture. Thus we would be inclined to give his allegations serious consideration.

On the broader issues we see both opportunities and risks. We have emphasized cost and competitive pressures as a key to understanding how safety culture must balance multiple priorities to assure safety. What we do not see at the moment is how Dominion, the NRC or INPO would be able to determine whether, or to what extent, such pressures might be impacting decisions within Dominion. We doubt that current approaches to assessing safety culture - e.g., safety culture surveys, NRC performance indicators, or INPO assessments - can be determinative. In addition, a plant-centric focus is unlikely to reveal systemic interactions or top-down signals that may result in decisional pressure at the plant level. Recall that a significant exogenous pressure cited in the space shuttle Challenger accident was Congressional political pressure.

So while we understand the nature of the processes underway to evaluate Collins’ issues, it would be helpful if any or all of the organizations could explain their methods for assessing these types of issues and the objective evidence to be used in making findings. The risk at this point is that the industry and the NRC appear more focused on negating the allegations than on taking a hard look at their possible merits, including how exactly to evaluate them.

* Follow the link below for an overview of Collins' issues and other parties' responses.

Sunday, April 18, 2010

Safety Culture: Cause or Context (part 1)

As we have mentioned before, we are perplexed that people are still spending time working on safety culture definitions. After all, it’s not because of some definitional issue that problems associated with safety culture arise at nuclear plants. Perhaps one contributing factor to the ongoing discussion is that people hold different views of what the essence of safety culture is, views that are influenced by individuals’ backgrounds, experiences and expectations. Consultants, lawyers, engineers, managers, workers and social scientists can and do have different perceptions of safety culture. Using a term from system dynamics, they have different “mental models.”

Examining these mental models is not an empty semantic exercise; one’s mental model of safety culture determines (a) the degree to which one believes it is measurable, manageable or independent, i.e., separate from other organizational features, (b) whether safety culture is causally related to actions or simply a context for actions, and (c) most importantly, what specific strategies for improving safety performance might work.

To help identify different mental models, we will refer to a 2009 academic article by Susan Silbey,* a sociology professor at MIT. Her article does a good job of reviewing the voluminous safety culture literature and assigning authors and concepts into three main categories: Culture as (a) Causal Attitude, (b) Engineered Organization, and (c) Emergent and Indeterminate. To fit into our blog format, we will greatly summarize her paper, focusing on points that illustrate our notion of different mental models, and publish this analysis in two parts.

Safety Culture as Causal Attitude

In this model, safety culture is a general concept that refers to an organization’s collective values, beliefs, assumptions, and norms, often assessed using survey instruments. Explanations of accidents and incidents that focus on or blame an organization’s safety culture are really saying that the then-existing safety culture somehow caused the negative events to occur or can be linked to the events by some causal chain. (For an example of this approach, refer to the Baker Report on the 2005 BP Texas City refinery accident.)

Adopting this mental model, it follows logically that the corrective action should be to fix the safety culture. We’ve all seen, or been a part of, this – a new management team, more training, different procedures, meetings, closer supervision – all intended to fix something that cannot be seen but is explicitly or implicitly believed to be changeable and to some extent measurable.

This approach can and does work in the short run. Problems can arise in the longer-term as non-safety performance goals demand attention; apparent success in the safety area breeds complacency; or repetitive, monotonous reinforcement becomes less effective, leading to safety culture decay. See our post of March 22, 2010 for a discussion of the decay phenomenon.
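For readers who like the system dynamics flavor, here is a toy sketch of that decay pattern.  Every rate and constant below is invented; the sketch only illustrates how steady erosion plus reinforcement with diminishing returns can produce a long, quiet slide.

```python
# A toy system-dynamics sketch of safety culture "decay": attention erodes a
# little each month, and each identical reinforcement campaign (training,
# meetings, closer supervision) restores less than the one before.
# All rates and constants are invented for illustration.

decay_rate = 0.03        # fraction of safety focus lost per month
boost = 0.15             # lift from the first reinforcement campaign
boost_fade = 0.80        # each campaign is only 80% as effective as the last
reinforce_every = 12     # months between campaigns

level = 1.0              # normalized safety focus, 1.0 = fully engaged
for month in range(1, 61):
    level *= (1 - decay_rate)              # steady erosion between campaigns
    if month % reinforce_every == 0:
        level = min(1.0, level + boost)    # periodic campaign
        boost *= boost_fade                # diminishing returns on repetition
    if month % 12 == 0:
        print(f"year {month // 12}: safety focus ~ {level:.2f}")
```

Run it and the yearly numbers drift steadily downward even though management keeps applying the same fix, which is the short-run-works, long-run-decays pattern described above.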

Perhaps because this model reinforces the notion that safety culture is an independent organizational characteristic, the model encourages involved parties (plant owners, regulators, the public) to view safety culture with a relatively narrow field of view. Periodic surveys and regulatory observations conclude a plant’s safety culture is satisfactory and everyone who counts accepts that conclusion. But then an event occurs like the recent situation at Vermont Yankee and suddenly people (or at least we) are asking: How can eleven employees at a plant with a good safety culture (as indicated by survey) produce or endorse a report that can mislead reviewers on a topic that can affect public health and safety?

Safety Culture as Engineered Organization

This model is evidenced in the work of the High Reliability Organization (HRO) writers. Their general concept of safety culture appears similar to the Causal Attitude camp, but HRO differs in “its explicit articulation of the organizational configuration and practices that should make organizations more reliably safe.” (Silbey, p. 353) It focuses on an organization’s learning culture, where “organizational learning takes place through trial and error, supplemented by anticipatory simulations.” Believers are basically optimistic that effective organizational prescriptions for achieving safety goals can be identified, specified and implemented.

This model appears to work best in a command and control organization, i.e., the military. Why? Primarily because a specific military service is characterized by a homogeneous organizational culture, i.e., norms are shared both hierarchically (up and down) and across the service. Frequent personnel transfers at all organizational levels remove people from one situation and reinsert them into another, similar situation. Many of the physical settings are similar – one ship of a certain type and class looks pretty much like another; military bases have a common set of facilities.

In contrast, commercial nuclear plants represent a somewhat different population. Many staff members work more or less permanently at a specific plant, and the industry could not have come up with more varied physical plant configurations if it had tried. Perhaps it is not surprising that HRO research, including reviews of nuclear plants, has shown strong cultural homogeneity within individual organizations but a lack of shared culture across organizations.
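A hypothetical numerical sketch of what that finding looks like in data: survey scores cluster tightly within each plant while the plant means diverge.  The plant names and scores below are invented for illustration.

```python
# Hypothetical illustration of "homogeneity within, divergence across":
# within-plant variance is small, but the plant means differ widely.
# All plant names and scores are invented.

plants = {
    "plant_a": [4.1, 4.2, 4.0, 4.3, 4.1],
    "plant_b": [2.9, 3.0, 2.8, 3.1, 2.9],
    "plant_c": [3.6, 3.5, 3.7, 3.6, 3.5],
}

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

within = mean([variance(scores) for scores in plants.values()])
across = variance([mean(scores) for scores in plants.values()])

print(f"average within-plant variance: {within:.3f}")  # small: shared culture inside
print(f"variance of plant means:       {across:.3f}")  # large: no shared culture across
```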

At its best, the model can instill “processes of collective mindfulness” or “interpretive work directed at weak signals.” At its worst, if everyone sees things alike, an organization can “[drift] toward[s] inertia without consideration that things could be different.” (Weick 1999, quoted in Silbey, p.354) Because HRO is highly dependent on cultural homogeneity, it may be less conscious of growing problems if the organization starts to slowly go off the rails, a la the space shuttle Challenger.

We have seen efforts to implement this model at individual nuclear plants, usually by trying to get everything done “the Navy way.” We have even promoted this view when we talked back in the late 1990s about the benefits of industry consolidation and the best practices that were being implemented by Advanced Nuclear Enterprises (a term Bob coined in 1996). Today, we can see that this model provides a temporary, partial answer but can face challenges in the longer run if it does not constantly adjust to the dynamic nature of safety culture.

Stay tuned for Safety Culture: Cause or Context (part 2).

* Susan S. Silbey, "Taming Prometheus: Talk of Safety and Culture," Annual Review of Sociology, Volume 35, September 2009, pp. 341-369.

Wednesday, March 31, 2010

Can Blogging Be Good for Safety Culture?

I came across a very interesting idea in some of my recent web browsing - an idea that I like for several reasons. First, it centers on using a blog, or blogging generally, to enhance safety culture in large, high-risk organizations. Hard for someone writing a safety culture blog not to find that idea intriguing. Second, the idea emanated from an engineer at NASA, Dale Huls, at the Johnson Space Center in Houston. NASA has faced direct and significant safety challenges multiple times, including the Challenger and Columbia shuttle accidents. Third, the idea was presented in a paper written five years ago, when blogging was still a shadow of what it has become - now, of course, it occupies its own world, the blogosphere.

The thesis of Dale’s paper is “...to explore an innovative approach to culture change at NASA that goes beyond reorganizations, management training, and a renewed emphasis on safety.” (p.1) Whatever you may conclude about blogging as an approach, I do think it is time to look beyond the standard recipe of “fixes” that Dale enumerates and which the nuclear industry also follows almost as black letter law.

One of the benefits that Dale sees is that “Blogs could be a key component to overcoming NASA’s ‘silent safety culture.’ As a communications tool, blogs are used to establish trust... (p.1) ...and to create and promote a workplace climate in which dissent can be constructively addressed and resolved.” (p.2) It seems to me that almost any mechanism that promotes safety dialogue could be beneficial. Blogs encourage participation and communication. Even if many visitors to the blog only read posts and do not post themselves, they are part of the discussion. To the extent that managers and even senior executives participate, it can provide a direct, unfiltered path to connect with people in the organization. All of this promotes trust, understanding, and openness. While these are things that management can preach, bringing about the reality can be more difficult.

“Note that blogs are not expected to replace formal lines of communication, but rather enhance those communication lines with an informal process that encourages participation without peer pressure or fear of retribution.” (p.2) In any event, much of a useful safety dialogue is broader than raising a particular safety issue or concern.

So you might be wondering, did NASA implement blogging to facilitate its safety culture change? Dale wrote me in an email, “While NASA did not formally take up a specific use of blogging for safety matters, it seems that NASA is beginning to embrace the blogging culture. Several prominent NASA members utilize blogging to address NASA culture, e.g., Wayne Hale, manager of the Space Shuttle Program.”

What should the nuclear industry take away from this? It might start with a question or two. Are there informal communication media such as blogs active at individual nuclear plants and how are they viewed by employees? Are they supported in any way by management, or the organization, or the industry? Are there any nuclear industry blogs that fulfill a comparable role? There is the Nuclear Safety Culture Group on LinkedIn that has seen sporadic discussion and commenting on a few issues. It currently has 257 members. This would be a good topic for some input from those who know of other forums.

Wednesday, March 10, 2010

"Normalization of a Deviation"

These are the words of John Carlin, Vice President at the Ginna Nuclear Plant, referring to a situation in the past where chronic water leakages from the reactor refueling pit were tolerated by the plant’s former owners. 

The quote is from a piece by Peter Behr in Energy & Environment Publishing’s online publication ClimateWire, titled “Aging Reactors Put Nuclear Power Plant ‘Safety Culture’ in the Spotlight,” which was also published in The New York Times.  The focus is on a series of incidents with safety culture implications that have occurred at the Nine Mile Point and Ginna plants, now owned and operated by Constellation Energy.

The recitation of events and the responses of managers and regulators are very familiar.  The drip, drip, drip is not the sound of water leaking but the uninspired give and take of the safety culture dialogue that occurs each time there is an incident or series of incidents that suggest safety culture is not working as it should.

Managers admit they need to adopt a questioning attitude, improve the rigor of their decision making, and ensure they have the right “mindset”; corporate promises “a campaign to make sure its employees across the company buy into the need for an exacting attention to safety.”  Regulators remind the licensee, "The nuclear industry remains ... just one incident away from retrenchment..." but must be wondering why these events are occurring when NRC performance indicators for the plants and INPO rankings do not indicate problems.  Pledges to improve safety culture are put forth earnestly and (I believe) in good faith.

The drip, drip, drip of safety culture failures may not be cause for outright alarm or for questioning the fundamental safety of nuclear operations, but it does highlight what seems to be a condition of safety culture stasis - a standoff of sorts where significant progress has been made but problems continue to arise and the same palliatives are applied.  Perhaps more significantly, the continued evolution of thinking regarding safety culture appears to have plateaued.  Peaking too early is a problem in politics and sports, and so it appears in nuclear safety culture.

This is why the remark by John Carlin was so refreshing.  For those not familiar with the context of his words, “normalization of deviance” is a concept developed by Diane Vaughan in her exceptional study of the space shuttle Challenger accident.  Readers of this blog will recall that we are fans of her book, The Challenger Launch Decision, which uses that mechanism to explain the gradual acceptance of performance results that are outside normal acceptance criteria.  Scariest of all, an organization's standards can decay and no one even notices.  How this occurs and what can be done about it are concepts that should be central to current considerations of safety culture.

For further thoughts from our blog on this subject, refer to our posts dated October 6, 2009 and November 12, 2009.  In the latter, we discuss the nature of complacency and its insidious impact on the very process that is designed to avoid it in the first place.