Friday, June 18, 2010

Assessing Safety Culture

In our June 15th post, we reported on Wahlström and Rollenhagen’s* concern that trying to measure safety culture could do more harm than good. However, the authors go on to assert that safety culture can and should be assessed. They identify several methods for performing such assessments, including peer reviews and self-assessments. They conclude “Ideally safety culture assessments should be carried out as an interaction between an assessment team and a host organization and it should be aimed at the creation of an awareness of potential safety threats . . . .” (§ 7) We certainly agree with that observation.

We are particularly interested in their comments on safety (performance) indicators, another tool for assessing safety culture. We agree that “. . . most indicators are lagging in the sense that they summarize past safety performance” (§ 6.2) and thus may not be indicative of future performance. In an effort to improve performance indicators, the authors suggest “One approach towards leading safety indicators may be to start with a set of necessary conditions from which one can obtain a reasonable model of how safety is constructed. The necessary conditions would then suggest a set of variables that may be assessed as precursors for safety. An assessment could then be obtained using an ordinal scale and several variables could be combined to set an alarm level.” (ibid.)
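To make the authors’ suggestion a bit more concrete, here is a minimal sketch, entirely our own illustration rather than the authors’ method, of combining ordinal ratings of precursor variables into an alarm level. The precursor names, the rating scale, and the thresholds are all invented for the example.

```python
# Hypothetical precursor variables, each rated by an assessment team
# on an ordinal scale from 1 (poor) to 5 (strong). These names and
# values are illustrative only.
ratings = {
    "corrective_action_backlog": 2,
    "management_walkdown_frequency": 4,
    "procedure_adherence": 3,
}

def alarm_level(ratings, warn=3, alert=2):
    """Return 'ALERT' if any precursor is rated at or below `alert`,
    'WARN' if any is at or below `warn`, otherwise 'OK'."""
    worst = min(ratings.values())
    if worst <= alert:
        return "ALERT"
    if worst <= warn:
        return "WARN"
    return "OK"

print(alarm_level(ratings))  # ALERT -- the worst rating here is 2
```

A real scheme would obviously need validated precursors and calibrated thresholds; the point is only that ordinal assessments can be mechanically combined into a leading signal.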

We believe the performance indicator problem should be approached somewhat differently. Safety culture, safety management and safety performance do not exist in a vacuum. We advocate using the principles of system dynamics to construct an organizational performance model that treats safety as both an input to and an output from other, sometimes competing, organizational goals, resource constraints and management actions. This approach is more robust because it can show not only that safety culture is strengthening or slipping, but why, i.e., which other organizational factors are driving the change. If the culture is slipping, analysis of system information can suggest where the most cost-effective interventions can be made. For more information on using system dynamics to model safety culture, please visit our companion website, nuclearsafetysim.com.
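For readers curious what such a model looks like in miniature, here is a toy sketch, our own illustration and far simpler than any real model, in which safety culture is a stock eroded by production pressure and rebuilt by management attention that rises as culture slips. Every coefficient is invented for the example.

```python
# Toy system-dynamics sketch (illustrative only): safety culture as a
# stock, with erosion from production pressure and rebuilding from
# management attention that responds when culture slips.

def simulate(steps=40, dt=1.0):
    culture = 0.8          # safety culture "stock", scaled 0..1
    pressure = 0.3         # constant production pressure, scaled 0..1
    history = []
    for _ in range(steps):
        # Management attention grows as culture falls below a target.
        attention = max(0.0, 0.9 - culture)
        erosion = 0.05 * pressure * culture
        rebuilding = 0.1 * attention
        culture += dt * (rebuilding - erosion)
        culture = min(1.0, max(0.0, culture))
        history.append(culture)
    return history

traj = simulate()
```

Even this toy version shows the systems point: culture settles where competing pressures balance, and changing any one coefficient (pressure, attention, response strength) moves the equilibrium, which is the kind of “why” a static survey score cannot reveal.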

* Björn Wahlström, Carl Rollenhagen. Assessments of safety culture – to measure or not? Paper presented at the 14th European Congress of Work and Organizational Psychology, May 13-16, 2009, Santiago de Compostela, Spain. The authors are also connected with the LearnSafe project, which we have discussed in earlier posts (click the LearnSafe label to see them.)

Tuesday, June 15, 2010

Can Measuring Safety Culture Harm It?

That’s a question raised in a paper by Björn Wahlström and Carl Rollenhagen.* Among other issues, the authors question the reliability and validity of safety culture measurement tools, especially the questionnaires and interviews often used to assess safety culture. One problem is that such measurement tools, when applied by outsiders such as regulators, can result in the interviewees trying to game the outcome. “. . . the more or less explicit threat to shut down a badly performing plant will most likely at least in a hostile regulatory climate, bring deceit and delusion into a regulatory assessment of safety culture.” (§ 5.3)

Another potential problem is created by a string of good safety culture scores. We have often said that success breeds complacency and an unjustified confidence that past results will lead to future success. The nuclear industry does not prepare for surprises; yet, as the authors note, the current state of safety thinking was inspired by two major accidents, not incremental progress. (§ 5.2) Where is the next Black Swan lurking?

Surprise after success can occur on a much smaller scale. After the recent flap at Vermont Yankee, evaluators spent considerable time poring over the plant’s most recent safety culture survey to see what insight it offered into the behavior of the staff involved with the misleading report on leaking pipes. We don’t think they found much. Entergy’s law firm conducted interviews at the plant and concluded the safety culture was, and is, strong. See the opening paragraph for a possible interpretation.

The authors also note that if safety culture is an emergent property of an organization, then it may not be measurable at all because emergent properties develop without conscious control actions. (§ 4.2) See our earlier post for a discussion of safety culture as emergent property.

While safety culture may not be measurable, it is possible to assess it. The authors’ thoughts on how to perform useful assessments will be reviewed in a future post.

* Björn Wahlström, Carl Rollenhagen. Assessments of safety culture – to measure or not? Paper presented at the 14th European Congress of Work and Organizational Psychology, May 13-16, 2009, Santiago de Compostela, Spain. The authors are also connected with the LearnSafe project, which we have discussed in earlier posts (click the LearnSafe label to see them.)

Saturday, June 12, 2010

HBR and BP

There’s a good essay on a Harvard Business Review blog describing how decision-making in high risk enterprises may be affected by BP’s disaster in the Gulf. Not surprisingly, the author’s observations include creating a robust safety culture “where the most stringent safety management will never be compromised for economic reasons.” However, as our Bob Cudlin points out in his comment below the article, such a state may represent a goal rather than reality because safety must co-exist in the same success space as other business and practical imperatives. The real, and arguably more difficult, question is: How does safety culture ensure a calculus of safety and risk so that safety measures and management are adequate for the task at hand?

Friday, June 11, 2010

Safety Culture Issue at Callaway. Or Not.

We just read a KBIA* report on the handling of an employee’s safety concern at the Callaway nuclear plant that piqued our interest. This particular case was first reported in 2009 but has not had widespread media attention, so we are passing it on to you.

The history seems straightforward: an employee raised a safety concern after an operational incident, was rebuffed by management, drafted a discrimination complaint for the U.S. Dept. of Labor, received (and accepted) a $550K settlement offer from the plant owner, and went to work elsewhere. The owner claimed the settlement fell under the NRC’s Alternative Dispute Resolution (ADR) process, and the NRC agreed.

We have no special knowledge of, nor business interest in, this case. It may be a tempest in a teapot but we think it raises some interesting questions from a safety culture perspective.

First, here is another instance where an employee felt he had to go outside the organization to get attention for a safety concern. The issue didn’t seem to be that significant, at most an oversight by the operators or a deficient procedure. Why couldn’t the plant’s safety culture handle his concern, determine an appropriate resolution, and move on?

Second, why was the company so ready to pony up $550K? That is a lot of dough and seems a bit strange. Even the employee noted that it was a generous offer. It makes one wonder what else was going on in the background. To encourage licensees to participate in ADR, the NRC closes investigations into alleged discrimination against employees when an ADR settlement is reached. Can safety essentially be for sale under ADR if an owner can settle with an employee?

Third, what happened to the original safety concern? According to another source,** the NRC found the operators’ actions to be “not prudent” but did not penalize any operators. Did the plant ever take any steps to address the issue to avoid repetition?


* P. Sweet and R. Townsend, KBIA Investigative Report: Looking Into Callaway Nuclear Power Plant’s “Safety Culture” (May 24, 2010).  KBIA is an NPR-member radio station owned by the U. of Missouri school of journalism.

**  FOCUS/Midwest website, "Did Ameren pay a whistleblower to shut up and go away?" (Jan. 4, 2009).

Tuesday, June 8, 2010

Toothpaste and Oil Slicks

At the end of last week came the surprise announcement from the former Dominion engineer, David Collins, that he was withdrawing his allegations regarding his former employer’s safety management and the NRC’s ability to provide effective oversight of safety culture.* The reasons for the withdrawal are still unclear, though Collins cited a lack of support from local politicians and environmental groups.

What is to be made of this? As we stated in a post at the time of the original allegations, we don’t have any specific insight into the bases for the allegations. We did indicate that how Dominion and the NRC would go about addressing the allegations might present some challenges.

What can be said about the allegations with more certainty is that they will not go away. Like the proverbial toothpaste, allegations can’t be put back into the tube, and they will need to be addressed on their merits. We assume that Collins acted in good faith in raising the allegations. In addition, a strong safety culture at Dominion and the NRC should almost welcome the opportunity to evaluate and respond to such matters. A linchpin of any robust safety culture is the encouragement for stakeholders to raise safety concerns and for the organization to respond to them in an open and effective manner. If the allegations turn out not to have merit, it has still been an opportunity for the process to work.

In a somewhat similar vein, the fallout (I am mixing my metaphors) from the oil released into the gulf from the BP spill will remain and have to be dealt with long after the source is capped or shut off. It will serve as an ongoing reminder of the consequences of decisions where safety and business objectives try to occupy a very limited success space. In recent days there have been extensive pieces* in the Wall Street Journal and New York Times delineating in considerable detail the events and decision making leading up to the blowout. These accounts are worth reading and digesting by anyone involved in high risk industries.

Two things made a particular impression. One, it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out “Why?” Two, the eventual catastrophic outcome was the result of a series of many seemingly less significant decisions and developments. In other words, it was a cumulative process that apparently never flashed an unmistakable warning alarm. In this respect it reminds us of the need for safety management to maintain a highly developed “systems” understanding, with the ability to connect the dots of risk.

* Links below



Thursday, June 3, 2010

25 Standard Deviation Moves

A Reuters Breakingviews commentary in today’s New York Times makes some interesting arguments about the consequences of the BP oil spill for the energy industry. The commentary draws parallels between BP and the financial implosion that led to the Lehman Brothers bankruptcy. “. . . flawed risk management, systemic hazard, and regulatory incompetence” are cited as the common causes, along with business models that did not take account of the possibility of “25 standard deviation moves”. These factors will inevitably lead to government intervention and industry consolidation because the estimated $27 billion in claims from the BP spill is “. . . a liability no investor will be comfortable taking, . . .”
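For perspective on just how extreme a “25 standard deviation move” is under the normal distribution such business models implicitly assumed, a quick back-of-the-envelope calculation:

```python
import math

# One-sided tail probability P(Z > x) for a standard normal variable,
# computed with the complementary error function.
def normal_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# A 3-sigma event is rare but plausible; a 25-sigma event is, for all
# practical purposes, impossible under a normal model.
print(normal_tail(3))   # roughly 1.3e-3
print(normal_tail(25))  # roughly 3e-138
```

In other words, if a 25-sigma loss actually occurs, it is the model of the world, not the world, that was wrong, which is exactly the commentary’s point about flawed risk management.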

While much of this commentary makes sense, we think it is missing a big part of the picture by not focusing on the essential need for much more rigorous safety management. By all reports, the safety performance of BP is a significant outlier in the oil industry; maybe not 25 sigma but 2 or 3 sigma at least. We have posted previously about BP and its safety deficiencies and its apparent inability to learn from past mistakes. There has also been ample analysis of the events leading up to the spill to suggest that a greater commitment to safety could, and likely would, have avoided the blowout. Safety commitment and safety culture provide context, direction and constraints for risk calculations. The potential consequences of a deep sea accident will remain very large, but the probability of the event can and should be brought much lower. Simply configuring energy companies with vastly deep pockets seems unlikely to be a sufficient remedy. For one, money damages are at best an imperfect response to such a disaster. More important, a repeat of this type of event would likely result in a ban on deep sea drilling regardless of the financial resources of the driller.

In the nuclear industry the potentially large consequences of an incident have, so far, been assumed by the government. In this respect there is something of a parallel to the financial crisis, where the government stepped in to bail out the “too large to fail” entities. Aside from the obvious lessons of the BP spill, nuclear industry participants have to ensure that their safety commitment is both reality and public perception, or there may be some collateral damage as policy makers think about how liabilities in high risk industries, including nuclear, are being apportioned.

Tuesday, June 1, 2010

Underestimating Risk and Cost

There’s a good article in today’s New York Times Magazine Preview about economic decision making in general and the oil industry in particular. In summary, when an event is difficult to imagine (e.g., the current BP disaster), people tend to underestimate the probability of it occurring; when it’s easier to imagine (e.g., a domestic terrorist attack after 9/11), people tend to overestimate the probability. Now add government caps on liability, and decision-making can get really skewed, with unreasonable estimates of both event-related probabilities and costs.

The relevance of this decision-making model to the nuclear industry is obvious but we want to focus on something the article didn't mention: the role of safety culture. Nuclear safety culture guides planning for and reacting to unexpected, negative events. On the planning side, culture can encourage making dispassionate, fact-based decisions regarding unfavorable event probabilities and potential consequences. However, if such an event occurs, then affected personnel will respond consistent with their training and cultural expectations.