Tuesday, December 28, 2010

Cross Cutting Duke Energy vs. Indiana

The starting point of this post is not the nuclear industry per se, and some may think it odd that we extrapolate from another part of the utility business to nuclear safety culture.  But the news of late regarding Duke Energy and its relationships with public utility regulators in Indiana raises some caution flags as to how corporate culture might influence the nuclear side of the business.  We believe it also raises fundamental issues about the NRC’s scope of culture assessment and may suggest the need for renewed approaches to so-called cross-cutting issues.  With that, let’s look at recent reports of what Duke executives and Indiana public servants have been up to.

In brief, the situation involves Duke’s coal-fired plant under construction in Indiana (the Edwardsport plant) and Duke Energy executives’ interactions with Indiana Public Utilities Commission personnel, up to and including the head of the IPUC.  During the pendency of the regulatory consideration of the plant costs, Duke hired a key IPUC staff member (its General Counsel, no less) and engaged in ongoing email exchanges indicating close personal relationships between Duke and IPUC personnel, the offering of favors, and the exchange of closely held information.  The upshot of the scandal has been the departures of the IPUC Chairman (fired by Indiana Governor Mitch Daniels), the former IPUC General Counsel hired by Duke, the head of Duke’s Indiana business unit (also a former IPUC staffer) and the Chief Operating Officer of Duke Energy, the company’s second highest executive.  Phew.

James Rogers, chairman and chief executive officer of Duke, said Monday that former COO James Turner had not exercised undue influence with Indiana regulators but resigned because "he felt like it put the company in a bad place."  He went on to say, "He [Turner] made a decision on his own that he felt like those e-mails were embarrassing and inappropriate... If you read through them, it showed a very close relationship between the two of them.”*

But if one visits the Duke Energy website and reads through its Code of Business Ethics, it would appear that the activities in Indiana routinely and broadly violated the espoused ethics.  So, did a very senior executive resign because he had bad email habits, or was it the underlying actions and behaviors that were inappropriate?

Our purpose here is not to delve into the details of the scandal since we think the actions taken in response speak for themselves.  Rather we want to examine the potential for such behaviors to spill over into, or influence, other parts of Duke’s business, namely its extensive nuclear plant fleet.  To us it raises the question: what is a cross-cutting issue for nuclear safety?  The essence of cross-cutting is to account for issues that have broad and potentially overlapping significance to achieving desired safety results.  With regard to safety culture, can such issues be limited to the scope of nuclear operations, or do they necessarily involve issues that span corporate governance and ethics?  A good question.

In prior posts (here and here) we reported some of our research on compensation structures within nuclear owner companies and the extent to which such compensation included incentives other than safety.  We found that corporate level executives including Chief Nuclear Officers were eligible for substantial amounts of compensation, large percentages of which were based on performance against business objectives such as profits and capacity factors.  We raised the concern that such large amounts of incentive-based compensation could exacerbate the influence of business priorities that compete with safety objectives.  The Duke-Indiana experience brings into focus other aspects of the corporate environment, such as business ethics and relationships with regulators, that could also bear on the ability of the organization to maintain its nuclear safety culture.  It is clear that certain corporate level decisions such as budgets and business goals directly impact the management of nuclear facilities.  It also seems likely that “softer” issues such as compensation, promotional opportunities and business ethics will be sending “environmental” messages to nuclear personnel as well.  

There is also something of an interesting parallel in the way that the current issues involving the Indiana PUC were handled.  Duke fired (or allowed to resign) both of its recent hires from IPUC and the COO of Duke Energy.  Similarly the governor of Indiana replaced the Chairman of the IPUC.  Sound familiar?  Seems like the approach taken within nuclear organizations when safety culture issues become problematic.  Get rid of some individuals and move on.  Such a response presumes that “bad people” are the root cause of these problems.  But one wonders, as do many of the news media reporting on this, whether at both Duke and IPUC this was just the way business was being done until it became public via the email exchanges.  If so doesn’t it suggest that cultural issues are involved?  And that the causes and extent of the cultural issues warrant further consideration?

What Duke decides to do or not do with regard to its corporate culture is probably an internal matter, but any spillover of corporate culture into nuclear safety culture is of more direct concern.  Should the NRC be aware of and interested in corporate culture, particularly when fundamental ethics and values are concerned?  How confident is the NRC that some safety culture breakdowns (think of Davis-Besse and Millstone) did not have their genesis in a defective corporate culture?  Does corporate culture cross-cut nuclear safety culture?  If not, why not?


*  J. Russell, “E-mail Scandal Topples Duke Energy's James Turner,” IndyStar.Com (Dec 7, 2010).

Thursday, December 23, 2010

Ian Richardson, Safety Culturist

Ian Richardson, the British actor, may not be the first person who leaps to mind as someone with insight into the art of assessing safety culture.  But in an episode in the second volume of the BBC series "House of Cards," Ian’s character, the UK Prime Minister, observes:

“You can get people to say anything you want them to say, but getting them to do what they say is another thing.”

And with that thought, we wish you Happy Holidays.

Thursday, December 16, 2010

SONGS Is Getting Better (Maybe)

On December 14, 2010 the NRC held a public meeting with Southern California Edison to discuss recent safety-related performance at the San Onofre Nuclear Generating Station (SONGS).  As you know, SONGS has been plagued for years by incidents, including willful violations, and employees claiming they fear retaliation if they report or discuss such incidents.  The newspaper item* on the meeting had an upbeat tone, and quoted NRC deputy director Troy Pruett as saying:

"The trick now is for you guys to continue some of the successes you've had in the last two, three, four months."

That sounds good, and we hope SONGS continues to perform at a satisfactory level.  But the reality is that a few months of symptom-free behavior is not enough to declare the patient cured.  To get a feel for the plant’s history of problems, check out this earlier article** by the same reporter.  In addition, please refer to our September 2010 posts (here and here) for our perspective on what the underlying issues might be.

We’re Still Here

As you may have noticed, we haven’t been posting recently.  It’s not for lack of interest; we just haven’t come across much news or other items worthy of comment or passing on to you.  We will not waste your time (or ours) with fluff simply to fill the space.  If you have any safety culture news, articles, publications, etc. that you’d like us to review and comment on, please let us know about them via email.

*  P. Sisson, “San Onofre: Federal regulators report progress at nuclear plant,” North Country Times (Dec 14, 2010).

**  P. Sisson, “San Onofre: Latest progress report on nuke plant set for Tuesday,” North Country Times (Dec 9, 2010).

Thursday, November 18, 2010

Another Brick in the Wall for BP et al

Yesterday the National Academy of Engineering released its report* on the Deepwater Horizon blowout.  The report includes a critical appraisal of many decisions made during the period when the well was being prepared for temporary abandonment, decisions that in the aggregate decreased safety margins and increased risks.  This Washington Post article** provides a good summary of the report.

The report was written by engineers and scientists and has a certain “Just the facts, ma’am” tone.  It does not specifically address safety culture.  But we have to ask: What can one infer about a culture where the business practices don’t include “any standard practice . . . to guide the tradeoffs between cost and schedule and the safety implications of the many decisions (that is, a risk management approach).”  (p. 15)

We have had plenty to say about BP and the Deepwater Horizon accident.  Click on the BP label below to see all of our related blog entries.


*  Committee for the Analysis of Causes of the Deepwater Horizon Explosion, Fire, and Oil Spill to Identify Measures to Prevent Similar Accidents in the Future; National Academy of Engineering; National Research Council, “Interim Report on Causes of the Deepwater Horizon Oil Rig Blowout and Ways to Prevent Such Events” (2010).

**  D. Cappiello, “Experts: BP ignored warning signs on doomed well,” The Washington Post (Nov 17, 2010).  Given our blog’s focus on the nuclear industry, it’s worth noting that, in an interview, the committee chairman said, “the behavior leading up to the oil spill would be considered unacceptable in companies that work with nuclear power or aviation.”

Tuesday, November 9, 2010

Human Beings . . . Conscious Decisions

In a New York Times article* dated November 8, 2010, a headline announced that Fred Bartlit, the independent investigator for the presidential panel on the BP oil rig disaster earlier this year, had not found that “cost trumped safety” in decisions leading up to the accident.  The article noted that this finding contradicted determinations by other investigators, including those sponsored by Congress.  We had previously posted on this subject, including taking notice of the earlier findings of cost trade-offs, and wanted to weigh in based on this new information.

First we should acknowledge that we have no independent knowledge of the facts associated with the blowout and are simply reacting to the published findings of current investigations.  In our prior posts we had posited that cost pressures could be part of the equation in the lead-up to the spill.  On June 8, 2010 we observed:

“...it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out ‘Why?’ ”

And we recall one of the initial observations made by an OSHA official shortly after the accident as detailed in our April 26, 2010 post:

“In the words of an OSHA official BP still has a ‘serious, systemic safety problem’ across the company.”

So it appears we have been cautious in reaching any conclusions about BP’s safety management.  That said, we do want to put into context the finding by Mr. Bartlit.  First we would note that he is, by profession, a trial lawyer and may be both approaching the issue and articulating his finding with a decidedly legal focus.  The specific quotes attributed to him are as follows:

“. . . we have not found a situation where we can say a man had a choice between safety and dollars and put his money on dollars” and “To date we have not seen a single instance where a human being made a conscious decision to favor dollars over safety,...”

It is not surprising that a lawyer would focus on culpability in terms of individual actions.  When things go wrong, most industries, nuclear included, look to assign blame to individuals and move on.  It is also worth noting that the investigator emphasized that no one had made a “conscious” decision to favor cost over safety.  We think it is important to keep in mind that safety management and failures of safety decision making may or may not involve conscious decisions.  As we have stated many times in other posts, safety can be undermined through very subtle mechanisms such that even those involved may not appreciate the effects, e.g., the normalization of deviance.  Finally we think the OSHA investigator may have been closer to the truth with his observation about “systemic” safety problems.  It may be that Mr. Bartlit, and other investigators, will be found to have suffered from what is termed “attribution error” where simple explanations and causes are favored and the more complex system-based dynamics are not fully assessed or understood in the effort to answer “Why?”  

* J.M. Broder, "Investigator Finds No Evidence That BP Took Shortcuts to Save Money," New York Times (Nov 8, 2010).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, like nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy so we have excerpted and summarized points that we believe will be interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture in four main factors:  Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions and the questions usually vary for different departments.

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the survey factors and sub-factors are related not to external measurements of safety performance, but to the survey-takers’ perceptions of risk in their work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.
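Mechanically, a point on such a grid is just a pair of group averages for one factor.  Here is a minimal Python sketch of the idea — the scores, the 1-5 scale and the simple averaging are our own assumptions for illustration, not the actual SCISMS scoring method:

```python
# Hypothetical 1-5 survey scores on one safety culture factor, split by
# respondent group (invented numbers, not SCISMS data or its real scoring).
employee_scores = [4.1, 3.8, 3.5, 4.0, 3.6, 3.9]
management_scores = [4.6, 4.4, 4.7, 4.5]

# Each factor becomes one point on the grid: (employee mean, management mean)
emp_mean = sum(employee_scores) / len(employee_scores)
mgmt_mean = sum(management_scores) / len(management_scores)

# A gap between the two coordinates signals weak concurrence: here,
# management perceives the safety culture more positively than the workforce.
gap = mgmt_mean - emp_mean
print(f"grid point: ({emp_mean:.2f}, {mgmt_mean:.2f}), gap = {gap:.2f}")
```

Plotting one such point per factor (or per department) immediately shows both the overall strength of the culture and where managers and employees disagree.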

Our Take on SCISMS

We have found summary level graphics to be very important in communicating key information to clients, and the Safety Culture Grid looks like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and the perceptual alignment of managers and employees with respect to the organization’s safety culture.  Grids can be constructed to show findings across factors or departments within one company or across multiple companies for an industry comparison.

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25)  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
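To put those r values in perspective, here is a minimal sketch (Python, with synthetic data — not the SCISMS dataset) of how a factor-outcome correlation is computed and how much variance it actually explains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one survey factor score and one outcome scale
# ("perceived risk") for 200 respondents.  The -0.5 weighting builds in
# a moderate negative relationship, roughly like the reported r = -.38 to -.71.
factor = rng.normal(size=200)
perceived_risk = -0.5 * factor + rng.normal(size=200)

# Pearson correlation between the factor and the outcome
r = np.corrcoef(factor, perceived_risk)[0, 1]

# r-squared: the share of outcome variance the factor "explains"
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```

Note that even the strongest reported correlation, r = -.71, corresponds to r² of about .50, i.e., half the variance in the outcome remains unexplained — one reason we characterize these correlations as moderate.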

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report fairly clearly describes two different perspectives of safety culture: the socio-anthropological and organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp.7-8) 

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2876 respondents, not just a single facility or company.  They then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Program components and found them generally consistent.  We have no issue with that conclusion. 

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  I assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)
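For readers unfamiliar with the technique, principal component analysis reduces many correlated survey items to a small number of underlying factors.  A minimal sketch of the mechanics (Python, with synthetic responses driven by two invented latent factors — not INPO's data or its nine-factor structure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic survey: 300 respondents answer 12 items that are really
# driven by 2 latent factors plus noise (stand-ins, not INPO's items).
latent = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 12))
responses = latent @ loadings + 0.3 * rng.normal(size=(300, 12))

# Principal component analysis via SVD of the centered response matrix
centered = responses - responses.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first two components capture most of the variance, i.e., they
# recover the two latent factors that generated the answers.
print(f"variance explained by first two components: {explained[:2].sum():.2f}")
```

INPO's analysis presumably worked the same way in principle, distilling 2876 respondents' answers down to nine factors that account for most of the variation in the responses.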

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which I am aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  I take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying the value of safety culture surveys is limited, but can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys, and the industry, the NRC and INPO all have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure over a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  But what evidence exists that there is any relationship between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scrams data for at least anecdotal evidence of how non-conservative decision making contributed to x number of scrams.  We would be surprised to learn that anyone has followed the string on any scram events all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Club Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.

Wednesday, October 20, 2010

Perception and Reality

In our October 18, 2010 post on how perception and reality may factor into safety culture surveys, we ended with a question about the limits of the usefulness of surveys absent a separate assessment to confirm the actual conditions within the organization.  Specifically, can a survey reliably distinguish among the following three situations:

-    an organization with strong safety culture with positive survey perceptions;
-    an organization with compromised safety culture but still reporting positive survey perceptions due to imperfect knowledge or other motivations;
-    an organization with compromised safety culture but still reporting positive survey perceptions due to complacency or normalization of lesser standards.

In our August 23, 2010 post we had raised a similar issue as follows:

“the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.”

With what we know today it appears to us that safety culture survey results alone should not be used to reach conclusions about the state of safety culture in the organization or as a predictor of future safety performance.  Even comparisons across plants and the industry seem open to question due to the potential for significant and perhaps unknowable variation of perceptions of those surveyed. 

How would we see surveys contributing to knowledge of the safety culture in an organization?  In general we would say that certain survey questions can provide useful information where the objective is to elicit the perceptions of employees (versus a factual determination) on certain issues.  There is still the impediment that some employees’ perceptions will be colored, e.g., they will discern the “right” answer or will be motivated by other factors to bias their answers. 

What kind of questions might be perception-based?  We would say in areas where the perceptions of the organization are as important or of as much interest as the actual reality.  For example, whether the organization perceives that there is a bias for production goals over safety goals.  The existence of such a perception could have wide ranging impacts on individuals including their willingness to raise concerns or rigorously pursue their causes.  Even if the perceptions derived from the survey are not consistent with reality, it is important to understand that the perception exists and take steps to correct it.  Questions that go to ascertaining trust in management would also be useful as trust is largely a matter of perception.  It is not enough for management to be trustworthy.  Management must also be perceived as trustworthy to realize its benefit.   

The complication arises when perception and reality pull in different directions.  Reality is, of course, always present, but perception tugs at it and in many instances shapes it.  If that relationship is not properly managed, perception can take over, lessening if not eliminating the influence of reality itself.

This suggests that a useful goal, or trait, of safety culture is to bring perception as close to reality as possible.  Perceptions that are inflated or unduly negative only distort the dynamics of safety management.  As with most complex systems, perceptions generally lag actual reality: things improve, but it takes time for information to flow, attitudes to adjust to new information, and new perceptions to take hold.  Using perception data from surveys, combined with the forensics of assessments, can provide the necessary calibration to bring perception and reality into alignment.
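The lag dynamic can be sketched as a simple first-order adjustment in which perception closes only a fixed fraction of its gap to reality each period — a toy model of our own for illustration, not anything drawn from the survey literature:

```python
# Toy first-order lag model: perception adjusts toward reality by a
# fixed fraction each period (an illustrative assumption, not a
# validated model of organizational attitudes).
def update_perception(perception: float, reality: float, rate: float = 0.2) -> float:
    """Move perception a fraction of the way toward current reality."""
    return perception + rate * (reality - perception)

# Reality improves abruptly (say, a genuine safety culture fix at time 0),
# but perception catches up only gradually.
perception, reality = 0.0, 1.0
for period in range(10):
    perception = update_perception(perception, reality)
print(f"perception after 10 periods: {perception:.2f}")  # prints 0.89, still short of 1.0
```

Even after ten periods, perception has closed only about 89% of the gap — which is exactly why a survey taken shortly after a real change (for better or worse) can misrepresent current conditions.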

Monday, October 18, 2010

Perception Is/Is Not Reality?

This post continues our thoughts on the use of safety culture surveys.  The Oxford Dictionary says reality is the state of things as they actually exist, rather than as they may appear or may be thought to be.  Another theory holds that there is no objective reality, that there simply and literally is no reality beyond the perceptions, beliefs and attitudes we each have about it.  In other words, “perception is reality.”  So, when a safety culture survey is conducted, what reality is it measuring?  Is the purpose of the survey to determine an “objective” reality based on what an informed and knowledgeable person would say?  Or is the purpose simply to catalog the range of perceptions of reality held by those surveyed, whether accurate or not?  Why does it matter?

In our August 11, 2010 post we noted that UK researcher Dr. Kathryn Mearns refers to safety culture surveys as “perception surveys,” since they focus on people’s perceptions of attitudes, values and behaviors.  In a follow-up post on August 27, 2010, reporting further communications with Dr. Mearns, we quoted her as follows:

“I see the survey results as a ‘temperature check’ but it needs a more detailed diagnosis to find out what really ails the safety culture.”

If one agrees that surveys are perception-based, it creates something of a dilemma as to which reality is of interest.  If “things as they actually exist” is important, then surveys alone may be of limited value, even misleading, without thorough diagnostic assessments, which is Dr. Mearns' point.  On the other hand, if perception itself is important, then surveys offer a window into that reality.  We think both realities have their place.

We find some empirical support for these ideas from the results of a recent safety culture assessment at Nuclear Fuel Services.*  The report is quite lengthy (over 300 pages) and exhaustive in its detail.  The assessment was done as part of a commitment by the owners of Nuclear Fuel Services (NFS) to the NRC and in response to ongoing safety performance issues at its facilities.  The assessment was performed by an independent team and included a safety culture survey.  It is the survey results that we focus on.

In reporting the results of the survey, the team identified a number of cautions as to the interpretation of NFS workforce perceptions.  The team found that survey numerical ratings were inflated due to the lack of an accurate frame of reference or adequate understanding of a particular cultural attribute.  This conclusion was based on the findings of the overall assessment project.  The team found the workforce perceptions to be “generally (and in some cases significantly) more positive than warranted” (p. 40) or justified by actual performance.

We found these results to be interesting in several respects.  First there is the acknowledgment that surveys simply compile the perceptions of individuals in the organization.  In the NFS case the assessment team concluded that the reported perceptions were inaccurate based on the team’s own detailed analysis of the organization.

Perhaps more interesting was that this inherent subjectivity of perceptions was attributed in this project to the lack of knowledge and frame of reference of the NFS staff, specifically related to standards of excellence associated with commercial nuclear sites.  This resonates with an observation from our August 23 post that “workers who had been through an accident recognized a relatively safer (riskier) environment better than workers who had not.”  In other words, people’s perceptions are influenced by the limits of their own experiences and context.  Makes sense.

The NFS assessment team goes on to indicate that the results of a prior safety culture survey, conducted a year earlier, were also compromised by the very time frame in which it was administered.  “It is reasonable to assume that the survey numerical ratings would have been lower if the survey had been administered after the workforce had become aware of the facts associated with the series of operational events that occurred” [prior to the survey].  (p. 41)  We would add that there are probably numerous other factors that could easily bias perceptions, e.g., people being sensitive to what the “right answer” is and responding on that basis; complacency; the effect of externalities such as a significant corporate initiative dependent on the performance of the nuclear business; normalization of deviance; job-related incentives; etc.

We think it is very likely that the assessment team was correct in discounting the NFS survey results.  The question is, can any other survey results be relied on absent independent calibration by detailed organizational assessments?  We will take this up in a forthcoming post.

*  "Information to Fulfill Confirmatory Order, Section V, Paragraph 3.e" (Jun 29, 2010), ADAMS Accession Number ML101820096.

Monday, October 4, 2010

Survival of the Safest

One of our goals with SafetyMatters is bringing thought provoking materials to our readers, particularly materials they might not otherwise come across.  This post is an example from the greater business world and the current state of the U.S. economy.  Once again it is based on some interesting research from professors at Yale University* and described in an article in the New York Times.**

“Corporate managers struggling to preserve their companies and protect their core employees have inadvertently contributed to a vicious cycle of rising unemployment and plummeting national morale. If we are to break out of this downward spiral, we first need to understand the problem…professional managers throughout the business world see it as their job to keep work-force morale high. But, paradoxically, the actions they take for their own workplaces often make the overall crisis more severe.”

These issues have been the subject of research by Yale economics professor Truman Bewley.  While his specific focus is on labor markets and how wages respond (or don’t respond) to periods of reduced demand, some of the insights channel directly into the current issues of safety culture at nuclear plants. 

Bewley’s approach was to interview hundreds of corporate managers at length about the driving forces for their actions.  The article goes on to describe how corporate managers respond to recessions by protecting their most important staff, but paradoxically these actions tend to produce unforeseen and often counter-productive results. 

The description of how actions result in unintended consequences is emblematic of the complexity of business systems, where dynamics and interdependencies are not always seen or understood by the managers tasked with achieving results.  Nuclear safety culture exists in such a complex socio-technical system and requires more than just “leadership” to assure long term sustainability. 

This brings us to the first part of Dr. Bewley’s approach - his focus on identifying and understanding the driving forces for managers’ actions.  We see this as precisely the right prescription for improving our understanding of nuclear safety culture dynamics, particularly in cases where safety culture weaknesses have been observed.  A careful and penetrating look at why people don’t act in accordance with safety culture principles would do much to identify the types of factors, such as performance incentives, cost and schedule pressures, etc., that may be at work in an organization.  Driving forces are not necessarily different from root causes - a term more familiar in the nuclear industry - but we tend to prefer the former because it explicitly reminds us that safety culture is dynamic and results from the interaction of many moving parts.  Currently the focus of the industry, and the NRC for that matter, is on safety culture “traits”.  Traits are really the results or manifestations of safety culture and thus build out the picture of what is desired.  But they do not get at what factors actually produce a strong safety culture in the first place.

As an example we refer you to a comment we posted on a Nuclear Safety Culture group thread on LinkedIn.com.  Dr. Bill Corcoran initiated a thread asking for proposals of safety culture traits that were at least as important as those in the NRC strawman.  Our response proposed:

 “The compensation structure in the corporation is aligned with its safety priorities and does not create real or perceived conflicts in decisions affecting nuclear safety.” ***

While this was proposed as a “trait” in response to Bill’s request, it is clearly a driving force that will enable and support strong safety culture behaviors and decisions.

* To read about other interesting work at Yale, check out our August 30, 2010 post.

** Robert J. Shiller, "The Survival of the Safest," New York Times (Oct 2, 2010).

*** The link to the thread (including Bob's comment) is here.  This may be difficult for readers who are not LinkedIn members to access.  We are not promoting LinkedIn but the Nuclear Safety Culture group has some interesting commentary.

Thursday, September 30, 2010

BP's New Safety Division

It looks like oil company BP believes that creating a new, “global” safety division is part of the answer to its ongoing safety performance issues, including most recently the explosion of the Deepwater Horizon oil rig in the Gulf of Mexico.  An article in the September 29, 2010 New York Times* quotes BP’s new CEO as stating “safety and risk management [are] our most urgent priority” but does not provide many details of how the initiative will accomplish its goal.  At the risk of jumping to conclusions, it is hard for us to see how a separate safety organization is the answer, although BP asserts it will be “powerful”.

Of more interest was a lesser headline in the article with the following quote from BP’s new CEO:

“Mr. Dudley said he also plans a review of how BP creates incentives for business performance, to find out how it can encourage staff to improve safety and risk management.”

We see this as one of the factors that is a lot closer to the mark for changing behaviors and priorities.  It parallels recent findings by FPL in its nuclear program (see our July 29, 2010 post) and warning flags that we had raised in our July 6 and July 9, 2010 posts regarding trends in U.S. nuclear industry compensation.  Let’s see which speaks the loudest to the organization: CEO pronouncements about safety priority or the large financial incentives that executives can realize by achieving performance goals.  If they are not aligned, the new “division of safety” will simply mean business as usual.

*  The original article is available via the iCyte below.  An updated version is available on the NY Times website.

Wednesday, September 22, 2010

Games Theory

In the September 15, 2010 New York Times there is an interesting article* about the increasing recognition within school environments that game-based learning has great potential.  We cite this article as further food for thought about our initiatives to bring simulation-based games to training for nuclear safety management.

The benefit of using games as learning spaces is based on the insight that games are systems, and systems thinking is really the curriculum, bringing a nuanced and rich way of looking at real world situations.

“Games are just one form of learning from experience. They give learners well-designed experiences that they cannot have in the real world (like being an electron or solving the crisis in the Middle East). The future for learning games, in my view, is in games that prepare people to solve problems in the world.” **

“A game . . . is really just a ‘designed experience,’ in which a participant is motivated to achieve a goal while operating inside a prescribed system of boundaries and rules.” ***  The analogy in nuclear safety management is to have the game participants manage a nuclear operation - with defined budgets and performance goals - in a manner that achieves certain safety culture attributes even as achievement of those attributes comes into conflict with other business needs.  The game context brings an experiential dimension that is far more participatory and immersive than traditional training environments.  In the NuclearSafetySim simulation, the players’ actions and decisions also feed back into the system, impacting other factors such as organizational trust and the willingness of personnel to identify deviations.  Experiencing the loss of trust in the simulation is likely to be a much more powerful lesson than simply the admonition to “walk the talk” burned into a PowerPoint slide.
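The kind of trust feedback loop described above can be sketched in a few lines of code.  To be clear, the variable names, update rules, and coefficients below are our own hypothetical illustration, not the actual NuclearSafetySim model:

```python
# Toy feedback loop: decisions nudge organizational trust, and trust in
# turn drives personnel's willingness to report deviations.  All update
# rules and coefficients are illustrative assumptions.

def step(trust, decision_favors_safety, rate=0.1):
    """Move trust toward 1.0 after safety-conscious decisions and
    toward 0.0 after decisions that sacrifice safety for other goals."""
    target = 1.0 if decision_favors_safety else 0.0
    return trust + rate * (target - trust)

def reporting_willingness(trust):
    """Willingness to report deviations rises with trust (illustrative)."""
    return 0.2 + 0.8 * trust

trust = 0.6
for favors_safety in [True, False, False, False]:  # a run of schedule-driven calls
    trust = step(trust, favors_safety)
print(round(trust, 3), round(reporting_willingness(trust), 3))
```

Even this toy version shows the dynamic the simulation is after: a string of schedule-driven decisions erodes trust, which then degrades reporting, which is exactly the kind of second-order effect a PowerPoint slide cannot convey.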

* Sara Corbett, "Learning by Playing: Video Games in the Classroom," New York Times (Sep 15, 2010).

** J.P. Gee, "Part I: Answers to Questions About Video Games and Learning," New York Times (Sep 20, 2010).

*** "Learning by Playing," p. 3 of retrieved article.



Thursday, September 16, 2010

Missing the Mark at SONGS

In our September 13, 2010 post on the current situation at SONGS we commented on the (in our opinion) undue focus on “leadership” as the sine qua non of safety culture.  Delving into the details of the most recent NRC inspection report* we came across another perplexing organizational response.  This time the issue was deliberate non-compliance.  While deliberate violations do not often get a lot of visibility, we find them potentially useful for illustrating safety culture dynamics.

First the SONGS experience.  Recall that it was a series of deliberate violations by fire watch personnel in the 2001-2006 time frame that started to crystallize safety culture concerns.  To address the problem, SCE committed to providing Corporate Ethics training to managers, supervisors and other specified employees.  The training was completed in 2008.  In 2009 additional ethics training was given to all employees including a SONGS-specific case study.  In addition monitoring programs were enhanced to better detect deliberate violations.

How effective was the training?  As reported in the NRC inspection report, between January 2008 and mid-2010, nine additional instances of deliberate non-compliances were identified.  The inspection report went on to say: “In response to these nine deliberate non-compliances, the licensee performed an Apparent Cause Evaluation….This evaluation identified the need to continue the training and monitoring programs which were developed in response to the Confirmatory Order.”

Did the NRC agree?  “The inspectors determined that this large number of deliberate non-compliances indicated that training on ethics and the disciplinary policy had not been fully effective in eliminating deliberate non-compliances.”  But in a bewildering twist, the NRC goes on to sign off on the issue because actions taken to detect and address deliberate violations have been effective...and the licensee intended to continue taking actions to prevent further instances from occurring. 

Perhaps both SCE and the NRC might have found our recent posts on current academic thinking on the subject of teaching ethics to be of value.  The Yale School of Management’s authors [see our August 30, 2010 post] indicated that the concern arises when values are taught in the abstract and reliance is placed on commitments to high ethics without the contextual conflicts that will arise in the real world.  And the MIT article cited in our September 1, 2010 post bluntly reminds us that “a decision necessarily involves an implicit or explicit trade-off of values” and that companies typically drill employees on values statements and codes of conduct, which have a more “symbolic than instrumental effect”.

We don’t have access to the ethics training provided by SCE but our suspicion is that it probably misses the target in the manner described in these management papers.  In the case of deliberate violations can there be any question that a trade-off of values is occurring?  And if trade-offs are occurring, then one has to ask, Why?  If situational forces are driving behavior, and training was not effective the first time, will repeating the training produce a different result?  

* Letter dated Aug 30, 2010 from R.E. Lantz (NRC) to R.T. Ridenoure (SCE), subject "SAN ONOFRE NUCLEAR GENERATING STATION – NRC FOCUSED BASELINE INSPECTION OF SUBSTANTIVE CROSS-CUTTING ISSUES INSPECTION REPORT 05000361/2010010 and 05000362/2010010," ADAMS Accession Number ML102420696.

Monday, September 13, 2010

Here We Go Again

Back on March 22, 2010 we posted about the challenge of addressing safety culture issues through one-dimensional approaches such as focusing on leadership or reiterating training materials.  We observed that the conventional wisdom that culture is simply leadership driven does not address the underlying complexity of culture dynamics.  San Onofre may be the most recent case in point.  In 2008 new leadership was brought in to the station in response to ongoing culture issues.  Safety culture improved somewhat, at least according to surveys, then resumed its decline.  Last week leadership was changed again following continued pressure by the NRC on cross cutting issues.  Perhaps ironically, one of the more recent actions taken at the station in response to continuing allegations of a “chilled environment” was . . . leadership training.*

The evolution of events at San Onofre also reinforces another observation we have made about the reliance on safety culture surveys.  As with just about all similar situations, the prescription for weaknesses in “cornerstone” issues by both licensees and the NRC is: conduct a survey.  Looking back in the San Onofre case, the following was determined in its October 2009 survey:

Overall, the Independent Safety Culture Assessment determined that “the safety culture at SONGS is sufficient to support plant operations”.

SCE also reported to the NRC that the survey showed:

Site management is communicating strong and consistent safety messages, including:

-    Safety is the first priority
-    Site personnel are encouraged and expected to identify and report potential safety concerns**

The NRC then conducted additional inspections in early 2010.  “The inspection team determined that the safety culture at SONGS was adequate; however, several areas were identified that needed improvement .... All of the individuals interviewed expressed a willingness to raise safety concerns and were able to provide multiple examples of avenues available, such as their supervisor, writing a notification, other supervisors/managers, or the Nuclear Safety Concerns Program; however, approximately 25% of those interviewed indicated that they perceived that individuals would be retaliated against if they went to the NRC with a safety concern if they were not satisfied with their management’s response.”***

“When asked about the 2009 nuclear safety culture assessment, all of the individuals interviewed remembered having attended a briefing session on the results. However, only the general result of "safety culture was adequate” was recalled by those interviewed.”***

* "SONGS Hit with Stern NRC Rebuke," San Clemente Times (March 2, 2010).

** Slides presented at Nov 5, 2009 SCE-NRC meeting, attached to NRC Meeting Summary dated Nov 20, 2009, ADAMS Accession Number ML093240212.

*** Letter dated Mar 2, 2010 from E. Collins (NRC) to R.T. Ridenoure (SCE), subject "Work Environment Issues at San Onofre Nuclear Generating Station—Chilling Effect," ADAMS Accession Number ML100601272.

Wednesday, September 1, 2010

Making Values Count

In our August 30, 2010 Experiencing Decisions post we highlighted a Wall Street Journal article with some interesting insights into how teaching values must have a strong “experiential” component.  In the real world experiential means that day-to-day decision making is a contact sport, where values collide with business priorities.  In that article reference was made to another paper from the MIT Sloan Management Review, “How to Make Values Count in Everyday Decisions.”*  This work provides a useful and practical resource for follow up reading and implementation.

The Sloan paper authors come right to the point, stating “...a decision necessarily involves an implicit or explicit trade-off of values.” [p.75].  And “decision making is a trade-off between values….[for example] choosing customer safety over short-term financial performance”, referring to the decision by Johnson & Johnson to pull Tylenol off store shelves in 1982.  This is an important perspective but not necessarily one that is very often part of the dialogue about nuclear safety culture.

“The typical approach of many companies is to drill employees on values statements and codes of conduct, but by themselves such sets of principles do not easily permeate everyday decisions.  Recent research suggests that they usually have a more symbolic than instrumental effect.” [p.76]

The authors suggest the use of “decision maps” which is a device to create a picture of the decision process including choices, consequences, outcomes and values.  Note that a distinction is drawn between short term results of a decision (consequences) and longer term impacts (outcomes).  Identifying the longer term consequences of a decision requires thinking through the dynamics of the whole business “system” over multiple time periods.  Think of a pinball machine, perhaps even a pinball machine where you can’t see inside.
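A decision map of this sort can be captured as a simple data structure.  The sketch below uses our own invented fields and an illustrative paraphrase of the Tylenol example, not the paper’s actual notation:

```python
# A toy "decision map" linking a choice to its consequences (short term),
# outcomes (long term), and the values each reflects.  Entries are
# illustrative, loosely based on the Tylenol recall example above.
decision_map = {
    "choice": "recall product nationwide",
    "consequences": ["large short-term cost", "lost shelf space"],
    "outcomes": ["restored customer confidence", "brand survives long term"],
    "values": ["customer safety over short-term financial performance"],
}

def explain(dmap):
    """Render the map as the kind of narrative a leader-as-teacher might use."""
    return (f"We chose to {dmap['choice']}, accepting "
            f"{', '.join(dmap['consequences'])}, because we expected "
            f"{', '.join(dmap['outcomes'])}, reflecting our value: "
            f"{dmap['values'][0]}.")

print(explain(decision_map))
```

The point of the structure is the distinction the authors draw: consequences and outcomes are separate fields, forcing the mapmaker to think past the current reporting period.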

One of the problems cited in the paper is that values articulated at the top of the organization can be subverted by the everyday decisions made by staff members, in effect creating a default alternative value structure.  As a practical matter it is the sum of actual decisions that defines the value structure more than the abstract and idealized statements of values.

What is one to do?  Are decision maps the answer?  We’ll leave that to our readers to decide after reading this paper and examining the example provided.  What we can endorse is the authors’ prescription in the last section of the article.  It is based on a belief that leaders should be teachers, and teaching means explaining how decisions are made and how they reflect the values espoused by the leaders.  Basically, decisions need to be explained, used in training and communicated widely within the organization.  We like the idea of identifying a number of recent decisions and examining how the decisions were made, particularly how choices, consequences, and outcomes were linked to values and how values were balanced and traded-off.  This might not always be comfortable but the willingness to use such a process may say more about an organization’s values than any other action.

* “How to Make Values Count in Everyday Decisions”, J.E. Urbany, T.J. Reynolds and J.M. Phillips, MIT Sloan Management Review, Summer 2008, pp. 75-80.

Monday, August 30, 2010

Experiencing Decisions

With this post we’re back to a topic that is high on our list of important issues for ensuring a strong safety culture - and one that doesn’t seem to get a lot of attention by the industry and regulators.  It is the critical importance of how safety values are integrated into decision making processes, particularly for decisions that involve balancing other business priorities and where the safety implications are not clear cut.   

An article in the Wall Street Journal on August 23, 2010* provides the perspective of educators at a top business school on teaching values to their students.  The concern arises when values are taught in the abstract and reliance is placed on commitments to high ethics without the contextual conflicts that will arise in the real world.  “The power of the situation, and our too frequent disregard for it, is an overarching lesson from sociology and social psychology. Situational forces drive behavior to a surprising extent, much more than expected by those who believe character determines all.”

Isn’t this also true for the nuclear industry?

The authors, professors at Yale’s School of Management, observe, “This [traditional leadership courses] leaves the connection between values, leadership and action underdeveloped.”  Students “...must be brought face-to-face with the pressures that profit-maximization will create for them.”

How can students’ appreciation of decision making challenges be made more realistic and compelling?  The authors believe “We must make them aware that these decisions will challenge their values. . . . We need to make sure they engage in a continuing dialogue. . . . We have found this is best achieved through experiential learning.”

The term “experiential learning” certainly caught our eye as we believe it is consistent with our own focus on developing simulation tools to provide a realistic decision-making environment - where trainees can practice managing the conflicts they will inevitably confront on the job.

*"Promises Aren't Enough: Business Schools Need to Do a Better Job Teaching Students Values," Rodrigo Canales, B. Cade Massey and Amy Wrzesniewski, The Wall Street Journal, August 23, 2010.

Friday, August 27, 2010

Safety Climate Surveys (Part 2)

On August 23, 2010 we posted on a paper* reporting on a safety climate survey conducted at a number of off-shore oil facilities.  We noted that the paper presented a rigorous analysis of the survey data and also discussed the limitations of the data and the analysis.  Our Bob Cudlin has been in contact with the paper’s lead author, who provided a candid assessment of how survey data should be used.

In a private message to Bob, Dr. Mearns gave a general warning against over-inference from survey data and findings.  She says, “I think safety climate surveys have their place but they need to be done properly and unfortunately, many attempts to measure safety climate are poorly executed.   The data obtained from surveys are simply numbers but they don’t tell you much about what is actually going on within the organisation or the team regarding safety.  I see the survey results as a ‘temperature check’ but it needs a more detailed diagnosis to find out what really ails the safety culture.” 

We couldn’t have said it better ourselves. 

*  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments.”  Safety Science 41(8) 2003 (Oct) pp 641-680.

Monday, August 23, 2010

Safety Climate Surveys (Part 1)

In our August 11, 2010 post we quoted from a paper* addressing safety culture on off-shore oil facilities.  While the paper is a bit off-topic for SafetyMatters (the focus is more on industrial safety and individual, as opposed to group, perceptions), it provides a very good example of how safety climate survey data should be collected and rigorously analyzed, and hypotheses tested.  In addition, one of the findings is quite interesting.

The researchers knew from survey data which respondents had experienced an accident at a facility (not just those facilities where they were currently working), and which respondents had not.  They also knew which of the surveyed facilities had a historically higher proportion of accidents and which had a lower proportion.  “In this case, . . . respondents who had not experienced an accident provided significantly less favorable scores on installations with low accident proportions. Additionally, respondents who had experienced an accident provided significantly less favorable scores on installations with high accident proportions.” (p. 656)  In other words, workers who had been through an accident recognized a relatively safer (riskier) environment better than workers who had not.  While this is certainly more evidence that experience is the best teacher, we think it might have an implication for the commercial nuclear industry.
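The group comparison behind this finding can be illustrated with a toy calculation.  The scores below are invented for illustration; they are not the Mearns et al. data:

```python
# Toy illustration of the comparison described above: mean climate scores
# split by whether the respondent had experienced an accident and by the
# installation's accident history.  All numbers are invented.
from statistics import mean

# (respondent experienced an accident?, installation accident history, 1-6 score)
responses = [
    (False, "low",  5.1), (False, "low",  4.9), (False, "high", 5.0),
    (True,  "low",  5.3), (True,  "high", 4.1), (True,  "high", 4.3),
]

def group_mean(experienced, history):
    return mean(s for e, h, s in responses if e == experienced and h == history)

# Accident-experienced respondents rate the high-accident installation lower,
# mirroring the paper's finding that experience sharpens risk perception.
print(group_mean(True, "high"), group_mean(False, "high"))
```

In the real study the analysis was of course far more rigorous (the paper tests hypotheses statistically), but the basic cross-tabulation of perception scores against objective accident history is the mechanism at work.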

Unlike offshore oil workers, the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.  Does this make these employees more vulnerable to complacency or to slowly running off the rails a la NASA?

*  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments.”  Safety Science 41(8) 2003 (Oct) pp 641-680.

Thursday, August 19, 2010

NRC Chairman on Safety Culture

NRC Chairman Jaczko recently gave a speech* where he presented three ongoing challenges to the nuclear industry: knowledge management, safety culture and public outreach.  We applaud the chairman for continuing to emphasize safety culture.  He indicated it was important to have a strong safety culture, all industry participants need to have a consistent focus on safety, and complacency is a potential pitfall.

Unfortunately, he plowed no new ground in his remarks.  We think the chairman should have mentioned the potential for lessons learned from recent disasters in other industries.  In addition, with the U.S. nuclear renaissance trying to get underway, we believe he could have pointed out the need for a strong safety culture at all the players who will be involved in building new plants: designers, fabricators, suppliers, contractors, owners, and many others.

*  Jaczko, G.B., “Focus on Regulation.” Prepared Remarks for the Goizueta Directors Institute, Atlanta, GA, (NRC S-10-031) August 10, 2010.

Monday, August 16, 2010

SafetyMatters

“Jim Reason defines a safety culture as consisting of constellations of practices, most importantly, concerning reporting and learning. A safety culture, he argues, is both a reporting culture and a learning culture. Where safety is an organisation’s top priority, the organisation will aim to assemble as much relevant information as possible, circulate it, analyse it, and apply it.” *

This quote comes from a paper [see cite below] we recently posted about and reminds us of our purpose in writing this blog.

We have posted about Reason’s insights into safety culture and find his work to be particularly useful.  We hesitate to exploit such insights for our own purposes but we can hardly resist noting that one of the core purposes of our SafetyMatters Blog is to share and disseminate useful thinking about safety management issues.  We wonder if nuclear operating organizations take advantage of these materials and assure their dissemination within their organizations.  Ditto for nuclear regulators and other industry organizations.  We observe the traffic to the blog and note that individuals within such organizations are regular visitors.  We hope they are not the exceptions or the only people within their organizations to have exposure to the exchange of ideas here at SafetyMatters. 

Comments are always welcome.

*  Andrew Hopkins, "Studying Organisational Cultures and their Effects on Safety," paper prepared for presentation to the International Conference on Occupational Risk Prevention, Seville, May 2006 (National Research Centre for OHS Regulation, Australian National University), p. 16.

Wednesday, August 11, 2010

Down Under Perspective on Surveys

Now from Australia we have come across more research results related to some of the key findings we discussed in our August 2, 2010 post “Mission Impossible”.  Recall from that post that research comparing the results of safety surveys conducted prior to a significant event at an offshore oil platform with post-event investigations revealed significant differences in cultural attributes.

This 2006 paper* draws on a variety of other published works and the author’s own experience in analyzing major safety events. Note that the author refers to safety culture surveys as “perception surveys”, since they focus on people’s perceptions of attitudes, values and behaviors.

“The survey method is well suited to studying individual attitudes and values and it might be thought that the method is thereby biased in favour of a definition of culture in these terms. However, the survey method is equally suited to studying practices, or ‘the way we do things around here’. The only qualification is that survey research of ‘the way we do things around here’ necessarily measures people’s perceptions rather than what actually happens, which may not necessarily coincide.” (p.5)  As we have argued, and this paper agrees, it is actual behaviors and outcomes that are most important.  The question is, can actual behaviors be discerned or predicted on the basis of surveys?  The answer is not clear.

“The question of whether or how the cultures so identified [e.g., by culture surveys] impact on safety is a separate question. Mearns and co-workers argue that there is some, though rather limited, evidence that organisations which do well in safety climate surveys actually have fewer accidents” (p. 14 citing Mearns et al)**

We kind of liked a distinction made early on in the paper: that it is better to ascertain an organization’s “culture” and then assess the impact of that culture on safety, than to directly assess “safety culture”.  This approach emphasizes the internal dynamics and the interaction of values and safety priorities with other competing business and environmental pressures.  As this paper notes, “. . . the survey method tells us very little about dynamic processes - how the organisation goes about solving its problems. This is an important limitation. . . . Schein makes a similar point when he notes that members of a culture are most likely to reveal themselves when they have problems to solve. . . .” (p. 6)

*  Andrew Hopkins, "Studying Organisational Cultures and their Effects on Safety," paper prepared for presentation to the International Conference on Occupational Risk Prevention, Seville, May 2006 (National Research Centre for OHS Regulation, Australian National University).

**  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments”. Safety Science 41(8) 2003 (Oct) pp 641-680.

Monday, August 2, 2010

Mission Impossible

We are back to the topic of safety culture surveys with a new post regarding an important piece of research by Dr. Stian Antonsen of the Norwegian University of Science and Technology.  He presents an empirical analysis of the following question:

    “. . . whether it is possible to ‘predict’ if an organization is prone to having major accidents on the basis of safety culture assessments.”*

We have previously posted a number of times on the use and efficacy of safety culture assessments.  As we observed in an August 17, 2009 post, “Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues and even as an ongoing prophylactic tool.  But, are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?”

Safety culture surveys have become ubiquitous across the U.S. nuclear industry.  This reliance on surveys may be justified, Antonsen observes, to the extent they provide a “snapshot” of “attitudes, values and perceptions about organizational practices…”  But Antonsen cautions that the ability of surveys to predict organizational accidents has not been established empirically and cites some researchers who suspect surveys “‘invite respondents to espouse rationalisations, aspirations, cognitions or attitudes at best’ and that ‘we simply don’t know how to interpret the scales and factors resulting from this research’”.  Furthermore, surveys present questions where the favorable or desired answers may be obvious.  “The risk is, therefore, that the respondents’ answers reflect the way they feel they should feel, think and act regarding safety, rather than the way they actually do feel, think and act…”  As we have stated in a white paper** on nuclear safety management, “it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced.”

To investigate the utility of safety culture surveys, Antonsen compared the results of a survey of employees on an offshore oil platform (Snorre Alpha), conducted prior to a major operational incident, with the results of detailed investigations and analyses following the incident.  The survey questionnaire included twenty questions similar to those found in nuclear plant surveys, with answers structured on a six-point Likert scale, also similar to nuclear plant surveys.  The overall result of the survey was that employees had a highly positive view of safety culture on the rig.

The post-incident analysis was performed by the Norwegian Petroleum Safety Authority, and a causal analysis was subsequently performed by Statoil (the rig owner) and a team of researchers.  The findings from the original survey and the later incident investigations were “dramatically different” as to the Snorre Alpha safety culture.  One telling difference was that the post hoc analyses found that meeting production targets was a dominant cultural value on the rig.  The bottom line finding was that the survey failed to identify significant organizational problems that later emerged in the incident investigations.

Antonsen evaluates possible reasons for the disconnect between surveys and performance outcomes.  He also comments on the useful role surveys can play, for example, in making inter-organizational comparisons and inferring cultural traits.  In the end the research sounds a cautionary note on the link between survey-based measures and the “real” conditions that determine safety outcomes.

Postscript: Antonsen’s “Mission Impossible” paper was published in December 2009.  We have since seen another oil rig accident, the recent explosion and oil spill from BP’s Deepwater Horizon rig.  As we noted in our July 22, 2010 post, a safety culture survey had been performed of that rig’s staff several weeks prior to the explosion, with overall positive results.  The investigations of this latest event could well provide additional empirical support for the “Mission Impossible” study results.

* The study is “Safety Culture Assessment: A Mission Impossible?”  The link connects to the abstract; the full paper is available for purchase at the same site.

**  Robert L. Cudlin, "Practicing Nuclear Safety Management" (March 2008), p. 3.