Thursday, November 18, 2010

Another Brick in the Wall for BP et al

Yesterday the National Academy of Engineering released its report* on the Deepwater Horizon blowout.  The report includes a critical appraisal of many decisions made while the well was being prepared for temporary abandonment, decisions that in the aggregate decreased safety margins and increased risks.  This Washington Post article** provides a good summary of the report.

The report was written by engineers and scientists and has a certain “Just the facts, ma’am” tone.  It does not specifically address safety culture.  But we have to ask: What can one infer about a culture whose business practices don’t include “any standard practice . . . to guide the tradeoffs between cost and schedule and the safety implications of the many decisions (that is, a risk management approach)”?  (p. 15)

We have had plenty to say about BP and the Deepwater Horizon accident.  Click on the BP label below to see all of our related blog entries.


*  Committee for the Analysis of Causes of the Deepwater Horizon Explosion, Fire, and Oil Spill to Identify Measures to Prevent Similar Accidents in the Future; National Academy of Engineering; National Research Council, “Interim Report on Causes of the Deepwater Horizon Oil Rig Blowout and Ways to Prevent Such Events” (2010).

**  D. Cappiello, “Experts: BP ignored warning signs on doomed well,” The Washington Post (Nov 17, 2010).  Given our blog’s focus on the nuclear industry, it’s worth noting that, in an interview, the committee chairman said, “the behavior leading up to the oil spill would be considered unacceptable in companies that work with nuclear power or aviation.”

Tuesday, November 9, 2010

Human Beings . . . Conscious Decisions

A New York Times article* dated November 8, 2010 reported that Fred Bartlit, the independent investigator for the presidential panel on the BP oil rig disaster earlier this year, had not found that “cost trumped safety” in decisions leading up to the accident.  The article noted that this finding contradicted determinations by other investigators, including those sponsored by Congress.  We had previously posted on this subject, including taking notice of the earlier findings of cost trade-offs, and wanted to weigh in based on this new information.

First we should acknowledge that we have no independent knowledge of the facts associated with the blowout and are simply reacting to the published findings of current investigations.  In our prior posts we had posited that cost pressures could be part of the equation in the lead-up to the spill.  On June 8, 2010 we observed:

“...it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out ‘Why?’ ”

And we recall one of the initial observations made by an OSHA official shortly after the accident as detailed in our April 26, 2010 post:

“In the words of an OSHA official BP still has a ‘serious, systemic safety problem’ across the company.”

So it appears we have been cautious in reaching any conclusions about BP’s safety management.  That said, we do want to put Mr. Bartlit’s finding into context.  First, we would note that he is, by profession, a trial lawyer and may be approaching the issue, and articulating his finding, with a decidedly legal focus.  The specific quotes attributed to him are as follows:

“. . . we have not found a situation where we can say a man had a choice between safety and dollars and put his money on dollars” and “To date we have not seen a single instance where a human being made a conscious decision to favor dollars over safety,...”

It is not surprising that a lawyer would focus on culpability in terms of individual actions.  When things go wrong, most industries, nuclear included, look to assign blame to individuals and move on.  It is also worth noting that the investigator emphasized that no one had made a “conscious” decision to favor cost over safety.  We think it is important to keep in mind that safety management and failures of safety decision making may or may not involve conscious decisions.  As we have stated many times in other posts, safety can be undermined through very subtle mechanisms such that even those involved may not appreciate the effects, e.g., the normalization of deviance.  Finally, we think the OSHA investigator may have been closer to the truth with his observation about “systemic” safety problems.  It may be that Mr. Bartlit, and other investigators, will be found to have suffered from what is termed “attribution error,” in which simple explanations and causes are favored while the more complex, system-based dynamics are never fully assessed or understood in the effort to answer “Why?”

* J.M. Broder, "Investigator Finds No Evidence That BP Took Shortcuts to Save Money," New York Times (Nov 8, 2010).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, as in nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy, so we have excerpted and summarized points that we believe will be interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture in four main factors:  Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions and the questions usually vary for different departments.

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the survey factors and sub-factors are not related to external measurements of safety performance, but to the survey-takers’ perceptions of risk in their work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.
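
The report describes the grid qualitatively rather than as a formula.  As a rough sketch of how the underlying arithmetic could work (our own illustration, with invented factor names and scores rather than SCISMS data), each factor becomes a point whose coordinates are the mean employee rating and the mean management rating; "direction" is the overall level, "concurrence" is the employee-management gap, and "consistency" is the spread across factors.

import statistics

# Illustrative only: invented scores on a 1-5 scale, not SCISMS output.
# Each factor maps to (mean employee rating, mean management rating).
factors = {
    "Organizational Commitment":  (4.1, 4.5),
    "Operations Interactions":    (3.6, 4.2),
    "Formal Safety Indicators":   (3.9, 4.0),
    "Informal Safety Indicators": (3.4, 4.3),
}

for name, (emp, mgmt) in factors.items():
    direction = (emp + mgmt) / 2      # relative strength of the safety culture
    concurrence = mgmt - emp          # positive = management rates it higher than employees
    print(f"{name:28s} direction={direction:.2f}  concurrence={concurrence:+.2f}")

# Consistency: how much the factor scores differ from one another.
spread = statistics.pstdev([(e + m) / 2 for e, m in factors.values()])
print(f"Spread across factors: {spread:.2f}")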

Our Take on SCISMS

We have found summary-level graphics to be very important in communicating key information to clients, and the Safety Culture Grid looks like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and the perceptual alignment of managers and employees with respect to the organization’s safety culture.  Grids can be constructed to show findings across factors or departments within one company, or across multiple companies for an industry comparison.

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25)  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
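
To make the common-source problem concrete, here is a minimal, entirely hypothetical simulation (our own sketch, not based on SCISMS data) of why moderate-to-strong correlations are nearly guaranteed when the "predictor" and the "outcome" are both self-reports from the same respondent: a shared response tendency inflates the factor-outcome correlation even when the link to the underlying safety condition is weak.

import numpy as np

rng = np.random.default_rng(0)
n = 1000                               # hypothetical respondents

response_style = rng.normal(size=n)    # each person's general positivity when answering surveys
true_safety = rng.normal(size=n)       # "objective" safety condition, unobserved by the survey

# Both scales are self-reports, so both pick up the shared response style.
factor_score = 0.7 * response_style + 0.3 * true_safety + rng.normal(scale=0.5, size=n)
perceived_risk = -0.7 * response_style - 0.2 * true_safety + rng.normal(scale=0.5, size=n)

print("factor vs. perceived risk:", round(np.corrcoef(factor_score, perceived_risk)[0, 1], 2))
print("factor vs. true safety:   ", round(np.corrcoef(factor_score, true_safety)[0, 1], 2))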

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report fairly clearly describes two different perspectives of safety culture: the socio-anthropological and organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp.7-8) 

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2,876 respondents, not just a single facility or company.  INPO then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Process components and found them generally consistent.  We have no issue with that conclusion.

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  I assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)
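
The presentation does not include the underlying analysis, but the general recipe is standard.  The sketch below (our own illustration with synthetic data and made-up array names, not the INPO dataset, so the printed correlations are meaningless) shows how such an analysis is typically assembled: reduce the survey items to a handful of factors with principal component analysis, then correlate the factor scores with site-level performance measures.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical inputs: one row per site; columns are mean survey-item scores
# and site-level performance measures (e.g., human error rate, unplanned scrams).
n_sites, n_items = 63, 80
survey_items = rng.normal(size=(n_sites, n_items))
performance = rng.normal(size=(n_sites, 2))           # two illustrative measures

# Step 1: reduce the survey items to a small number of factors.
pca = PCA(n_components=9)
factor_scores = pca.fit_transform(survey_items)       # shape: (n_sites, 9)

# Step 2: correlate each factor with each performance measure.
for i in range(factor_scores.shape[1]):
    for j in range(performance.shape[1]):
        r = np.corrcoef(factor_scores[:, i], performance[:, j])[0, 1]
        print(f"factor {i + 1} vs. measure {j + 1}: r = {r:+.2f}")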

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which I am aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  I take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying the value of safety culture surveys is limited, but can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys, and industry, the NRC and INPO all have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure over a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  But what evidence exists that there is any relationship at all between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scrams data for at least anecdotal evidence of how non-conservative decision making contributed to some number of scrams.  We would be surprised to learn that anyone has followed the string on any scram events all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Culture Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.

Wednesday, October 20, 2010

Perception and Reality

In our October 18, 2010 post on how perception and reality may factor into safety culture surveys, we ended with a question about the limits of the usefulness of surveys absent a separate assessment to confirm the actual conditions within the organization.  Specifically, we wonder: can a survey reliably distinguish among the following three situations:

-    an organization with a strong safety culture and positive survey perceptions;
-    an organization with a compromised safety culture that still reports positive survey perceptions due to imperfect knowledge or other motivations;
-    an organization with a compromised safety culture that still reports positive survey perceptions due to complacency or normalization of lesser standards.

In our August 23, 2010 post we had raised a similar issue as follows:

“the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.”

With what we know today it appears to us that safety culture survey results alone should not be used to reach conclusions about the state of safety culture in the organization or as a predictor of future safety performance.  Even comparisons across plants and the industry seem open to question due to the potential for significant and perhaps unknowable variation of perceptions of those surveyed. 

How would we see surveys contributing to knowledge of the safety culture in an organization?  In general we would say that certain survey questions can provide useful information where the objective is to elicit the perceptions of employees (versus a factual determination) on certain issues.  There is still the impediment that some employees’ perceptions will be colored, e.g., they will discern the “right” answer or will be motivated by other factors to bias their answers. 

What kind of questions might be perception-based?  We would say those in areas where the organization’s perceptions are as important, or of as much interest, as the actual reality.  For example, whether the organization perceives that there is a bias for production goals over safety goals.  The existence of such a perception could have wide-ranging impacts on individuals, including their willingness to raise concerns or rigorously pursue their causes.  Even if the perceptions derived from the survey are not consistent with reality, it is important to understand that the perception exists and to take steps to correct it.  Questions that go to ascertaining trust in management would also be useful, as trust is largely a matter of perception.  It is not enough for management to be trustworthy.  Management must also be perceived as trustworthy to realize its benefit.

A complication is that perception and reality can pull in different directions.  Reality is always present, but perception tugs at it and in many instances shapes it.  If this relationship is not properly managed, perception will take over and will weaken, if not displace, reality as the basis for safety management.

This suggests that a useful goal, or trait, of safety culture is to bring perception as close to reality as possible.  Perceptions that are inflated or unduly negative only distort the dynamics of safety management.  As with most complex systems, perceptions generally trail actual conditions by some time delay.  Things improve, but perceptions lag because it takes time for information to flow, attitudes to adjust to new information, and new perceptions to take hold.  Using perception data from surveys, combined with the forensics of assessments, can provide the necessary calibration to bring perception and reality into alignment.
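
As a purely illustrative toy model (ours, not something drawn from the survey literature), a first-order lag captures the delay idea: after reality improves, perception closes only a fraction of the remaining gap each period, so the two converge slowly.

# Toy model: perception adjusts toward reality by a fixed fraction each period.
reality = [0.5] * 5 + [0.9] * 15       # "actual" safety performance steps up at period 5
alpha = 0.3                            # fraction of the perception-reality gap closed per period
perception = [reality[0]]

for r in reality[1:]:
    p = perception[-1]
    perception.append(p + alpha * (r - p))

for t, (r, p) in enumerate(zip(reality, perception)):
    print(f"t={t:2d}  reality={r:.2f}  perception={p:.2f}")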

Monday, October 18, 2010

Perception Is/Is Not Reality?

This post continues our thoughts on the use of safety culture surveys.  The Oxford Dictionary says reality is the state of things as they actually exist, rather than as they may appear or may be thought to be.  Another theory holds that there is no objective reality, that there simply and literally is no reality beyond the perceptions, beliefs and attitudes we each have about it.  In other words, “perception is reality.”  So, when a safety culture survey is conducted, what reality is it measuring?  Is the purpose of the survey to determine an “objective” reality based on what an informed and knowledgeable person would say?  Or is the purpose simply to catalog the range of perceptions of reality held by those surveyed, whether accurate or not?  Why does it matter?

In our August 11, 2010 post we noted that UK researcher Dr. Kathryn Mearns referred to safety culture surveys as “perception surveys”, since they focus on people’s perceptions of attitudes, values and behaviors.  In an August 27, 2010 post reporting subsequent communications with Dr. Mearns, we quoted her as follows:

“I see the survey results as a ‘temperature check’ but it needs a more detailed diagnosis to find out what really ails the safety culture.”

If one agrees that surveys are perception-based, it creates something of a dilemma as to which reality is of interest.  If “things as they actually exist” is important, then surveys alone may be of limited value, even misleading, without thorough diagnostic assessments, which is Dr. Mearns' point.  On the other hand, if perception itself is important, then surveys offer a window into that reality.  We think both realities have their place.

We find some empirical support for these ideas from the results of a recent safety culture assessment at Nuclear Fuel Services.*  The report is quite lengthy (over 300 pages) and exhaustive in its detail.  The assessment was done as part of a commitment by the owners of Nuclear Fuel Services (NFS) to the NRC and in response to ongoing safety performance issues at its facilities.  The assessment was performed by an independent team and included a safety culture survey.  It is the survey results that we focus on.

In reporting the results of the survey, the team identified a number of cautions as to the interpretation of NFS workforce perceptions.  The team found that survey numerical ratings were inflated due to the lack of an accurate frame of reference or adequate understanding of a particular cultural attribute.  This conclusion was based on the findings of the overall assessment project.  The team found the workforce perceptions to be “generally (and in some cases significantly) more positive than warranted” (p. 40) or justified by actual performance.

We found these results to be interesting in several respects.  First there is the acknowledgment that surveys simply compile the perceptions of individuals in the organization.  In the NFS case the assessment team concluded that the reported perceptions were inaccurate based on the team’s own detailed analysis of the organization.

Perhaps more interesting was that this inherent subjectivity of perceptions was attributed in this project to the lack of knowledge and frame of reference of the NFS staff, specifically related to standards of excellence associated with commercial nuclear sites.  This resonates with an observation from our August 23 post that “workers who had been through an accident recognized a relatively safer (riskier) environment better than workers who had not.”  In other words, people’s perceptions are influenced by the limits of their own experiences and context.  Makes sense.

The NFS assessment team goes on to indicate that the results of a prior safety culture survey, conducted a year earlier, were also compromised by the time frame in which that survey was administered.  “It is reasonable to assume that the survey numerical ratings would have been lower if the survey had been administered after the workforce had become aware of the facts associated with the series of operational events that occurred” [prior to the survey].  (p. 41)  We would add there are probably numerous other factors that could easily bias perceptions, e.g., people being sensitive to what the “right answer” is and responding on that basis; complacency; the effect of externalities such as a significant corporate initiative dependent on the performance of the nuclear business; normalization of deviation; job-related incentives, etc.

We think it is very likely that the assessment team was correct in discounting the NFS survey results.  The question is, can any other survey results be relied on absent independent calibration by detailed organizational assessments?  We will take this up in a forthcoming post.

*  "Information to Fulfill Confirmatory Order, Section V, Paragraph 3.e" (Jun 29,2010)  ADAMS Accession Number ML101820096.

Monday, October 4, 2010

Survival of the Safest

One of our goals with SafetyMatters is bringing thought provoking materials to our readers, particularly materials they might not otherwise come across.  This post is an example from the greater business world and the current state of the U.S. economy.  Once again it is based on some interesting research from professors at Yale University* and described in an article in the New York Times.**

“Corporate managers struggling to preserve their companies and protect their core employees have inadvertently contributed to a vicious cycle of rising unemployment and plummeting national morale. If we are to break out of this downward spiral, we first need to understand the problem…professional managers throughout the business world see it as their job to keep work-force morale high. But, paradoxically, the actions they take for their own workplaces often make the overall crisis more severe.”

These issues have been the subject of research by Yale economics professor Truman Bewley.  While his specific focus is on labor markets and how wages respond (or don’t respond) to periods of reduced demand, some of the insights channel directly into the current issues of safety culture at nuclear plants. 

Bewley’s approach was to interview hundreds of corporate managers at length about the driving forces for their actions.  The article goes on to describe how corporate managers respond to recessions by protecting their most important staff, but paradoxically these actions tend to produce unforeseen and often counter-productive results. 

The description of how actions result in unintended consequences is emblematic of the complexity of business systems, where dynamics and interdependencies are not always seen or understood by the managers tasked with achieving results.  Nuclear safety culture exists in such a complex socio-technical system and requires more than just “leadership” to assure long term sustainability. 

This brings us to the first part of Dr. Bewley’s approach - his focus on identifying and understanding the driving forces for managers’ actions.  We see this as precisely the right prescription for improving our understanding of nuclear safety culture dynamics, particularly in cases where safety culture weaknesses have been observed.  A careful and penetrating look at why people don’t act in accordance with safety culture principles would do much to identify the types of factors, such as performance incentives and cost and schedule pressures, that may be at work in an organization.  Driving forces are not necessarily different from root causes - a term more familiar in the nuclear industry - but I tend to prefer the former because it explicitly reminds us that safety culture is dynamic and results from the interaction of many moving parts.  Currently the focus of the industry, and the NRC for that matter, is on safety culture “traits”.  Traits are really the results or manifestations of safety culture and thus build out the picture of what is desired.  But they do not get at what factors actually produce a strong safety culture in the first place.

As an example we refer you to a comment we posted on a Nuclear Safety Culture group thread on LinkedIn.com.  Dr. Bill Corcoran initiated a thread asking for proposals of safety culture traits that were at least as important as those in the NRC strawman.  Our response proposed:

 “The compensation structure in the corporation is aligned with its safety priorities and does not create real or perceived conflicts in decisions affecting nuclear safety.” ***

While this was proposed as a “trait” in response to Bill’s request, it is clearly a driving force that will enable and support strong safety culture behaviors and decisions.

* To read about other interesting work at Yale, check out our August 30, 2010 post.

** Robert J. Shiller, "The Survival of the Safest," New York Times (Oct 2, 2010).

*** The link to the thread (including Bob's comment) is here.  This may be difficult for readers who are not LinkedIn members to access.  We are not promoting LinkedIn but the Nuclear Safety Culture group has some interesting commentary.

Thursday, September 30, 2010

BP's New Safety Division

It looks like oil company BP believes that creating a new, “global” safety division is part of the answer to its ongoing safety performance issues, including most recently the explosion of the Deepwater Horizon oil rig in the Gulf of Mexico.  An article in the September 29, 2010 New York Times* quotes BP’s new CEO as stating “safety and risk management [are] our most urgent priority” but does not provide many details of how the initiative will accomplish that goal.  Without jumping to conclusions, it is hard for us to see how a separate safety organization is the answer, although BP asserts it will be “powerful”.

Of more interest was a lesser headline in the article with the following quote from BP’s new CEO:

“Mr. Dudley said he also plans a review of how BP creates incentives for business performance, to find out how it can encourage staff to improve safety and risk management.”

We see this as one of the factors that is a lot closer to the mark for changing behaviors and priorities.  It parallels recent findings by FPL in its nuclear program (see our July 29, 2010 post) and warning flags we raised in our July 6 and July 9, 2010 posts regarding trends in U.S. nuclear industry compensation.  Let’s see which speaks louder to the organization: CEO pronouncements about safety priority or the large financial incentives that executives can realize by achieving performance goals.  If they are not aligned, the new “division of safety” will simply mean business as usual.

*  The original article is available via iCyte.  An updated version is available on the NY Times website.