Tuesday, December 28, 2010

Cross Cutting Duke Energy vs. Indiana

The starting point of this post is not the nuclear industry per se, and some may think it odd that we extrapolate from another part of the utility business to nuclear safety culture.  But the recent news regarding Duke Energy and its relationships with public utility regulators in Indiana raises some caution flags about how corporate culture might influence the nuclear side of the business.  We believe it also raises fundamental issues about the NRC’s scope of culture assessment and may suggest the need for renewed approaches to so-called cross-cutting issues.  With that, let’s look at recent reports of what Duke executives and Indiana public servants have been up to.

In brief, the situation involves Duke’s coal-fired plant under construction in Indiana (the Edwardsport plant) and Duke Energy executives’ interactions with personnel of the state regulator, the Indiana Utility Regulatory Commission (IURC), up to and including the head of the commission.  While regulatory consideration of the plant’s costs was pending, Duke hired a key IURC staff member (its General Counsel, no less) and carried on email exchanges indicating close personal relationships between Duke and IURC personnel, the offering of favors, and the exchange of closely held information.  The upshot of the scandal has been the departures of the IURC Chairman (fired by Indiana Governor Mitch Daniels), the former IURC General Counsel hired by Duke, the head of Duke’s Indiana business unit (also a former IURC staffer) and the Chief Operating Officer of Duke Energy, the company’s second-highest executive.  Phew.

James Rogers, chairman and chief executive officer of Duke, said Monday that former COO James Turner had not exercised undue influence with Indiana regulators but resigned because "he felt like it put the company in a bad place."  He went on to say, "He [Turner] made a decision on his own that he felt like those e-mails were embarrassing and inappropriate... If you read through them, it showed a very close relationship between the two of them.”*

But if one visits the Duke Energy website and reads through its Code of Business Ethics, it would appear that the activities in Indiana routinely and broadly violated the company’s espoused ethics.  So, did a very senior executive resign because he had bad email habits, or was it the underlying actions and behaviors that were inappropriate?

Our purpose here is not to delve into the details of the scandal, since we think the actions taken in response speak for themselves.  Rather, we want to examine the potential for such behaviors to spill over into, or otherwise influence, other parts of Duke’s business, namely its extensive nuclear plant fleet.  To us it raises the question: what is a cross-cutting issue for nuclear safety?  The essence of the cross-cutting concept is to account for issues that have broad and potentially overlapping significance for achieving desired safety results.  With regard to safety culture, can such issues be limited to the scope of nuclear operations, or do they necessarily extend to corporate governance and ethics?  A good question.

In prior posts (here and here) we reported some of our research on compensation structures within nuclear owner companies and the extent to which such compensation included incentives based on objectives other than safety.  We found that corporate-level executives, including Chief Nuclear Officers, were eligible for substantial amounts of compensation, large percentages of which were based on performance against business objectives such as profits and capacity factors.  We raised the concern that such large amounts of incentive-based compensation could exacerbate the influence of business priorities that compete with safety objectives.  The Duke-Indiana experience brings into focus other aspects of the corporate environment, such as business ethics and relationships with regulators, that could also bear on the ability of the organization to maintain its nuclear safety culture.  It is clear that certain corporate-level decisions, such as budgets and business goals, directly impact the management of nuclear facilities.  It also seems likely that “softer” issues such as compensation, promotional opportunities and business ethics send “environmental” messages to nuclear personnel as well.

There is also an interesting parallel in the way the current issues involving the Indiana regulator were handled.  Duke fired (or allowed to resign) both of its recent hires from the IURC and the COO of Duke Energy.  Similarly, the governor of Indiana replaced the Chairman of the IURC.  Sound familiar?  It is the approach typically taken within nuclear organizations when safety culture issues become problematic: get rid of some individuals and move on.  Such a response presumes that “bad people” are the root cause of these problems.  But one wonders, as do many of the news outlets reporting on this, whether at both Duke and the IURC this was just the way business was being done until it became public via the email exchanges.  If so, doesn’t it suggest that cultural issues are involved?  And that the causes and extent of those cultural issues warrant further consideration?

What Duke decides to do or not do with regard to its corporate culture is probably an internal matter, but any spillover of corporate culture into nuclear safety culture is of more direct concern.  Should the NRC be aware of, and interested in, corporate culture, particularly when fundamental ethics and values are concerned?  How confident is the NRC that some safety culture breakdowns (think of Davis-Besse and Millstone) had their genesis in a defective corporate culture?  Does corporate culture cross-cut nuclear safety culture?  If not, why not?


*  J. Russell, “E-mail Scandal Topples Duke Energy's James Turner,” IndyStar.Com (Dec 7, 2010).

Thursday, December 23, 2010

Ian Richardson, Safety Culturist

Ian Richardson, the British actor, may not be the first person who leaps to mind as someone with insight into the art of assessing safety culture.  But in an episode in the second volume of the BBC series "House of Cards," Ian’s character, the UK Prime Minister, observes:

“You can get people to say anything you want them to say, but getting them to do what they say is another thing.”

And with that thought, we wish you Happy Holidays.

Thursday, December 16, 2010

SONGS Is Getting Better (Maybe)

On December 14, 2010 the NRC held a public meeting with Southern California Edison to discuss recent safety-related performance at the San Onofre Nuclear Generating Station (SONGS).  As you know, SONGS has been plagued for years by incidents, including willful violations, and by employees claiming they fear retaliation if they report or discuss such incidents.  The newspaper item* on the meeting had an upbeat tone and quoted NRC deputy director Troy Pruett as saying:

"The trick now is for you guys to continue some of the successes you've had in the last two, three, four months."

That sounds good, and we hope SONGS continues to perform at a satisfactory level.  But the reality is that a few months of symptom-free behavior is not enough to declare the patient cured.  To get a feel for the plant’s history of problems, check out this earlier article** by the same reporter.  In addition, please refer to our September 2010 posts (here and here) for our perspective on what the underlying issues might be.

We’re Still Here

As you may have noticed, we haven’t been posting recently.  It’s not for lack of interest; we just haven’t come across much news or other items worthy of comment or passing on to you.  We will not waste your time (or ours) with fluff simply to fill the space.  If you have any safety culture news, articles, publications, etc. that you’d like us to review and comment on, please let us know about them via email.

*  P. Sisson, “San Onofre: Federal regulators report progress at nuclear plant,” North Country Times (Dec 14, 2010).

**  P. Sisson, “San Onofre: Latest progress report on nuke plant set for Tuesday,” North Country Times (Dec 9, 2010).

Thursday, November 18, 2010

Another Brick in the Wall for BP et al

Yesterday the National Academy of Engineering released its report* on the Deepwater Horizon blowout.  The report includes a critical appraisal of many decisions made during the period when the well was being prepared for temporary abandonment, decisions that in the aggregate decreased safety margins and increased risks.  This Washington Post article** provides a good summary of the report.

The report was written by engineers and scientists and has a certain “Just the facts, ma’am” tone.  It does not specifically address safety culture.  But we have to ask: What can one infer about a culture where the business practices don’t include “any standard practice . . . to guide the tradeoffs between cost and schedule and the safety implications of the many decisions (that is, a risk management approach).”  (p. 15)

We have had plenty to say about BP and the Deepwater Horizon accident.  Click on the BP label below to see all of our related blog entries.


*  Committee for the Analysis of Causes of the Deepwater Horizon Explosion, Fire, and Oil Spill to Identify Measures to Prevent Similar Accidents in the Future; National Academy of Engineering; National Research Council, “Interim Report on Causes of the Deepwater Horizon Oil Rig Blowout and Ways to Prevent Such Events” (2010).

**  D. Cappiello, “Experts: BP ignored warning signs on doomed well,” The Washington Post (Nov 17, 2010).  Given our blog’s focus on the nuclear industry, it’s worth noting that, in an interview, the committee chairman said, “the behavior leading up to the oil spill would be considered unacceptable in companies that work with nuclear power or aviation.”

Tuesday, November 9, 2010

Human Beings . . . Conscious Decisions

A New York Times article* dated November 8, 2010 carried a headline to the effect that Fred Bartlit, the independent investigator for the presidential panel on the BP oil rig disaster earlier this year, had not found that “cost trumped safety” in decisions leading up to the accident.  The article noted that this finding contradicted determinations by other investigators, including those sponsored by Congress.  We had previously posted on this subject, including taking notice of the earlier findings of cost trade-offs, and wanted to weigh in based on this new information.

First, we should acknowledge that we have no independent knowledge of the facts associated with the blowout and are simply reacting to the published findings of current investigations.  In our prior posts we posited that cost pressures could be part of the equation in the lead-up to the spill.  On June 8, 2010 we observed:

“...it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out ‘Why?’ ”

And we recall one of the initial observations made by an OSHA official shortly after the accident as detailed in our April 26, 2010 post:

“In the words of an OSHA official BP still has a ‘serious, systemic safety problem’ across the company.”

So it appears we have been cautious in reaching any conclusions about BP’s safety management.  That said, we do want to put Mr. Bartlit’s finding into context.  First, we would note that he is, by profession, a trial lawyer and may be both approaching the issue and articulating his finding with a decidedly legal focus.  The specific quotes attributed to him are as follows:

“. . . we have not found a situation where we can say a man had a choice between safety and dollars and put his money on dollars” and “To date we have not seen a single instance where a human being made a conscious decision to favor dollars over safety,...”

It is not surprising that a lawyer would focus on culpability in terms of individual actions.  When things go wrong, most industries, nuclear included, look to assign blame to individuals and move on.  It is also worth noting that the investigator emphasized that no one had made a “conscious” decision to favor cost over safety.  We think it is important to keep in mind that safety management and failures of safety decision making may or may not involve conscious decisions.  As we have stated many times in other posts, safety can be undermined through very subtle mechanisms such that even those involved may not appreciate the effects, e.g., the normalization of deviance.  Finally we think the OSHA investigator may have been closer to the truth with his observation about “systemic” safety problems.  It may be that Mr. Bartlit, and other investigators, will be found to have suffered from what is termed “attribution error” where simple explanations and causes are favored and the more complex system-based dynamics are not fully assessed or understood in the effort to answer “Why?”  

* J.M. Broder, "Investigator Finds No Evidence That BP Took Shortcuts to Save Money," New York Times (Nov 8, 2010).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, as in nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy, so we have excerpted and summarized the points that we believe will be most interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture through four main factors: Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions, and the questions usually vary across departments.

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the outcome scales do not tie the survey factors and sub-factors to external measurements of safety performance; they capture the survey-takers’ perceptions of risk in their own work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.

Our Take on SCISMS

We have found summary-level graphics to be very important in communicating key information to clients, and the Safety Culture Grid looks like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and the perceptual alignment of managers and employees with respect to the organization’s safety culture.  Grids can be constructed to show findings across factors or departments within one company, or across multiple companies for an industry comparison.
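
To make the idea concrete, here is a minimal sketch (in Python, using matplotlib) of how such a grid might be rendered.  The four factor names follow the main SCISMS factors described above, but every score is invented for illustration; this is not SCISMS output.

import matplotlib.pyplot as plt

# Hypothetical factor-level scores on a 1-5 scale; invented for illustration.
factors = ["Organizational Commitment", "Operations Interactions",
           "Formal Safety Indicators", "Informal Safety Indicators"]
employee_scores = [3.4, 3.9, 4.1, 3.2]   # employees' perception of safety
manager_scores  = [4.2, 4.0, 4.3, 3.9]   # managers' perception of safety

fig, ax = plt.subplots()
ax.scatter(employee_scores, manager_scores)
for name, x, y in zip(factors, employee_scores, manager_scores):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))

# The diagonal marks perfect employee/management concurrence; points above it
# indicate managers rating the culture more favorably than employees do.
ax.plot([1, 5], [1, 5], linestyle="--", color="gray")
ax.set_xlim(1, 5)
ax.set_ylim(1, 5)
ax.set_xlabel("Employee perception of safety")
ax.set_ylabel("Management perception of safety")
ax.set_title("Illustrative Safety Culture Grid (hypothetical data)")
plt.show()

A plot like this makes the consistency, direction and concurrence attributes described earlier visible at a glance: tightly clustered points suggest consistency, points toward the upper right suggest a stronger culture, and points far off the diagonal suggest an employee/management perception gap.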

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25)  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
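
As a purely illustrative aside, a short simulation shows why correlations of this size are unsurprising when the factor scores and the outcome scales come from the same respondents: a shared perception or response tendency alone can produce correlations in the reported range.  Nothing below is based on SCISMS data; the sample size and weights are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 1000  # arbitrary number of survey respondents

# A common "rater perception" component feeds both scores, even though the
# underlying signals are independent of each other.
shared = rng.normal(size=n)          # shared response/perception tendency
factor_signal = rng.normal(size=n)   # "true" culture factor signal
outcome_signal = rng.normal(size=n)  # "true" risk signal, independent of the factor

factor_score = factor_signal + 0.8 * shared
outcome_score = outcome_signal + 0.8 * shared

r = np.corrcoef(factor_score, outcome_score)[0, 1]
print(f"correlation from the shared component alone: r = {r:.2f}")
# Typically prints r near 0.4, i.e., in the range reported above, with no
# true link between the factor and the outcome.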

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report fairly clearly describes two different perspectives of safety culture: the socio-anthropological and organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp.7-8) 

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2876 respondents, not just a single facility or company.  They then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Program components and found them generally consistent.  We have no issue with that conclusion. 

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  We assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)
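
For readers who want a feel for the shape of this methodology, here is a minimal sketch (in Python) using randomly generated item responses and a made-up site-level performance measure.  It is not INPO’s actual analysis; the item count, Likert scale and scram-rate figures are placeholders, while the respondent, site and factor counts follow the numbers cited above.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_respondents, n_items, n_sites = 2876, 60, 63  # respondent/site counts per the study

# Step 1: reduce item-level survey responses (1-5 Likert, randomly generated
# here) to nine factors via principal component analysis.
survey = pd.DataFrame(rng.integers(1, 6, size=(n_respondents, n_items)),
                      columns=[f"q{i}" for i in range(n_items)])
factor_scores = PCA(n_components=9).fit_transform(survey)

# Step 2: aggregate factor scores to the site level and correlate them with
# a site-level performance measure (a made-up unplanned scram rate here).
site = rng.integers(0, n_sites, size=n_respondents)
site_factors = pd.DataFrame(factor_scores,
                            columns=[f"factor_{i+1}" for i in range(9)]
                            ).groupby(site).mean()
scram_rate = pd.Series(rng.random(n_sites), name="unplanned_scrams")

print(site_factors.corrwith(scram_rate))  # Pearson r for each factor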

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which we are aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  We take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying that the value of safety culture surveys is limited, but that surveys can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys, and it, the NRC and INPO all have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure about a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  So what evidence exists that there is any real relationship between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scram data for at least anecdotal evidence of how non-conservative decision making contributed to some number of scrams.  We would be surprised to learn that anyone has followed the string on any scram events all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Culture Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.