
Tuesday, January 25, 2011

A Nuclear Model for Oil and Gas

The President’s Commission has issued its report on the Deepwater Horizon disaster.* The report reviews the history of the tragedy and makes recommendations based on lessons learned.  This post focuses on the report’s use of the nuclear industry, in particular the role played by INPO, as a model for an oil and gas industry safety institute and auditor.

The report provides an in-depth review of INPO’s role and methods and we will not repeat that review in this space.  We want to highlight the differences between the oil and gas and nuclear industries, some recognized in the report, that would challenge a new safety auditor. 

First, “The oil and gas industry is more fragmented and diversified in nature. . . .” (p. 240)  The industry includes vertically integrated giants, specialty niche firms and everything in-between.  Some are global in nature while others are regional firms.  In our view, it appears that oil and gas industry participants cooperate with each other in certain instances and compete with each other in different cases.  (In contrast, most [all?] U.S. nuclear plants are not in direct competition with other plants.)  Obtaining agreement to create a relatively powerful industry auditing entity will not be a simple matter.    

Second, “concerns about potential disclosure to business competitors of proprietary information might make it harder to establish an INPO-like entity in the oil and gas industry.” (p. 240)  Oil and gas firms regard technology as an important source of competitive advantage.  “[A]n INPO-like approach might run into problems if companies perceived the potential for inspections of offshore facilities to reveal ‘technical and proprietary and confidential information that companies may be reluctant to share with one another.’” (p. 241)  Not only will a firm be reluctant to share its proprietary technology if doing so may cost it competitive advantage, but that reluctance will also make it harder for the auditing organization to promote industry-wide use of the most effective, safest technologies.

Third, and this could be a potentially large problem, INPO operates in almost total secrecy.  “[INPO] assessment results are never revealed to anyone other than the utility CEOs and site managers, but INPO formally meets with the NRC four times a year to discuss trends and information of ‘mutual interest.’ And if INPO has discovered serious problems associated with specific plants, it notifies the NRC.”  (p. 236)  INPO claims, probably realistically, that maintaining member confidentiality is key to obtaining full and willing cooperation in evaluations.

However, this secrecy contributes zero to public understanding of and support for nuclear plant operations and owners.  At this point in its evolution, the oil and gas industry needs more transparency in its auditing and oversight functions, not less.  After all, and forgive the bluntness here, very few people have died at U.S. commercial nuclear power plants (and those were in non-nuclear incidents) while the oil and gas industry has suffered numerous fatalities.  We think a government auditor, whose evaluations of facilities and managements would be made public, is the better answer for the oil and gas industry at this time.


*  National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, “Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling,” Report to the President (Jan 2011).

Wednesday, January 12, 2011

ESP and Safety Culture

A recent New York Times article* on an extrasensory perception (ESP) study and the statistical methods used therein caught our attention.  The article’s focus is on the controversy surrounding statistical significance testing.  “A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent.”  We have all seen such analyses.

However, critics of classical significance testing say a finding based on such a test “could overstate the significance of the finding by a factor of 10 or more,” a sort of super false positive.  The critics claim a better approach is to apply the methods of Bayesian analysis, which incorporates known probabilities, if available, from outside the study.  Check out the comments on the article, especially the reader recommended ones, for more information on statistical methods and issues.  (You can ignore the ESP-related comments unless you have some special interest in the topic.)
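To make the contrast concrete, here is a minimal sketch (Python, standard library only) of a hypothetical ESP-style experiment; the trial and hit counts are invented for illustration, not taken from the study the article discusses.  A classical one-sided binomial test declares the result “significant,” yet a simple Bayes factor comparing the chance-rate null against a uniform prior on the hit rate actually favors the null.

```python
import math

# Hypothetical experiment (numbers invented for illustration):
# n coin-flip-style trials with a chance hit rate of 0.5, and k observed "hits."
n, k = 1000, 527

# Classical one-sided p-value: probability of k or more hits under H0 (p = 0.5).
p_value = sum(math.comb(n, i) for i in range(k, n + 1)) * 0.5 ** n

# Bayes factor BF01 = P(data | H0) / P(data | H1), where H1 places a
# Uniform(0, 1) prior on the hit rate.  The marginal likelihood under
# that uniform prior is 1 / (n + 1) regardless of k, so:
pmf_h0 = math.comb(n, k) * 0.5 ** n   # exact binomial pmf under H0
bf01 = pmf_h0 * (n + 1)               # evidence for the null relative to H1

print(f"one-sided p-value: {p_value:.3f}")        # below 0.05, "significant"
print(f"Bayes factor for the null: {bf01:.1f}")   # above 1, data favor the null
```

The same data can be “significant” by the classical test while the Bayesian comparison finds the null better supported, which is the critics’ point in a nutshell.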

What has this got to do with safety culture?

Recall that last October we reported on an INPO study that, among other things, calculated correlations between safety culture survey factors and various safety-related performance measures.  We expressed reservations about the overall approach and results even though a few correlations supported points we have been making in our blog.

The controversy over the ESP study and its associated statistical methods reminds us that analysts in many fields are under pressure to find something “significant.”  This pressure comes from bosses, funding agencies, editors and tenure committees.  Studies that find no effects, or ones not aligned with higher-level organizational objectives, are less likely to be publicized and their authors rewarded.  In addition, I fear some (many?) social science researchers don’t fully understand the statistical methods they are using, i.e., their built-in biases and limitations.  So, once again, caveat emptor.   

By the way, we are not saying or implying the INPO study was biased in any way; we have no information on it other than what was presented at the NRC meeting referenced in our original blog post. 

*  B. Carey, “You Might Already Know This ...,” New York Times (Jan 11, 2011).

Thursday, January 6, 2011

Nuclear Safety Culture Assessment Manual

July 9, 2012 update: How to Get the NEI Nuclear Safety Culture Assessment Manual

The manual is available in the NRC ADAMS database, Accession Numbers ML091810801, ML091810803, ML091810805, ML091810807, ML091810808 and ML091810809.

**********************************************************
 
As recently reported at TheDay.com,* NEI has published a “Nuclear Safety Culture Assessment Manual,” a document that provides guidance for conducting a safety culture (SC) assessment at a nuclear power plant.  The industry has issued the manual and conducted some pilot program assessments in an effort to influence and stay ahead of the NRC’s initiative to finalize a SC policy statement this year.  The NRC is formulating a policy (as opposed to a regulatory requirement) in this area because it apparently believes that SC cannot be directly regulated and/or any attempt to assess SC comes too close to evaluating (or interfering with) plant management, a task the agency has sought to avoid. 

Basically, the manual describes an assessment methodology based on the eight INPO principles for creating/maintaining a strong nuclear safety culture.  It is a comprehensive how-to document including assessment team organization, schedules, interview guidance and questions, sample communication memos, and report templates.  The manual is strongly prescriptive, i.e., it seeks to create a standardized approach that should facilitate comparisons between different facilities and within the same facility over time.

The best news from our perspective is that the NEI assessment approach relies heavily on interviews; it uses a site survey instrument only to identify pre-assessment areas of interest.  It’s no secret that we are skeptical about over-inference with respect to the health of a plant’s safety culture from the snapshot a survey provides.  The assessment also uses direct observations of the behavior of employees at all levels during scheduled activities, such as meetings and briefings, and ad-hoc observation opportunities.

A big question is: In a week-long self assessment, can a team discern the degree to which an organization satisfies key principles, e.g., the level of trust in the organization or whether leaders demonstrate a commitment to safety?  I think we have to answer that with “Maybe.”  Skilled and experienced interviewers can probably determine the general status of these variables but may not develop a complete picture of all the nuances.  BUT, their evaluation will likely be more useful than any survey.

There is one obvious criticism of the NEI approach, which industry critics have quickly identified.  As David Collins puts it in the TheDay.com article, “[T]he industry is monitoring itself - this is the fox monitoring the henhouse.”  While the manual is proposed for use by anyone performing a safety culture assessment, including a truly independent third party, the reality is the industry expects the primary users to be utilities performing self assessments or “independent” assessments, which include non-utility people on the team.


*  P. Daddona, “Nuclear group puts methods into use to foster 'a safety culture',” TheDay.com
(Dec 21, 2010).

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2876 respondents, not just a single facility or company.  They then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Program components and found them generally consistent.  We have no issue with that conclusion. 
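The data-reduction step can be sketched as follows.  This is a generic illustration of principal component analysis, not INPO’s actual analysis: the respondent count, item count, and simulated responses are placeholders (the real survey covered 63 sites and 2,876 respondents).

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder survey: 300 respondents answering 40 Likert-style items,
# with responses driven by 9 latent factors plus noise.
n_respondents, n_items, n_factors = 300, 40, 9
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# Principal component analysis via SVD of the mean-centered data matrix.
centered = responses - responses.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Retain the first nine components as "survey factors" and score
# each respondent on each factor.
factor_scores = centered @ Vt[:n_factors].T           # shape: (300, 9)
explained = (S**2 / (S**2).sum())[:n_factors].sum()   # fraction of variance retained

print(f"variance explained by 9 factors: {explained:.0%}")
```

In practice the number of factors to retain is itself a judgment call, typically guided by how much variance the leading components explain.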

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  I assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)
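A correlation step of this kind can be sketched as follows; the site count matches the survey, but the factor and performance-measure values (and the link between them) are simulated stand-ins, not INPO’s data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data for 63 sites: one survey factor score and one
# performance measure per site (names and values are invented).
n_sites = 63
decision_making = rng.normal(size=n_sites)
# Simulated performance measure loosely tied to the factor, plus noise.
unplanned_scrams = -0.5 * decision_making + rng.normal(size=n_sites)

# Pearson correlation between the factor and the measure.
x = decision_making - decision_making.mean()
y = unplanned_scrams - unplanned_scrams.mean()
r = (x @ y) / np.sqrt((x @ x) * (y @ y))

# Significance is usually judged via the t-statistic with n - 2 degrees
# of freedom; |t| above roughly 2.0 corresponds to p < 0.05 here.
t = r * np.sqrt((n_sites - 2) / (1 - r**2))

print(f"r = {r:.2f}, t = {t:.2f}")
```

Note that a “significant” t-statistic only tells us the correlation is unlikely to be exactly zero; it says nothing about causation, which is the crux of the concerns raised below.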

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which I am aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  I take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying the value of safety culture surveys is limited, but can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys and they, NRC and INPO have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure over a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  But what evidence exists that there is any relationship between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scrams data for at least anecdotal evidence of how non-conservative decision making contributed to x number of scrams.  We would be surprised to learn that anyone has followed the string on any scram event all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Culture Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.

Monday, May 17, 2010

How Do We Know? Dave Collins, Millstone, Dominion and the NRC

This week the issues being raised by former Dominion engineer David Collins* regarding safety culture at Millstone are receiving increased attention. It appears David is raising two general issues: (1) Is Dominion's safety culture being eroded due to cost and competitive pressures and (2) is the NRC being effective in its regulatory oversight of safety culture?

So far the responses of the various players are along standard lines. Dominion contends it is simply harvesting cost efficiencies in its organization without compromising safety and specific problems are isolated events. The NRC has referred the issues to its Office of Investigation thus limiting transparency. INPO will not comment on its confidential assessments of nuclear owners.

What is one to make of this? First we have no special insights into the bases for the issues being raised by Collins. We have interacted with him in the past on safety culture issues and he is clearly a dedicated and knowledgeable individual with a strong commitment to nuclear safety culture. Thus we would be inclined to give his allegations serious consideration.

On the broader issues we see both opportunities and risks. We have emphasized cost and competitive pressures as a key to understanding how safety culture must balance multiple priorities to assure safety. What we do not see at the moment is how Dominion, the NRC or INPO would be able to determine whether, or to what extent, such pressures might be impacting decisions within Dominion. We doubt whether current approaches to assessing safety culture can be determinative; e.g., safety culture surveys, NRC performance indicators, or INPO assessments. In addition a plant-centric focus is unlikely to reveal systemic interactions or top-down signals that may result in decisional pressure at the plant levels. Recall that a significant exogenous pressure cited in the space shuttle Challenger accident was Congressional political pressure.

So while we understand the nature of the processes underway to evaluate Collins’ issues, it would be helpful if any or all of the organizations could explain their methods for assessing these types of issues and the objective evidence to be used in making findings. The risk at this point is that the industry and the NRC appear more focused on negating the allegations than on taking a hard look at their possible merits, including how exactly to evaluate them.

* Follow the link below for an overview of Collins' issues and other parties' responses.

Friday, April 2, 2010

NRC Briefing on Safety Culture - March 30, 2010

It would be difficult to come up with an attention-grabbing headline for the March 30 Commission briefing on safety culture. Not much happened. There were a lot of high fives for the perceived success of the staff’s February workshop and its main product, a strawman definition of nuclear safety culture. The only provocative remarks came from a couple of outside-the-mainstream “stakeholders”: the union rep for the NRC employees (and this was really limited to perceptions of internal NRC safety culture) and long-time nuclear gadfly Billie Garde (commended by Commissioner Svinicki for her consistency of position on safety culture spanning the last 20 years). Otherwise the discussions were heavily process oriented, with very light questioning by the two currently seated Commissioners.

The main thrust of the briefing was on the definition of safety culture that was produced in the workshop. That strawman is different from the one proposed by the NRC staff, or, for that matter, from those used by other nuclear organizations such as INPO and INSAG. The workshop process sounded much more open and collegial than recent legislative processes on Capitol Hill.

Perhaps the one quote of the session that yields some insight as to where the Commission may be headed was from Chairman Jaczko; his comments can be viewed in the video below. Later in the briefing the staff demurred on endorsing the workshop product (versus the original staff proposal) pending additional input from internal and external sources.


Monday, March 29, 2010

Well Done by NRC Staffer

To support the discussion items on this blog we spend time ferreting out interesting pieces of information that bear on the issue of nuclear safety culture and promote further thought within the nuclear community. This week brought us to the NRC website and its Key Topics area.

As probably most of you are aware, the NRC hosted a workshop in February of this year for further discussions of safety culture definitions. In general we believe the amount of time and attention currently being given to definitional issues has reached the point of diminishing returns. When we examine safety culture performance issues that arise around the industry, it is not apparent that confusion over the definition of safety culture is a serious causal issue, i.e., that someone was thinking of the INPO definition of safety culture instead of the INSAG one or the Schein perspective. Perhaps definitional work is a necessary step in the process, but to us the interesting, and paramount, questions are: what causes disconnects between safety beliefs and actions taken, and what can be done about them?


Thus, it was heartening and refreshing to see a presentation that addressed the key issue of culture and actions head-on. Most definitions of safety culture are heavy on descriptions of commitment, values, beliefs and attributes and light on the actual behaviors and decisions people make everyday. However, the definition that caught our attention was:


“The values, attitudes, motivations and knowledge that affect the extent to which safety is emphasized over competing goals in decisions and behavior.”

(Dr. Valerie Barnes, USNRC, “What is Safety Culture”, Powerpoint presentation, NRC workshop on safety culture, February 2010, p. 13)

This definition acknowledges the existence of competing goals and the need to address the bottom-line manifestation of culture: decisions and actual behavior. We would prefer “actions” to “behavior,” as behavior is often used or meant in the context of process or state of mind. Actions, as with decisions, signify to us the conscious and intentional acts of individuals. The definition also focuses on results in another way - “the extent to which safety is emphasized . . . in decisions. . . .” [emphasis added] What counts is not just the act of emphasizing, i.e., stressing or highlighting, safety but the extent to which safety impacts decisions made, or actions taken.


For similar reasons we think Dr. Barnes' definition is superior to the definition that was the outcome of the workshop:


“Nuclear safety culture is the core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize safety over competing goals to ensure protection of people and the environment.”


(Workshop Summary, March 12, 2010, ADAMS ACCESSION NUMBER ML100700065, p.2)


As we previously argued in a 2008 white paper:


“. . . it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced. . . .


"First, safety management requires balancing safety and other legitimate business goals, in an environment where there are few bright lines defining what is adequately safe, and where there are significant incentives and penalties associated with both types of goals. As a practical matter, ‘Safety culture is fragile.....a balance of people, problems and pressures.’


"Second, safety culture in practice is “situational”, and is continually being re-interpreted based on people’s actual behaviors and decisions in the safety management process. Safety culture beliefs can be reinforced or challenged through the perception of each action (or inaction), yielding an impact on culture that can be immediate or incubate gradually over time.”


(Robert Cudlin, "Practicing Nuclear Safety Management," March 2008, p. 3)


We hope the Barnes definition gets further attention and helps inform this aspect of safety culture policy.

Wednesday, March 10, 2010

"Normalization of a Deviation"

These are the words of John Carlin, Vice President at the Ginna Nuclear Plant, referring to a situation in the past where chronic water leakages from the reactor refueling pit were tolerated by the plant’s former owners. 

The quote is from a piece reported by Energy & Environment Publishing’s Peter Behr in its ClimateWire online publication titled, “Aging Reactors Put Nuclear Power Plant ‘Safety Culture’ in the Spotlight” and also published in The New York Times.  The focus is on a series of incidents with safety culture implications that have occurred at the Nine Mile Point and Ginna plants now owned and operated by Constellation Energy.

The recitation of events and the responses of managers and regulators are very familiar.  The drip, drip, drip is not the sound of water leaking but the uninspired give and take of the safety culture dialogue that occurs each time there is an incident or series of incidents that suggest safety culture is not working as it should.

Managers admit they need to adopt a questioning attitude, improve the rigor of their decision making, and ensure they have the right “mindset”; corporate promises “a campaign to make sure its employees across the company buy into the need for an exacting attention to safety.”  Regulators remind the licensee, “The nuclear industry remains ... just one incident away from retrenchment...” but must be wondering why these events are occurring when NRC performance indicators for the plants and INPO rankings do not indicate problems.  Pledges to improve safety culture are put forth earnestly and (I believe) in good faith.

The drip, drip, drip of safety culture failures may not be cause for outright alarm or questioning of the fundamental safety of nuclear operations, but it does highlight what seems to be a condition of safety culture stasis - a standoff of sorts where significant progress has been made but problems continue to arise, and the same palliatives are applied.  Perhaps more significantly, where continued evolution of thinking regarding safety culture has plateaued.  Peaking too early is a problem in politics and sports, and so it appears in nuclear safety culture.

This is why the remark by John Carlin was so refreshing.  For those not familiar with the context of his words, “normalization of deviance” is a concept developed by Diane Vaughan in her exceptional study of the space shuttle Challenger accident.  Readers of this blog will recall that we are fans of her book, The Challenger Launch Decision, in which normalization of deviance is the mechanism she uses to explain the gradual acceptance of performance results that are outside normal acceptance criteria.  Most scary, an organization's standards can decay and no one even notices.  How this occurs and what can be done about it are concepts that should be central to current considerations of safety culture.

For further thoughts from our blog on this subject, refer to our posts dated October 6, 2009 and November 12, 2009.  In the latter, we discuss the nature of complacency and its insidious impact on the very process that is designed to avoid it in the first place.

Wednesday, August 26, 2009

Can Assessments Identify Complacency? Can Assessments Breed Complacency?

To delve a little deeper into this question, on Slide 10 of the NEI presentation there is a typical summary graphic of assessment results.  The chart catalogs the responses of members of the organization by the eight INPO principles of safety culture.  This summary indicates a variety of responses to the individual principles – for 3 or 4 of the principles there seems to be a fairly strong consensus that the right things are happening.  But 5 of the 8 principles show negative response scores greater than 20, and 2 of the principles show negative scores greater than 40.

First, what can or should one conclude about the overall state of safety culture in this organization given these results?  One wonders if these results were shown to a number of experts, whether their interpretations would be consistent or whether they would even purport to associate the results with a finding.  As discussed in a prior post, this issue is fundamental to the nature of safety culture, whether it is amenable to direct measurement, and whether assessment results really say anything about the safety health of the organization.

But the more particular question for this post is whether an assessment can detect complacency in an organization and its potential for latent risk to the organization’s safety performance.  In a post dated July 30, 2009 I referred to the problems presented by complacency, particularly in organizations experiencing few operational challenges.  That environment can be ripe for a weak culture to develop or be sustained. Could that environment also bias the responses to assessment questions, reinforcing the incorrect perception that safety culture is healthy?  It may be that this type of situation is of most relevance in today’s nuclear industry where the vast majority of plants are operating at high capacity factors and experiencing few significant operational events.  It is not clear to this commentator that assessments can be designed to explicitly detect complacency, and even the use of assessment results in conjunction with other data (data likely to look normal when overall performance is good) may not be credible in raising an alarm.

Link to NEI presentation.