
Friday, January 25, 2013

Safety Culture Assessments: the Vit Plant vs. Other DOE Facilities

The Vit Plant
 As you recall, the Defense Nuclear Facilities Safety Board (DNFSB) set off a little war with DOE when DNFSB published its blistering June 2011 critique* of the Hanford Waste Treatment Plant's (Vit Plant) safety culture (SC).  Memos were fired back and forth but eventually things settled down.  One of DOE's resultant commitments was to assess SC at other DOE facilities to see if  SC concerns identified at the Vit Plant were also evident elsewhere.  Last month DOE transmitted the results of five assessments to DNFSB.**  The following facilities were evaluated:

• Los Alamos National Laboratory Chemistry and Metallurgy Research Replacement Project (Los Alamos)
• Y-12 National Security Complex Uranium Processing Facility Project (UPF)
• Idaho Cleanup Project Sodium Bearing Waste Treatment Project (Idaho)
• Office of Environmental Management Headquarters (EM)
• Pantex Plant
 


The same protocol was used for each of the assessments: DOE's Health, Safety and Security organization formed a team of its own assessors and two outside experts from the Human Performance Analysis Corporation (HPA).  Multiple data collection tools, including functional analysis, semi-structured focus group and individual interviews, observations and behaviorally anchored rating scales, were used to assess organizational behaviors.  The external experts also conducted a SC survey at each site.

A stand-alone report was prepared for each facility, consisting of a summary and recommendation (ca. 5 pages) and the outside experts' report (ca. 25 pages).  The outside experts organized their observations and findings along the nine SC traits identified by the NRC, viz.,

• Leadership Safety Values and Actions
• Problem Identification and Resolution
• Personal Accountability
• Work Processes
• Continuous Learning
• Environment for Raising Concerns
• Effective Safety Communication
• Respectful Work Environment
• Questioning Attitude

So, do Vit Plant SC concerns exist elsewhere?

That's up to the reader to determine.  The DOE submittal contained no meta-analysis of the five assessments, and no comparison to Vit Plant concerns.  As far as I can tell, the individual assessments made no attempt to focus on whether or not Vit Plant concerns existed at the reviewed facilities.

However, my back-of-the-envelope analysis (no statistics, lots of inference) of the reports suggests there are some Vit Plant issues that exist elsewhere but not to the degree that riled the DNFSB when it looked at the Vit Plant.  I made no effort to distinguish between issues mentioned by federal versus contractor employees, or by different contractors.  Following are the major Vit Plant concerns, distilled from the June 2011 DNFSB letter, and their significance at other facilities.

Schedule and/or budget pressure that can lead to suppressed issues or safety short-cuts
 

This is the most widespread and frequently mentioned concern.  It appears to be a significant issue at the UPF where the experts say “the project is being driven . . . by a production mentality.”  Excessive focus on financial incentives was also raised at UPF.  Some Los Alamos interviewees reported schedule pressure.  So did some folks at Idaho but others said safety was not compromised to make schedule; financial incentives were also mentioned there.  At EM, there were fewer comments on schedule pressure and at Pantex, interviewees opined that management shielded employees from pressure and tried to balance the message that both safety and production are important.

A chilled atmosphere adverse to safety exists

The atmosphere is cool at some other facilities, but it's hard to say the temperature is actually chilly.  There were some examples of perceived retaliation at Los Alamos and Pantex.  (Two Pantex employees reported retaliation for raising a safety concern; that's why Pantex, which was not on the original list of facilities for SC evaluation, was included.)  Fear of retaliation, but not actual examples, was reported at UPF and EM.  Fear of retaliation was also reported at Pantex. 

Technical dissent is suppressed

This is a minor issue.  There were some negative perceptions of the differing professional opinion (DPO) process at Los Alamos.  Some interviewees thought the DPO process at EM could be better utilized.  The experts said DPO needed to be better promoted at Pantex. 

Processes for raising and resolving SC-related questions exist but are neither trusted nor used

Another minor issue.  The experts said the procedures at Los Alamos should be reevaluated and enforced.

Conclusion

I did not read every word of this 155-page report, but it appears some facilities have issues akin to those identified at the Vit Plant, although their scope and/or intensity generally appear to be less.

The DOE submittal is technically responsive to the DNFSB commitment but is not useful without further analysis.  It looks like more foot-dragging by DOE, an effort to obscure the likely fact that the Vit Plant's SC problems are more significant than other facilities' and to buy time to correct those problems.


* Defense Nuclear Facilities Safety Board, Recommendation 2011-1 to the Secretary of Energy "Safety Culture at the Waste Treatment and Immobilization Plant" (Jun 9, 2011).  We have posted on the DOE-DNFSB imbroglio here, here and here.
   
**  G.S. Podonsky (DOE) to P.S. Winokur (DNFSB), letter transmitting five independent safety culture assessments (Dec. 12, 2012).

Monday, October 8, 2012

DOE Nuclear Safety Workshop

The DOE held a Nuclear Safety Workshop on September 19-20, 2012.  Safety culture (SC) was the topic at two of the technical breakout sessions, one with outside (non-DOE) presenters and the other with DOE-related presenters.  Here’s our take on the outsiders’ presentations.

Chemical Safety Board (CSB)

This presentation* introduced the CSB and its mission and methods.  The CSB investigates chemical accidents and makes recommendations to prevent recurrences.  It has no regulatory authority. 

Its investigations focus on improving safety, not assigning blame.  The CSB analyzes systemic factors that may have contributed to an accident and recognizes that “Addressing the immediate cause only prevents that exact accident from occurring again.” (p. 5) 

The agency analyzes how safety systems work in real life and “why conditions or decisions leading to an accident were seen as normal, rational, or acceptable prior to the accident.” (p. 6)  They consider organizational and social causes, including “safety culture, organizational structure, cost pressures, regulatory gaps and ineffective enforcement, and performance agreements or bonus structures.” (ibid.)

The presentation included examples of findings from CSB investigations into the BP Texas City and Deepwater Horizon incidents.  The CSB’s SC model is adapted from the Schein construct.  What’s interesting is their set of artifacts includes many “soft” items such as complacency, normalization of deviance, management commitment to safety, work pressure, and tolerance of inadequate systems.

This is a brief and informative presentation, and well worth a look.  Perhaps because the CSB is unencumbered by regulatory protocol, it seems freer to go where the evidence leads it when investigating incidents.  We are impressed by their approach.
 
NRC

The NRC presentation** reviewed the basics of the Reactor Oversight Process (ROP) and then drilled down into how potential SC issues are identified and addressed.  Within the ROP, “. . . a safety culture aspect is assigned if it is the most significant contributor to an inspection finding.” (p. 12)  After such a finding, the NRC may perform an SC assessment (per IP 95002) or request the licensee to perform one, which the NRC then reviews (per IP 95003).

This presentation is bureaucratic but provides a useful road map.  Looking at the overall approach, it is even more disingenuous for the NRC to claim that it doesn’t regulate SC.

IAEA

There was nothing new here.  This infomercial for IAEA*** covered the basic history of SC and reviewed contents of related IAEA documents, including laundry lists of desired organizational attributes.  The three-factor IAEA SC figure presented is basically the Schein model, with different labels.  The components of a correct culture change initiative are equally recognizable: communication, continuous improvement, trust, respect, etc.

The presentation had one repeatable quote: “Culture can be seen as something we can influence, rather than something we can control.” (p. 10)

Conclusion

SC conferences and workshops are often worthless but sometimes one does learn things.  In this case, the CSB presentation was refreshingly complete and the NRC presentation was perhaps more revealing than the presenter intended.


*  M.A. Griffon, U.S. Chemical Safety and Hazards Investigation Board, “CSB Investigations and Safety Culture,” presented at the DOE Nuclear Safety Workshop (Sept. 19, 2012). 

**  U. Shoop, NRC, “Safety Culture in the U.S. Nuclear Regulatory Commission’s Reactor Oversight Process,” presented at the DOE Nuclear Safety Workshop (Sept. 19, 2012).

***  M. Haage, IAEA, “What is Safety Culture & Why is it Important?” presented at the DOE Nuclear Safety Workshop (Sept. 19, 2012).

Friday, October 5, 2012

The Corporate Culture Survival Guide by Edgar Schein

Our September 21, 2012 post introduced a few key elements of Prof. Edgar Schein’s “mental model” of organizational culture.  Our focus in that post was to decry how Schein’s basic construct of culture had been adopted by the nuclear industry but then twisted to fit company and regulatory desires for simple-minded mechanisms for assessing culture and cultural interventions.

In this post, we want to expand on Schein’s model of what culture is, how it can be assessed, and how its evolution can be influenced by management initiatives.  Where appropriate, we will provide our perspective based on our beliefs and experience.  All the quotes below come from Schein’s The Corporate Culture Survival Guide.*

What is Culture?

Schein’s familiar model shows three levels of culture: artifacts, espoused values and underlying assumptions.  In his view, the real culture is the bottom level: “Culture is the shared tacit assumptions of a group that have been learned through coping with external tasks and dealing with internal relationships.” (p. 217)  The strength of an organization’s culture is a function of the intensity of shared experiences and the relative success the organization has achieved.  “Culture . . . influences how you think and feel as well as how you act.” (p. 75)  Culture is thus a product of social learning. 

Our view does not conflict with Schein’s.  In our systems approach, culture is a variable that provides context for, but does not solely determine, organizational and individual decisions. 

How can Culture be Assessed?

Surveys

“You cannot use a survey to assess culture.” (p. 219)  The specific weaknesses of surveys are discussed elsewhere (pp. 78-80) but his bottom line is good enough for us.  We agree completely.

Interviews

Individual interviews can be used when interviewees would be inhibited in a group setting but Schein tries to avoid them in favor of group interviews because the latter are more likely to correctly identify the true underlying assumptions. 

In contrast, the NEI and IAEA safety culture evaluation protocols use interviews extensively, and we’ve commented on them here and here.

Group discussion 


Schein’s recommended method for deciphering a company’s culture is a facilitated group exercise that attempts to identify the deeper (real) assumptions that drive the creation of artifacts by looking at conflicts between the artifacts and the espoused values. (pp. 82-87)   

How can Culture be Influenced?

In Schein’s view, culture cannot be directly controlled but managers can influence and evolve a culture.  In fact, “Managing cultural evolution is one of the primary tasks of leadership.” (p. 219)

His basic model for cultural change is creating the motivation to change, followed by learning and then internalizing new concepts, meanings and standards. (p. 106)  This can be a challenging effort; resistance to change is widespread, especially if the organization has been successful in the past.  Implementing change involves motivating people to change by increasing their survival anxiety or guilt; then promoting new ways of thinking, which can lead to learning anxiety (fear of loss or failure).  Learning anxiety can be ameliorated by increasing the learner’s psychological safety by using multiple steps, including training, role models and consistent systems and structures.  Our promotion of simulation is based on our belief that simulation can provide a platform for learners to practice new behaviors in a controlled and forgiving setting.

If time is of the essence or major transformational change is necessary, then the situation requires the removal and replacement of the key cultural carriers.  Replacement of management team members has often occurred at nuclear plants to address perceived performance/culture issues.
 
Schein says employees can be coerced into behaving differently but they will only internalize the new ways of doing business if the new behavior leads to better outcomes.  That may be true but we tend toward a more pragmatic approach and agree with Commissioner Apostolakis when he said: “. . . we really care about what people do and maybe not why they do it . . . .”

Bottom Line
Prof. Schein has provided a powerful model for visualizing organizational culture and we applaud his work.  Our own modeling efforts incorporate many of his factors, although not always in the same words.  In addition, we consider other factors that influence organizational behavior and feed back into culture, e.g., the priorities and resources provided by a corporate parent.


*  E.H. Schein, The Corporate Culture Survival Guide, new and revised ed. (San Francisco: Jossey-Bass, 2009).  

Friday, September 21, 2012

SafetyMatters and the Schein Model of Culture

A reader recently asked: “Do you subscribe to Edgar Schein's culture model?”  The short-form answer is a qualified “Yes.”  Prof. Schein has developed significant and widely accepted insights into the structure of organizational culture.  In its simplest form, his model of culture has three levels: the organization’s (usually invisible) underlying beliefs and assumptions, its espoused values, and its visible artifacts such as behavior and performance.  He describes the responsibility of management, through its leadership, to articulate the espoused values with policies and strategies and thus shape culture to align with management’s vision for the organization.  Schein’s is a useful mental model for conceptualizing culture and management responsibilities.*     

However, we have issues with the way some people have applied his work to safety culture.  For starters, there is the apparent belief that these levels are related in a linear fashion, more particularly, that management by promulgating and reinforcing the correct values can influence the underlying beliefs, and together they will guide the organization to deliver the desired behaviors, i.e., the target level of safety performance.  This kind of thinking has problems.

First, it’s too simplistic.  Safety performance doesn’t arise only because of management’s espoused values and what the rest of the organization supposedly believes.  As discussed in many of our posts, we see a much more complex, multidimensional and interactive system that yields outcomes which reflect, in greater or lesser terms, desired levels of safety.  We have suggested that it is the totality of such outcomes that is representative of the safety culture in fact.** 

Second, it leads to attempts to measure and influence safety culture that are often ineffective and even misleading.  We wonder whether the heavy emphasis on values and leadership attitudes and behaviors - or traits - that the Schein model encourages, creates a form versus substance trap.  This emphasis carries over to safety culture surveys - currently the linchpin for identifying and “correcting” deficient safety culture -  and even doubles down by measuring the perception of attitudes and behaviors.  While attitudes and behaviors may in fact have a beneficial effect on the organizational environment in which people perform - we view them as good habits - we are not convinced they are the only determinants of the actions, decisions and choices made by the organization.  Is it possible that this approach creates an organization more concerned with how it looks and how it is perceived than with what it does?   If everyone is checking their safety likeness in the cultural mirror might this distract from focusing on how and why actual safety-related decisions are being made?

We think there is good support for our skepticism.  For every significant safety event in recent years - the BP refinery fire, the Massey coal mine explosion, the shuttle disasters, the Deepwater oil rig explosion, and the many instances of safety culture issues at nuclear plants - the organization and senior management had been espousing as their belief that “safety is the highest priority.”  Clearly that was more illusion than reality.

To give a final upward thrust to the apple cart, we don’t think that the current focus on nuclear safety culture is primarily about culture.  Rather we see “safety culture” more as a proxy for management’s safety performance - and perhaps a back door for the NRC to regulate while disclaiming same.*** 


*  We have mentioned Prof. Schein in several prior blog posts: June 26, 2012, December 8, 2011, August 11, 2010, March 29, 2010, and August 17, 2009.

**  This past year we have posted several times on decisions as one type of visible result (artifact) of the many variables that influence organizational behavior.  In addition, please revisit two of Prof. Perin’s case studies, summarized here.  They describe well-intentioned people, who probably would score well on a safety culture survey, who made plant problems much worse through a series of decisions that had many more influences than management’s entreaties and staff’s underlying beliefs.

***  Back in 2006, the NRC staff proposed to enhance the ROP to more fully address safety culture, saying that “Safety culture includes . . . features that are not readily visible such as basic assumptions and beliefs of both managers and individuals, which may be at the root cause of repetitive and far-reaching safety performance problems.”  It wouldn’t surprise us if that’s an underlying assumption at the agency.  See L.A. Reyes to the Commissioners, SECY-06-0122 “Policy Issue Information: Safety Culture Initiative Activities to Enhance the Reactor Oversight Process and Outcomes of the Initiatives” (May 24, 2006) p. 7 ADAMS ML061320282.  

Friday, July 20, 2012

Cognitive Dissonance at Palisades

“Cognitive dissonance” is the tension that arises from holding two conflicting thoughts in one’s mind at the same time.  Here’s a candidate example, a single brief document that presents two different perspectives on safety culture issues at Palisades.

On June 26, 2012, the NRC requested information on Palisades’ safety culture issues, including the results of a 2012 safety culture assessment conducted by an outside firm, Conger & Elsea, Inc (CEI).  In reply, on July 9, 2012 Entergy submitted a cover letter and the executive summary of the CEI assessment.*  The cover letter says “Areas for Improvement (AFIs) identified by CEI overlapped many of the issues already identified by station and corporate leadership in the Performance Recovery Plan. Because station and corporate management were implementing the Performance Recovery Plan in April 2012, many of the actions needed to address the nuclear safety culture assessment were already under way.”

Further, “Gaps identified between the station Performance Recovery Plan and the safety culture assessment are being addressed in a Safety Culture Action Plan. . . . [which is] a living document and a foundation for actively engaging station workers to identify, create and complete other actions deemed to be necessary to improve the nuclear safety culture at PNP.”

Seems like management has matters in hand.  But let’s look at some of the issues identified in the CEI assessment.

“. . . important decision making processes are governed by corporate procedures. . . .  However, several events have occurred in recent Palisades history in which deviation from those processes contributed to the occurrence or severity of an event.”

“. . . there is a lack of confidence and trust by the majority of employees (both staff and management) at the Plant in all levels of management to be open, to make the right decisions, and to really mean what they say. This is indicated by perceptions [of] the repeated emphasis of production over safety exhibited through decisions around resources.” [emphasis added]

“There is a lack in the belief that Palisades Management really wants problems or concerns reported or that the issues will be addressed. The way that CAP is currently being implemented is not perceived as a value added process for the Plant.”

The assessment also identifies issues related to Safety Conscious Work Environment and accountability throughout the organization.

So management is implying things are under control but the assessment identified serious issues.  As our Bob Cudlin has been explaining in his series of posts on decision making, pressures associated with goal conflict permeate an entire organization and the problems that arise cannot be fixed overnight.  In addition, there’s no reason for a plant to have an ineffective CAP but if the CAP isn’t working, that’s not going to be quickly fixed either.


*  Letter, A.J. Vitale to NRC, “Reply to Request for Information” (July 9, 2012) ADAMS ML12193A111.

Friday, March 23, 2012

Going Beyond SCART: A More Useful Guidebook for Evaluating Safety Culture

Our March 11 post reviewed the IAEA SCART guidelines.  We found its safety culture characteristics and attributes comprehensive but its “guiding questions” for evaluators were thin gruel, especially in the areas we consider critical for safety culture: decision making, corrective action, work backlogs and management incentives.

This post reviews another document that combines the SCART guidelines, other IAEA documents and the author’s insights to yield a much more robust guidebook for evaluating a facility’s safety culture.  It’s called “Guidelines for Regulatory Assessment of Safety Culture in Licensees’ Organisations.”*  It starts with the SCART characteristics and attributes but gives more guidance to an evaluator: recommendations for documents to review, what to look for during the evaluation, additional (and more critical) guiding questions, and warning signs that can indicate safety culture weaknesses or problems.

Specific guidance in the areas we consider critical is generally more complete.  For example, in the area of decision making, evaluators are told to look for a documented process applicable to all matters that affect safety, attend meetings to observe the decision-making process, note the formalization of the decision making process and how/if long-term consequences of decisions are considered.  Goal conflict is explicitly addressed, including how differing opinions, conflict based on different experiences, and questioning attitudes are dealt with, and the evidence of fair and impartial methods to resolve conflicts.  Interestingly, example conflicts are not limited to the usual safety vs. cost or production but include safety vs. safety, e.g., a proposed change that would increase plant safety but cause additional personnel rad exposure to implement.  Evidence of unresolved conflicts is a definite warning flag for the evaluator. 

Corrective action (CA) also gets more attention, with questions and flags covering CA prioritization based on safety significance, the timely implementation of fixes, lack of CA after procedure violations or regulatory findings, verification that fixes are implemented and effective, and overall support or lack thereof for the CAP. 

Additional questions and flags cover backlogs in maintenance, corrective actions, procedure changes, unanalyzed physical or procedural problems, and training.

However, the treatment of management incentives is still weak, basically the same as the SCART guidelines.  We recommend a more detailed evaluation of the senior managers’ compensation scheme or, in more direct language, how much do they get paid for production, and how much for safety?

The intended audience for this document is a regulator charged with assessing a licensee’s safety culture.  As we have previously discussed, some regulatory agencies are evaluating this approach.  For now, that’s a no-go in the U.S.  In any case, these guidelines provide a good checklist for self-assessors, internal auditors and external consultants.


*  M. Tronea, “Guidelines for Regulatory Oversight of Safety Culture in Licensees’ Organisations” Draft, rev. 8 (Bucharest, Romania:  National Commission for Nuclear Activities Control [CNCAN], April 2011).  In addition to being on the staff of CNCAN, the nuclear regulatory authority of Romania, Dr. Tronea is the founder/manager of the LinkedIn Nuclear Safety group.  

Sunday, March 11, 2012

IAEA’s Safety Culture Assessment Review Team (SCART) Program

The International Atomic Energy Agency (IAEA) offers the SCART service to Member States.  A SCART’s goal is to assess the safety culture in a nuclear facility and provide recommendations for enhancing safety culture going forward.  In this post, we provide an overview of the program and our evaluation of it.

What is SCART?

“SCART is an assessment of safety culture based on IAEA standards and guidelines by a team of international and independent safety culture experts.”*  A SCART mission can take up to a year start-to-finish, including a pre-SCART visit to finalize the scope of the  assessment.  The SCART on-site assessment takes two full weeks, including the review team’s on-site organizational activities.  The assessment uses document reviews, interviews and observations to gather data, and a fairly prescribed methodology for drawing inferences from the data. 

The review team utilizes an evaluation framework consisting of five key safety culture characteristics, which are assessed using 37 attributes.  The attributes describe “specific organizational performance or attitude . . . which, if fulfilled, would characterize this performance or attitude as belonging to a strong safety culture.”**

The review team’s findings describe the current state of safety culture at the host facility; the team’s recommendations and suggestions describe ways the safety culture could be improved.

Is it any good?

At first blush, SCART appears overly bureaucratic and time-consuming.  However, the assessment team is likely composed of IAEA staff and outside experts from different countries, and the target facility is likely in yet another country.  In addition, much of the prescribed methodology is aimed at ensuring the team covers all important topics and reaches robust, i.e., repeatable, conclusions.  All this takes time and a detailed game plan.

But what about the game plan itself?  

The review team forms opinions on five safety culture characteristics by assessing 37 attributes.  The characteristics are non-controversial and the set of attributes appears comprehensive. (SCART Guidelines, pp. 25-26)  Supporting them is a set of over 300 suggested “guiding questions” for the interviews.  For us, the most important aspects are the attributes and questions that address topics we consider essential for an effective safety culture: a successful corrective action program, acceptable work backlogs, a decision making process that appropriately values safety, and management incentives.

Corrective Action Program

“Attribute E.5: Learning is facilitated through the ability to recognize and diagnose deviations, to formulate and implement solutions and to monitor the effects of corrective actions” (Guidelines, p. 47)

The relevant*** guiding questions are: “Can staff members or contractors point to examples of problems they have reported which have been fixed?”  “How high is the rate of repeat events or errors?”

There’s a lot more that can be asked about the CAP.  For example: Who can initiate an action request?  How are requests evaluated and prioritized; does safety receive consistent attention and appropriate priority?  What are the backlogs and trends?  What items are subject to root cause analysis?  Does root cause analysis find the real causes of problems, i.e., do the subject problems cease to occur after they have been fixed?

Work Backlogs

“Attribute A.2: Safety is a primary consideration in the allocation of resources:” (Guidelines, p. 28)

The relevant guiding questions are: “Can staff members and contractors describe examples when the allocation of resources affected the backlog of maintenance tasks and nuclear facility modifications? What was the process to resolve the conflict?”

How about: What are the backlogs and trends in every major department?  Are backlogs at an acceptable level?  Why or why not?  If not, then is there a plan to clear the backlogs?  Are resources available to implement the plan?

Decision Making Process

Two attributes refer to decision making.  “Attribute A.1: The high priority given to safety is shown in documentation, communications and decision making:” (Guidelines, p. 27)   “Attribute A.5.: A proactive and long term approach to safety issues is shown in decision-making:” (Guidelines, p. 29)

The relevant guiding questions for A.1 are: “During periods of heavy work-load, in what way do managers ensure that staff members and contractors are reminded that unnecessary haste and shortcuts are inappropriate?  Can staff members and contractors describe situations when the rationale for significant decisions related to safety was communicated to a large group of individuals in the nuclear facility?  Can staff members and contractors describe situations when assumptions and conclusions of earlier safety decisions were challenged in the light of new information, operating experience or changes in context?”

The relevant guiding questions for A.5 are: “What is the approach of managers at all levels when they have to cope with an unforeseen event requiring more staff at short notice?  What happens if, for any reason, production requirements are permitted to interfere with scheduled training modules? What kind of a system for prioritizing maintenance work along safety requirements is established?”

We would add:   How does the decision-making process handle competing goals, set priorities, treat devil’s advocates who raise concerns about possible unfavorable outcomes, and assign resources?  Are the most qualified people involved in key decisions, regardless of their position or rank?  How are safety concerns handled in making real-time decisions?

Management Incentives

Incentives are discussed under Attribute A.5 (see preceding section).

The relevant guiding questions are: “What is the major focus of incentives and priorities for senior management?  How are management incentive strategies discussed on the corporate level?”

We would add: How is safety incorporated into management incentives, if at all?  If safety is addressed, is it limited to industrial safety?  Is there any disincentive, e.g., loss of bonus, if safety-related incidents occur or recur?   


With over 300 guiding questions, I may have missed some that address our key issues.  But the ones identified above seem a little thin in their treatment of the most important issues related to safety culture.  We are not saying the other attributes and questions are not important—but they do not address the core of safety culture’s impact on organizational behavior. 

Has SCART Been Applied?

Yes.  Since 2006, IAEA has conducted three SCART evaluations, two of which occurred at nuclear power plants.  (A request to IAEA asking if additional evaluations have taken place went unanswered.)  I think we can safely say it is not wildly popular.

Conclusion

The SCART materials provide a good reference for anyone trying to figure out how to evaluate their facility’s safety culture.  The comprehensive, step-by-step approach ensures that all attributes are covered and individual expert opinions are melded into team opinions for each attribute and characteristic.  However, we doubt anyone would ever use it as a template for self-assessment.  It is too resource-intensive, treats key areas lightly and basically creates a static, as opposed to dynamic, snapshot of safety culture.  The overall impression reminds me of the apocryphal tale of the man who wrote a book titled 1000 Ways to Make Love; unfortunately, he didn’t know any women.


*  C. Viktorsson, IAEA, “Understanding and Assessing Safety Culture,” Symposium on Nuclear Safety Culture: Fostering Safety Culture in Japan’s Nuclear Industry: How To Make It Robust?  (Mar 22-23, 2006) p. 18.

**  SCART Guidelines: Reference Report for IAEA Safety Culture Assessment Review Team (SCART), IAEA, Vienna (July 2008) p. 4.

***  Each attribute is followed by many guiding questions.  I have selected the questions that appear most related to our key topics.

Monday, December 5, 2011

Regulatory Assessment of Safety Culture—Not Made in U.S.A.

Last February, the International Atomic Energy Agency (IAEA) hosted a four-day meeting of regulators and licensees on safety culture.*  “The general objective of the meeting [was] to establish a common opinion on how regulatory oversight of safety culture can be developed to foster safety culture.”  In fewer words, how can the regulator oversee and assess safety culture?

While no groundbreaking new methods for evaluating a nuclear organization’s safety culture were presented, the mere fact there is a perception that oversight methods need to be developed is encouraging.  In addition, outside the U.S., it appears more likely that regulators are expected to engage in safety culture oversight if not formal regulation.

Representatives from several countries made presentations.  The NRC presentation discussed the then-current status of the effort that led to the NRC safety culture policy statement announced in June.  The presentations covering Belgium, Bulgaria, Indonesia, Romania, Switzerland and Ukraine described different efforts to include safety culture assessment into licensee evaluations.

Perhaps the most interesting material was a report on an attendee survey** administered at the start of the meeting.  The survey covered “national regulatory approaches used in the oversight of safety culture.” (p. 3)  Eighteen member states completed the survey.  Following are a few key findings:

The states were split about 50-50 between having and not having regulatory requirements related to safety culture. (p. 7)  The IAEA is encouraging regulators to get more involved in evaluating safety culture and some countries are responding to that push.

To minimize subjectivity in safety culture oversight, regulators try to use oversight practices that are transparent, understandable, objective, predictable, and both risk-informed and performance-based. (p. 13)  This is not news but it is a good thing; it means regulators are trying to use the same standards for evaluating safety culture as they use for other licensee activities.

Licensee decision-making processes are assessed using observations of work groups, probabilistic risk analysis, and during the technical inspection. (p. 15)  This seems incomplete or even weak to us.  In-depth analysis of critical decisions is necessary to reveal the underlying assumptions (the hidden, true culture) that shape decision-making.

Challenges include the difficulty in giving an appropriate priority to safety in certain real-time decision making situations and the work pressure in achieving production targets/keeping to the schedule of outages. (p. 16)  We have been pounding the drum about goal conflict for a long time and this survey finding simply confirms that the issue still exists.

Bottom Line

The meeting was generally consistent with our views.  Regulators and licensees need to focus on cultural artifacts, especially decisions and decision making, in the short run while trying to influence the underlying assumptions in the long run to reduce or eliminate the potential for unexpected negative outcomes.



**  A. Kerhoas, "Synthesis of Questionnaire Survey."

Wednesday, November 23, 2011

Lawyering Up

When concerns are raised about the safety culture of an organization with very significant safety responsibilities, what’s one to do?  How about: bring in the lawyers.  That appears to be the news out of the Vit Plant* in Hanford, WA.  With considerable fanfare, Bechtel unveiled a new website devoted to its management of the Vit Plant.  The site provides an array of policies, articles, reports, and messages regarding safety and quality.

One of the major pieces of information on the site is a recent assessment of the state of safety culture at the Vit Plant.**  The conclusion of the assessment is quite positive: “Overall, we view the results from this assessment as quite strong, and similar to prior assessments conduct [sic] by the Project.” (p. 16)  The prior assessments were the 2008 and 2009 Vit Plant Opinion Surveys.

However, our readers may also recall that earlier this year the Defense Nuclear Facilities Safety Board (DNFSB) issued a report finding that the safety culture at the WTP is “flawed”.  In a previous post we quoted from the DNFSB report as follows:

“The HSS [DOE's Office of Health, Safety and Security] review of the safety culture on the WTP project 'indicates that BNI [Bechtel National Inc.] has established and implemented generally effective, formal processes for identifying, documenting, and resolving nuclear safety, quality, and technical concerns and issues raised by employees and for managing complex technical issues.'  However, the Board finds that these processes are infrequently used, not universally trusted by the WTP project staff, vulnerable to pressures caused by budget or schedule [emphasis added], and are therefore not effective.”

Thus the DNFSB clearly has a much different view of the state of safety culture at the Vit Plant than does DOE or Bechtel.  We note that the DNFSB report does not appear to be one of the numerous references available at the new website.  Links to the original DOE report and the recent assessment are provided.  There is also a November 17, 2011 message to all employees from Frank Russo, Project Director,*** which introduces and summarizes the 2011 Opinion Survey on the project’s nuclear safety and quality culture (NSQC).  Neither the recent assessment nor the opinion survey addresses the issues raised by the DNFSB; it is as if the DNFSB review never happened.

What really caught our attention in the recent assessment is who wrote the report - a law firm.  The assessment was based on in-depth interviews of 121 randomly selected employees using a 19-question protocol (the report states that the protocol is attached, but it is not included in the web version).  The law firm did not actually conduct the interviews, however - “investigators” from the BSII internal audit department did so and took notes that were then provided to the lawyers.  This may give new meaning to the concept of “defense in depth”.

The same law firm also analyzed the results from the 2011 Opinion Survey.  In his message to employees, Russo asserts that the law firm has “substantial experience in interpreting [emphasis added] NSQC assessments”.  He goes on to say that the questions for the survey were developed by the WTP Independent Safety and Quality Culture Assessment (ISQCA) Team.  In our view, this executive-level team has without question “substantial experience” in safety culture.  Supposedly the ISQCA team was tasked with assessing the site’s culture - why, then, did they develop only the questions while a law firm interpreted the answers?  Strikes us as very odd.

We don’t know the true state of safety culture at the Vit Plant and, unfortunately, the work sponsored by Vit Plant management does little to provide such insight or to fully vet and respond to the serious deficiencies cited in the DNFSB assessment.  If we were employees at the plant, we would be anxious to hear directly from the ISQCA team.

Reading the law firm report provides little comfort.  We have commented many times about the inherent limitations of surveys and interviews in eliciting attitudes and perceptions.  When the raw materials are interview notes covering a small fraction of the employees, assessed by lawyers who were not present at the interviews, we become even more skeptical.  Several quotes from the report related to the Employee Concerns Program illustrate our concern.

“The overwhelming majority of interviewees have never used ECP. Only 6.5% of the interviewees surveyed had ever used the program.  [Note: this means a total of nine interviewees.] There is a major difference between the views of interviewees with no personal experience with ECP and those who have used the program: the majority of the interviewees who have not used the program have a positive impression of the program, while more than half of the interviewees who have used the program have a negative impression of it.” (p. 5, emphasis added)

Our favorite quote out of the report is the following.  “Two interviewees who commented on the [ECP] program appear to have confused it with Human Resources.” (p. 6)  One only wonders if the comments were favorable.

Eventually the report gets around to a conclusion that we probably could not say any better.  “We recognize that an interview population of nine employees who have used the ECP in the past is insufficient to draw any meaningful conclusions about the program.” (p. 17)

We’re left with the following question: Why go about an assessment of safety culture in such an obtuse manner, one that is superficial in its “interpretation” of very limited data, laden with anecdotal material, and ultimately overreaching in its conclusions?


*  The "Vit Plant" is the common name for the Hanford Waste Treatment Plant (WTP).

**  Pillsbury Winthrop Shaw Pittman, LLP, "Assessment of a Safety Conscious Work Environment at the Hanford Waste Treatment Plant" (undated).  The report contains no information on when the interviews or analysis were performed.  Because a footnote refers to the 2009 Opinion Survey and a report addendum refers to an October, 2010 DOE report, we assume the assessment was performed in early-to-mid 2010.

*** WTP Comm, "Message from Frank: 2011 NSQC Employee Survey Results" (Nov. 17, 2011).  

Thursday, January 6, 2011

Nuclear Safety Culture Assessment Manual

July 9, 2012 update: How to Get the NEI Nuclear Safety Culture Assessment Manual

The manual is available in the NRC ADAMS database, Accession Numbers ML091810801, ML091810803, ML091810805, ML091810807, ML091810808 and ML091810809.

**********************************************************
 
As recently reported at TheDay.com,* NEI has published a “Nuclear Safety Culture Assessment Manual,” a document that provides guidance for conducting a safety culture (SC) assessment at a nuclear power plant.  The industry has issued the manual and conducted some pilot program assessments in an effort to influence and stay ahead of the NRC’s initiative to finalize a SC policy statement this year.  The NRC is formulating a policy (as opposed to a regulatory requirement) in this area because it apparently believes that SC cannot be directly regulated and/or any attempt to assess SC comes too close to evaluating (or interfering with) plant management, a task the agency has sought to avoid. 

Basically, the manual describes an assessment methodology based on the eight INPO principles for creating/maintaining a strong nuclear safety culture.  It is a comprehensive how-to document including assessment team organization, schedules, interview guidance and questions, sample communication memos, and report templates.  The manual has a strongly prescriptive approach, i.e., it seeks to create a standardized approach which should facilitate comparisons between different facilities and the same facility over time. 

The best news from our perspective is that the NEI assessment approach relies heavily on interviews; it uses a site survey instrument only to identify pre-assessment areas of interest.  It’s no secret that we are skeptical about over-inferring the health of a plant’s safety culture from the snapshot a survey provides.  The assessment also uses direct observations of employee behavior at all levels during scheduled activities, such as meetings and briefings, and ad-hoc observation opportunities.

A big question is: In a week-long self assessment, can a team discern the degree to which an organization satisfies key principles, e.g., the level of trust in the organization or whether leaders demonstrate a commitment to safety?  I think we have to answer that with “Maybe.”  Skilled and experienced interviewers can probably determine the general status of these variables but may not develop a complete picture of all the nuances.  BUT, their evaluation will likely be more useful than any survey.

There is one obvious criticism of the NEI approach, which industry critics have quickly identified.  As David Collins puts it in TheDay.com article, “[T]he industry is monitoring itself - this is the fox monitoring the henhouse.”  While the manual is proposed for use by anyone performing a safety culture assessment, including a truly independent third party, the reality is the industry expects the primary users to be utilities performing self-assessments or “independent” assessments, which include non-utility people on the team.


*  P. Daddona, “Nuclear group puts methods into use to foster 'a safety culture',” TheDay.com
(Dec 21, 2010).

Tuesday, November 9, 2010

Human Beings . . . Conscious Decisions

A New York Times article* dated November 8, 2010 carried a headline to the effect that Fred Bartlit, the independent investigator for the presidential panel on the BP oil rig disaster earlier this year, had not found that “cost trumped safety” in decisions leading up to the accident.  The article noted that this finding contradicted determinations by other investigators, including those sponsored by Congress.  We had previously posted on this subject, including taking notice of the earlier findings of cost trade-offs, and wanted to weigh in based on this new information.

First we should acknowledge that we have no independent knowledge of the facts associated with the blowout and are simply reacting to the published findings of current investigations.  In our prior posts we had posited that cost pressures could be part of the equation in the leadup to the spill.  On June 8, 2010 we observed:

“...it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out ‘Why?’ ”

And we recall one of the initial observations made by an OSHA official shortly after the accident as detailed in our April 26, 2010 post:

“In the words of an OSHA official BP still has a ‘serious, systemic safety problem’ across the company.”

So it appears we have been cautious in reaching any conclusions about BP’s safety management.  That said, we do want to put into context the finding by Mr. Bartlit.  First we would note that he is, by profession, a trial lawyer and may be both approaching the issue and articulating his finding with a decidedly legal focus.  The specific quotes attributed to him are as follows:

“. . . we have not found a situation where we can say a man had a choice between safety and dollars and put his money on dollars” and “To date we have not seen a single instance where a human being made a conscious decision to favor dollars over safety,...”

It is not surprising that a lawyer would focus on culpability in terms of individual actions.  When things go wrong, most industries, nuclear included, look to assign blame to individuals and move on.  It is also worth noting that the investigator emphasized that no one had made a “conscious” decision to favor cost over safety.  We think it is important to keep in mind that safety management and failures of safety decision making may or may not involve conscious decisions.  As we have stated many times in other posts, safety can be undermined through very subtle mechanisms such that even those involved may not appreciate the effects, e.g., the normalization of deviance.  Finally we think the OSHA investigator may have been closer to the truth with his observation about “systemic” safety problems.  It may be that Mr. Bartlit, and other investigators, will be found to have suffered from what is termed “attribution error” where simple explanations and causes are favored and the more complex system-based dynamics are not fully assessed or understood in the effort to answer “Why?”  

* J.M. Broder, "Investigator Finds No Evidence That BP Took Shortcuts to Save Money," New York Times (Nov 8, 2010).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, like nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy so we have excerpted and summarized points that we believe will be interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture in four main factors:  Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions and the questions usually vary for different departments.

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the survey factors and sub-factors are not related to external measurements of safety performance, but the survey-takers’ perceptions of risk in their work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.

Our Take on SCISMS

We have found summary level graphics to be very important in communicating key information to clients and the Safety Culture Grid appears like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and the perceptual alignment of managers and employees with respect to the organization’s safety culture.   Grids can be constructed to show findings across factors or departments within one company or across multiple companies for an industry comparison. 
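The grid idea is simple enough to sketch in a few lines of code.  Everything below is invented for illustration; the factor names, scores, and 1-5 scale are our assumptions, not actual SCISMS data or scoring rules:

```python
# Hypothetical sketch of the two-dimensional Safety Culture Grid idea:
# each factor is plotted at (mean employee score, mean management score),
# so gaps between the two groups' perceptions are easy to spot.
# All names and numbers here are invented for illustration.

def grid_point(employee_scores, management_scores):
    """Return the (employee mean, management mean) coordinate for one factor."""
    emp = sum(employee_scores) / len(employee_scores)
    mgmt = sum(management_scores) / len(management_scores)
    return round(emp, 2), round(mgmt, 2)

# Invented survey responses on a 1-5 scale for two of the four main factors.
factors = {
    "Organizational Commitment": ([4, 5, 4, 3, 4], [5, 5, 4, 5, 5]),
    "Informal Safety Indicators": ([3, 2, 3, 3, 2], [4, 4, 5, 4, 4]),
}

for name, (emp, mgmt) in factors.items():
    x, y = grid_point(emp, mgmt)
    # A large management-minus-employee gap flags weak perceptual alignment.
    print(f"{name}: employee={x}, management={y}, gap={round(y - x, 2)}")
```

A point far from the diagonal (management rating the culture much more favorably than employees, or vice versa) is exactly the kind of concurrence problem the grid is designed to make visible at a glance.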

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25)  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
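For readers who want a concrete picture of what those r values mean, here is a minimal sketch of a Pearson correlation between a subscale and an outcome scale.  The per-respondent scores are invented; SCISMS's actual instrument and scoring are not reproduced here:

```python
# A minimal sketch of the correlation check discussed above:
# Pearson's r between a safety-culture subscale and an outcome scale.
# The data below are invented; the report cites r from .32 to .60 with
# Safety Behavior and -.38 to -.71 with Perceived Risk.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-respondent scores: subscale vs. perceived-risk outcome.
subscale = [4.2, 3.1, 3.8, 2.5, 4.6, 3.3]
perceived_risk = [1.8, 3.0, 2.2, 3.5, 1.5, 2.9]

# Higher subscale scores pair with lower perceived risk, so r is
# strongly negative, mirroring the direction the report describes.
print(f"r = {pearson_r(subscale, perceived_risk):.2f}")
```

The sketch also illustrates our complaint: when both variables come from the same respondents' perceptions, strong correlations are almost baked in, which is why we would rather see culture scores correlated against independent measures of safety performance.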

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report fairly clearly describes two different perspectives of safety culture: the socio-anthropological and organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp.7-8) 

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Monday, August 23, 2010

Safety Climate Surveys (Part 1)

In our August 11, 2010 post we quoted from a paper* addressing safety culture on off-shore oil facilities.  While the paper is a bit off-topic for SafetyMatters (the focus is more on industrial safety and individual, as opposed to group, perceptions), it provides a very good example of how safety climate survey data should be collected and rigorously analyzed, and hypotheses tested.  In addition, one of the findings is quite interesting.

The researchers knew from survey data which respondents had experienced an accident at a facility (not just those facilities where they were currently working), and which respondents had not.  They also knew which of the surveyed facilities had a historically higher proportion of accidents and which had a lower proportion.  “In this case, . . . respondents who had not experienced an accident provided significantly less favorable scores on installations with low accident proportions. Additionally, respondents who had experienced an accident provided significantly less favorable scores on installations with high accident proportions.” (p. 656)  In other words, workers who had been through an accident recognized a relatively safer (riskier) environment better than workers who had not.  While this is certainly more evidence that experience is the best teacher, we think it might have an implication for the commercial nuclear industry.

Unlike offshore oil workers, the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.  Does this make these employees more vulnerable to complacency or to slowly running off the rails a la NASA?

*  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments.”  Safety Science 41(8) 2003 (Oct) pp 641-680.

Wednesday, August 11, 2010

Down Under Perspective on Surveys

Now from Australia we have come across more research results related to some of the key findings we discussed in our August 2, 2010 post “Mission Impossible”.  Recall from that post that research comparing the results of safety surveys prior to a significant event at an offshore oil platform with post-event investigations revealed significant differences in cultural attributes.

This 2006 paper* draws on a variety of other published works and the author’s own experience in analyzing major safety events. Note that the author refers to safety culture surveys as “perception surveys”, since they focus on people’s perceptions of attitudes, values and behaviors.

“The survey method is well suited to studying individual attitudes and values and it might be thought that the method is thereby biased in favour of a definition of culture in these terms. However, the survey method is equally suited to studying practices, or ‘the way we do things around here’. The only qualification is that survey research of ‘the way we do things around here’ necessarily measures people’s perceptions rather than what actually happens, which may not necessarily coincide.” (p. 5)  As we have argued, and this paper agrees, it is actual behaviors and outcomes that are most important. The question is, can actual behaviors be discerned or predicted on the basis of surveys? The answer is not clear.

“The question of whether or how the cultures so identified [e.g., by culture surveys] impact on safety is a separate question. Mearns and co-workers argue that there is some, though rather limited, evidence that organisations which do well in safety climate surveys actually have fewer accidents” (p. 14 citing Mearns et al)**

I kind of liked a distinction made early on in the paper, that it is better to ascertain an organization’s “culture” and then assess the impact of that culture on safety, than to directly assess “safety culture”. This approach emphasizes the internal dynamics and the interaction of values and safety priorities with other competing business and environmental pressures. As this paper notes, “. . .the survey method tells us very little about dynamic processes - how the organisation goes about solving its problems. This is an important limitation. . . .Schein makes a similar point when he notes that members of a culture are most likely to reveal themselves when they have problems to solve. . . .” (p. 6)

*  Andrew Hopkins, "Studying Organisational Cultures and their Effects on Safety," paper prepared for presentation to the International Conference on Occupational Risk Prevention, Seville, May 2006 (National Research Centre for OHS Regulation, Australian National University).

**  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments”. Safety Science 41(8) 2003 (Oct) pp 641-680.

Monday, August 2, 2010

Mission Impossible

We are back to the topic of safety culture surveys with a new post regarding an important piece of research by Dr. Stian Antonsen of the Norwegian University of Science and Technology.  He presents an empirical analysis of the following question:

    “..whether it is possible to ‘predict’ if an organization is prone to having major accidents on the basis of safety culture assessments.”*

We have previously posted a number of times on the use and efficacy of safety culture assessments.  As we observed in an August 17, 2009 post, “Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues and even as an ongoing prophylactic tool.  But, are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?”

Safety culture surveys have become ubiquitous across the U.S. nuclear industry.  This reliance on surveys may be justified, Antonsen observes, to the extent they provide a “snapshot” of “attitudes, values and perceptions about organizational practices…”  But Antonsen cautions that the ability of surveys to predict organizational accidents has not been established empirically and cites some researchers who suspect surveys “‘invite respondents to espouse rationalisations, aspirations, cognitions or attitudes at best’ and that ‘we simply don’t know how to interpret the scales and factors resulting from this research’”.  Furthermore, surveys present questions where the favorable or desired answers may be obvious.  “The risk is, therefore, that the respondents’ answers reflect the way they feel they should feel, think and act regarding safety, rather than the way they actually do feel, think and act…”  As we have stated in a white paper** on nuclear safety management, “it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced.”

To investigate the utility of safety culture surveys Antonsen compared results of a safety survey conducted of the employees of an offshore oil platform (Snorre Alpha) prior to a major operational incident, with the results of detailed investigations and analyses following the incident.  The survey questionnaire included twenty questions similar to those found in nuclear plant surveys.  Answers were structured on a six-point Likert scale, also similar to nuclear plant surveys.  The overall result of the survey was that employees had a highly positive view of safety culture on the rig.
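For illustration only, here is a minimal sketch of how such a survey reduces to the kind of snapshot Antonsen describes: per-question mean scores on a six-point Likert scale.  The respondents, questions, and answers below are entirely invented:

```python
# Illustrative sketch (invented data): summarizing a six-point Likert
# safety culture survey as per-question means across respondents.
# Nothing here reproduces the actual Snorre Alpha questionnaire.

def question_means(responses):
    """responses: list of per-respondent answer lists (values 1-6).
    Returns the mean score for each question."""
    n_questions = len(responses[0])
    n = len(responses)
    return [round(sum(r[q] for r in responses) / n, 2)
            for q in range(n_questions)]

# Four invented respondents answering three questions.
responses = [
    [5, 6, 5],
    [6, 5, 4],
    [5, 5, 5],
    [6, 6, 4],
]

# Uniformly high means: the "highly positive" snapshot a survey can
# produce even when, as at Snorre Alpha, deeper problems go undetected.
print(question_means(responses))
```

The point of the sketch is how little such averages can tell you: a column of favorable means says nothing about the underlying assumptions, such as dominant production pressure, that the post-incident investigations later uncovered.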

The post-incident analysis was performed by the Norwegian Petroleum Safety Authority and a causal analysis was subsequently performed by Statoil (the rig owner) and a team of researchers.  The findings from the original survey and the later incident investigations were “dramatically different” as to the Snorre Alpha safety culture.  Perhaps one of the telling differences was that the post hoc analyses identified meeting production targets as a dominant value in the rig culture.  The bottom line finding was that the survey failed to identify significant organizational problems that later emerged in the incident investigations.

Antonsen evaluates possible reasons for the disconnect between surveys and performance outcomes.  He also comments on the useful role surveys can play; for example inter-organizational comparisons and inferring cultural traits.  In the end the research sounds a cautionary note on the link between survey-based measures and the “real” conditions that determine safety outcomes.

Postscript: Antonsen’s “Mission Impossible” paper was published in December 2009.  We have now seen another oil rig accident with the recent explosion and oil spill from BP’s Deepwater Horizon rig.  As we noted in our July 22, 2010 post, a safety culture survey of that rig’s staff had been performed several weeks prior to the explosion, with overall positive results.  The investigations of this latest event could well provide additional empirical support for the "Mission Impossible" study results.

* The study is “Safety Culture Assessment: A Mission Impossible?”  The link connects to the abstract; the paper is available for purchase at the same site.

**  Robert L. Cudlin, "Practicing Nuclear Safety Management" (March 2008), p. 3.

Wednesday, June 30, 2010

Can Safety Culture Be Regulated? (Part 2)

Part 1 of this topic covered the factors important to safety culture and amenable to measurement or assessment, the “known knowns.”   In this Part 2 we’ll review other factors we believe are important to safety culture but cannot be assessed very well, if at all, the “known unknowns” and the potential for factors or relationships important to safety culture that we don’t know about, the “unknown unknowns.”

Known Unknowns

These are factors that are probably important to regulating safety culture but cannot be assessed or cannot be assessed very well.  The hazard they pose is that deficient or declining performance may, over time, damage and degrade a previously adequate safety culture.

Measuring Safety Culture

This is the largest issue facing a regulator.  There is no meter or method that can be applied to an organization to obtain the value of some safety culture metric.  It’s challenging (impossible?) to robustly and validly assess, much less regulate, a variable that cannot be measured.  For a more complete discussion of this issue, please see our June 15, 2010 post.

Trust

If the plant staff does not trust management to do the right thing, even when it costs significant resources, then safety culture will be negatively affected.  How does one measure trust, with a survey?  I don’t think surveys offer more than an instantaneous estimate of any trust metric’s value.

Complacency

Organizations that accept things as they are, or always have been, and see no opportunity or need for improvement are guilty of complacency or, worse, hubris.  Lack of organizational reinforcement for a questioning attitude, especially when the questions may result in lost production or financial costs, is a de facto endorsement of complacency.  Complacency is often easy to see a posteriori but hard to detect as it occurs.

Management competence

Does management implement and maintain consistent and effective management policies and processes?  Is the potential for goal conflict recognized and dealt with (i.e., are priorities set) in a transparent and widely accepted manner?  Organizations may get opinions on their managers’ competence, but not from the regulator.

The NRC does not evaluate plant or owner management competence.  They used to, or at least appeared to be trying to.  Remember the NRC senior management meetings, trending letters, and the Watch List?  While all the “problem” plants had material or work process issues, I believe a contributing factor was that the regulator had lost confidence in the competence of plant management.  This system led to the epidemic of shutdown plants in the 1990s.*  In reaction, politicians became concerned over the financial losses to plant owners and employees, and the Commission became concerned that the staff’s explicit/implicit management evaluation process was neither robust nor valid.

So the NRC replaced a data-informed subjective process with the Reactor Oversight Process (ROP), which looks at a set of “objective” performance indicators and a more subjective inference of cross-cutting issues: human performance, finding and fixing problems (CAP, a known), and management attention to safety and workers' ability to raise safety issues (SCWE, part known and part unknown).  I don’t believe that anyone, especially an outsider like a regulator, can get a reasonable picture of a plant’s safety culture from the “Rope.”  There most certainly are no leading or predictive safety performance indicators in this system.

External influences

These factors include changes in plant ownership, financial health of the owner, environmental regulations, employee perceptions about management’s “real” priorities, third-party assessments, local socio-political pressures and the like.  Any change in these factors could have some effect on safety culture.

Unknown Unknowns

These are the factors that affect safety culture but we don’t know about.  While a lot of smart people have invested significant time and effort in identifying factors that influence safety culture, new possibilities can still emerge.

For example, a new factor has just appeared on our radar screen: executive compensation.  Bob Cudlin has been researching the compensation packages for senior nuclear executives and some of the numbers are eye-popping, especially in comparison to historical utility norms.  Bob will soon post on his findings, including where safety figures into the compensation schemes, an important consideration since much executive compensation is incentive-based.

In addition, there may well be interactions (feedback loops and the like) between and among the known and unknown factors, perhaps varying in structure and intensity over time, that have varying impacts on the evolutionary arc of an organization’s safety culture.  Because of such factors, our hope that safety culture is essentially stable, with a relatively long decay time, may be false; safety culture may be susceptible to sudden drop-offs.

The Bottom Line

Can safety culture be regulated?  At the current state of knowledge, with some “known knowns” but no standard approach to measuring safety culture and no leading safety performance indicators, we’d have to say “Yes, but only to some degree.”  The regulator may claim to have a handle on an organization’s safety culture through SCWE observations and indirect evidence, but we don’t think the regulator is in a good position to predict or even anticipate the next issue or incident related to safety culture in the nuclear industry. 

* In the U.S. in 1997, one couldn’t swing a dead cat without hitting a shutdown nuclear power plant.  17 units were shut down during all or part of that year, out of a total population of 108 units.