
Friday, September 21, 2012

SafetyMatters and the Schein Model of Culture

A reader recently asked: “Do you subscribe to Edgar Schein's culture model?”  The short-form answer is a qualified “Yes.”  Prof. Schein has developed significant and widely accepted insights into the structure of organizational culture.  In its simplest form, his model of culture has three levels: the organization’s (usually invisible) underlying beliefs and assumptions, its espoused values, and its visible artifacts such as behavior and performance.  He describes the responsibility of management, through its leadership, to articulate the espoused values with policies and strategies and thus shape culture to align with management’s vision for the organization.  Schein’s is a useful mental model for conceptualizing culture and management responsibilities.*     

However, we have issues with the way some people have applied his work to safety culture.  For starters, there is the apparent belief that these levels are related in a linear fashion; more particularly, that management, by promulgating and reinforcing the correct values, can influence the underlying beliefs, and that together they will guide the organization to deliver the desired behaviors, i.e., the target level of safety performance.  This kind of thinking has problems.

First, it’s too simplistic.  Safety performance doesn’t arise only because of management’s espoused values and what the rest of the organization supposedly believes.  As discussed in many of our posts, we see a much more complex, multidimensional and interactive system that yields outcomes which reflect, in greater or lesser terms, desired levels of safety.  We have suggested that it is the totality of such outcomes that is representative of the safety culture in fact.** 

Second, it leads to attempts to measure and influence safety culture that are often ineffective and even misleading.  We wonder whether the heavy emphasis on values and leadership attitudes and behaviors - or traits - that the Schein model encourages creates a form-versus-substance trap.  This emphasis carries over to safety culture surveys - currently the linchpin for identifying and “correcting” deficient safety culture - and even doubles down by measuring the perception of attitudes and behaviors.  While attitudes and behaviors may in fact have a beneficial effect on the organizational environment in which people perform - we view them as good habits - we are not convinced they are the only determinants of the actions, decisions and choices made by the organization.  Is it possible that this approach creates an organization more concerned with how it looks and how it is perceived than with what it does?  If everyone is checking their safety likeness in the cultural mirror, might this distract from focusing on how and why actual safety-related decisions are being made?

We think there is good support for our skepticism.  For every significant safety event in recent years - the BP refinery fire, the Massey coal mine explosion, the shuttle disasters, the Deepwater oil rig explosion, and the many instances of safety culture issues at nuclear plants - the organization and senior management had been espousing as their belief that “safety is the highest priority.”  Clearly that was more illusion than reality.

To give a final upward thrust to the apple cart, we don’t think that the current focus on nuclear safety culture is primarily about culture.  Rather we see “safety culture” more as a proxy for management’s safety performance - and perhaps a back door for the NRC to regulate while disclaiming same.*** 


*  We have mentioned Prof. Schein in several prior blog posts: June 26, 2012, December 8, 2011, August 11, 2010, March 29, 2010, and August 17, 2009.

**  This past year we have posted several times on decisions as one type of visible result (artifact) of the many variables that influence organizational behavior.  In addition, please revisit two of Prof. Perin’s case studies, summarized here.  They describe well-intentioned people, who probably would score well on a safety culture survey, who made plant problems much worse through a series of decisions that had many more influences than management’s entreaties and staff’s underlying beliefs.

***  Back in 2006, the NRC staff proposed to enhance the ROP to more fully address safety culture, saying that “Safety culture includes . . . features that are not readily visible such as basic assumptions and beliefs of both managers and individuals, which may be at the root cause of repetitive and far-reaching safety performance problems.”  It wouldn’t surprise us if that’s an underlying assumption at the agency.  See L.A. Reyes to the Commissioners, SECY-06-0122 “Policy Issue Information: Safety Culture Initiative Activities to Enhance the Reactor Oversight Process and Outcomes of the Initiatives” (May 24, 2006) p. 7 ADAMS ML061320282.  

Monday, April 16, 2012

The Many Causes of Safety Culture Performance

The promulgation of the NRC’s safety culture policy statement and industry efforts to remain out in front of regulatory scrutiny have led to increasing attention to identifying safety culture issues and achieving a consistently strong safety culture.

The typical scenario for the identification of safety culture problems starts with performance deficiencies of one sort or another, identified by the NRC through the inspection process or internally through various quality processes.  When the circumstances of the deficiencies suggest that safety culture traits, values or behaviors are involved, safety culture may be deemed in need of strengthening and a standard prescription is triggered.  This usually includes the inevitable safety culture assessment, re-iteration of safety priorities, retraining in safety culture principles, etc.  The safety culture surveys generate anecdotal data based on individuals’ perceptions, focusing on whether safety culture traits are well established and on organizational “hot spots,” but they rarely delve deeply into underlying causes or ask “why” deficiencies exist.

This approach to safety culture seems to us to suffer from several limitations.  One is that the standard prescription does not necessarily yield improved, sustainable results, an indication that symptoms are being treated instead of causes.  And therein is the source of the other limitation, a lack of explicit consideration of the possible causes that have led to safety culture being deficient.  The standard prescribed fixes include an implicit presumption that safety culture issues are the result of inadequate training, insufficient reinforcement of safety culture values, and sometimes the catchall of “leadership” shortcomings. 

We think there are a number of potential causes that are important to ensuring strong safety culture but are not receiving the explicit attention they deserve.  Whatever the true causes, we believe that there will be multiple causes acting in a systemic manner - i.e., causes that interact and feed back in complex combinations to either reinforce or erode the safety culture state.  For now we want to use this post to highlight the need to think more about the reasons for safety culture problems and whether a “causal chain” exists.  Nuclear safety relies heavily on the concept of root causes as a means to understand the origin of problems, and on a belief that fixing the root cause will fix the problem.  But a linear approach may not be effective in understanding or addressing complex organizational dynamics, and concerted efforts in one dimension may lead to emergent issues elsewhere.

In upcoming posts we’ll explore specific causes of safety culture performance and elicit readers’ input on their views and experience.

Thursday, January 5, 2012

2011 End of Year Summary

We thought we would take this opportunity to do a little rummaging around in the Google analytics and report on some of the statistics for the safetymatters blog.

The first thing that caught our attention was the big increase in page views (see chart below) for the blog this past year.  We are now averaging more than 1000 per month and we appreciate every one of the readers who visits the blog.  We hope that the increased readership reflects that the content is interesting, thought provoking and perhaps even a bit provocative.  We are pretty sure people who are interested in nuclear safety culture cannot find comparable content elsewhere.

The following table lists the top ten blog posts.  The overwhelming favorite has been the "Normalization of Deviation" post from March 10, 2010.  We have consistently commented positively on this concept introduced by Diane Vaughan in her book The Challenger Launch Decision.  Most recently Red Conner noted in his December 8, 2011 post the potential role of normalization of deviation in contributing to complacency.  This may appear to be a bit of a departure from the general concept of complacency as primarily a passive occurrence.  Red notes that the gradual and sometimes hardly perceptible acceptance of lesser standards or non-conforming results may be more insidious than a failure to challenge the status quo.  We would appreciate hearing from readers on their views of “normalization”, whether they believe it is occurring in their organizations (and if so how is it detected?) and what steps might be taken to minimize its effect.



A common denominator among a number of the popular posts is safety culture assessment, whether in the form of surveys, performance indicators, or other means to gauge the current state of an organization.  Our sense is there is a widespread appetite for approaches to measuring safety culture in some meaningful way; such interest perhaps also indicates that current methods, heavily dependent on surveys, are not meeting needs.  What is even more clear in our research is the lack of initiative by the industry and regulators to promote or fund research into this critical area.   

A final observation:  The Google stats on frequency of page views indicate two of the top three pages were the “Score Decision” pages for the two decision examples we put forward.  They each had 100 or more views.  Unfortunately only a small percentage of the page views translated into scoring inputs for the decisions.  We’re not sure why, since scoring is anonymous and purely a matter of the reader’s judgment.  Having a larger data set from which to evaluate the decision scoring process would be very useful and we would encourage anyone who visited but did not score to reconsider.  And of course, anyone who hasn’t yet visited these examples, please do and see how you rate these actual decisions from operating nuclear plants.

Wednesday, November 23, 2011

Lawyering Up

When concerns are raised about the safety culture of an organization with very significant safety responsibilities, what’s one to do?  How about: bring in the lawyers.  That appears to be the news out of the Vit Plant* in Hanford, WA.  With considerable fanfare, Bechtel unveiled a new website dedicated to its management of the vit plant.  The site provides an array of policies, articles, reports, and messages regarding safety and quality.

One of the major pieces of information on the site is a recent assessment of the state of safety culture at the vit plant.**  The conclusion of the assessment is quite positive: “Overall, we view the results from this assessment as quite strong, and similar to prior assessments conduct [sic] by the Project.” (p. 16)  The prior assessments were the 2008 and 2009 Vit Plant Opinion Surveys.

However, our readers may also recall that earlier this year the Defense Nuclear Facilities Safety Board (DNFSB) issued a report concluding that the safety culture at the WTP is “flawed.”  In a previous post we quoted from the DNFSB report as follows:

“The HSS [DOE's Office of Health, Safety and Security] review of the safety culture on the WTP project 'indicates that BNI [Bechtel National Inc.] has established and implemented generally effective, formal processes for identifying, documenting, and resolving nuclear safety, quality, and technical concerns and issues raised by employees and for managing complex technical issues.'  However, the Board finds that these processes are infrequently used, not universally trusted by the WTP project staff, vulnerable to pressures caused by budget or schedule [emphasis added], and are therefore not effective.”

Thus the DNFSB clearly has a much different view of the state of safety culture at the vit plant than does DOE or Bechtel.  We note that the DNFSB report does not appear to be one of the numerous references available at the new website.  Links to the original DOE report and the recent assessment are provided.  There is also a November 17, 2011 message to all employees from Frank Russo, Project Director*** which introduces and summarizes the 2011 Opinion Survey on the project’s nuclear safety and quality culture (NSQC).  Neither the recent assessment nor the opinion survey addresses the issues raised by the DNFSB; it is as if the DNFSB review never happened.

What really caught our attention in the recent assessment is who wrote the report - a law firm.  Their assessment was based on in-depth interviews of 121 randomly selected employees using a 19 question protocol (the report states that the protocol is attached; however, it is not part of the web link).  But the law firm did not actually conduct the interviews - “investigators” from the BSII internal audit department did so and took notes that were then provided to the lawyers.  This may give new meaning to the concept of “defense in depth”.

The same law firm also analyzed the results from the 2011 Opinion Survey.  In the message to employees, Russo asserts that the law firm has “substantial experience in interpreting [emphasis added] NSQC assessments”.  He goes on to say that the questions for the survey were developed by the WTP Independent Safety and Quality Culture Assessment (ISQCA) Team.  In our view, this executive level team has without question “substantial experience” in safety culture.  Supposedly the ISQCA team was tasked with assessing the site’s culture - why then did they only develop the questions while a law firm interpreted the answers?  Strikes us as very odd.

We don’t know the true state of safety culture at the vit plant and unfortunately, the work sponsored by vit plant management does little to provide such insight or to fully vet and respond to the serious deficiencies cited in the DNFSB assessment.  If we were employees at the plant we would be anxious to hear directly from the ISQCA team. 

Reading the law firm report provides little comfort.  We have commented many times about the inherent limitations of surveys and interviews to solicit attitudes and perceptions.  When the raw materials are interview notes of a small fraction of the employees, and assessed by lawyers who were not present in the interviews, we become more skeptical.  Several quotes from the report related to the Employee Concerns Program illustrate our concern.

“The overwhelming majority of interviewees have never used ECP. Only 6.5% of the interviewees surveyed had ever used the program.  [Note: this means a total of nine interviewees.] There is a major difference between the views of interviewees with no personal experience with ECP and those who have used the program: the majority of the interviewees who have not used the program have a positive impression of the program, while more than half of the interviewees who have used the program have a negative impression of it.” (p. 5, emphasis added)

Our favorite quote out of the report is the following.  “Two interviewees who commented on the [ECP] program appear to have confused it with Human Resources.” (p. 6)  One only wonders if the comments were favorable.

Eventually the report gets around to a conclusion that we probably could not say any better.  “We recognize that an interview population of nine employees who have used the ECP in the past is insufficient to draw any meaningful conclusions about the program.” (p. 17)

We’re left with the following question: Why go about an assessment of safety culture in such an obtuse manner, one that is superficial in its “interpretation” of very limited data, laden with anecdotal material, and ultimately overreaching in its conclusions?


*  The "Vit Plant" is the common name for the Hanford Waste Treatment Plant (WTP).

**  Pillsbury Winthrop Shaw Pittman, LLP, "Assessment of a Safety Conscious Work Environment at the Hanford Waste Treatment Plant" (undated).  The report contains no information on when the interviews or analysis were performed.  Because a footnote refers to the 2009 Opinion Survey and a report addendum refers to an October, 2010 DOE report, we assume the assessment was performed in early-to-mid 2010.

*** WTP Comm, "Message from Frank: 2011 NSQC Employee Survey Results" (Nov. 17, 2011).  

Friday, November 4, 2011

A Factory for Producing Decisions

The subject of this post is the compelling insights of Daniel Kahneman into issues of behavioral economics and how we think and make decisions.  Kahneman is one of the most influential thinkers of our time and a Nobel laureate.  Two links are provided for our readers who would like additional information.  One is via the McKinsey Quarterly, a video interview* done several years ago.  It runs about 17 minutes.  The second is a current review in The Atlantic** of Kahneman’s just released book, Thinking Fast and Slow.

Kahneman begins the McKinsey interview by suggesting that we think of organizations as “factories for producing decisions” and therefore, think of decisions as a product.  This seems to make a lot of sense when applied to nuclear operating organizations - they are the veritable “River Rouge” of decision factories.  What may be unusual for nuclear organizations is the large percentage of decisions that directly or indirectly include safety dimensions, dimensions that can be uncertain and/or significantly judgmental, and which often conflict with other business goals.  So nuclear organizations have to deliver two products: competitively priced megawatts and decisions that preserve adequate safety.

To Kahneman, treating decisions as a product logically raises the issue of quality control as a means to ensure the quality of decisions.  At one level quality control might focus on mistakes and ensuring that they do not recur.  But Kahneman sees the quality function going further into the psychology of the decision process to ensure, e.g., that the best information is available to decision makers, that the talents of the group surrounding the ultimate decision maker are being used effectively, and that the decision-making environment is unbiased.

He notes that there is an enormous amount of resistance within organizations to improving decision processes. People naturally feel threatened if their decisions are questioned or second guessed.  So it may be very difficult or even impossible to improve the quality of decisions if the leadership is threatened too much.  But, are there ways to avoid this?  Kahneman suggests the “premortem” (think of it as the analog to a post mortem).  When a decision is being formulated (not yet made), convene a group meeting with the following premise: It is a year from now, we have implemented the decision under consideration, it has been a complete disaster.  Have each individual write down “what happened?”

The objective of the premortem is to legitimize dissent and minimize the innate “bias toward optimism” in decision analysis.  It is based on the observation that as organizations converge toward a decision, dissent becomes progressively more difficult and costly and people who warn or dissent can be viewed as disloyal.  The premortem essentially sets up a competitive situation to see who can come up with the flaw in the plan.  In essence everyone takes on the role of dissenter.  Kahneman’s belief is that the process will yield some new insights - that may not change the decision but will lead to adjustments to make the decision more robust. 

Kahneman’s ideas about decisions resonate with our thinking that the most useful focus for nuclear safety culture is the quality of organizational decisions.  It also contrasts with a recent instance of a nuclear plant (Browns Ferry) running afoul of the NRC and now being tagged with a degraded cornerstone and increased inspections.  As usual in the nuclear industry, TVA has called on an outside contractor to come in and perform a safety culture survey, to “... find out if people feel empowered to raise safety concerns….”***  It may be interesting to see how people feel, but we believe it would be far more powerful and useful to analyze a significant sample of recent organizational decisions to determine if the decisions reflect an appropriate level of concern for safety.  Feelings (perceptions) are not a substitute for what is actually occurring in the decision process.

We have been working to develop ways to grade whether decisions support strong safety culture, including offering opportunities on this blog for readers to “score” actual plant decisions.  In addition we have highlighted the work of Constance Perin including her book, Shouldering Risks, which reveals the value of dissecting decision mechanics.  Perin’s observations about group and individual status and credibility and their implications for dissent and information sharing directly parallel Kahneman’s focus on the need to legitimize dissent.  We hope some of this thinking ultimately overcomes the current bias in nuclear organizations to reflexively turn to surveys and the inevitable retraining in safety culture principles.


*  "Daniel Kahneman on behavioral economics," McKinsey Quarterly video interview (May 2008).

** M. Popova, "The Anti-Gladwell: Kahneman's New Way to Think About Thinking," The Atlantic website (Nov. 1, 2011).

*** A. Smith, "Nuke plant inspections proceeding as planned," Athens [Ala.] News Courier website (Nov. 2, 2011).

Friday, October 14, 2011

Decision No. 2 Scoring Results

In July we initiated a process for readers to participate in evaluating the extent to which actual decisions made at nuclear plants were consistent with a strong safety culture.  (The decision scoring framework is discussed here and the results for the first decision are discussed here.)  Example decision 2 involved a temporary repair to a Service Water System piping elbow.  Performance of a permanent code repair was postponed until the next cold shutdown or refuel outage.

We asked readers to assess the decision in two dimensions: potential safety impact and the strength of the decision, using anchored scales to quantify the scores.  The chart shows the scoring results.  Our interpretation of the results is as follows:

As with the first decision, most of the scores did coalesce in a limited range for each scoring dimension.  Based on the anchored scales, this meant most people thought the safety impact was fairly significant, likely due to the extended duration of the temporary repair, which could last until the next refuel outage.  The people who scored safety significance in this range also scored the decision strength as one that reasonably balanced safety and other operational priorities.  Our interpretation here is that people viewed the temporary repair as a reasonable interim measure, sufficient to maintain an adequate safety margin.  Notwithstanding that most scores were in the mid range, there were also decision strength scores as low as 3 (safety had lower priority than desired) and as high as 9 (safety had high priority where competing priorities were significant).  Across this range of decision strength scores, the scores for safety impact were consistent at 8.  This clearly illustrates the potential for varying perceptions of whether a decision is consistent with a strong safety culture.  The reasons for the variation could be based on how people felt about the efficacy of the temporary repair or simply different standards or expectations for how aggressively one should address the leakage problem.

It is not very difficult to see how this scoring variability could translate into similarly mixed safety culture survey results.  But unlike survey questions which tend to be fairly general and abstract, the decision scoring results provide a definitive focus for assessing the “why” of safety culture perceptions.  Training and self assessment activities could benefit from these data as well.  Perhaps most intriguing is the question of what level of decision strength is expected in an organization with a “strong” safety culture.  Is it 5 (reasonably balances…) or is something higher, in the 6 to 7 range, expected?  We note that the average decision strength for example 2 was about 5.2.
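The aggregation behind these observations is simple to sketch.  The following Python snippet is a minimal illustration, using hypothetical (safety impact, decision strength) score pairs rather than the actual reader data, of how mean decision strength and score spread might be computed for a set of responses:

```python
from statistics import mean

# Hypothetical (safety_impact, decision_strength) pairs on the anchored
# 1-10 scales; illustrative only, not the actual reader submissions.
scores = [(8, 5), (8, 5), (8, 6), (8, 3), (8, 9), (7, 5), (8, 5)]

impacts = [impact for impact, _ in scores]
strengths = [strength for _, strength in scores]

avg_strength = mean(strengths)                      # central tendency
strength_spread = max(strengths) - min(strengths)   # range of perceptions

print(f"mean decision strength: {avg_strength:.1f}")
print(f"strength spread: {strength_spread}")
```

Even this trivial summary surfaces the two quantities discussed above: where the scores coalesce, and how widely perceptions diverge around that center.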

Stay tuned for more on decision scoring.

Friday, July 15, 2011

Decision Scoring No. 2

This post introduces the second decision scoring example.  Click here, or the box above this post, to access the detailed decision summary and scoring feature.  

This example involves a proposed non-code repair to a leak in the elbow of service water system piping.  Opting for a non-code, temporary repair avoids a near term plant shutdown but defers the permanent repair for as long as 20 months.  In grading this decision for safety impact and decision strength, it may be helpful to think about what alternatives were available to this licensee.  We could think of several:

-    not perform a temporary repair as current leakage was within tech spec limits, but implement an augmented inspection and monitoring program to timely identify any further degradation.

-    perform the temporary repair as described but commit to perform the permanent repair within a shorter time period, say 6 months.

-    immediately shut down and perform the code repair.

Each of these alternatives would likely affect the potential safety impact of this leak condition and influence the perception of the decision strength.  For example a decision to shut down immediately and perform the code repair would likely be viewed as quite conservative, certainly more conservative than the other options.  Such a decision might provide the strongest reinforcement of safety culture.  The point is that none of these decisions is necessarily right or wrong, or good or bad.  They do however reflect more or less conservatism, and ultimately say something about safety culture.

Wednesday, July 13, 2011

Decision No. 1 Scoring Results


We wanted to present the results to date for the first of the decision scoring examples.  (The decision scoring framework is discussed here.)  This decision involved the replacement of a bearing in the air handling unit for a safety related pump room.  After declaring the air unit inoperable, the bearing was replaced within the LCO time window.

We asked readers to assess the decision in two dimensions: potential safety impact and the strength of the decision, using anchored scales to quantify the scores.  The chart to the left shows the scoring results with the size of the data symbols related to the number of responses.  Our interpretation of the results is as follows:

First, most of the scores did coalesce in the mid ranges of each scoring dimension.  Based on the anchored scales, this meant most people thought the safety impact associated with the air handling unit problem was fairly minimal and did not extend out in time.  This is consistent with the fact that the air handler bearing was replaced within the LCO time window.  The people who scored safety significance in this mid range also scored the decision strength as one that reasonably balanced safety and other operational priorities.  This seems consistent to us with the fact that the licensee had also ordered a new shaft for the air handler and would install it at the next outage - the new shaft being necessary for addressing the cause of the bearing problem.  Notwithstanding that most scores were in the mid range, we find it interesting that there is still a spread from 4-7 in the scoring of decision strength, and a somewhat smaller spread of 4-6 in safety impact.  This is an attribute of decision scores that might be tracked closely to identify situations where the spreads change over time - perhaps signaling either disagreement regarding the merits of the decisions or a need for better communication of the bases for decisions.
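The spread-tracking idea could be automated in a few lines.  The sketch below, using made-up strength scores and an assumed flagging threshold (both are illustrative, not part of our actual framework), shows how decisions whose scores diverge widely might be flagged for follow-up:

```python
# Hypothetical decision-strength scores collected per decision.
# A widening spread may signal disagreement about a decision's merits
# or a need to better communicate its basis.
decision_scores = {
    "decision_1": [4, 5, 5, 6, 7, 4],
    "decision_2": [3, 5, 5, 6, 8, 9],
}

SPREAD_THRESHOLD = 4  # assumed example value, not a calibrated limit

for name, scores in decision_scores.items():
    spread = max(scores) - min(scores)   # range of reader perceptions
    flagged = spread > SPREAD_THRESHOLD
    print(f"{name}: spread={spread} flagged={flagged}")
```

A flagged decision would not itself indicate weak safety culture; it would simply mark where perceptions diverge enough to warrant asking "why."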

Second, while not a definitive trend, it is apparent that in the mid-range scores people tended to see decision strength in terms of safety impact.  In other words, in situations where the safety impact was viewed as greater (e.g., 6 or so), the perceived strength of the decision was viewed as somewhat less than when the safety impact was viewed as somewhat lower (e.g., 4 or so).  This trend was emphasized by the scores that rated decision strength at 9 based on safety impact of 2.  There is intrinsic logic to this and also may highlight to managers that an organization’s perception of safety priorities will be directly influenced by their understanding of the safety significance of the issues involved.  One can also see the potential for decision scores “explaining” safety culture survey results which often indicate a relatively high percentage of respondents “somewhat agreeing” that e.g., safety is a high priority, a smaller percentage “mostly agreeing” and a smaller percentage yet, “strongly agreeing”. 

Third, there were some scores that appeared to us to be “outside the ballpark”.  Specifically, the scores rating safety impact at 10 did not seem consistent with our reading of the air handling unit issue, including the note indicating that the licensee had assessed the safety significance as minimal.

Stay tuned for the next decision scoring example and please provide your input.

Tuesday, June 21, 2011

Decisions….Decisions

Safety Culture Performance Measures

Developing forward looking performance measures for safety culture remains a key challenge today and is the logical next step following the promulgation of the NRC’s policy statement on safety culture.  The need remains high, as the NRC continues to identify safety culture issues only after weaknesses have developed and ultimately manifested in traditional (lagging) performance indicators.

Current practice has continued to rely on safety culture surveys which focus almost entirely on attitudes and perceptions about safety.  But other cultural values are also present in nuclear operations - such as meeting production goals - and it is the rationalization of competing values on a daily basis that is at the heart of safety culture.  In essence decision makers are pulled in several directions by these competing priorities and must reach answers that accord safety its appropriate priority.

Our focus is on safety management decisions made every day at nuclear plants; e.g., operability, exceeding LCO limits, LER determinations, JCOs, as well as many determinations associated with problem reporting, and corrective action.  We are developing methods to “score” decisions based on how well they balance competing priorities and to relate those scores to inference of safety culture.  As part of that process we are asking our readers to participate in the scoring of decisions that we will post each week - and then share the results and interpretation.  The scoring method will be a more limited version of our developmental effort but should illustrate some of the benefits of a decision-centric view of safety culture.

Look in the right column for the links to Score Decisions.  They will take you to the decision summaries and score cards.  We look forward to your participation and welcome any questions or comments.

Friday, March 11, 2011

Safety Culture Performance Indicators

In our recent post on safety culture management in the DOE complex, we concentrated on documents created by the DOE team.  But there was also some good material in the references assembled by the team.  For example, we saw some interesting thoughts on performance indicators in a paper by Andrew Hopkins, a sociology professor at The Australian National University.*  Although the paper was prepared for an oil and gas industry conference, the focus on overall process safety has parallels with nuclear power production.

Contrary to the view of many safety culture pundits, including ourselves, Professor Hopkins is not particularly interested in separating lagging from leading indicators; he says that trying to separate them may not be a useful exercise.  Instead, he is interested in a company’s efforts to develop a set of useful indicators that in total measure or reflect the state of the organization’s risk control system.  In his words, “. . . the important thing is to identify measures of how well the process safety controls are functioning.  Whether we call them lead or lag indicators is a secondary matter.  Companies I have studied that are actively seeking to identify indicators of process safety do not make use of the lead/lag distinction in any systematic way. They use indicators of failure in use, when these are available, as well as indicators arising out their own safety management activities, where appropriate, without thought as to whether they be lead or lag. . . . Improving performance in relation to these indicators must enhance process safety. [emphasis added]” (p. 11)

Are his observations useful for people trying to evaluate the overall health of a nuclear organization’s safety culture?  Possibly.  Organizations use a multitude of safety culture assessment techniques, including (but not limited to) interviews, observations, surveys, assessments of the CAP and other administrative processes, and management metrics such as maintenance performance, all believed to be correlated with safety culture.  Maybe it would be OK to dial back our concern with identifying which of them are leading (if any) and which are lagging.  More importantly, perhaps we should be asking how confident we are that an improvement in any one of them implies that the overall safety culture is in better shape. 

*  A. Hopkins, "Thinking About Process Safety Indicators," Working Paper 53, National Research Centre for OHS Regulation, Australian National University (May 2007).  We have referred to Professor Hopkins’ work before (here and here).

Thursday, March 3, 2011

Safety Culture in the DOE Complex

This post reviews a Department of Energy (DOE) effort to provide safety culture assessment and improvement tools for its own operations and those of its contractors.

Introduction

The DOE is responsible for a vast array of organizations that work on its programs.  These organizations range from very small to huge in size and include private contractors, government facilities, specialty shops, niche manufacturers, labs and factories.  Many are engaged in high-hazard activities (including nuclear), so DOE is interested in promoting an effective safety culture across the complex.

To that end, a task team* was established in 2007 “to identify a consensus set of safety culture principles, along with implementation practices that could be used by DOE . . .  and their contractors. . . . The goal of this effort was to achieve an improved safety culture through ISMS [Integrated Safety Management System] continuous improvement, building on operating experience from similar industries, such as the domestic and international commercial nuclear and chemical industries.”  (Final Report**, p. 2)

It appears the team performed most of its research during 2008, conducted a pilot program in 2009 and published its final report in 2010.  Research included reviewing the space shuttle and Texas City disasters, the Davis-Besse incident, works by gurus such as James Reason, and guidance and practices published by NASA, NRC, IAEA, INPO and OSHA.

Major Results

The team developed a definition of safety culture and described a process whereby using organizations could assess their safety culture and, if necessary, take steps to improve it.

The team’s definition of safety culture:

“An organization’s values and behaviors modeled by its leaders and internalized by its members, which serve to make safe performance of work the overriding priority to protect the workers, public, and the environment.” (Final Report, p. 5)

After presenting this definition, the report goes on to say “The Team believes that voluntary, proactive pursuit of excellence is preferable to regulatory approaches to address safety culture because it is difficult to regulate values and behaviors. DOE is not currently considering regulation or requirements relative to safety culture.” (Final Report, pp. 5-6)

The team identified three focus areas that were judged to have the most impact on improving safety and production performance within the DOE complex: Leadership, Employee/Worker Engagement, and Organizational Learning. For each of these three focus areas, the team identified related attributes.

The overall process for a using organization is to review the focus areas and attributes, assess the current safety culture, select and use appropriate improvement tools, and reinforce results. 

The list of tools to assess safety culture includes direct observations, causal factors analysis (CFA), surveys, interviews, review of key processes, performance indicators, Voluntary Protection Program (VPP) assessments, stream analysis and Human Performance Improvement (HPI) assessments.***  The Final Report also mentioned performance metrics and workshops. (Final Report, p. 9)

Tools to improve safety culture include senior management commitment, clear expectations, ISMS training, managers spending time in the field, coaching and mentoring, Behavior Based Safety (BBS), VPP, Six Sigma, the problem identification process, and HPI.****  The Final Report also mentioned High Reliability Organization (HRO), Safety Conscious Work Environment (SCWE) and Differing Professional Opinion (DPO). (Final Report, p. 9)  Whew.

The results of a one-year pilot program at multiple contractors were evaluated and the lessons learned were incorporated in the final report.

Our Assessment

Given the diversity of the DOE complex, it’s obvious that no “one size fits all” approach is likely to be effective.  But it’s not clear that what the team has provided will be all that effective either.  The team’s product is really a collection of concepts and tools culled from the work of outsiders, combined with DOE’s existing management programs, and repackaged as a combination of overall process and laundry lists.  Users are left to determine for themselves exactly which sub-set of tools might be useful in their individual situations.

It’s not that the report is bad.  For example, the general discussion of safety culture improvement emphasizes the importance of creating a learning organization focused on continuous improvement.  In addition, a major point they got right was recognizing that safety can contribute to better mission performance.  “The strong correlation between good safety performance with good mission performance (or productivity or reliability) has been observed in many different contexts, including industrial, chemical, and nuclear operations.” (Final Report, p. 20)

On the other hand, the team has adopted the works of others but does not appear to recognize how, in a systems sense, safety culture is interwoven into the fabric of an organization.  For example, feedback loops from the multitude of possible interventions to overall safety culture are not even mentioned.  And this is not a trivial issue.  An intervention can provide an initial boost to safety culture but then safety culture may start to decay because of saturation effects, especially if the organization is hit with one intervention after another.

In addition, some of the major, omnipresent threats to safety culture do not get the emphasis they deserve.  Goal conflict, normalization of deviance and institutional complacency are included in a list of issues from the Columbia, Davis-Besse and Texas City events (Final Report, pp. 13-15) but the authors do not give them the overarching importance they merit.  Goal conflict, often expressed as safety vs mission, should obviously be avoided but its insidiousness is not adequately recognized; the other two factors are treated in a similar manner. 

Two final picky points:  First, the report says it’s difficult to regulate behavior.  That’s true but companies and government do it all the time.  DOE could definitely promulgate a behavior-based safety culture regulatory requirement if it chose to do so.  Second, the final report (p. 9) mentions leading (vs lagging) indicators as part of assessment but the guidelines do not provide any examples.  If someone has some useful leading indicators, we’d definitely like to know about them. 

Bottom line, the DOE effort draws from many sources and probably represents consensus building among stakeholders on an epic scale.  However, the team provides no new insights into safety culture and, in fact, may not be taking advantage of the state of the art in our understanding of how safety culture interacts with other organizational attributes. 


*  Energy Facility Contractors Group (EFCOG)/DOE Integrated Safety Management System (ISMS) Safety Culture Task Team.

**  J. McDonald, P. Worthington, N. Barker, G. Podonsky, “EFCOG/DOE ISMS Safety Culture Task Team Final Report”  (Jun 4, 2010).

***  EFCOG/DOE ISMS Safety Culture Task Team, “Assessing Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).

****  EFCOG/DOE ISMS Safety Culture Task Team, “Activities to Improve Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, like nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy so we have excerpted and summarized points that we believe will be interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture in four main factors:  Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions and the questions usually vary for different departments.

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the survey factors and sub-factors are not related to external measurements of safety performance, but the survey-takers’ perceptions of risk in their work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.

Our Take on SCISMS

We have found summary level graphics to be very important in communicating key information to clients and the Safety Culture Grid appears like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and the perceptual alignment of managers and employees with respect to the organization’s safety culture.   Grids can be constructed to show findings across factors or departments within one company or across multiple companies for an industry comparison. 

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25)  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
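To put numbers like these in perspective, here is a minimal sketch of how such a factor-outcome correlation is computed and how much shared variance it actually implies.  The data are entirely made up; the respondent count, scale values and noise level are our own assumptions, tuned only to land in the moderate range the report describes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical number of survey respondents

# Hypothetical factor scale score (e.g., the mean of several Likert items)
factor = rng.normal(3.5, 0.6, n)
# Hypothetical outcome scale partially driven by the factor, plus noise
outcome = 0.5 * factor + rng.normal(0.0, 0.5, n)

# Pearson correlation between the two scales
r = np.corrcoef(factor, outcome)[0, 1]
shared_variance = r ** 2  # fraction of variance the two scales share

print(round(r, 2), round(shared_variance, 2))
```

Note that even a correlation of .6, at the top of the reported range, means the factor and outcome share only about a third of their variance - which is one reason we want linkage to external performance measures rather than correlations among perceptions of the same respondents.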

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report fairly clearly describes two different perspectives of safety culture: the socio-anthropological and organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp.7-8) 

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2876 respondents, not just a single facility or company.  They then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Program components and found them generally consistent.  We have no issue with that conclusion. 
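For readers unfamiliar with the technique, the factor-extraction step INPO describes can be sketched as follows.  The survey responses here are random placeholders (we do not have INPO’s raw data, item count or scale); only the mechanics of reducing many survey items to a handful of components are illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: 2876 respondents x 60 hypothetical Likert items (1-5)
responses = rng.integers(1, 6, size=(2876, 60)).astype(float)

# Principal component analysis via SVD of the centered data matrix
centered = responses - responses.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained_variance = s ** 2 / np.sum(s ** 2)  # variance share per component

# Keep the first nine components, mirroring INPO's nine factors
n_factors = 9
loadings = Vt[:n_factors].T           # how each item loads on each factor
scores = centered @ Vt[:n_factors].T  # each respondent's score on each factor

print(loadings.shape, scores.shape)   # (60, 9) (2876, 9)
```

The factor scores, not the raw item responses, are then what get correlated against the performance measures - so any interpretation inherits whatever judgment went into naming and retaining the components.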

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  I assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which I am aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  I take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying the value of safety culture surveys is limited, but can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys, and it, the NRC and INPO all have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure over a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  So what evidence exists that there is any relationship between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scrams data for at least anecdotal evidence of how non-conservative decision making contributed to x number of scrams.  We would be surprised to learn that anyone has followed the string on any scram events all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Club Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.

Wednesday, October 20, 2010

Perception and Reality

In our October 18, 2010 post on how perception and reality may factor into safety culture surveys we ended with a question about the limits of the usefulness of surveys without a separate assessment to confirm the actual conditions within the organization.  Specifically, it makes us wonder, can a survey reliably distinguish between the following three situations:

-    an organization with strong safety culture with positive survey perceptions;
-    an organization with compromised safety culture but still reporting positive survey perceptions due to imperfect knowledge or other motivations;
-    an organization with compromised safety culture but still reporting positive survey perceptions due to complacency or normalization of lesser standards.

In our August 23, 2010 post we had raised a similar issue as follows:

“the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.”

With what we know today it appears to us that safety culture survey results alone should not be used to reach conclusions about the state of safety culture in the organization or as a predictor of future safety performance.  Even comparisons across plants and the industry seem open to question due to the potential for significant and perhaps unknowable variation of perceptions of those surveyed. 

How would we see surveys contributing to knowledge of the safety culture in an organization?  In general we would say that certain survey questions can provide useful information where the objective is to elicit the perceptions of employees (versus a factual determination) on certain issues.  There is still the impediment that some employees’ perceptions will be colored, e.g., they will discern the “right” answer or will be motivated by other factors to bias their answers. 

What kind of questions might be perception-based?  We would say in areas where the perceptions of the organization are as important or of as much interest as the actual reality.  For example, whether the organization perceives that there is a bias for production goals over safety goals.  The existence of such a perception could have wide ranging impacts on individuals including their willingness to raise concerns or rigorously pursue their causes.  Even if the perceptions derived from the survey are not consistent with reality, it is important to understand that the perception exists and take steps to correct it.  Questions that go to ascertaining trust in management would also be useful as trust is largely a matter of perception.  It is not enough for management to be trustworthy.  Management must also be perceived as trustworthy to realize its benefit.   

The complication is that perception and reality can pull in different directions.  Although reality is always present, perception tugs at it and in many instances shapes it.  If this relationship is not properly managed, perception will take over and will lessen, if not eliminate, the role of reality.

This suggests that a useful goal or trait of safety culture is to bring perception as close to reality as possible.  Perceptions that are inflated or unduly negative only distort the dynamics of safety management.  As with most complex systems, perceptions generally exist with some degree of time delay relative to actual reality.  Things improve, but perceptions lag as it takes time for information to flow, attitudes to adjust to new information, and new perceptions to take hold.  Using perception data from surveys combined with the forensics of assessments can provide the necessary calibration to bring perception and reality into alignment.
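The time delay between reality and perception can be pictured with a toy first-order lag model, in the spirit of the systems view we have advocated.  The time series and adjustment rate below are, of course, invented purely for illustration:

```python
# Toy first-order lag: each period, perception closes a fixed fraction of
# the gap to reality.  The series and adjustment rate are invented.
reality = [1.0] * 5 + [2.0] * 15   # a step improvement at period 5
rate = 0.2                          # fraction of the gap closed per period

perception = [reality[0]]
for r in reality[1:]:
    prev = perception[-1]
    perception.append(prev + rate * (r - prev))

# Perception approaches the improved reality only gradually
print(round(perception[5], 2), round(perception[-1], 2))
```

Even in this crude sketch, perception is still short of reality many periods after the improvement - a reminder that survey results taken during a transition may reflect where the organization was, not where it is.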

Monday, October 18, 2010

Perception Is/Is Not Reality?

This post will continue our thoughts on the use of safety culture surveys.  The Oxford Dictionary says reality is the state of things as they actually exist, rather than as they may appear or may be thought to be.  Another theory holds that there is no objective reality - that there simply and literally is no reality beyond the perceptions, beliefs and attitudes we each have about it.  In other words, “perception is reality”.  So, when a safety culture survey is conducted, what reality is it measuring?  Is the purpose of the survey to determine an “objective” reality based on what an informed and knowledgeable person would say?  Or is the purpose simply to catalog the range of perceptions of reality held by those surveyed, whether accurate or not?  Why does it matter?

In our August 11, 2010 post we noted that UK researcher Dr. Kathryn Mearns referred to safety culture surveys as “perception surveys”, since they focus on people’s perceptions of attitudes, values and behaviors.  In a followup post on August 27, 2010, reporting subsequent communications with Dr. Mearns, we quoted her as follows:

“I see the survey results as a ‘temperature check’ but it needs a more detailed diagnosis to find out what really ails the safety culture.”

If one agrees that surveys are perception-based, it creates something of a dilemma as to which reality is of interest.  If “things as they actually exist” is important, then surveys alone may be of limited value, even misleading, without thorough diagnostic assessments, which is Dr. Mearns' point.  On the other hand, if perception itself is important, then surveys offer a window into that reality.  We think both realities have their place.

We find some empirical support for these ideas from the results of a recent safety culture assessment at Nuclear Fuel Services.*  The report is quite lengthy (over 300 pages) and exhaustive in its detail.  The assessment was done as part of a commitment by the owners of Nuclear Fuel Services (NFS) to the NRC and in response to ongoing safety performance issues at its facilities.  The assessment was performed by an independent team and included a safety culture survey.  It is the survey results that we focus on.

In reporting the results of the survey, the team identified a number of cautions as to the interpretation of NFS workforce perceptions.  The team found that survey numerical ratings were inflated due to the lack of an accurate frame of reference or adequate understanding of a particular cultural attribute.  This conclusion was based on the findings of the overall assessment project.  The team found the workforce perceptions to be “generally (and in some cases significantly) more positive than warranted” (p. 40) or justified by actual performance.

We found these results to be interesting in several respects.  First there is the acknowledgment that surveys simply compile the perceptions of individuals in the organization.  In the NFS case the assessment team concluded that the reported perceptions were inaccurate based on the team’s own detailed analysis of the organization.

Perhaps more interesting was that this inherent subjectivity of perceptions was attributed in this project to the lack of knowledge and frame of reference of the NFS staff, specifically related to standards of excellence associated with commercial nuclear sites.  This resonates with an observation from our August 23 post that “workers who had been through an accident recognized a relatively safer (riskier) environment better than workers who had not.”  In other words, people’s perceptions are influenced by the limits of their own experiences and context.  Makes sense.

The NFS assessment team goes on to indicate that the results of a prior safety culture survey a year earlier were also compromised by the very time frame in which it was administered.  “It is reasonable to assume that the survey numerical ratings would have been lower if the survey had been administered after the workforce had become aware of the facts associated with the series of operational events that occurred” [prior to the survey].  (p. 41)  We would add there are probably numerous other factors that could easily bias perceptions, e.g., people being sensitive to what the “right answer” is and responding on that basis; complacency; the effect of externalities such as a significant corporate initiative dependent on the performance of the nuclear business; normalization of deviation; job-related incentives, etc.

We think it is very likely that the assessment team was correct in discounting the NFS survey results.  The question is, can any other survey results be relied on absent independent calibration by detailed organizational assessments?  We will take this up in a forthcoming post.

*  "Information to Fulfill Confirmatory Order, Section V, Paragraph 3.e" (Jun 29, 2010)  ADAMS Accession Number ML101820096.

Monday, September 13, 2010

Here We Go Again

Back on March 22, 2010 we posted about the challenge of addressing safety culture issues through one-dimensional approaches such as focusing on leadership or reiterating training materials.  We observed that the conventional wisdom that culture is simply leadership driven does not address the underlying complexity of culture dynamics.  San Onofre may be the most recent case in point.  In 2008 new leadership was brought in at the station in response to ongoing culture issues.  Safety culture improved somewhat, at least according to surveys, then resumed its decline.  Last week leadership was changed again following continued NRC pressure on cross-cutting issues.  Perhaps ironically, one of the more recent actions taken at the station in response to continuing allegations of a “chilled environment” was….leadership training.*

The evolution of events at San Onofre also reinforces another observation we have made about the reliance on safety culture surveys.  As with just about all similar situations, the prescription for weaknesses in “cornerstone” issues by both licensees and the NRC is: conduct a survey.  Looking back in the San Onofre case, the following was determined in its October 2009 survey:

Overall, the Independent Safety Culture Assessment determined that “the safety culture at SONGS is sufficient to support plant operations.”

SCE also reported to the NRC that the survey showed:

Site management is communicating strong and consistent safety messages, including:

-    Safety is the first priority
-    Site personnel are encouraged and expected to identify and report potential safety concerns**

The NRC then conducted additional inspections in early 2010.  “The inspection team determined that the safety culture at SONGS was adequate; however, several areas were identified that needed improvement .... All of the individuals interviewed expressed a willingness to raise safety concerns and were able to provide multiple examples of avenues available, such as their supervisor, writing a notification, other supervisors/managers, or the Nuclear Safety Concerns Program; however, approximately 25% of those interviewed indicated that they perceived that individuals would be retaliated against if they went to the NRC with a safety concern if they were not satisfied with their management’s response.”***

“When asked about the 2009 nuclear safety culture assessment, all of the individuals interviewed remembered having attended a briefing session on the results. However, only the general result of ‘safety culture was adequate’ was recalled by those interviewed.”***

* "SONGS Hit with Stern NRC Rebuke," San Clemente Times (March 2, 2010).

** Slides presented at Nov 5, 2009 SCE-NRC meeting, attached to NRC Meeting Summary dated Nov 20, 2009, ADAMS Accession Number ML093240212.

*** Letter dated Mar 2, 2010 from E. Collins (NRC) to R.T. Ridenoure (SCE), subject "Work Environment Issues at San Onofre Nuclear Generating Station—Chilling Effect," ADAMS Accession Number ML100601272.

Friday, August 27, 2010

Safety Climate Surveys (Part 2)

On August 23, 2010 we posted on a paper* reporting on a safety climate survey conducted at a number of off-shore oil facilities.  We noted that the paper presented a rigorous analysis of the survey data and also discussed the limitations of the data and the analysis.  Our Bob Cudlin has been in contact with the paper’s lead author, who provided a candid assessment of how survey data should be used.

In a private message to Bob, Dr. Mearns gave a general warning against over-inference from survey data and findings.  She says, “I think safety climate surveys have their place but they need to be done properly and unfortunately, many attempts to measure safety climate are poorly executed.   The data obtained from surveys are simply numbers but they don’t tell you much about what is actually going on within the organisation or the team regarding safety.  I see the survey results as a ‘temperature check’ but it needs a more detailed diagnosis to find out what really ails the safety culture.” 

We couldn’t have said it better ourselves. 

*  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments.”  Safety Science 41(8) 2003 (Oct) pp 641-680.

Monday, August 23, 2010

Safety Climate Surveys (Part 1)

In our August 11, 2010 post we quoted from a paper* addressing safety culture on off-shore oil facilities.  While the paper is a bit off-topic for SafetyMatters (the focus is more on industrial safety and individual, as opposed to group, perceptions), it provides a very good example of how safety climate survey data should be collected and rigorously analyzed, and hypotheses tested.  In addition, one of the findings is quite interesting.

The researchers knew from survey data which respondents had experienced an accident at a facility (not just those facilities where they were currently working), and which respondents had not.  They also knew which of the surveyed facilities had a historically higher proportion of accidents and which had a lower proportion.  “In this case, . . . respondents who had not experienced an accident provided significantly less favorable scores on installations with low accident proportions. Additionally, respondents who had experienced an accident provided significantly less favorable scores on installations with high accident proportions.” (p. 656)  In other words, workers who had been through an accident were better than inexperienced workers at recognizing whether their environment was relatively safer or riskier.  While this is certainly more evidence that experience is the best teacher, we think it might have an implication for the commercial nuclear industry.
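For readers who want to see the shape of this finding, here is a minimal sketch of the crossover pattern the quote describes.  The numbers are invented for illustration only (they are not the Mearns et al. data); the point is simply that the two respondent groups rate the two types of installation in opposite directions.

```python
# Hypothetical illustration of the crossover interaction reported by
# Mearns et al.: climate scores depend jointly on a respondent's accident
# experience and the installation's accident history.  All scores below
# are invented (1-5 scale) solely to reproduce the pattern in the quote.
from statistics import mean

# (respondent_had_accident, installation_accident_rate, climate_score)
responses = [
    # No-accident respondents scored LOW-accident installations less favorably.
    (False, "low", 3.1), (False, "low", 3.0), (False, "low", 3.2),
    (False, "high", 3.8), (False, "high", 3.7), (False, "high", 3.9),
    # Accident-experienced respondents scored HIGH-accident installations less favorably.
    (True, "low", 3.9), (True, "low", 4.0), (True, "low", 3.8),
    (True, "high", 2.9), (True, "high", 3.0), (True, "high", 3.1),
]

def group_mean(had_accident, rate):
    """Mean climate score for one experience/installation cell."""
    return mean(s for exp, r, s in responses
                if exp == had_accident and r == rate)

# Each group gives its lower scores to a different type of installation,
# which is why the effect only appears when the two factors are crossed.
print(f"no accident,  low-rate sites: {group_mean(False, 'low'):.2f}")
print(f"no accident, high-rate sites: {group_mean(False, 'high'):.2f}")
print(f"accident,     low-rate sites: {group_mean(True, 'low'):.2f}")
print(f"accident,    high-rate sites: {group_mean(True, 'high'):.2f}")
```

A simple comparison of cell means like this would miss nothing here, but note that averaging each factor alone would wash the effect out entirely, which is one reason survey summaries that report only overall scores can mislead.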

Unlike offshore oil workers, the overwhelming majority of nuclear power plant employees have never experienced a significant incident (we’re excluding ordinary personnel mishaps).  Thus, their work experience is of limited use in helping them assess just how strong their safety culture actually is.  Does this make these employees more vulnerable to complacency, or to slowly running off the rails a la NASA?

*  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments.”  Safety Science 41(8) 2003 (Oct) pp 641-680.