
Wednesday, August 11, 2010

Down Under Perspective on Surveys

Now, from Australia, we have come across more research related to some of the key findings we discussed in our August 2, 2010 post “Mission Impossible”. Recall from that post that research comparing the results of safety surveys taken prior to a significant event at an offshore oil platform with post-event investigations revealed significant differences in cultural attributes.

This 2006 paper* draws on a variety of other published works and the author’s own experience in analyzing major safety events. Note that the author refers to safety culture surveys as “perception surveys”, since they focus on people’s perceptions of attitudes, values and behaviors.

“The survey method is well suited to studying individual attitudes and values and it might be thought that the method is thereby biased in favour of a definition of culture in these terms. However, the survey method is equally suited to studying practices, or ‘the way we do things around here’. The only qualification is that survey research of ‘the way we do things around here’ necessarily measures people’s perceptions rather than what actually happens, which may not necessarily coincide.” (p.5) As we have argued, and this paper agrees, it is actual behaviors and outcomes that are most important. The question is, can actual behaviors be discerned or predicted on the basis of surveys? The answer is not clear.

“The question of whether or how the cultures so identified [e.g., by culture surveys] impact on safety is a separate question. Mearns and co-workers argue that there is some, though rather limited, evidence that organisations which do well in safety climate surveys actually have fewer accidents” (p. 14 citing Mearns et al)**
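The evidence Mearns and co-workers cite is essentially correlational. As a minimal sketch of what such a check involves (the unit-level figures below are invented for illustration, not taken from the study), one would correlate each installation's mean survey score with its subsequent accident count:

# Illustrative only: hypothetical figures, not data from Mearns et al. (2003).
# Each tuple: (mean safety-climate survey score, accidents in the following year)
from statistics import correlation  # requires Python 3.10+

installations = [
    (4.8, 1), (4.2, 3), (5.1, 0), (3.9, 4),
    (4.5, 2), (5.3, 1), (4.0, 3), (4.9, 1),
]

scores = [score for score, _ in installations]
accidents = [count for _, count in installations]

# Pearson correlation; a modest negative value would be consistent with
# "some, though rather limited, evidence" that better climate scores
# accompany fewer accidents.
r = correlation(scores, accidents)
print(f"survey score vs. accident count: r = {r:.2f}")

Even a strong negative correlation in data like this would show association, not that surveys predict behavior, which is the open question above.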

I liked a distinction made early on in the paper: it is better to ascertain an organization’s “culture” and then assess the impact of that culture on safety, than to directly assess “safety culture”. This approach emphasizes the internal dynamics and the interaction of values and safety priorities with other competing business and environmental pressures. As this paper notes, “. . . the survey method tells us very little about dynamic processes - how the organisation goes about solving its problems. This is an important limitation. . . . Schein makes a similar point when he notes that members of a culture are most likely to reveal themselves when they have problems to solve. . . .” (p. 6)

*  Andrew Hopkins, "Studying Organisational Cultures and their Effects on Safety," paper prepared for presentation to the International Conference on Occupational Risk Prevention, Seville, May 2006 (National Research Centre for OHS Regulation, Australian National University).

**  Mearns K, Whitaker S & Flin R, “Safety climate, safety management practices and safety performance in offshore environments”. Safety Science 41(8) 2003 (Oct) pp 641-680.

Monday, August 2, 2010

Mission Impossible

We are back to the topic of safety culture surveys with a new post regarding an important piece of research by Dr. Stian Antonsen of the Norwegian University of Science and Technology.  He presents an empirical analysis of the following question:

    “. . . whether it is possible to ‘predict’ if an organization is prone to having major accidents on the basis of safety culture assessments.”*

We have previously posted a number of times on the use and efficacy of safety culture assessments.  As we observed in an August 17, 2009 post, “Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues and even as an ongoing prophylactic tool.  But, are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?”

Safety culture surveys have become ubiquitous across the U.S. nuclear industry.  This reliance on surveys may be justified, Antonsen observes, to the extent they provide a “snapshot” of “attitudes, values and perceptions about organizational practices…”  But Antonsen cautions that the ability of surveys to predict organizational accidents has not been established empirically and cites some researchers who suspect surveys “‘invite respondents to espouse rationalisations, aspirations, cognitions or attitudes at best’ and that ‘we simply don’t know how to interpret the scales and factors resulting from this research’”.  Furthermore, surveys present questions where the favorable or desired answers may be obvious.  “The risk is, therefore, that the respondents’ answers reflect the way they feel they should feel, think and act regarding safety, rather than the way they actually do feel, think and act…”  As we have stated in a white paper** on nuclear safety management, “it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced.”

To investigate the utility of safety culture surveys Antonsen compared results of a safety survey conducted of the employees of an offshore oil platform (Snorre Alpha) prior to a major operational incident, with the results of detailed investigations and analyses following the incident.  The survey questionnaire included twenty questions similar to those found in nuclear plant surveys.  Answers were structured on a six-point Likert scale, also similar to nuclear plant surveys.  The overall result of the survey was that employees had a highly positive view of safety culture on the rig.
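For readers unfamiliar with how such questionnaires are typically scored, here is a minimal sketch (the items, responses and cutoff are hypothetical, not Antonsen's actual instrument): answers on the six-point scale are averaged per item, and high means are read as a "positive" culture.

# Hypothetical illustration of six-point Likert scoring; not Antonsen's data.
# Responses: 1 = strongly disagree ... 6 = strongly agree.
from statistics import mean

responses = {
    "Safety has priority over production":  [5, 6, 5, 4, 6, 5],
    "I can stop work I consider unsafe":    [6, 5, 6, 6, 5, 6],
    "Management acts on reported concerns": [4, 5, 5, 4, 5, 4],
}

POSITIVE_CUTOFF = 4.5  # arbitrary threshold for calling an item "favorable"

for item, answers in responses.items():
    m = mean(answers)
    verdict = "positive" if m >= POSITIVE_CUTOFF else "mixed/negative"
    print(f"{m:.2f}  {verdict:14s}  {item}")

A uniformly high table of this kind is exactly the "highly positive view" the Snorre Alpha survey produced, and exactly what the post-incident analyses contradicted.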

The post-incident analysis was performed by the Norwegian Petroleum Safety Authority and a causal analysis was subsequently performed by Statoil (the rig owner) and a team of researchers.  The findings from the original survey and the later incident investigations were “dramatically different” as to the Snorre Alpha safety culture.  Perhaps the most telling difference was that the post hoc analyses identified meeting production targets as a dominant value in the rig’s culture.  The bottom-line finding was that the survey failed to identify significant organizational problems that later emerged in the incident investigations.

Antonsen evaluates possible reasons for the disconnect between surveys and performance outcomes.  He also comments on the useful roles surveys can play, for example, inter-organizational comparisons and inferring cultural traits.  In the end, the research sounds a cautionary note on the link between survey-based measures and the “real” conditions that determine safety outcomes.

Postscript: Antonsen’s “Mission Impossible” paper was published in December 2009.  We have now seen another oil rig accident with the explosion and oil spill from BP’s Deepwater Horizon rig.  As we noted in our July 22, 2010 post, a safety culture survey of that rig’s staff had been performed several weeks prior to the explosion, with overall positive results.  The investigations of this latest event could well provide additional empirical support for the “Mission Impossible” study results.

* The study is “Safety Culture Assessment: A Mission Impossible?”  The link connects to the abstract; the paper is available for purchase at the same site.

**  Robert L. Cudlin, "Practicing Nuclear Safety Management" (March 2008), p. 3.

Thursday, July 22, 2010

Transocean Safety Culture and Surveys

An article in today’s New York Times, “Workers on Doomed Rig Voiced Concern About Safety”  reports on the safety culture on the Deepwater Horizon drilling rig that exploded in the Gulf of Mexico.  The article reveals that a safety culture survey had been performed of the staff on the rig in the weeks prior to the explosion.  The survey was commissioned by Transocean and performed by Lloyd’s Register Group, a maritime and risk-management organization that conducted focus groups and one-on-one interviews with at least 40 Transocean workers.

There are two noteworthy findings from the safety culture survey.  While the headline is that workers voiced safety concerns, the survey results indicate:
“Almost everyone felt they could raise safety concerns and these issues would be acted upon if this was within the immediate control of the rig,” said the report, which also found that more than 97 percent of workers felt encouraged to raise ideas for safety improvements and more than 90 percent felt encouraged to participate in safety-improvement initiatives. . . . But investigators also said, ‘It must be stated at this point, however, that the workforce felt that this level of influence was restricted to issues that could be resolved directly on the rig, and that they had little influence at Divisional or Corporate levels.’”
This highlights several of the shortcomings of safety culture surveys.  One, the vast majority of respondents to the survey indicated they were comfortable raising safety concerns - yet subsequent events and decisions led to a major safety breakdown.  So, is there a response level that is indicative of how the organization is actually doing business, or do respondents tell the survey takers “what they want to hear”?  And is comfort in raising a safety concern the appropriate standard, when the larger corporate environment may not be responsive to such concerns or may bury them with resource and schedule mandates?  Second, this survey focused on the workers on the rig.  Apparently there was a reasonably good culture in that location, but it did not extend to the larger organization.  Consistent with that perception are preliminary reports that corporate was pushing production over safety, which may have influenced risk taking on the rig.  This is reminiscent of the space shuttle Challenger, where political pressure seeped down into the decision making process, subtly changing the perception of risk at the operational levels of NASA.  How useful are surveys if they do not capture the dynamics higher in the organization or the insidious ability of exogenous factors to change risk perceptions?
The other aspect of the Transocean surveys came not from the survey results but from Transocean’s rationalization of its safety performance.  They “noted that the Deepwater Horizon had seven consecutive years without a single lost-time incident or major environmental event.”  This highlights two fallacies.  One, that the absence of a major accident demonstrates that safety performance is meeting its goals.  Two, that industrial accident rates correlate to safety culture and prudent safety management.  They don’t.  Also, recall our recent posts regarding nuclear compensation, where we noted that the most common metric for determining safety performance incentives in the nuclear industry is industrial accident rate.

The NY Times article may be found at http://www.nytimes.com/2010/07/23/us/23hearing.html.

Wednesday, June 30, 2010

Can Safety Culture Be Regulated? (Part 2)

Part 1 of this topic covered the factors important to safety culture and amenable to measurement or assessment, the “known knowns.”   In this Part 2 we’ll review other factors we believe are important to safety culture but cannot be assessed very well, if at all, the “known unknowns” and the potential for factors or relationships important to safety culture that we don’t know about, the “unknown unknowns.”

Known Unknowns

These are factors that are probably important to regulating safety culture but cannot be assessed or cannot be assessed very well.  The hazard they pose is that deficient or declining performance may, over time, damage and degrade a previously adequate safety culture.

Measuring Safety Culture

This is the largest issue facing a regulator.  There is no meter or method that can be applied to an organization to obtain the value of some safety culture metric.  It’s challenging (impossible?) to robustly and validly assess, much less regulate, a variable that cannot be measured.  For a more complete discussion of this issue, please see our June 15, 2010 post.

Trust

If the plant staff does not trust management to do the right thing, even when it costs significant resources, then safety culture will be negatively affected.  How does one measure trust, with a survey?  I don’t think surveys offer more than an instantaneous estimate of any trust metric’s value.

Complacency

Organizations that accept things as they are, or always have been, and see no opportunity or need for improvement are guilty of complacency or, worse, hubris.  Lack of organizational reinforcement for a questioning attitude, especially when the questions may result in lost production or financial costs, is a de facto endorsement of complacency.  Complacency is often easy to see a posteriori but hard to detect as it occurs.

Management competence

Does management implement and maintain consistent and effective management policies and processes?  Is the potential for goal conflict recognized and dealt with (i.e., are priorities set) in a transparent and widely accepted manner?  Organizations may get opinions on their managers’ competence, but not from the regulator.

The NRC does not evaluate plant or owner management competence.  They used to, or at least appeared to be trying to.  Remember the NRC senior management meetings, trending letters, and the Watch List?  While all the “problem” plants had material or work process issues, I believe a contributing factor was that the regulator had lost confidence in the competence of plant management.  This system led to the epidemic of shutdown plants in the 1990s.*  In reaction, politicians became concerned over the financial losses to plant owners and employees, and the Commission became concerned that the staff’s explicit/implicit management evaluation process was neither robust nor valid.

So the NRC replaced a data-informed subjective process with the Reactor Oversight Program (ROP) which looks at a set of “objective” performance indicators and a more subjective inference of cross-cutting issues: human performance, finding and fixing problems (CAP, a known), and management attention to safety and workers' ability to raise safety issues (SCWE, part known and part unknown).  I don’t believe that anyone, especially an outsider like a regulator, can get a reasonable picture of a plant’s safety culture from the “Rope.”  There most certainly are no leading or predictive safety performance indicators in this system.

External influences

These factors include changes in plant ownership, financial health of the owner, environmental regulations, employee perceptions about management’s “real” priorities, third-party assessments, local socio-political pressures and the like.  Any change in these factors could have some effect on safety culture.

Unknown Unknowns

These are the factors that affect safety culture but we don’t know about.  While a lot of smart people have invested significant time and effort in identifying factors that influence safety culture, new possibilities can still emerge.

For example, a new factor has just appeared on our radar screen: executive compensation.  Bob Cudlin has been researching the compensation packages for senior nuclear executives and some of the numbers are eye-popping, especially in comparison to historical utility norms.  Bob will soon post on his findings, including where safety figures into the compensation schemes, an important consideration since much executive compensation is incentive-based.

In addition, it could well be that there are interactions (feedback loops and the like), perhaps varying in structure and intensity over time, between and among the known and unknown factors, that have varying impacts on the evolutionary arc of an organization’s safety culture.  Because of such factors, our hope that safety culture is essentially stable, with a relatively long decay time, may be false; safety culture may be susceptible to sudden drop-offs. 
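To make the worry about feedback loops concrete, here is a toy simulation (the equations and parameters are our own invention for illustration, not a validated model): when an apparently healthy culture breeds complacency, and complacency in turn erodes culture, the reinforcing loop can produce a sudden drop-off rather than a slow, linear decay.

# Toy model, invented for illustration: safety culture (0..1) erodes
# faster as complacency grows, and complacency grows while the culture
# looks healthy and nothing bad happens: a reinforcing feedback loop.

def simulate(steps=60, culture=0.9, complacency=0.05,
             reinforcement=0.02, erosion_gain=0.15, growth=0.03):
    history = []
    for _ in range(steps):
        # Management reinforcement props culture up a little each period.
        culture += reinforcement * (1.0 - culture)
        # Complacency erodes culture; the damage compounds as it grows.
        culture -= erosion_gain * complacency * culture
        # A healthy-looking, incident-free culture breeds complacency.
        complacency += growth * culture * (1.0 - complacency)
        culture = max(0.0, min(1.0, culture))
        history.append(culture)
    return history

trajectory = simulate()
for t in range(0, 60, 10):
    print(f"t={t:2d}  culture={trajectory[t]:.2f}")

In a typical run the culture index erodes slowly at first, then falls off steeply as complacency compounds, before settling at a much lower equilibrium; nothing in the early periods hints at the coming drop.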

The Bottom Line

Can safety culture be regulated?  At the current state of knowledge, with some “known knowns” but no standard approach to measuring safety culture and no leading safety performance indicators, we’d have to say “Yes, but only to some degree.”  The regulator may claim to have a handle on an organization’s safety culture through SCWE observations and indirect evidence, but we don’t think the regulator is in a good position to predict or even anticipate the next issue or incident related to safety culture in the nuclear industry. 

* In the U.S. in 1997, one couldn’t swing a dead cat without hitting a shutdown nuclear power plant.  17 units were shut down during all or part of that year, out of a total population of 108 units.

Monday, June 28, 2010

Can Safety Culture Be Regulated? (Part 1)

One of our recent posts questioned whether safety culture is measurable.  Now we will slide out a bit further on a limb and wonder aloud if safety culture can be effectively regulated.  We are not alone in thinking about this.  In fact, one expert has flatly stated “Since safety culture cannot be ‘regulated’, appraisal of the safety culture in operating organizations becomes a major challenge for regulatory authorities.”*

The recent incidents in the coal mining and oil drilling industries reinforce the idea that safety culture may not be amenable to regulation in the usual sense of the term, i.e., as compliance with rules and regulations based on behavior or artifacts that can be directly observed and judged.  The government can count regulatory infractions and casually observe employees, but can it look into an organization, assess what is there and then, if necessary, implement interventions that can be defended to the company, Congress and the public?

There are many variables, challenges and obstacles to consider in the effective regulation of safety culture.  To facilitate discussion of these factors, I have adapted the Rumsfeld (yes, that one) typology** and sorted some of them into “known knowns”, “known unknowns”, and “unknown unknowns.”  The set of factors listed is intended to be illustrative and is not claimed to be complete.

Known Knowns

These are factors that are widely believed to be important to safety culture and are amenable to assessment in some robust (repeatable) and valid (accurate) manner.  An adequate safety culture will not long tolerate sub-standard performance in these areas.  Conversely, deficient performance in any of these areas will, over time, damage and degrade a previously adequate safety culture.  We’re not claiming that these factors will always be accurately assessed but we’ll argue that it should be possible to do so.

Corrective action program (CAP)

This is the system for fixing problems.  Increasing corrective action backlogs, repeated occurrences of the same or similar problems, and failure to address the root causes of problems are signs that the organization can’t or won’t solve its problems.  In an adequate safety culture, the organization will fix the current instance of a problem and take steps to prevent the same or similar problems from recurring in the future.

Process reviews

The work of an organization gets done by implementing processes.  Procedural deficiencies, workarounds, and repeated human errors indicate an organization that can’t or won’t align its documented work processes with the way work is actually performed.  An important element of safety culture is that employees have confidence in procedures and processes. 

Self assessments

An adequate safety culture is characterized by few, if any, limits on the scope of assessments or the authority of assessors.  Assessments do not repeatedly identify the same or similar opportunities for improvement or promote trivial improvements (aka “rearranging the deck chairs”).  In addition, independent external evaluations are used to confirm the findings and recommendations of self-assessments.

Management commitment

In an adequate safety culture, top management exhibits a real and visible commitment to safety management and safety culture.  Note that this is more limited than the state of overall management competence, which we’ll cover in part 2.

Safety conscious work environment (SCWE)

Are employees willing to make complaints about safety-related issues?  Do they fear retribution if they do so?  Are they telling the truth to regulators or surveyors?  In an adequate safety culture, the answers are “yes,” “no” and “yes.”  We are not convinced that SCWE is a true "known known" given the potential issues with the methods used to assess it (click the Safety Culture Survey label to see our previous comments on surveys and interviews) but we'll give the regulator the benefit of the doubt on this one.

A lot of information can be reliably collected on the “known knowns.”  For our purpose, though, there is a single strategic question with respect to them, viz., do the known knowns provide a sufficient dataset for assessing and regulating an organization’s safety culture?  We’ll hold off answering that question until part 2 where we’ll review other factors we believe are important to safety culture but cannot be assessed very well, if at all, and the potential for factors or relationships that are important to safety culture but we don’t even know about.

* Annick Carnino, "Management of Safety, Safety Culture and Self Assessment," Top Safe, 15-17 April 1998, Valencia, Spain.  Ms. Carnino is the former Director, Division of Nuclear Installation Safety, International Atomic Energy Agency.  This is a great paper, covering every important aspect of safety management, and reads like it was recently written.  It’s hard to believe it is over ten years old.

** NATO HQ, Brussels, Press Conference by U.S. Secretary of Defense Donald Rumsfeld, June 6, 2002. The exact quote: “There are known unknowns. That is to say, there are things we now know we don’t know. But there are also unknown unknowns.  These are the things we do not know we don’t know.”  Referenced by Errol Morris in a New York Times Opinionator article, “The Anosognosic’s Dilemma: Something’s Wrong but You’ll Never Know What It Is (Part 1)”, June 20, 2010.

Tuesday, June 15, 2010

Can Measuring Safety Culture Harm It?

That’s a question raised in a paper by Björn Wahlström and Carl Rollenhagen.* Among other issues, the authors question the reliability and validity of safety culture measurement tools, especially the questionnaires and interviews often used to assess safety culture. One problem is that such measurement tools, when applied by outsiders such as regulators, can result in the interviewees trying to game the outcome. “. . . the more or less explicit threat to shut down a badly performing plant will most likely, at least in a hostile regulatory climate, bring deceit and delusion into a regulatory assessment of safety culture.” (§ 5.3)

Another potential problem is created by a string of good safety culture scores. We have often said success breeds complacency and an unjustified confidence that past results will lead to future success. The nuclear industry does not prepare for surprises; yet, as the authors note, the current state of safety thinking was inspired by two major accidents, not incremental progress. (§ 5.2) Where is the next Black Swan lurking?

Surprise after success can occur on a much smaller scale. After the recent flap at Vermont Yankee, evaluators spent considerable time poring over the plant’s most recent safety culture survey to see what insight it offered into the behavior of the staff involved with the misleading report on leaking pipes. I don’t think they found much. Entergy’s law firm conducted interviews at the plant and concluded the safety culture was and is strong. See the opening paragraph for a possible interpretation.

The authors also note that if safety culture is an emergent property of an organization, then it may not be measurable at all because emergent properties develop without conscious control actions. (§ 4.2) See our earlier post for a discussion of safety culture as emergent property.

While safety culture may not be measurable, it is possible to assess it. The authors’ thoughts on how to perform useful assessments will be reviewed in a future post.

* Björn Wahlström and Carl Rollenhagen, “Assessments of Safety Culture – To Measure or Not?” paper presented at the 14th European Congress of Work and Organizational Psychology, May 13-16, 2009, Santiago de Compostela, Spain. The authors are also connected with the LearnSafe project, which we have discussed in earlier posts (click the LearnSafe label to see them).

Monday, May 17, 2010

How Do We Know? Dave Collins, Millstone, Dominion and the NRC

This week the issues being raised by former Dominion engineer David Collins* regarding safety culture at Millstone are receiving increased attention. It appears David is raising two general issues: (1) Is Dominion's safety culture being eroded due to cost and competitive pressures and (2) is the NRC being effective in its regulatory oversight of safety culture?

So far the responses of the various players are along standard lines. Dominion contends it is simply harvesting cost efficiencies in its organization without compromising safety and that specific problems are isolated events. The NRC has referred the issues to its Office of Investigations, thus limiting transparency. INPO will not comment on its confidential assessments of nuclear owners.

What is one to make of this? First we have no special insights into the bases for the issues being raised by Collins. We have interacted with him in the past on safety culture issues and he is clearly a dedicated and knowledgeable individual with a strong commitment to nuclear safety culture. Thus we would be inclined to give his allegations serious consideration.

On the broader issues we see both opportunities and risks. We have emphasized cost and competitive pressures as a key to understanding how safety culture must balance multiple priorities to assure safety. What we do not see at the moment is how Dominion, the NRC or INPO would be able to determine whether, or to what extent, such pressures might be impacting decisions within Dominion. We doubt whether current approaches to assessing safety culture (e.g., safety culture surveys, NRC performance indicators, or INPO assessments) can be determinative. In addition, a plant-centric focus is unlikely to reveal systemic interactions or top-down signals that may result in decisional pressure at the plant level. Recall that a significant exogenous pressure cited in the space shuttle Challenger accident was Congressional political pressure.

So while we understand the nature of the processes underway to evaluate Collins’ issues, it would be helpful if any or all of the organizations could explain their methods for assessing these types of issues and the objective evidence to be used in making findings.  The risk at this point is that the industry and NRC appear more focused on negating the allegations than on taking a hard look at their possible merits, including how exactly to evaluate them.

* Follow the link below for an overview of Collins' issues and other parties' responses.

Wednesday, April 28, 2010

Safety Culture: Cause or Context (part 2)

In an earlier post, we discussed how “mental models” of safety culture affect perceptions about how safety culture interacts with other organizational factors and what interventions can be taken if safety culture issues arise. We also described two mental models, the Causal Attitude and Engineered Organization. This post describes a different mental model, one that puts greater emphasis on safety culture as a context for organizational action.

Safety Culture as Emergent and Indeterminate

If the High Reliability Organization model is basically optimistic, the Emergent and Indeterminate model is more skeptical, even pessimistic as some authors believe that accidents are unavoidable in complex, closely linked systems. In this view, “the consequences of safety culture cannot be engineered and only probabilistically predicted.” Further, “safety is understood as an elusive, inspirational asymptote, and more often only one of a number of competing organizational objectives.” (p. 356)* Safety culture is not a cause of action, but provides the context in which action occurs. Efforts to exhaustively model (and thus eventually manage) the organization are doomed to failure because the organization is constantly adapting and evolving.

This model sees that the same processes that produce the ordinary and routine stuff of everyday organizational life also produce the messages of impending problems. But the organization’s necessary cognitive processes tend to normalize and homogenize; the organization can’t very well be expected to treat every input as novel or not previously experienced. In addition, distributed work processes and official security policies can limit the information available to individuals. Troublesome information may be buried or discredited. And finally, “Dangers that are neither spectacular, sudden, nor disastrous, or that do not resonate with symbolic fears, can remain ignored and unattended . . . .” (p. 357)

We don’t believe safety significant events are inevitable in nuclear organizations but we do believe that the hubris of organizational designers can lead to specific problems, viz., the tendency to ignore data that does not comport with established categories. In our work, we promote a systems approach, based on system dynamics and probabilistic thinking, but we recognize that any mental or physical model of an actual, evolving organization is just that, a model. And the problem with models is that their representation of reality, their “fit,” can change with time. With ongoing attention and effort, the fit may become better but that is a goal, not a guaranteed outcome.

Lessons Learned

What are the takeaways from this review? First, mental models are important. They provide a framework for understanding the world and its information flows, a framework that the holder may believe to be objective but is actually quite subjective and creates biases that can cause the holder to ignore information that doesn’t fit into the model.

Second, the people who are involved in the safety culture discussion do not share a common mental model of safety culture. They form their models with different assumptions, e.g., some think safety culture is a force that can and does affect the vector of organizational behavior, while others believe it is a context that influences, but does not determine, organizational and individual decisions.

Third, safety culture cannot be extracted from its immediate circumstances and examined in isolation. Safety culture always exists in some larger situation, a world of competing goals and significant uncertainty with respect to key factors that determine the organization’s future.

Fourth, there is a risk of over-reliance on surveys to provide some kind of "truth" about an organization’s safety culture, especially if actual experience is judged or minimized to fit the survey results. Since there is already debate about what surveys measure (safety culture or safety climate?), we advise caution.

Finally, in addition to appropriate models and analyses, training, supervision and management, the individual who senses that something is just not right and is supported by an organization that allows, rather than vilifies, alternative interpretations of data is a vital component of the safety system.


* This post draws on Susan S. Silbey, "Taming Prometheus: Talk of Safety and Culture," Annual Review of Sociology, Volume 35, September 2009, pp. 341-369.

Monday, April 19, 2010

The View On The Ground

A brief follow-up to the prior post re situational factors that could be in play in reaching a decision about resuming airline flights in Europe.  Fox News has been polling to assess the views of the public and the results of the poll are provided in the adjacent box.  Note that the overwhelming sentiment is that safety should be the priority.  Also note the wording of the choices, where the “yes” option appears to imply that flights should be resumed based on the “other” priorities such as money and passenger pressure, while the “no” option is based on safety being the priority.  Obviously the wording makes the “yes” option appear to be one where safety may be sacrificed.

So the results are hardly surprising.  But what do the results really mean?  For one, they remind us of the importance of question wording in a survey.  They also illustrate how easy it is to get a large positive response that “safety should be the priority”.  Would the responses have been different if the “yes” option had made it clear that airlines believed it was safe to fly and had done test flights to verify?  Does the wording of the “no” option create a false impression that an “all clear” (presumably by regulators) would equate to absolute safety, or at least be arrived at without consideration of other factors such as the need to get air travel resumed?

Note:  Regulatory authorities in Europe agreed late today to allow limited resumption of air travel starting Tuesday.  Is this an “all clear” or a more nuanced determination that it is safe enough?

Sunday, April 18, 2010

Safety Culture: Cause or Context (part 1)

As we have mentioned before, we are perplexed that people are still spending time working on safety culture definitions. After all, it’s not because of some definitional issue that problems associated with safety culture arise at nuclear plants. Perhaps one contributing factor to the ongoing discussion is that people hold different views of what the essence of safety culture is, views that are influenced by individuals’ backgrounds, experiences and expectations. Consultants, lawyers, engineers, managers, workers and social scientists can and do have different perceptions of safety culture. Using a term from system dynamics, they have different “mental models.”

Examining these mental models is not an empty semantic exercise; one’s mental model of safety culture determines (a) the degree to which one believes it is measurable, manageable or independent, i.e. separate from other organizational features, (b) whether safety culture is causally related to actions or simply a context for actions, and (c) most importantly, what specific strategies for improving safety performance might work.

To help identify different mental models, we will refer to a 2009 academic article by Susan Silbey,* a sociology professor at MIT. Her article does a good job of reviewing the voluminous safety culture literature and assigning authors and concepts into three main categories: Culture as (a) Causal Attitude, (b) Engineered Organization, and (c) Emergent and Indeterminate. To fit into our blog format, we will greatly summarize her paper, focusing on points that illustrate our notion of different mental models, and publish this analysis in two parts.

Safety Culture as Causal Attitude

In this model, safety culture is a general concept that refers to an organization’s collective values, beliefs, assumptions, and norms, often assessed using survey instruments. Explanations of accidents and incidents that focus on or blame an organization’s safety culture are really saying that the then-existing safety culture somehow caused the negative events to occur or can be linked to the events by some causal chain. (For an example of this approach, refer to the Baker Report on the 2005 BP Texas City refinery accident.)

Adopting this mental model, it follows logically that the corrective action should be to fix the safety culture. We’ve all seen, or been a part of, this – a new management team, more training, different procedures, meetings, closer supervision – all intended to fix something that cannot be seen but is explicitly or implicitly believed to be changeable and to some extent measurable.

This approach can and does work in the short run. Problems can arise in the longer-term as non-safety performance goals demand attention; apparent success in the safety area breeds complacency; or repetitive, monotonous reinforcement becomes less effective, leading to safety culture decay. See our post of March 22, 2010 for a discussion of the decay phenomenon.

Perhaps because this model reinforces the notion that safety culture is an independent organizational characteristic, the model encourages involved parties (plant owners, regulators, the public) to view safety culture with a relatively narrow field of view. Periodic surveys and regulatory observations conclude a plant’s safety culture is satisfactory and everyone who counts accepts that conclusion. But then an event occurs like the recent situation at Vermont Yankee and suddenly people (or at least we) are asking: How can eleven employees at a plant with a good safety culture (as indicated by survey) produce or endorse a report that can mislead reviewers on a topic that can affect public health and safety?

Safety Culture as Engineered Organization

This model is evidenced in the work of the High Reliability Organization (HRO) writers. Their general concept of safety culture appears similar to the Causal Attitude camp, but HRO differs in “its explicit articulation of the organizational configuration and practices that should make organizations more reliably safe.” (Silbey, p. 353) It focuses on an organization’s learning culture where “organizational learning takes place through trial and error, supplemented by anticipatory simulations.” Believers are basically optimistic that effective organizational prescriptions for achieving safety goals can be identified, specified and implemented.

This model appears to work best in a command and control organization, i.e., the military. Why? Primarily because a specific military service is characterized by a homogeneous organizational culture, i.e., norms are shared both hierarchically (up and down) and across the service. Frequent personnel transfers at all organizational levels remove people from one situation and reinsert them into another, similar situation. Many of the physical settings are similar – one ship of a certain type and class looks pretty much like another; military bases have a common set of facilities.

In contrast, commercial nuclear plants represent a somewhat different population. Many staff members work more or less permanently at a specific plant and the industry could not have come up with more unique physical plant configurations if it had tried. Perhaps it is not surprising that HRO research, including reviews of nuclear plants, has shown strong cultural homogeneity within individual organizations but lack of a shared culture across organizations.

At its best, the model can instill “processes of collective mindfulness” or “interpretive work directed at weak signals.” At its worst, if everyone sees things alike, an organization can “[drift] toward[s] inertia without consideration that things could be different.” (Weick 1999, quoted in Silbey, p.354) Because HRO is highly dependent on cultural homogeneity, it may be less conscious of growing problems if the organization starts to slowly go off the rails, a la the space shuttle Challenger.

We have seen efforts to implement this model at individual nuclear plants, usually by trying to get everything done “the Navy way.” We have even promoted this view when we talked back in the late 1990s about the benefits of industry consolidation and the best practices that were being implemented by Advanced Nuclear Enterprises (a term Bob coined in 1996). Today, we can see that this model provides a temporary, partial answer but can face challenges in the longer run if it does not constantly adjust to the dynamic nature of safety culture.

Stay tuned for Safety Culture: Cause or Context (part 2).

* Susan S. Silbey, "Taming Prometheus: Talk of Safety and Culture," Annual Review of Sociology, Volume 35, September 2009, pp. 341-369.

Wednesday, August 26, 2009

Can Assessments Identify Complacency? Can Assessments Breed Complacency?

To delve a little deeper into this question, consider Slide 10 of the NEI presentation, a typical summary graphic of assessment results.  The chart catalogs the responses of members of the organization by the eight INPO principles of safety culture.  The summary indicates a variety of responses to the individual principles - for 3 or 4 of the principles there seems to be a fairly strong consensus that the right things are happening.  But 5 of the 8 principles show negative-response scores greater than 20, and 2 of the principles show negative scores greater than 40.
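As a hypothetical illustration of the kind of summary on that slide (the scores below are invented; the principle names follow INPO's published list, but none of this is the actual NEI data), one can tabulate negative-response scores and flag the thresholds just described:

# Invented scores for illustration; not the actual data from the NEI slide.
# Negative-response score recorded against each INPO safety culture principle.
negative_scores = {
    "Everyone is personally responsible for nuclear safety": 12,
    "Leaders demonstrate commitment to safety":              45,
    "Trust permeates the organization":                      23,
    "Decision-making reflects safety first":                 41,
    "Nuclear technology is recognized as special and unique": 8,
    "A questioning attitude is cultivated":                  27,
    "Organizational learning is embraced":                   22,
    "Nuclear safety undergoes constant examination":         15,
}

for principle, score in sorted(negative_scores.items(), key=lambda kv: -kv[1]):
    flag = "**" if score > 40 else "* " if score > 20 else "  "
    print(f"{flag} {score:3d}  {principle}")

# As on the slide: 5 of 8 principles exceed 20 and 2 exceed 40. The open
# question is what, if anything, an expert could conclude from this table.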

First, what can or should one conclude about the overall state of safety culture in this organization given these results?  One wonders if these results were shown to a number of experts, whether their interpretations would be consistent or whether they would even purport to associate the results with a finding.  As discussed in a prior post, this issue is fundamental to the nature of safety culture, whether it is amenable to direct measurement, and whether assessment results really say anything about the safety health of the organization.

But the more particular question for this post is whether an assessment can detect complacency in an organization and its potential for latent risk to the organization’s safety performance.  In a post dated July 30, 2009, I referred to the problems presented by complacency, particularly in organizations experiencing few operational challenges.  That environment can be ripe for a weak culture to develop or be sustained. Could that environment also bias the responses to assessment questions, reinforcing the incorrect perception that safety culture is healthy?  This type of situation may be of most relevance in today’s nuclear industry, where the vast majority of plants are operating at high capacity factors and experiencing few significant operational events.  It is not clear to this commentator that assessments can be designed to explicitly detect complacency, and even assessment results used in conjunction with other data (data likely to look normal when overall performance is good) may not credibly raise an alarm.

Link to NEI presentation.

Monday, August 24, 2009

Assessment Results – A Rose is a Rose

The famous words of Gertrude Stein are most often associated with the notion that when all is said and done, a thing is what it is.  We offer this idea as we continue to look at the meaning of safety culture assessment results – are the results just the results, or do they signify some meaning or interpretation beyond the results?

To illustrate some of the issues I will use an NEI presentation made to the NRC on February 3, 2009.  On Slide 2 there is a statement that the USA methodology (for safety culture surveys and assessments) has been used successfully for five years.   One question is what does it mean that an assessment was successful?  The intent is not to pick on this particular methodology but to open the question of exactly what is the expected result of performing an assessment.

It may be that “successful” means that the organizations being assessed have found the process and results to be useful or interesting, e.g., by stimulating discussion or furthering exploration of issues associated with the results.  There are many, myself included, who believe anything that stimulates an organization to discuss and contemplate safety management issues is beneficial.  On the other hand it may be that organizations (and regulators??) believe assessments are successful because they can use the results to make a determination that a safety culture is “acceptable” or “strong” or “needs improvement”.  Can assessments really carry the weight of this expectation?  Or is a rose just a rose?

Slide 11 highlights these questions by indicating a validation of the assessment methodology is to be carried out.  “Validation” seems to suggest that assessments mean something beyond their immediate results.  It may also suggest that assessment results can be compared to some “known” value to determine whether the assessment accurately measured or predicted that value.  We will have to wait and see what is intended and how the validation is performed.  At the same time we will be keeping in mind the observation of Professor Wilpert in my post of August 17, 2009 that “culture is not a quantifiable phenomenon”.

Link to presentation.

Monday, August 17, 2009

Safety Culture Assessment

A topic that we will visit regularly is the use of safety culture assessments to assign quantitative values to the condition of a specific organization and even the individual departments and working groups within the organization.  One reason for this focus is the emphasis on safety culture assessments as a response to situations where organizational performance does not meet expectations and “culture” is believed to be a factor.  Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues and even as an ongoing prophylactic tool.  But, are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?

One question that frequently comes to mind is, can safety culture be separated from the manifestation of culture in terms of the specific actions and decisions taken by an organization?  For example, if an organization makes some decisions that are clearly at odds with “safety being the overriding priority”, can the culture of the organization not be deficient?  But if an assessment of the culture is performed, and the espoused beliefs and priorities are generally supportive of safety, what is to be made of those responses? 

The reference material for this post comes from some work led by the late Bernhard Wilpert of the Berlin University of Technology.  (We will sample a variety of his work in the safety management area in future posts.)  It is a brief slide presentation titled “Challenges and Opportunities of Assessing Safety Culture”.  Slide 3, for example, revisits E. H. Schein’s multi-dimensional formulation of safety culture, which suggests that assessments must be able to expose all levels of culture and their integrated effect.

Two observations from these slides seem of particular note.  They are both under Item 4, Methodological Challenges.  The first observation is that culture is not a quantifiable phenomenon and does not lend itself easily to benchmarking.  This bears consideration, as most assessment methods in use today employ statistical comparisons to assessments at other plants, including percentile-type rankings.  The other observation in the slide is that culture results from the learning experience of its members.  This is of particular interest to us as it supports some of the thinking associated with a system dynamics approach.  A systems view involves the development of shared “mental models” of how safety management “works”; the goal being that individual actions and decisions can be understood within a commonly understood framework.  The systems process becomes, in essence, the mechanism for translating beliefs into actions.


Link to slide presentation

Monday, July 27, 2009

Organizational Learning (MIT #2)

The central question posed in the MIT paper is: What are the contributors to an organization’s ability to learn and sustain a robust safety culture?  According to the authors, “Here, the focus shifts from prescribing elements of an effective safety culture to managers to an examination of why it is that organizations so often fail to learn…. instead of focusing on enforcement, individuals might question why the rules were not originally followed” [p. 4]  In our paper “Practicing Nuclear Safety Management” we ask the same question.  I have reviewed the presentation materials from the NRC’s safety culture meetings earlier this year.  There is almost total emphasis on actions such as safety culture surveys to assess the state of the organization and various remedial measures to correct any deviations from prescribed rules, but no real questioning of what causes personnel to disregard established safety expectations.

If any readers can provide examples, e.g., presentation materials or assessments, where nuclear organizations have attempted to answer the question “Why?”, please provide a comment below along with appropriate links to the references.  It would greatly help the discussion.