
Sunday, April 22, 2012

Science Culture: A Lesson for Nuclear Safety Culture?

An article in the New York Times* earlier this week caught our attention as part of our ongoing contemplation of what causes safety culture problems and what makes a safety culture effective.  The article concerns the increasing incidence of misconduct by scientists in their research and publications, particularly in scientific journals.  A variety of factors may be responsible, including the sheer accessibility of journal-published research and the increased likelihood that errors will be spotted.  But the main thrust of the article is that other, more insidious forces are at work:

“But other forces are more pernicious.  To survive professionally, scientists feel the need to publish as many papers as possible….And sometimes they cut corners or even commit misconduct to get there.”

The article goes on to describe how in the scientific community the ability to publish is key to professional recognition, advancement and award of grant money.  There is enormous pressure to publish first and publish often to overcome “cutthroat competition”.

So how do retractions of scientific papers relate to nuclear safety culture?  In the most general sense, the presence and impact of “pressure” on scientists reminds us of the situation in nuclear generation - now very much a high-stakes business - and the consequent pressure on nuclear managers to meet business goals and, in some cases, personal compensation goals.  Nuclear personnel (engineers, managers, operators, craftsmen, etc.), like the scientists in this article, are highly trained and expected to observe certain cultural norms; for nuclear personnel, that means a strong safety culture.  For scientists, it means adherence to the scientific method itself and the integrity standards of their peer community.  Yet both may be compromised when the desire for professional success becomes dominant.

The scientific environment is in most ways much simpler than a nuclear operating organization, and this simplicity may help shed light on the causes of normative failures.  Nuclear organizations are inherently large and complex.  The consideration of culture often becomes enmeshed in issues such as leadership, communications, expectations, pronouncements regarding safety priorities, perceptions, SCWE, etc.  In the simpler scientific world, scientists are essentially sole proprietors of their careers, even if they work for large entities.  They face challenges to their advancement and viability, they make choices, and sometimes they make compromises.  Can reality in the nuclear operating environment be similar, or is nuclear somehow unique?


*  C. Zimmer, “A Sharp Rise in Retractions Prompts Calls for Reform,” New York Times (Apr. 16, 2012).

Wednesday, December 21, 2011

From SCWE to Safety Culture—Time for the Soapbox

Is a satisfactory Safety Conscious Work Environment (SCWE) the same as an effective safety culture (SC)?  Absolutely not.  However, some of the reports and commentary we’ve seen on troubled facilities appear to mash the two terms together.  I can’t prove it, but I suspect facilities that rely heavily on lawyers to rationalize their operations are encouraged to try to pass off SCWE as SC.  In any case, what follows is a review of the basic components of SC:

Safety Conscious Work Environment

An acceptable SCWE* is one where employees are encouraged and feel free to raise safety-related issues without fear of retaliation by their employer.  Note that it does not necessarily address individual employees’ knowledge of or interest in such issues.

Problem Identification and Resolution (PI&R)

PI&R is usually manifested in a facility’s corrective action program (CAP).  An acceptable CAP has a robust, transparent process for evaluating, prioritizing and resolving specific issues.  The prioritization step includes an appropriate weight for an issue’s safety-related elements.  CAP backlogs are managed to levels that employees and regulators associate with timely resolution of issues.

However, the CAP often deals only with issues that have already been identified.  Effective organizations must also anticipate problems and develop plans for addressing them.  Again, safety must have an appropriate priority.
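To make the prioritization idea concrete, here is a minimal sketch in Python of how a CAP entry might be scored.  The field names, weights and example items are our own illustrative assumptions, not any plant’s actual scheme; the point is simply that safety significance carries the dominant weight while recurrence and age keep items from quietly stagnating in the backlog.

```python
from dataclasses import dataclass

@dataclass
class CorrectiveAction:
    description: str
    safety_significance: int  # hypothetical scale: 0 (none) to 3 (high)
    recurrence_count: int     # times the same/similar issue has appeared
    age_days: int             # how long the item has been open

def priority_score(item: CorrectiveAction) -> float:
    """Hypothetical scoring: safety significance dominates, while
    recurrence and age keep old items from quietly stagnating."""
    return (10.0 * item.safety_significance
            + 2.0 * item.recurrence_count
            + 0.05 * item.age_days)

backlog = [
    CorrectiveAction("Degraded seal on service water pump", 3, 1, 12),
    CorrectiveAction("Mislabeled breaker in turbine building", 1, 4, 90),
]
for item in sorted(backlog, key=priority_score, reverse=True):
    print(f"{priority_score(item):6.2f}  {item.description}")
```

Sorting the backlog by a score like this is one concrete reading of what “an appropriate weight for an issue’s safety-related elements” could look like in practice.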

Organizational Decision Making

The best way to evaluate an organization’s culture, including safety culture, is through an in-depth analysis of a representative sample of key decisions.  How did the decision-making process handle competing goals, set priorities, treat devil’s advocates who raised concerns about possible unfavorable outcomes, and assign resources?  Were the most qualified people involved in the decisions, regardless of their position or rank?  Note that this evaluation should not be limited to situations where the decisions led to unfavorable consequences; after all, most decisions lead to acceptable outcomes.  The question here is “How were safety concerns handled in the decision making process, independent of the outcome?”
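One way to picture such a review is sketched below in Python: treat each sampled decision as a record scored against the questions above.  The criteria names and the pass/fail scoring are our own illustrative assumptions, not an established instrument.

```python
# Illustrative sketch only: criteria paraphrase the questions above;
# the scoring scheme is an assumption, not an established instrument.
CRITERIA = [
    "competing_goals_surfaced",     # were goal conflicts made explicit?
    "safety_weighted_explicitly",   # did safety get a stated priority?
    "dissent_heard",                # were devil's advocates engaged, not punished?
    "qualified_people_involved",    # expertise over position or rank?
    "adequate_resources_assigned",  # were sufficient resources committed?
]

def process_score(decision: dict) -> float:
    """Fraction of process criteria satisfied, independent of the outcome."""
    return sum(bool(decision.get(c)) for c in CRITERIA) / len(CRITERIA)

sample_decision = {
    "name": "Defer valve overhaul to next outage",
    "competing_goals_surfaced": True,
    "safety_weighted_explicitly": True,
    "dissent_heard": False,
    "qualified_people_involved": True,
    "adequate_resources_assigned": True,
}
print(f"{sample_decision['name']}: "
      f"{process_score(sample_decision):.0%} of process criteria met")
```

Scoring a representative sample this way, rather than only the decisions that went wrong, matches the point above: the interest is in how safety was handled in the process, independent of the outcome.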

Management Behavior

What is management’s role in all this?  Facility and corporate managers must “walk the talk” as role models demonstrating the importance of safety in all aspects of organizational life.  They must provide personal leadership that reinforces safety.  They must establish a recognition and reward system that reinforces safety.  Most importantly, they must establish and maintain the explicit and implicit weighting factors that go into all decisions.  All of these actions reinforce the desired underlying assumptions with respect to safety throughout the organization. 

Conclusion

Establishing a sound safety culture is not rocket science but it does require focus and understanding (a “mental model”) of how things work.  SCWE, PI&R, Decision Making and Management Behavior are all necessary components of safety culture.  Not to put too fine a point on it, but safety culture is a lot more than quoting a survey result that says “workers feel free to ask safety-related questions.”


*  SCWE questions have also been raised on the LinkedIn Nuclear Safety and Nuclear Safety Culture discussion forums.  Some of the commentary is simple bloviating but there are enough nuggets of fact or insight to make these forums worth following.

Thursday, March 3, 2011

Safety Culture in the DOE Complex

This post reviews a Department of Energy (DOE) effort to provide safety culture assessment and improvement tools for its own operations and those of its contractors.

Introduction

The DOE is responsible for a vast array of organizations that work on its programs.  These organizations range from very small to huge and include private contractors, government facilities, specialty shops, niche manufacturers, labs and factories.  Many are engaged in high-hazard activities (including nuclear), so DOE is interested in promoting an effective safety culture across the complex.

To that end, a task team* was established in 2007 “to identify a consensus set of safety culture principles, along with implementation practices that could be used by DOE . . .  and their contractors. . . . The goal of this effort was to achieve an improved safety culture through ISMS [Integrated Safety Management System] continuous improvement, building on operating experience from similar industries, such as the domestic and international commercial nuclear and chemical industries.”  (Final Report**, p. 2)

It appears the team performed most of its research during 2008, conducted a pilot program in 2009 and published its final report in 2010.  Research included reviewing the space shuttle and Texas City disasters, the Davis-Besse incident, works by gurus such as James Reason, and guidance and practices published by NASA, NRC, IAEA, INPO and OSHA.

Major Results

The team developed a definition of safety culture and described a process whereby using organizations could assess their safety culture and, if necessary, take steps to improve it.

The team’s definition of safety culture:

“An organization’s values and behaviors modeled by its leaders and internalized by its members, which serve to make safe performance of work the overriding priority to protect the workers, public, and the environment.” (Final Report, p. 5)

After presenting this definition, the report goes on to say “The Team believes that voluntary, proactive pursuit of excellence is preferable to regulatory approaches to address safety culture because it is difficult to regulate values and behaviors. DOE is not currently considering regulation or requirements relative to safety culture.” (Final Report, pp. 5-6)

The team identified three focus areas that were judged to have the most impact on improving safety and production performance within the DOE complex: Leadership, Employee/Worker Engagement, and Organizational Learning. For each of these three focus areas, the team identified related attributes.

The overall process for a using organization is to review the focus areas and attributes, assess the current safety culture, select and use appropriate improvement tools, and reinforce results. 

The list of tools to assess safety culture includes direct observations, causal factors analysis (CFA), surveys, interviews, review of key processes, performance indicators, Voluntary Protection Program (VPP) assessments, stream analysis and Human Performance Improvement (HPI) assessments.***  The Final Report also mentioned performance metrics and workshops. (Final Report, p. 9)

Tools to improve safety culture include senior management commitment, clear expectations, ISMS training, managers spending time in the field, coaching and mentoring, Behavior Based Safety (BBS), VPP, Six Sigma, the problem identification process, and HPI.****  The Final Report also mentioned High Reliability Organization (HRO), Safety Conscious Work Environment (SCWE) and Differing Professional Opinion (DPO). (Final Report, p. 9)  Whew.

The results of a one-year pilot program at multiple contractors were evaluated and the lessons learned were incorporated in the final report.

Our Assessment

Given the diversity of the DOE complex, it’s obvious that no “one size fits all” approach is likely to be effective.  But it’s not clear that what the team has provided will be all that effective either.  The team’s product is really a collection of concepts and tools culled from the work of outsiders, combined with DOE’s existing management programs, and repackaged as a combination of overall process and laundry lists.  Users are left to determine for themselves exactly which sub-set of tools might be useful in their individual situations.

It’s not that the report is bad.  For example, the general discussion of safety culture improvement emphasizes the importance of creating a learning organization focused on continuous improvement.  In addition, a major point they got right was recognizing that safety can contribute to better mission performance.  “The strong correlation between good safety performance with good mission performance (or productivity or reliability) has been observed in many different contexts, including industrial, chemical, and nuclear operations.” (Final Report, p. 20)

On the other hand, the team has adopted the works of others but does not appear to recognize how, in a systems sense, safety culture is interwoven into the fabric of an organization.  For example, feedback loops from the multitude of possible interventions to overall safety culture are not even mentioned.  And this is not a trivial issue.  An intervention can provide an initial boost to safety culture but then safety culture may start to decay because of saturation effects, especially if the organization is hit with one intervention after another.
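To make the decay-and-saturation concern concrete, here is a toy simulation, entirely our own construction with invented parameters: a scalar “culture level” drifts back toward a baseline over time, and each successive intervention delivers a smaller boost.

```python
# Toy model with invented parameters: culture level decays toward a
# baseline; each intervention gives a boost that shrinks as the
# organization saturates on repeated interventions.
def simulate(months=60, baseline=0.5, decay=0.05,
             intervention_months=(6, 12, 18, 24), boost=0.3):
    level, saturation, history = 0.8, 0.0, []
    for t in range(months):
        if t in intervention_months:
            level = min(level + boost / (1.0 + saturation), 1.0)  # diminishing returns
            saturation += 1.0
        level += decay * (baseline - level)  # drift back toward baseline
        history.append(level)
    return history

trace = simulate()
for t in (0, 6, 12, 24, 36, 59):
    print(f"month {t:2d}: culture level = {trace[t]:.2f}")
```

The specific numbers mean nothing; the qualitative behavior - an initial boost, subsequent decay, and progressively weaker responses to repeated interventions - is the kind of feedback effect the report never addresses.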

In addition, some of the major, omnipresent threats to safety culture do not get the emphasis they deserve.  Goal conflict, normalization of deviance and institutional complacency are included in a list of issues from the Columbia, Davis-Besse and Texas City events (Final Report, pp. 13-15) but the authors do not give them the overarching importance they merit.  Goal conflict, often expressed as safety vs. mission, should obviously be avoided but its insidiousness is not adequately recognized; the other two factors are treated in a similar manner. 

Two final picky points:  First, the report says it’s difficult to regulate behavior.  That’s true, but companies and governments do it all the time.  DOE could definitely promulgate a behavior-based safety culture regulatory requirement if it chose to do so.  Second, the final report (p. 9) mentions leading (vs. lagging) indicators as part of assessment but the guidelines do not provide any examples.  If someone has some useful leading indicators, we’d definitely like to know about them. 

Bottom line, the DOE effort draws from many sources and probably represents consensus building among stakeholders on an epic scale.  However, the team provides no new insights into safety culture and, in fact, may not be taking advantage of the state of the art in our understanding of how safety culture interacts with other organizational attributes. 


*  Energy Facility Contractors Group (EFCOG)/DOE Integrated Safety Management System (ISMS) Safety Culture Task Team.

**  J. McDonald, P. Worthington, N. Barker, G. Podonsky, “EFCOG/DOE ISMS Safety Culture Task Team Final Report”  (Jun 4, 2010).

***  EFCOG/DOE ISMS Safety Culture Task Team, “Assessing Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).

****  EFCOG/DOE ISMS Safety Culture Task Team, “Activities to Improve Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).

Wednesday, June 30, 2010

Can Safety Culture Be Regulated? (Part 2)

Part 1 of this topic covered the factors important to safety culture and amenable to measurement or assessment, the “known knowns.”   In this Part 2 we’ll review other factors we believe are important to safety culture but cannot be assessed very well, if at all, the “known unknowns” and the potential for factors or relationships important to safety culture that we don’t know about, the “unknown unknowns.”

Known Unknowns

These are factors that are probably important to regulating safety culture but cannot be assessed, or cannot be assessed very well.  The hazard they pose is that, because they escape assessment, deficient or declining performance in these areas may, over time, damage and degrade a previously adequate safety culture without anyone noticing.

Measuring Safety Culture

This is the largest issue facing a regulator.  There is no meter or method that can be applied to an organization to obtain the value of some safety culture metric.  It’s challenging (impossible?) to robustly and validly assess, much less regulate, a variable that cannot be measured.  For a more complete discussion of this issue, please see our June 15, 2010 post.

Trust

If the plant staff does not trust management to do the right thing, even when it costs significant resources, then safety culture will be negatively affected.  How does one measure trust, with a survey?  I don’t think surveys offer more than an instantaneous estimate of any trust metric’s value.

Complacency

Organizations that accept things as they are, or always have been, and see no opportunity or need for improvement are guilty of complacency or worse, hubris.  Lack of organizational reinforcement for a questioning attitude, especially when the questions may result in lost production or financial costs, is a de facto endorsement of complacency.  Complacency is often easy to see a posteriori, hard to detect as it occurs.  

Management competence

Does management implement and maintain consistent and effective management policies and processes?  Is the potential for goal conflict recognized and dealt with (i.e., are priorities set) in a transparent and widely accepted manner?  Organizations may get opinions on their managers’ competence, but not from the regulator.

The NRC does not evaluate plant or owner management competence.  They used to, or at least appeared to be trying to.  Remember the NRC senior management meetings, trending letters, and the Watch List?  While all the “problem” plants had material or work process issues, I believe a contributing factor was that the regulator had lost confidence in the competence of plant management.  This system led to the epidemic of shutdown plants in the 1990s.*  In reaction, politicians became concerned over the financial losses to plant owners and employees, and the Commission became concerned that the staff’s explicit/implicit management evaluation process was neither robust nor valid.

So the NRC replaced a data-informed subjective process with the Reactor Oversight Process (ROP), which looks at a set of “objective” performance indicators plus a more subjective inference of cross-cutting issues: human performance, finding and fixing problems (CAP, a known), and management attention to safety and workers’ ability to raise safety issues (SCWE, part known and part unknown).  I don’t believe that anyone, especially an outsider like a regulator, can get a reasonable picture of a plant’s safety culture from the “Rope.”  There most certainly are no leading or predictive safety performance indicators in this system.

External influences

These factors include changes in plant ownership, financial health of the owner, environmental regulations, employee perceptions about management’s “real” priorities, third-party assessments, local socio-political pressures and the like.  Any change in these factors could have some effect on safety culture.

Unknown Unknowns

These are the factors that affect safety culture but that we don’t know about.  While a lot of smart people have invested significant time and effort in identifying factors that influence safety culture, new possibilities can still emerge.

For example, a new factor has just appeared on our radar screen: executive compensation.  Bob Cudlin has been researching the compensation packages for senior nuclear executives and some of the numbers are eye-popping, especially in comparison to historical utility norms.  Bob will soon post on his findings, including where safety figures into the compensation schemes, an important consideration since much executive compensation is incentive-based.

In addition, it could well be that there are interactions (feedback loops and the like), perhaps varying in structure and intensity over time, between and among the known and unknown factors, that have varying impacts on the evolutionary arc of an organization’s safety culture.  Because of such factors, our hope that safety culture is essentially stable, with a relatively long decay time, may be false; safety culture may be susceptible to sudden drop-offs. 
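To make the “sudden drop-off” concern concrete, here is another toy sketch, again entirely our own invention with made-up parameters: couple the culture level to a second variable, say accumulated latent problems, and slow erosion can turn into a cliff once a threshold is crossed.

```python
# Invented toy: culture erodes slowly while latent problems accumulate;
# past a threshold the two interact and culture drops off sharply.
def simulate_dropoff(months=60, threshold=2.0):
    culture, latent, history = 0.9, 0.0, []
    for _ in range(months):
        latent += 0.5 * (1.0 - culture)        # weaker culture breeds problems
        erosion = 0.005 + (0.08 if latent > threshold else 0.0)
        culture = max(culture - erosion, 0.0)  # slow erosion, then a cliff
        history.append(culture)
    return history

trace = simulate_dropoff()
print(" ".join(f"{x:.2f}" for x in trace[::12]))  # sample every 12 months
```

Nothing here is calibrated to real organizations; the point is only that simple interactions among factors are enough to produce the sudden drop-offs we worry about, rather than a gentle, long decay.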

The Bottom Line

Can safety culture be regulated?  At the current state of knowledge, with some “known knowns” but no standard approach to measuring safety culture and no leading safety performance indicators, we’d have to say “Yes, but only to some degree.”  The regulator may claim to have a handle on an organization’s safety culture through SCWE observations and indirect evidence, but we don’t think the regulator is in a good position to predict or even anticipate the next issue or incident related to safety culture in the nuclear industry. 

*  In the U.S. in 1997, one couldn’t swing a dead cat without hitting a shutdown nuclear power plant.  17 units were shut down during all or part of that year, out of a total population of 108 units. 

Monday, June 28, 2010

Can Safety Culture Be Regulated? (Part 1)

One of our recent posts questioned whether safety culture is measurable.  Now we will slide out a bit further on a limb and wonder aloud if safety culture can be effectively regulated.  We are not alone in thinking about this.  In fact, one expert has flatly stated “Since safety culture cannot be ‘regulated’, appraisal of the safety culture in operating organizations becomes a major challenge for regulatory authorities.”*

The recent incidents in the coal mining and oil drilling industries reinforce the idea that safety culture may not be amenable to regulation in the usual sense of the term, i.e., as compliance with rules and regulations based on behavior or artifacts that can be directly observed and judged.  The government can count regulatory infractions and casually observe employees, but can it look into an organization, assess what is there and then, if necessary, implement interventions that can be defended to the company, Congress and the public?

There are many variables, challenges and obstacles to consider in the effective regulation of safety culture.  To facilitate discussion of these factors, I have adapted the Rumsfeld (yes, that one) typology** and sorted some of them into “known knowns,” “known unknowns,” and “unknown unknowns.”  The set of factors listed is intended to be illustrative, not complete.

Known Knowns

These are factors that are widely believed to be important to safety culture and are amenable to assessment in some robust (repeatable) and valid (accurate) manner.  An adequate safety culture will not long tolerate sub-standard performance in these areas.  Conversely, deficient performance in any of these areas will, over time, damage and degrade a previously adequate safety culture.  We’re not claiming that these factors will always be accurately assessed but we’ll argue that it should be possible to do so.

Corrective action program (CAP)

This is the system for fixing problems.  Increasing corrective action backlogs, repeated occurrences of the same or similar problems, and failure to address the root causes of problems are signs that the organization can’t or won’t solve its problems.  In an adequate safety culture, the organization will fix the current instance of a problem and take steps to prevent the same or similar problems from recurring in the future.
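A hedged sketch of how such warning signs might be monitored, with assumed data, grouping categories and thresholds: flag a backlog that grows month after month, and flag issue categories that keep recurring.

```python
from collections import Counter

# Illustrative data and thresholds only; a real CAP would need far more care.
monthly_open_items = [40, 44, 47, 52, 58, 61]  # hypothetical backlog counts
issue_log = ["seal leak", "seal leak", "mispositioned valve",
             "seal leak", "procedure not followed",
             "procedure not followed", "seal leak"]

# Signal 1: a backlog that grows every month in the period.
if all(b > a for a, b in zip(monthly_open_items, monthly_open_items[1:])):
    print("WARNING: corrective action backlog grew every month")

# Signal 2: the same or similar problems keep recurring.
for category, count in Counter(issue_log).items():
    if count >= 3:
        print(f"WARNING: '{category}' recurred {count} times; "
              "root cause may not have been addressed")
```

Even crude checks like these make “increasing backlogs” and “repeated occurrences” operational rather than impressionistic.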

Process reviews

The work of an organization gets done by implementing processes.  Procedural deficiencies, workarounds, and repeated human errors indicate an organization that can’t or won’t align its documented work processes with the way work is actually performed.  An important element of safety culture is that employees have confidence in procedures and processes. 

Self assessments

An adequate safety culture is characterized by few, if any, limits on the scope of assessments or the authority of assessors.  Assessments do not repeatedly identify the same or similar opportunities for improvement or promote trivial improvements (aka “rearranging the deck chairs”).  In addition, independent external evaluations are used to confirm the findings and recommendations of self assessments.

Management commitment

In an adequate safety culture, top management exhibits a real and visible commitment to safety management and safety culture.  Note that this is more limited than the state of overall management competence, which we’ll cover in part 2.

Safety conscious work environment (SCWE)

Are employees willing to make complaints about safety-related issues?  Do they fear retribution if they do so?  Are they telling the truth to regulators or surveyors?  In an adequate safety culture, the answers are “yes,” “no” and “yes.”  We are not convinced that SCWE is a true "known known" given the potential issues with the methods used to assess it (click the Safety Culture Survey label to see our previous comments on surveys and interviews) but we'll give the regulator the benefit of the doubt on this one.

A lot of information can be reliably collected on the “known knowns.”  For our purpose, though, there is a single strategic question with respect to them, viz., do the known knowns provide a sufficient dataset for assessing and regulating an organization’s safety culture?  We’ll hold off answering that question until part 2 where we’ll review other factors we believe are important to safety culture but cannot be assessed very well, if at all, and the potential for factors or relationships that are important to safety culture but we don’t even know about.

* Annick Carnino, "Management of Safety, Safety Culture and Self Assessment," Top Safe, 15-17 April 1998, Valencia, Spain.  Ms. Carnino is the former Director, Division of Nuclear Installation Safety, International Atomic Energy Agency.  This is a great paper, covering every important aspect of safety management, and reads like it was recently written.  It’s hard to believe it is over ten years old.

** NATO HQ, Brussels, Press Conference by U.S. Secretary of Defense Donald Rumsfeld, June 6, 2002. The exact quote: “There are known unknowns. That is to say, there are things we now know we don’t know. But there are also unknown unknowns.  These are the things we do not know we don’t know.”  Referenced by Errol Morris in a New York Times Opinionator article, “The Anosognosic’s Dilemma: Something’s Wrong but You’ll Never Know What It Is (Part 1)”, June 20, 2010.