Friday, June 29, 2012

Modeling Safety Culture (Part 2): Safety Culture as Pressure Boundary

No, this is not an attempt to incorporate safety culture into the ASME code.  As introduced in Part 1, we want to offer a relatively simple construct for safety culture, one we hope provides a useful starting point for a model of safety culture and a bridge between safety culture as amorphous values and beliefs and safety culture as something that helps achieve desired balances in outcomes.

We propose that safety culture be considered “the willingness and ability of an organization to resist undue pressure on safety from competing business priorities.”  Clearly this is a 30,000-foot view of safety culture and does not try to address the myriad ways in which it materializes within the organization.  This is intentional: there are so many possible moving parts at the individual level that it is too easy to lose sight of the macro forces.

The following diagram conceptualizes the boundary between safety priorities (i.e., safety culture) and other organizational priorities (business pressure).  The plotted line is essentially a threshold where the pressure for maintaining safety priorities (created by culture) may start to yield to increasing amounts of pressure to address other business priorities.  In the region to the left of the plot line, safety and business priorities exist in an equilibrium.  To the right of the line, business pressure exceeds that of the safety culture and can lead to compromises.  Note that this construct supports the view that strong safety performance is consistent with strong overall performance.  Strong overall performance, in areas such as production, cost and schedule, ensures that business pressures remain relatively low and in equilibrium with a reasonably strong safety culture.  (A larger figure with additional explanatory notes is available here.)



The arc of the plot line suggests that the safety/business threshold increases (requires greater business pressure) as safety culture becomes stronger.  It also illustrates that safety priorities may be maintained even at lower safety culture strengths when there is little competing business pressure.  This aspect seems particularly consistent with determinations at certain plants that safety culture is “adequate” but still requires strengthening.  It also provides an appealing explanation for how complacency can, over time, erode a relatively strong safety culture.  If overall performance is good, resulting in minimal business pressures, the culture might not be “challenged” or noticed even as it degrades.

Another perspective on safety culture as pressure boundary is what happens when business pressure rises to the point where the threshold is crossed.  One reason that organizations with strong culture may be able to resist more pressure is a greater ability to manage business challenges as they arise and/or a willingness to adjust business goals before they become overwhelming.  And even at the threshold, such organizations may be better able to identify compensatory actions that have only minimal and short-term safety impacts.  For organizations with weaker safety culture, crossing the threshold may lead to more immediate and direct tradeoffs of safety priorities.  In addition, the feedback effects of safety compromises (e.g., larger backlogs of unresolved problems) can compound business performance deficiencies and further increase business pressure.  One possible insight from the pressure model is that in some cases, perceived safety culture issues may be more a matter of a reasonably strong safety culture being overmatched by excessive business pressures.  The solution may be more about relieving business pressures than exclusively trying to reinforce culture.
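To anticipate Part 3 a little, here is a minimal sketch in Python of one way the boundary might be expressed.  The functional form of the threshold curve and all of the numbers are our own illustrative assumptions, not data from any plant; the point is simply that the pressure an organization can absorb grows with culture strength, and that even a weaker culture can hold when business pressure is low.

```python
# Illustrative sketch only: the shape of the threshold curve and all numbers
# are assumptions chosen to mimic the arc of the plot line described above.

def pressure_threshold(culture_strength):
    """Business pressure an organization can absorb before safety priorities
    start to yield, as a function of safety culture strength (both on
    arbitrary 0-10 scales).  The quadratic form is an assumption."""
    return 1.0 + 0.08 * culture_strength ** 2

def safety_priorities_hold(culture_strength, business_pressure):
    """True while the organization stays to the left of the boundary (equilibrium);
    False once business pressure exceeds the threshold and compromises become likely."""
    return business_pressure <= pressure_threshold(culture_strength)

if __name__ == "__main__":
    for culture in (2, 5, 8):              # weak, moderate, strong culture
        for pressure in (1.0, 3.0, 6.0):   # low, moderate, high business pressure
            status = "equilibrium" if safety_priorities_hold(culture, pressure) else "at risk"
            print(f"culture={culture}, business pressure={pressure}: {status}")
```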

In Part 3 we hope to further develop this approach through some simple simulations that illustrate the interaction of managing resources and balancing pressures.  In the meantime we would like to hear reactions from readers to this concept.

Tuesday, June 26, 2012

Modeling Safety Culture (Part 1)

Our June 12th post on the nature of decision making raised concerns about current perceptions of safety culture and the lack of a crisp mental model.  We contended that decisions were the critical manifestation of safety culture and should be understood as an ongoing process to achieve superior performance across all key organizational assets.  A recent post on LinkedIn by our friend Bill Mullins provided a real-world example of this process from his days as a Rad Protection Manager.

“As a former Plant Radiation Protection Manager with lots of outage experience, my risk-balancing challenge arose across an evolving portfolio of work…We had to make allocations of finite human capital - radiation protection technicians, supervisors, and radiological engineers - day in a day out, in a way that matched the tempo of the ‘work proceeding safely.’"*

What would a model of safety culture look like?  In terms of a model that describes how safety culture is operationalized, there is not much to cite.  NEI has weighed in with a “safety culture process” diagram, which may or may not be a model but does include elements, such as the corrective action program (CAP), that one might expect to see in a model.  A fundamental consideration for any model is how to represent safety culture: does it “determine” the actions taken by an organization (a causal relationship), merely provide a context within which actions are taken, or is it really a product, or integration, of the actions taken?

There is a very interesting overview of these issues in an article by M. D. Cooper titled, appropriately, “Toward a Model of Safety Culture.”**  One intriguing assertion by the author is that safety culture must be something that can be managed and manipulated, contrary to many, including Schein, who take a different view (that it is inherent in the social system). (p. 116)  In another departure from Schein, Cooper finds fault with a “linear” view of safety culture in which attitudes directly result in behaviors. (p. 122)  Ultimately Cooper suggests an approach where reciprocal relationships between personal and situational aspects yield what we view as culture.  (This article is also worth a read for its observations about the limits of safety culture surveys and whether the goal of initiatives taken in response to surveys is improving safety culture—or improving safety culture survey results.)

Our own view is more in the direction of Cooper.  We think safety culture can be thought of as a force or pressure within the organization to ensure that actions and decisions reflect safety priorities.  But that force competes with others arising from business goals, incentives and even personal interests.  The actual actions and decisions turn on the combined balance of these various pressures.***  Over time the integrated effect of those actions manifests the true priority of safety, and thus the safety culture.

Such a process is not linear; thus, to the question of whether safety culture determines outcomes or vice versa, the answer is “yes.”  The diagram below illustrates the basic relationships between safety culture, management actions, business performance and safety performance.  It is a cyclic and continuously looping process, driven by goals and modulated by results.  The basic idea is that safety culture exists in an equilibrium with safety and business performance much of the time.  However, when business performance cannot meet its goals, it creates pressure on management and its ability to continue to give safety the appropriate priority.  (A larger figure with additional explanatory notes is available here.)
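The loop can also be illustrated with a very simple discrete-time sketch in Python.  The variable names, update rules and coefficients below are our own assumptions, chosen only to show the qualitative behavior: business shortfalls create pressure, pressure competes with culture for management's priority, and the priority actually given feeds back into safety performance and, eventually, culture itself.

```python
# Toy sketch of the cyclic loop described above.  All update rules and
# coefficients are assumptions for illustration, not calibrated values.

def simulate(quarters=12, business_goal=1.0, shock_quarter=4, shock=0.3):
    culture = 0.8           # safety culture strength (0..1)
    business_perf = 1.0     # business performance relative to goal
    safety_perf = 0.9       # safety performance (0..1)

    for q in range(quarters):
        # An external setback (cost, schedule, production) hits business performance.
        if q == shock_quarter:
            business_perf -= shock

        # Shortfall against the business goal creates pressure on management.
        business_pressure = max(0.0, business_goal - business_perf)

        # Management gives safety its full priority only to the extent that
        # culture outweighs the business pressure it must resist.
        safety_priority = max(0.0, min(1.0, culture - business_pressure))

        # Results respond slowly to the priority actually given; culture
        # gradually follows sustained safety performance (the feedback loop).
        safety_perf += 0.1 * (safety_priority - safety_perf)
        business_perf += 0.1 * (1.0 - safety_priority) * business_pressure
        culture += 0.05 * (safety_perf - culture)

        print(f"Q{q + 1}: pressure={business_pressure:.2f}, "
              f"safety priority={safety_priority:.2f}, culture={culture:.2f}")

if __name__ == "__main__":
    simulate()
```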




*  The link to the thread (including Bill's comment) is here.  This may be difficult for readers who are not LinkedIn members to access.

**  M.D. Cooper, “Toward a Model of Safety Culture,” Safety Science 36 (2000): 111-136.

*** As summarized in an MIT Sloan Management Review article we blogged about on Sept. 1, 2010, “All decisions….are values-based.  That is, a decision necessarily involves an implicit or explicit trade-off of values.”  Safety culture is merely one of the values that is involved in this computation.

Saturday, June 23, 2012

More Markey Malarkey?

As you know, Rep. Edward Markey (D-MA) is no friend of the NRC and has a record of complaining about NRC management practices and errors, retaliation against NRC employees who disagree with their managers, the other Commissioners outvoting outgoing Chairman Jaczko on post-Fukushima proposals,* etc.

As a consequence, a new NRC-related emission from the Congressman’s office would ordinarily be of little interest to us.  However, his June 4, 2012 letter to Chairman Jaczko** got our attention.  While it recaps and supposedly updates prior complaints about the conduct of NRC managers and retaliation against employees, it also adds a couple of new items: (1) a claim that NRC employees don’t trust the NRC Inspector General (IG) to fairly investigate the issues previously raised and (2) a call for an independent investigation of the NRC’s safety culture (SC).

I have not yet seen any NRC response to the Markey letter but it’s interesting to speculate on how this might play out.

It would not surprise me if the NRC develops a two-pronged approach: (1) show support for their IG by assigning specific instances of alleged misconduct to the IG office for investigation and (2) create some sort of broader (agency-wide) initiative to reinforce SC policy and traits.  Expect a lot of parsing, posturing and pronouncements, some retraining, and perhaps a reprimanded manager.  It may also present an opportunity for incoming Chairman Macfarlane to articulate her understanding of and expectations for SC. 

Unfortunately, what you won’t see is an in-depth analysis of either the professional decision-making system that allows internal controversies to simmer until they boil over, or the real (as opposed to nominal) management reward system that encourages an agency middle manager to act in such an unprofessional manner (if indeed anyone did).  Who would risk his career by downgrading findings and/or retaliating against subordinates unless there was some considerable agency or personal pressure to do so?  But it’s not unthinkable.  An earlier Markey letter, citing information received from NRC staff, points to an item in the regional plan, “which apparently awards Senior Executive Service bonuses in a manner that scales inversely with the number of enforcement actions that are challenged and overturned by licensees.”***  Is this a smoking gun or just someone blowing smoke?  


*  Jaczko served as a Congressional Science Fellow in Rep. Markey’s office so the Congressman is likely complaining about the other Commissioners picking on his guy.

**  Letter E.J. Markey to G. Jaczko Re: Region IV follow-up (June 4, 2012).

***  Letter E.J. Markey to G. Jaczko Re: Texas Headquarters (May 9, 2012).

Tuesday, June 12, 2012

The Nature of Decision Making

This post may seem a bit on the abstract side of things but is intended to lay some foundation for future discussions on how to represent and model safety culture.  We have posted previously about the various definitions of nuclear safety culture that are in vogue.  Generally we find the definitions to be of limited value for at least two reasons: one, they focus on lists of desired traits and values but do not address the real conflicts and impediments to achieving those values; and two, they don’t illuminate how a strong safety culture comes about, or even whether it is something that can be actively managed.  Recent discussions on some of the LinkedIn forums include lots of references to good leadership practices and the like, essentially painting a picture that safety culture is a matter of having “the right stuff”.  But how much of safety culture is a product of leadership traits if those traits do not translate into hard day-to-day decisions that are consistent with safety priorities? 

This train of thought always leads us back to focusing on decision making as the backbone of safety culture.  In turn it makes us ask how we can look at decisions as a balancing function, one that accounts for a variety of inputs and yields appropriate actions on an ongoing basis.  We found the following formulation quite helpful:

“...decision making is conceived as a continuous process for converting varying information flows into signals that determine action….In system dynamics, a decision function does not portray a choice among alternatives….we are viewing decision processes from a distance where discrete choices disappear, leaving only broad organizational pressures that shape action.”*

We have taken Morecroft’s approach and adapted it to the nuclear safety culture context.  The diagram below shows the status of key organizational assets (we have used three - generation, budget and safety - as illustrations) being accessed (black arrows); the information being processed through various layers that interpret, limit and rationalize it as the basis for decisions; and the resulting decisions being fed back (orange arrows) to adjust the performance of each of the assets.  (A larger figure with additional explanatory notes is available here.)



In other words, decision making is viewed as a process and not as discrete events.  Decision making is constantly impacted by the status of all asset stocks in the business and produces a stream of decisions in response, resulting in adjustments to each of the stocks.  When we define safety culture in terms of assigning the highest priority to safety consistent with its significance, we are effectively indicating how the stream of decisions should allocate resources among the various organizational assets.

Part of the problem we see in various definitions or “explanations” of safety culture is the complexity and multiplicity of attributes, values, and traits that must be accommodated.  The bounded rationality aspect of a system dynamics approach stems from a belief that people can only process and utilize limited sets of inputs, generally far fewer than are available.  Thus in our formulation of a safety culture “model” you will see that the performance of each key business asset is represented by just a few key attributes that feed into decisions and trigger the prioritization process.
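As an illustration of what such a decision function might look like, here is a minimal Python sketch in the spirit of Morecroft’s formulation.  The asset names follow the diagram (generation, budget, safety), but the single gap-from-goal attribute, the weighting used to represent safety culture, and all numeric values are assumptions made for this sketch, not part of the model itself.

```python
# Minimal sketch of a decision function in the Morecroft sense: a continuous
# rule that converts the status of a few asset stocks into a stream of
# resource allocations.  The attribute (gap from goal), the safety weighting,
# and all numbers are assumptions for illustration.

def decision_function(stocks, goals, safety_weight=2.0):
    """Bounded rationality: only one attribute per asset, its gap from goal,
    is considered.  Safety culture appears as extra weight on the safety gap."""
    gaps = {name: max(0.0, goals[name] - level) for name, level in stocks.items()}
    gaps["safety"] *= safety_weight
    total = sum(gaps.values()) or 1.0
    return {name: gap / total for name, gap in gaps.items()}

def run(periods=8, resources_per_period=10.0):
    stocks = {"generation": 70.0, "budget": 80.0, "safety": 90.0}
    goals = {"generation": 100.0, "budget": 100.0, "safety": 100.0}
    for p in range(periods):
        shares = decision_function(stocks, goals)
        # The stream of decisions feeds back to adjust each asset stock.
        for name in stocks:
            stocks[name] += resources_per_period * shares[name]
        print(f"period {p + 1}: " +
              ", ".join(f"{n}={v:.1f}" for n, v in stocks.items()))

if __name__ == "__main__":
    run()
```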

We expect some people will have difficulty viewing safety culture in terms of information flows, decision streams, and allocations of resources.  However, a process-based model is a big step toward understanding how to manage, measure and achieve goals for safety culture performance.


*  John Morecroft, Strategic Modelling and Business Dynamics (John Wiley & Sons, 2007) p. 212.

Saturday, May 26, 2012

Most of Us Cheat—a Little

A recent Wall Street Journal essay* presented the author’s research into patterns of cheating by people.  He found that a few people are honest, a few people are total liars and most folks cheat a little.  Why?  “. . . the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society.”

This behavioral tendency can present a challenge to maintaining a strong safety culture.  Fortunately, the author found one type of intervention that decreased the incidence of lying: “. . . reminders of morality—right at the point where people are making a decision—appear to have an outsize effect on behavior.”  In other words, asking subjects to think about the 10 Commandments or the school honor code before starting the research task resulted in less cheating.  So did having people sign their insurance forms at the top, before reporting their annual mileage, rather than the bottom, after the fudging had already been done.  Preaching and teaching about safety culture has a role, but the focus should be on the point where safety-related decisions are made and actions occur.    

I don’t want to oversell these findings.  Most of the research involved individual college students, not professionals working in large organizations with defined processes and built-in checks and balances.  But the findings do suggest that zero tolerance for certain behaviors has its place.  As the author concludes: “. . . although it is obviously important to pay attention to flagrant misbehaviors, it is probably even more important to discourage the small and more ubiquitous forms of dishonesty . . . This is especially true given what we know about the contagious nature of cheating and the way that small transgressions can grease the psychological skids to larger ones.”


*  D. Ariely, “Why We Lie,” Wall Street Journal online (May 26, 2012). 

Tuesday, May 22, 2012

The NRC Chairman, Acta Est Fabula

With today’s announcement the drama surrounding the Chairman of the NRC has played out to its foreseeable conclusion.  The merits of the Chairman’s leadership of the agency are beyond the scope of this blog, but there are a few aspects of his tenure that may be relevant to nuclear safety culture in high performing organizations, not to mention in high places.

First we should note that we have previously blogged about speeches and papers (here, here and here) given by the Chairman wherein he emphasized the importance of safety culture to nuclear safety.  In general we applauded his emphasis on safety culture as necessary to raise the attention level of the industry.  Over time, as the NRC’s focus became absorbed with the Safety Culture Policy Statement, we became less enamored with the Chairman’s satisfaction with achieving consensus among stakeholders as almost an end in itself.  The resultant policy statement, with its heavy tilt toward attitudes and values, seemed to lack the kind of coherence that a regulatory agency needs to establish inspectable results.  As Commissioner Apostolakis so cogently observed, “...we really care about what people do and maybe not why they do it….”

Continuing with that thought, and if the assertions made by the four other Commissioners are accurate, what the Chairman did as agency head seems to have included intimidation, lack of transparency, manipulation of resources, and other behaviors not on the safety culture list of traits.  It illustrates, again, how easy it is for organizational leaders to mouth the correct words about safety culture yet behave in a contradictory manner.  We strongly suspect that this is another situation where the gravitational force of conflicting priorities - in this case a political agenda - was sufficient to bend the boundary line between strong leadership and self-interest.

Thursday, May 17, 2012

NEI Safety Culture Initiative: A Good Start but Incomplete

The March 2012 NRC Regulatory Information Conference included a session on the NRC’s Safety Culture Policy Statement.  NRC personnel made most of the session presentations but there was one industry report on the NEI’s safety culture initiative.  The NEI presentation* included the figure shown below which we’ll assume represents industry’s current schematic for how a site’s safety culture should be assessed and maintained. 



The good news here is the central role of the site’s corrective action program (CAP).  The CAP is where identified issues get evaluated, prioritized and assigned; it is a major source for changes to the physical plant and plant procedures.  A strong safety culture is reflected in an efficient, effective CAP and vice versa.

Another positive aspect is the highlighted role of site management in responding to safety culture issues by implementing appropriate changes in site policies, programs, training, etc.

We also approve of presentation text that outlined industry’s objective to have “A repeatable, holistic approach for assessing safety culture on a continuing basis” and to use “Frequent evaluations [to] promote sensitivity to faint signals.”  

Opportunities for Improvement

There are some other factors, not shown in the figure or the text, that are also essential for establishing and maintaining a strong safety culture.  One of these is the site’s decision making process, or processes.  Is decision making consistently conservative, transparent, robust and fair?  How is goal conflict handled?  How about differences of opinion?  Are sensors in place to detect risk perception creep or normalization of deviance? 

Management commitment to safety is another factor.  Does management exercise leadership to reinforce safety culture and is management trusted by the organization?

A third set of factors establishes the context for decision making and culture.  What are corporate’s priorities?  What resources are available to the site?  Absent sufficient resources, the CAP and other mechanisms will assign work that can’t be accomplished, backlogs will grow and the organization will begin to wonder just how important safety is.  Finally, what are management’s performance objectives and incentive plan?
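To make the backlog point concrete, consider a back-of-the-envelope sketch in Python.  The arrival and completion rates are assumptions made purely for illustration: if the CAP assigns even slightly more work than the site has resources to complete, the open-item count grows steadily, and the organization's sense of how important safety really is erodes along with it.

```python
# Back-of-the-envelope sketch of the backlog dynamic: arrival and completion
# rates are assumptions made purely for illustration.

def backlog_growth(weeks=52, items_assigned_per_week=25, items_closed_per_week=20):
    """If the CAP assigns more work than the site has resources to complete,
    the open-item backlog grows without bound."""
    backlog = 0
    for week in range(1, weeks + 1):
        backlog = max(0, backlog + items_assigned_per_week - items_closed_per_week)
        if week % 13 == 0:  # report roughly quarterly
            print(f"week {week}: open corrective action items = {backlog}")

if __name__ == "__main__":
    backlog_growth()
```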

One may argue that the above “opportunities” are beyond the scope of the industry safety culture objective.  Well, yes and no.  While they may be beyond the scope of the specific presentation, we believe that nuclear safety culture can only be understood and  possibly influenced by accepting a complete, dynamic model of ALL the factors that affect, and are affected by, safety culture.  Lack of a system view is like trying to drive a car with some of the controls missing—it will eventually run off the road. 


*  J.E. Slider, Nuclear Energy Institute, “Status of the Industry’s Nuclear Safety Culture Initiative,” presented at the NRC Regulatory Information Conference (March 15, 2012).

Monday, May 14, 2012

NEA 2008-2011 Construction Experience Report: Not Much There for Safety Culture Aficionados.

This month the Nuclear Energy Agency, a part of the Organisation for Economic Co-operation and Development, published a report on problems identified and lessons learned at nuclear plants during the construction phase.  The report focuses on three plants currently under construction and also includes incidents from a larger population of plants and brief reviews of other related studies.

The report identifies a litany of problems that have occurred during plant construction; it is of interest to us because it frequently mentions safety culture as something that needs to be emphasized to prevent such problems.  Unfortunately, there is not much usable guidance beyond platitudinous statements such as “Safety culture needs to be established prior to the start of authorized activities such as the construction phase, and it is applied to all participants (licensee, vendor, architect engineer, constructors, etc.)”, “Safety culture should be maintained at very high level from the beginning of the project” and, from a U.K. report, “. . . an understanding of nuclear safety culture during construction must be emphasized.”*

These should not be world-shaking insights for regulators (the intended audience for the report) or licensees.  On the other hand, the industry continues to have problems that should have been eliminated after the fiascos that occurred during the initial build-out of the nuclear fleet in the 1960s through 1980s; maybe it does need regular reminding of George Santayana’s aphorism: “Those who cannot remember the past are condemned to repeat it.” 


*  Committee on Nuclear Regulatory Activities, Nuclear Energy Agency, “First Construction Experience Synthesis Report 2008-2011,” NEA/CNRA/R(2012)2 (May 3, 2012), pp. 8, 16 and 41.