Showing posts with label Safety Management Decisions. Show all posts

Wednesday, May 8, 2013

Safety Management and Competitiveness

We recently came across a paper that should be of significant interest to nuclear safety decision makers.  “Safety Management in a Competitiveness Context” was presented in March 2008 by Jean-Marie Rousseau of the Institut de Radioprotection et de Surete Nucleaire (IRSN).  As the title suggests, the paper examines the effects of competitive pressures on a variety of nuclear safety management issues, including decision making and the priority accorded to safety.  Not surprisingly:

“The trend to ignore or to deny this phenomenon is frequently observed in modern companies.” (p. 7)

The results presented in the paper come from a safety assessment performed by IRSN to examine safety management of EDF [Electricite de France] reactors, including:

“How real is the ‘priority given to safety’ in the daily arbitrations made at all nuclear power plants, particularly with respect to the other operating requirements such as costs, production, and radiation protection or environmental constraints?” (p. 2)

The pertinence is clear: “priority given to safety” is the linchpin of safety culture policy and expected behaviors.  In addition, the assessment focused on decision-making processes at both the strategic and operational levels.  As we have argued, decisions can provide significant insights into how safety culture is operationalized by nuclear plant management.

Rousseau views nuclear operations as a “highly complex socio-technical system,” and his paper provides a brief review of historical accidents and near misses that displayed indications of competing priorities impinging on safety.  The author notes that competitiveness is necessary, just as safety is, and as such it represents another risk that must be managed at the organizational and managerial levels.  This characterization is intriguing and merits further reflection, particularly by regulators in their pursuit of “risk informed regulation.”  Nominally, regulators apply a conceptualization of risk that is centered on hardware and natural phenomena.  But safety culture and competitive pressures could also be justified as risks to assuring safety - in fact, much more dynamic risks - and thus be made part of the framework of risk informed regulation.*  Often, as in this paper, there is a tendency to assert that achieving safety coincides with overall performance excellence - which in a broad sense it does - but there are nonetheless many instances of considerable tension - and potential risk.

Perhaps most intriguing in the assessment is the evaluation of EDF’s a posteriori analyses of its decision making processes as another dimension of experience feedback.**   We quote the paper at length:

“The study has pointed out that the OSD***, as a feedback experience tool, provides a priori a strong pedagogic framework for the licensee. It offers a context to organize debates about safety and to share safety representations between actors, illustrated by a real problematic situation. It has to be noticed that it is the only tool dedicated to “monitor” the safety/competitiveness relationship.

"But the fundamental position of this tool (“not to make judgment about the decision-maker”) is too restrictive and often becomes “not to analyze the decision”, in terms of results and effects on the given situation.

"As the existence of such a tool is judged positively, it is necessary to improve it towards two main directions:
- To understand the factors favouring the quality of a decision-making process. To this end, it is necessary to take into account the decision context elements such as time pressure, fatigue of actors, availability of supports, difficulties in identifying safety requirements, etc.
- To understand why a “qualitative decision-making process” does not always produce a “right decision”. To this end, it is necessary to analyze the decision itself with the results it produces and the effects it has on the situation.” (p. 8)

We feel this is a very important aspect that currently receives insufficient attention.  Decisions can provide a laboratory of safety management performance and safety culture actualization.  But how often are decisions adequately documented, preserved, critiqued and shared within the organization?  Decisions that yield a bad (reportable) result may receive scrutiny internally and by regulators, but our studies indicate there is rarely sufficient forensic analysis - cause analyses are almost always one-dimensional and hardware- and process-oriented.  Decisions with benign outcomes - whether the result of “good” decision making or not - are rarely preserved or assessed.  The potential benefits of detailed consideration of decisions have been demonstrated in many of the independent assessments of accidents (Challenger, Columbia, the BP Texas City refinery, etc.) and in research by Perin and others.

We would go a step further than proposed enhancements to the OSD.  As Rousseau notes there are downsides to the routine post-hoc scrutiny of actual decisions - for one it will likely identify management errors even in the absence of a bad decision outcome.  This would be one more pressure on managers already challenged by a highly complex decision environment.  An alternative is to provide managers the opportunity to “practice” making decisions in an environment that supports learning and dialogue on achieving the proper balances in decisions - in other words in a safety management simulator.  The industry requires licensed operators to practice operations decisions on a simulator for similar reasons - why not nuclear managers charged with making safety decisions?



*  As the IAEA has noted, “A danger of concentrating too much on a quantitative risk value that has been generated by a PSA [probabilistic safety analysis] is that...a well-designed plant can be operated in a less safe manner due to poor safety management by the operator.”  IAEA-TECDOC-1436, Risk Informed Regulation of Nuclear Facilities: Overview of the Current Status, February 2005.

**  EDF implemented safety-availability-radiation protection-environment observatories (SAREOs) to increase awareness of the arbitration between safety and other performance factors.  SAREOs analyze, in each station, the quality of the decision-making process and propose actions to improve it and to guarantee compliance with rules in any circumstances. [“Nuclear Safety: our overriding priority,” EDF Group’s file responding to FTSE4Good nuclear criteria]


***  Per Rousseau, “The OSD (Observatory for Safety/Availability) is one of the “safety management levers” implemented by EDF in 1997. Its objective is to perform retrospective analyses of high-stake decisions, in order to improve decision-making processes.” (p. 7)

Tuesday, November 20, 2012

BP/Deepwater Horizon: Upping the Stakes

Anyone who thought safety culture and safety decision making were institutional artifacts, or mostly a matter of regulatory enforcement, might want to take a close look at what is happening on the BP/Deepwater Horizon front these days.  Three BP employees have been criminally indicted - and two of those indictments bear directly on safety in operational decisions.  The indictments of the well-site leaders, the most senior BP personnel on the platform, accuse them of causing the deaths of 11 crewmen aboard the Deepwater Horizon rig in April 2010 through gross negligence, primarily by misinterpreting a crucial pressure test that should have alerted them that the well was in trouble.*

The crux of the matter relates to the interpretation of a pressure test to determine whether the well had been properly sealed prior to being temporarily abandoned. Apparently BP’s own investigation found that the men had misinterpreted the test results.

The indictment states, “The Well Site Leaders were responsible for...ensuring that well drilling operations were performed safely in light of the intrinsic danger and complexity of deepwater drilling.” (Indictment p.3)

The following specific actions are cited as constituting gross negligence: “...failed to phone engineers onshore to advise them ...that the well was not secure; failed to adequately account for the abnormal readings during the testing; accepted a nonsensical explanation for the abnormal readings, again without calling engineers onshore to consult…” (Indictment p.7)

The willingness of federal prosecutors to advance these charges should (and perhaps is intended to) send a chill down every manager’s spine in high-risk industries.  While gross negligence is a relatively high standard, and may or may not be provable in the BP case, the actions cited in the indictment may not sound all that extraordinary - failure to consult with onshore engineers, failure to account for “abnormal” readings, accepting a “nonsensical” explanation.  Whether this amounts to “reckless” or willful disregard for a known risk is a matter for the legal system.  As an article in the Wall Street Journal notes, “There were no federal rules about how to conduct such a test at the time. That has since changed; federal regulators finalized new drilling rules last week that spell out test procedures.”**

The indictment asserts that the men violated the “standard of care” applicable to the deepwater oil exploration industry. One might ponder what federal prosecutors think the “standard of care” is for the nuclear power generation industry.
 

Clearly the well site leaders made a serious misjudgment - one that turned out to have catastrophic consequences. But then consider the statement by the Assistant Attorney General, that the accident was caused by “BP’s culture of privileging profit over prudence.” (WSJ article)   Are there really a few simple, direct causes of this accident or is this an example of a highly complex system failure? Where does culpability for culture lie?  Stay tuned.


* U.S. District Court Eastern District of Louisiana, “Superseding Indictment for Involuntary Manslaughter, Seaman's Manslaughter and Clean Water Act: United States of America v. Robert Kaluza and Donald Vidrine,” Criminal No. 12-265.


** T. Fowler and R. Gold, “Engineers Deny Charges in BP Spill,” Wall Street Journal online (Nov. 18, 2012).



Friday, June 29, 2012

Modeling Safety Culture (Part 2): Safety Culture as Pressure Boundary

No, this is not an attempt to incorporate safety culture into the ASME code.  As introduced in Part 1 we want to offer a relatively simple construct for safety culture - hoping to provide a useful starting point for a model of safety culture and a bridge between safety culture as amorphous values and beliefs, and safety culture that helps achieve desired balances in outcomes.

We propose that safety culture be considered “the willingness and ability of an organization to resist undue pressure on safety from competing business priorities”.  Clearly this is a 30,000-foot view of safety culture and does not try to address the myriad ways in which it materializes within the organization.  This is intentional: there are so many possible moving parts at the individual level that it is too easy to lose sight of the macro forces.

The following diagram conceptualizes the boundary between safety priorities (i.e., safety culture) and other organizational priorities (business pressure).  The plotted line is essentially a threshold where the pressure for maintaining safety priorities (created by culture) may start to yield to increasing amounts of pressure to address other business priorities.  In the region to the left of the plot line, safety and business priorities exist in equilibrium.  To the right of the line, business pressure exceeds that of the safety culture and can lead to compromises.  Note that this construct supports the view that strong safety performance is consistent with strong overall performance.  Strong overall performance, in areas such as production, cost and schedule, ensures that business pressures are relatively low and in equilibrium with a reasonably strong safety culture.  (A larger figure with additional explanatory notes is available here.)



The arc of the plot line suggests that the safety/business threshold increases (requires greater business pressure) as safety culture becomes stronger.  It also illustrates that safety priorities may be maintained even at lower safety culture strengths when there is little competing business pressure.  This aspect seems particularly consistent with determinations at certain plants that safety culture is “adequate” but still requires strengthening.  It also provides an appealing explanation for how complacency can erode a relatively strong safety culture over time.  If overall performance is good, resulting in minimal business pressures, the culture might not be “challenged” or noticed even as it degrades.

Another perspective on safety culture as pressure boundary is what happens when business pressure rises to the point where the threshold is crossed.  One reason that organizations with strong culture may be able to resist more pressure is a greater ability to manage business challenges as they arise and/or a willingness to adjust business goals before they become overwhelming.  And even at the threshold such organizations may be better able to identify compensatory actions that have only minimal and short-term safety impacts.  For organizations with weaker safety culture, crossing the threshold may lead to more immediate and direct tradeoffs of safety priorities.  In addition, the feedback effects of safety compromises (e.g., larger backlogs of unresolved problems) can compound business performance deficiencies and further increase business pressure.  One possible insight from the pressure model is that in some cases, perceived safety culture issues may actually be a situation of reasonably strong safety culture being overmatched by excessive business pressures.  The solution may be more about relieving business pressures than exclusively trying to reinforce culture.
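The threshold behavior described above can be sketched as a toy calculation.  Everything here - the function shape, the coefficients and the names - is an illustrative assumption of ours, not part of the model in the figure:

```python
def safety_threshold(culture_strength):
    """Business pressure the organization can absorb before safety
    priorities start to yield.  The quadratic shape is an illustrative
    assumption; culture_strength is normalized to [0, 1]."""
    return 0.2 + 0.8 * culture_strength ** 2

def in_equilibrium(culture_strength, business_pressure):
    """Left of the plotted line: safety and business priorities coexist."""
    return business_pressure <= safety_threshold(culture_strength)

# A weak culture holds as long as competing business pressure stays low...
print(in_equilibrium(0.3, 0.2))   # True
# ...but pressure that a strong culture absorbs pushes a weaker one
# across the threshold, where compromises become likely.
print(in_equilibrium(0.9, 0.7))   # True
print(in_equilibrium(0.4, 0.7))   # False
```

The quadratic shape simply encodes the arc of the plot line: each increment of culture strength buys a disproportionately larger margin against business pressure.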

In Part 3 we hope to further develop this approach through some simple simulations that illustrate the interaction of managing resources and balancing pressures.  In the meantime we would like to hear reactions from readers to this concept.

Tuesday, June 26, 2012

Modeling Safety Culture (Part 1)

Our June 12th post on the nature of decision making raised concerns about current perceptions of safety culture and the lack of a crisp mental model.  We contended that decisions were the critical manifestation of safety culture and should be understood as an ongoing process to achieve superior performance across all key organizational assets.  A recent post on LinkedIn by our friend Bill Mullins provided a real world example of this process from his days as a Rad Protection Manager.

“As a former Plant Radiation Protection Manager with lots of outage experience, my risk-balancing challenge arose across an evolving portfolio of work…We had to make allocations of finite human capital - radiation protection technicians, supervisors, and radiological engineers - day in a day out, in a way that matched the tempo of the ‘work proceeding safely.’"*

What would a model of safety culture look like?  In terms of a model that describes how safety culture is operationalized, there is not much to cite.  NEI has weighed in with a “safety culture process” diagram which may or may not be a model but includes elements such as CAP that one might expect to see in a model.  A fundamental consideration of any model is how to represent safety culture; does safety culture “determine” actions taken by an organization (a causal relationship), or just provide a context within which actions are taken, or is it really a product, or integration, of the actions taken?   

There is a very interesting overview of these issues in an article by M. D. Cooper titled, appropriately, “Toward a Model of Safety Culture.”**  One intriguing assertion by the author is that safety culture can be managed and manipulated, contrary to many, including Schein, who take a different view (that it is inherent in the social system). (p. 116)  In another departure from Schein, Cooper finds fault with a “linear” view of safety culture in which attitudes directly result in behaviors. (p. 122)  Ultimately Cooper suggests an approach where reciprocal relationships between personal and situational aspects yield what we view as culture.  (This article is also worth a read for the observations about the limits of safety culture surveys and whether the goal of initiatives taken in response to surveys is improving safety culture—or improving safety culture survey results.)

Our own view is more in the direction of Cooper.  We think safety culture can be thought of as a force or pressure within the organization to ensure that actions and decisions reflect safety.  But safety competes with other forces arising from competing business goals, incentives and even personal interests.  The actual actions and decisions turn on the combined balance of these various pressures.***  Over time the integrated effect of the actions manifests the true priority of safety, and thus the safety culture.

Such a process is not linear; thus, to the question of whether safety culture determines outcomes or vice versa, the answer is “yes.”  The diagram below illustrates the basic relationships between safety culture, management actions, business performance and safety performance.  It is a cyclic and continuously looping process, driven by goals and modulated by results.  The basic idea is that safety culture exists in an equilibrium with safety and business performance much of the time.  However, when business performance cannot meet its goals, it creates pressure on management and its ability to continue to give safety the appropriate priority.  (A larger figure with additional explanatory notes is available here.)
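The looping process described above can be approximated in a few lines of toy simulation.  All coefficients and variable names are illustrative assumptions chosen to show the feedback structure; nothing here is calibrated to real plant data:

```python
def simulate(steps=12, culture=0.8, business_goal=1.0):
    """Toy feedback loop: a business shortfall creates pressure on
    management; culture determines how much of that pressure is
    transmitted to safety.  All coefficients are illustrative."""
    business, safety = 0.9, 1.0
    history = []
    for _ in range(steps):
        pressure = max(0.0, business_goal - business)        # goal shortfall
        # Stronger culture (closer to 1.0) shields safety priority from pressure.
        safety_priority = max(0.0, 1.0 - pressure * (1.0 - culture))
        safety += 0.1 * (safety_priority - safety)           # gradual adjustment
        business += 0.1 * pressure - 0.05 * (1.0 - safety)   # recovery minus rework drag
        history.append((round(business, 3), round(safety, 3)))
    return history

print(simulate(culture=0.4)[-1])  # end state with a weaker culture
```

Running this with a lower culture value shows safety priority dipping further when a business shortfall appears - the equilibrium-and-pressure behavior the diagram depicts.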




*  The link to the thread (including Bill's comment) is here.  This may be difficult for readers who are not LinkedIn members to access.

**  M.D. Cooper, “Toward a Model of Safety Culture,” Safety Science 36 (2000): 111-136.

*** As summarized in an MIT Sloan Management Review article we blogged about on Sept. 1, 2010, “All decisions….are values-based.  That is, a decision necessarily involves an implicit or explicit trade-off of values.”  Safety culture is merely one of the values that is involved in this computation.

Tuesday, June 12, 2012

The Nature of Decision Making

This post may seem a bit on the abstract side of things but is intended to lay some foundation for future discussions on how to represent and model safety culture.  We have posted previously about the various definitions of nuclear safety culture that are in vogue.  Generally we find the definitions to be of limited value for at least two reasons: one, they focus on lists of desired traits and values but do not address the real conflicts and impediments to achieving those values; and two, they don’t illuminate how a strong safety culture comes about, or even whether it is something that can be actively managed.  Recent discussions on some of the LinkedIn forums include lots of references to good leadership practices and the like, essentially painting a picture that safety culture is a matter of having “the right stuff”.  But how much of safety culture is a product of leadership traits if those traits do not translate into hard day-to-day decisions that are consistent with safety priorities? 

This train of thought always leads us back to focusing on decision making as the backbone of safety culture.  In turn it makes us ask how can we look at decisions as a balancing function that accounts for a variety of inputs and yields appropriate actions on an ongoing basis.  We found the following formulation quite helpful:

“...decision making is conceived as a continuous process for converting varying information flows into signals that determine action….In system dynamics, a decision function does not portray a choice among alternatives….we are viewing decision processes from a distance where discrete choices disappear, leaving only broad organizational pressures that shape action.”*

We have taken Morecroft’s approach and adapted it to the nuclear safety culture context.  The diagram below shows the status of key organizational assets (we have used three - generation, budget and safety - as illustrations) being accessed (black arrows); the processing of that information through various layers that interpret, limit and rationalize it as the basis for decisions; and the resulting decisions being fed back (orange arrows) to adjust the performance of each asset.  (A larger figure with additional explanatory notes is available here.)



In other words, decision making is viewed as a process and not as discrete events.  Decision making is constantly impacted by the status of all asset stocks in the business and produces a stream of decisions in response, resulting in adjustments to each of the stocks.  When we define safety culture in terms of assigning the highest priority to safety consistent with its significance, we are effectively indicating how the stream of decisions should allocate resources among the various organizational assets.
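As a hypothetical sketch of such a decision function - in Morecroft's sense of a continuous converter of information flows into signals, not a chooser among discrete alternatives - consider the following.  The asset names, goals and weights are our illustrative assumptions:

```python
def decision_function(stocks, weights):
    """Allocate a unit stream of management attention across assets in
    proportion to each stock's weighted shortfall from its goal."""
    shortfalls = {name: weights[name] * max(0.0, goal - level)
                  for name, (level, goal) in stocks.items()}
    total = sum(shortfalls.values()) or 1.0   # avoid divide-by-zero when no shortfalls
    return {name: s / total for name, s in shortfalls.items()}

# Hypothetical asset states: (current level, goal), all normalized to 1.0.
stocks = {"generation": (0.92, 1.0), "budget": (0.85, 1.0), "safety": (0.95, 1.0)}
weights = {"generation": 1.0, "budget": 1.0, "safety": 2.0}  # safety weighted highest
print(decision_function(stocks, weights))
```

Weighting safety most heavily is how "priority given to safety" shows up in this formulation: the same shortfall in the safety stock draws a disproportionately larger share of the allocation stream.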

Part of the problem we see in various definitions or “explanations” of safety culture is their complexity and the multiplicity of attributes, values, and traits that must be accommodated.  The bounded rationality aspect of a system dynamics approach stems from a belief that people can only process and utilize limited sets of inputs, generally far fewer than are available.  Thus in our formulation of a safety culture “model” you will see that the performance of each key business asset is based on just a few key attributes that feed into decisions and trigger the prioritization process.

We expect some people will have difficulty viewing safety culture in terms of information flows, decision streams, and allocations of resources.  However a process based model is a big step toward consideration of how to manage, measure and achieve goals for safety culture performance.


*  John Morecroft, Strategic Modelling and Business Dynamics (John Wiley & Sons, 2007) p. 212.

Friday, November 4, 2011

A Factory for Producing Decisions

The subject of this post is the compelling insights of Daniel Kahneman into issues of behavioral economics and how we think and make decisions.  Kahneman is one of the most influential thinkers of our time and a Nobel laureate.  Two links are provided for our readers who would like additional information.  One is via the McKinsey Quarterly, a video interview* done several years ago.  It runs about 17 minutes.  The second is a current review in The Atlantic** of Kahneman’s just-released book, Thinking, Fast and Slow.

Kahneman begins the McKinsey interview by suggesting that we think of organizations as “factories for producing decisions” and therefore, think of decisions as a product.  This seems to make a lot of sense when applied to nuclear operating organizations - they are the veritable “River Rouge” of decision factories.  What may be unusual for nuclear organizations is the large percentage of decisions that directly or indirectly include safety dimensions, dimensions that can be uncertain and/or significantly judgmental, and which often conflict with other business goals.  So nuclear organizations have to deliver two products: competitively priced megawatts and decisions that preserve adequate safety.

To Kahneman, viewing decisions as a product logically raises the issue of quality control as a means to ensure decision quality.  At one level quality control might focus on mistakes and ensuring that decisions avoid the recurrence of mistakes.  But Kahneman sees the quality function going further, into the psychology of the decision process, to ensure, e.g., that the best information is available to decision makers, that the talents of the group surrounding the ultimate decision maker are being used effectively, and that the decision-making environment is unbiased.

He notes that there is an enormous amount of resistance within organizations to improving decision processes. People naturally feel threatened if their decisions are questioned or second guessed.  So it may be very difficult or even impossible to improve the quality of decisions if the leadership is threatened too much.  But, are there ways to avoid this?  Kahneman suggests the “premortem” (think of it as the analog to a post mortem).  When a decision is being formulated (not yet made), convene a group meeting with the following premise: It is a year from now, we have implemented the decision under consideration, it has been a complete disaster.  Have each individual write down “what happened?”

The objective of the premortem is to legitimize dissent and minimize the innate “bias toward optimism” in decision analysis.  It is based on the observation that as organizations converge toward a decision, dissent becomes progressively more difficult and costly and people who warn or dissent can be viewed as disloyal.  The premortem essentially sets up a competitive situation to see who can come up with the flaw in the plan.  In essence everyone takes on the role of dissenter.  Kahneman’s belief is that the process will yield some new insights - that may not change the decision but will lead to adjustments to make the decision more robust. 

Kahneman’s ideas about decisions resonate with our thinking that the most useful focus for nuclear safety culture is the quality of organizational decisions.  That focus also contrasts with a recent instance of a nuclear plant that has run afoul of the NRC (Browns Ferry) and is now tagged with a degraded cornerstone and increased inspections.  As usual in the nuclear industry, TVA has called on an outside contractor to come in and perform a safety culture survey, to “... find out if people feel empowered to raise safety concerns….”***  It may be interesting to see how people feel, but we believe it would be far more powerful and useful to analyze a significant sample of recent organizational decisions to determine whether they reflect an appropriate level of concern for safety.  Feelings (perceptions) are not a substitute for what is actually occurring in the decision process.

We have been working to develop ways to grade whether decisions support strong safety culture, including offering opportunities on this blog for readers to “score” actual plant decisions.  In addition we have highlighted the work of Constance Perin including her book, Shouldering Risks, which reveals the value of dissecting decision mechanics.  Perin’s observations about group and individual status and credibility and their implications for dissent and information sharing directly parallel Kahneman’s focus on the need to legitimize dissent.  We hope some of this thinking ultimately overcomes the current bias in nuclear organizations to reflexively turn to surveys and the inevitable retraining in safety culture principles.


*  "Daniel Kahneman on behavioral economics," McKinsey Quarterly video interview (May 2008).

** M. Popova, "The Anti-Gladwell: Kahneman's New Way to Think About Thinking," The Atlantic website (Nov. 1, 2011).

*** A. Smith, "Nuke plant inspections proceeding as planned," Athens [Ala.] News Courier website (Nov. 2, 2011).

Friday, October 14, 2011

Decision No. 2 Scoring Results

In July we initiated a process for readers to participate in evaluating the extent to which actual decisions made at nuclear plants were consistent with a strong safety culture.  (The decision scoring framework is discussed here and the results for the first decision are discussed here.)  Example decision 2 involved a temporary repair to a Service Water System piping elbow.  Performance of a permanent code repair was postponed until the next cold shutdown or refuel outage.

We asked readers to assess the decision in two dimensions: potential safety impact and the strength of the decision, using anchored scales to quantify the scores.  The chart shows the scoring results.  Our interpretation of the results is as follows:

As with the first decision, most of the scores coalesced in a limited range for each scoring dimension.  Based on the anchored scales, this meant most people thought the safety impact was fairly significant, likely due to the extended duration of the temporary repair, which could last until the next refuel outage.  The people who scored safety significance in this range also scored the decision strength as one that reasonably balanced safety and other operational priorities.  Our interpretation here is that people viewed the temporary repair as a reasonable interim measure, sufficient to maintain an adequate safety margin.  Notwithstanding that most scores were in the mid range, there were also decision strength scores as low as 3 (safety had lower priority than desired) and as high as 9 (safety had high priority where competing priorities were significant).  Across this range of decision strength scores, the scores for safety impact were consistent at 8.  This clearly illustrates the potential for varying perceptions of whether a decision is consistent with a strong safety culture.  The reasons for the variation could be how people felt about the efficacy of the temporary repair, or simply different standards or expectations for how aggressively one should address the leakage problem.

It is not very difficult to see how this scoring variability could translate into similarly mixed safety culture survey results.  But unlike survey questions which tend to be fairly general and abstract, the decision scoring results provide a definitive focus for assessing the “why” of safety culture perceptions.  Training and self assessment activities could benefit from these data as well.  Perhaps most intriguing is the question of what level of decision strength is expected in an organization with a “strong” safety culture.  Is it 5 (reasonably balances…) or is something higher, in the 6 to 7 range, expected?  We note that the average decision strength for example 2 was about 5.2.
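For readers interested in the mechanics, the aggregation behind these interpretations is straightforward.  A sketch follows; the score tuples are illustrative, patterned on the results described above, not the actual reader data:

```python
from statistics import mean, pstdev

def summarize(scores):
    """scores: list of (safety_impact, decision_strength) pairs on the
    1-10 anchored scales.  Returns means and spreads; a widening spread
    in decision strength may flag disagreement worth investigating."""
    impact, strength = zip(*scores)
    return {"impact_mean": round(mean(impact), 2),
            "impact_spread": round(pstdev(impact), 2),
            "strength_mean": round(mean(strength), 2),
            "strength_spread": round(pstdev(strength), 2)}

# Illustrative scores echoing the pattern described: impact clustered
# at 8, strength mostly mid-range with outliers at 3 and 9.
print(summarize([(8, 5), (8, 5), (8, 6), (8, 3), (8, 9), (8, 5), (7, 5)]))
```

Tracking the spread statistics over successive decisions is one way to notice when an organization's perceptions of its own decisions begin to diverge.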

Stay tuned for more on decision scoring.

Friday, July 15, 2011

Decision Scoring No. 2

This post introduces the second decision scoring example.  Click here, or the box above this post, to access the detailed decision summary and scoring feature.  

This example involves a proposed non-code repair to a leak in the elbow of service water system piping.  Opting for a non-code, temporary repair avoids a near-term plant shutdown but defers the permanent repair for as long as 20 months.  In grading this decision for safety impact and decision strength, it may be helpful to think about what alternatives were available to this licensee.  We could think of several:

-    Not perform a temporary repair, as current leakage was within tech spec limits, but implement an augmented inspection and monitoring program to identify any further degradation in a timely manner.

-    Perform the temporary repair as described but commit to perform the permanent repair within a shorter time period, say 6 months.

-    Immediately shut down and perform the code repair.

Each of these alternatives would likely affect the potential safety impact of this leak condition and influence the perception of the decision strength.  For example a decision to shut down immediately and perform the code repair would likely be viewed as quite conservative, certainly more conservative than the other options.  Such a decision might provide the strongest reinforcement of safety culture.  The point is that none of these decisions is necessarily right or wrong, or good or bad.  They do however reflect more or less conservatism, and ultimately say something about safety culture.

Wednesday, July 13, 2011

Decision No. 1 Scoring Results


We wanted to present the results to date for the first of the decision scoring examples.  (The decision scoring framework is discussed here.)  This decision involved the replacement of a bearing in the air handling unit for a safety related pump room.  After declaring the air unit inoperable, the bearing was replaced within the LCO time window.

We asked readers to assess the decision in two dimensions: potential safety impact and the strength of the decision, using anchored scales to quantify the scores.  The chart to the left shows the scoring results with the size of the data symbols related to the number of responses.  Our interpretation of the results is as follows:

First, most of the scores did coalesce in the mid ranges of each scoring dimension.  Based on the anchored scales, this meant most people thought the safety impact associated with the air handling unit problem was fairly minimal and did not extend out in time.  This is consistent with the fact that the air handler bearing was replaced within the LCO time window.  The people who scored safety significance in this mid range also scored the decision strength as one that reasonably balanced safety and other operational priorities.  This seems consistent to us with the fact that the licensee had also ordered a new shaft for the air handler and would install it at the next outage - the new shaft being necessary for addressing the cause of the bearing problem.  Notwithstanding that most scores were in the mid range, we find it interesting that there is still a spread from 4-7 in the scoring of decision strength, and a somewhat smaller spread of 4-6 in safety impact.  This is an attribute of decision scores that might be tracked closely to identify situations where the spreads change over time - perhaps signaling either that there is disagreement regarding the merits of the decisions or that there is a need for better communication of the bases for decisions.
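The kind of tracking described above is straightforward to sketch.  The snippet below is a minimal illustration, not the actual scoring tool; the score values are hypothetical stand-ins for reader responses on the two 1-10 anchored scales.

```python
from statistics import mean

# Hypothetical reader scores for one decision (1-10 anchored scales).
# These values are illustrative only, not the actual survey data.
scores = [
    {"safety_impact": 4, "decision_strength": 5},
    {"safety_impact": 5, "decision_strength": 5},
    {"safety_impact": 5, "decision_strength": 6},
    {"safety_impact": 6, "decision_strength": 4},
    {"safety_impact": 4, "decision_strength": 7},
]

def summarize(scores, key):
    """Return the mean and spread (max - min) for one scoring dimension."""
    values = [s[key] for s in scores]
    return mean(values), max(values) - min(values)

for key in ("safety_impact", "decision_strength"):
    avg, spread = summarize(scores, key)
    # A widening spread across successive decisions could signal
    # disagreement about a decision's merits, or a need to better
    # communicate the basis for the decision.
    print(f"{key}: mean={avg:.1f}, spread={spread}")
```

Computing the same summary for each new decision, and comparing spreads over time, would give a manager a simple trend line on organizational alignment.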

Second, while not a definitive trend, it is apparent that in the mid-range scores people tended to see decision strength in terms of safety impact.  In other words, in situations where the safety impact was viewed as greater (e.g., 6 or so), the perceived strength of the decision was viewed as somewhat less than when the safety impact was viewed as somewhat lower (e.g., 4 or so).  This trend was emphasized by the scores that rated decision strength at 9 based on a safety impact of 2.  There is intrinsic logic to this, and it may also highlight to managers that an organization’s perception of safety priorities will be directly influenced by its understanding of the safety significance of the issues involved.  One can also see the potential for decision scores “explaining” safety culture survey results, which often show a relatively high percentage of respondents “somewhat agreeing” that, e.g., safety is a high priority, a smaller percentage “mostly agreeing,” and a smaller percentage yet “strongly agreeing.”

Third, there were some scores that appeared to us to be “outside the ballpark.”  The scores that rated safety impact at 10 did not seem consistent with our reading of the air handling unit issue, including the note indicating that the licensee had assessed the safety significance as minimal.

Stay tuned for the next decision scoring example and please provide your input.

Tuesday, June 21, 2011

Decisions….Decisions

Safety Culture Performance Measures

Developing forward looking performance measures for safety culture remains a key challenge today and is the logical next step following the promulgation of the NRC’s policy statement on safety culture.  The need remains high: the NRC continues to identify safety culture issues only after weaknesses have developed in the safety culture and ultimately manifested in traditional (lagging) performance indicators.

Current practice has continued to rely on safety culture surveys which focus almost entirely on attitudes and perceptions about safety.  But other cultural values are also present in nuclear operations - such as meeting production goals - and it is the rationalization of competing values on a daily basis that is at the heart of safety culture.  In essence decision makers are pulled in several directions by these competing priorities and must reach answers that accord safety its appropriate priority.

Our focus is on safety management decisions made every day at nuclear plants, e.g., operability determinations, exceeding LCO limits, LER determinations, JCOs, as well as many determinations associated with problem reporting and corrective action.  We are developing methods to “score” decisions based on how well they balance competing priorities and to relate those scores to inferences about safety culture.  As part of that process we are asking our readers to participate in the scoring of decisions that we will post each week - and then we will share the results and interpretation.  The scoring method will be a more limited version of our developmental effort but should illustrate some of the benefits of a decision-centric view of safety culture.
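To make the anchored-scale idea concrete, here is a minimal sketch.  Only the mid-scale anchor (“reasonably balances safety and other operational priorities”) is taken from this post; the other anchor wordings are our own illustrative assumptions, not the actual score-card language.

```python
# Hypothetical anchors for the 1-10 "decision strength" scale.  Only the
# anchor at 5 is quoted from the post; the others are assumed wording.
DECISION_STRENGTH_ANCHORS = {
    1: "sacrifices safety priority to production or cost pressures",
    5: "reasonably balances safety and other operational priorities",
    10: "highly conservative; safety priority clearly dominates",
}

def nearest_anchor(score: int) -> str:
    """Map a numeric score to the description of the closest anchor."""
    anchor = min(DECISION_STRENGTH_ANCHORS, key=lambda a: abs(a - score))
    return DECISION_STRENGTH_ANCHORS[anchor]
```

Anchored descriptions like these are what let different scorers attach comparable meaning to the same number, which is what makes the spread of scores across an organization interpretable.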

Look in the right column for the links to Score Decisions.  They will take you to the decision summaries and score cards.  We look forward to your participation and welcome any questions or comments.