A reader recently asked: “Do you subscribe to Edgar Schein's culture model?” The short-form answer is a qualified “Yes.” Prof. Schein has developed significant and widely accepted insights into the structure of organizational culture. In its simplest form, his model of culture has three levels: the organization’s (usually invisible) underlying beliefs and assumptions, its espoused values, and its visible artifacts such as behavior and performance. He describes the responsibility of management, through its leadership, to articulate the espoused values with policies and strategies and thus shape culture to align with management’s vision for the organization. Schein’s is a useful mental model for conceptualizing culture and management responsibilities.*
However, we have issues with the way some people have applied his work to safety culture. For starters, there is the apparent belief that these levels are related in a linear fashion - more particularly, that management, by promulgating and reinforcing the correct values, can influence the underlying beliefs, and that together these will guide the organization to deliver the desired behaviors, i.e., the target level of safety performance. This kind of thinking has problems.
First, it's too simplistic. Safety performance doesn't arise only from management's espoused values and what the rest of the organization supposedly believes. As discussed in many of our posts, we see a much more complex, multidimensional and interactive system that yields outcomes which reflect, to a greater or lesser degree, desired levels of safety. We have suggested that it is the totality of such outcomes that represents the safety culture in fact.**
Second, it leads to attempts to measure and influence safety culture that are often ineffective and even misleading. We wonder whether the heavy emphasis on values and leadership attitudes and behaviors - or traits - that the Schein model encourages creates a form-versus-substance trap. This emphasis carries over to safety culture surveys - currently the linchpin for identifying and "correcting" deficient safety culture - and even doubles down by measuring the perception of attitudes and behaviors. While attitudes and behaviors may in fact have a beneficial effect on the organizational environment in which people perform - we view them as good habits - we are not convinced they are the only determinants of the actions, decisions and choices made by the organization. Is it possible that this approach creates an organization more concerned with how it looks and how it is perceived than with what it does? If everyone is checking their safety likeness in the cultural mirror, might this distract from focusing on how and why actual safety-related decisions are being made?
We think there is good support for our skepticism. In each of the significant safety events of recent years - the BP refinery fire, the Massey coal mine explosion, the shuttle disasters, the Deepwater Horizon rig explosion, and the many instances of safety culture issues at nuclear plants - the organization and its senior management had been espousing the belief that "safety is the highest priority." Clearly that was more illusion than reality.
To give a final upward thrust to the apple cart, we don’t think that the current focus on nuclear safety culture is primarily about culture. Rather we see “safety culture” more as a proxy for management’s safety performance - and perhaps a back door for the NRC to regulate while disclaiming same.***
* We have mentioned Prof. Schein in several prior blog posts: June 26, 2012, December 8, 2011, August 11, 2010, March 29, 2010, and August 17, 2009.
** This past year we have posted several times on decisions as one type of visible result (artifact) of the many variables that influence organizational behavior. In addition, please revisit two of Prof. Perin’s case studies, summarized here. They describe well-intentioned people, who probably would score well on a safety culture survey, who made plant problems much worse through a series of decisions that had many more influences than management’s entreaties and staff’s underlying beliefs.
*** Back in 2006, the NRC staff proposed to enhance the ROP to more fully address safety culture, saying that “Safety culture includes . . . features that are not readily visible such as basic assumptions and beliefs of both managers and individuals, which may be at the root cause of repetitive and far-reaching safety performance problems.” It wouldn’t surprise us if that’s an underlying assumption at the agency. See L.A. Reyes to the Commissioners, SECY-06-0122 “Policy Issue Information: Safety Culture Initiative Activities to Enhance the Reactor Oversight Process and Outcomes of the Initiatives” (May 24, 2006) p. 7 ADAMS ML061320282.
Tuesday, September 4, 2012
More on Cynefin
Bob Cudlin recently posted on the work of David Snowden, a decision theorist and originator of the Cynefin decision construct. Snowden’s Cognitive Edge website has a lot of information related to Cynefin, perhaps too much to swallow at once. For those who want an introduction to the concepts, focusing on their implications for decision-making, we suggest a paper “Cynefin: repeatability, science and values”* by Prof. Simon French.
In brief, the Cynefin model divides decision contexts into four spaces: Known (or Simple), Knowable (or Complicated), Complex and Chaotic. Knowledge about cause-and-effect relationships (and thus, appropriate decision making approaches) differs for each space. In the Simple space, cause-and-effect is known and rules or processes can be established for decision makers; “best” practices are possible. In the Complicated space, cause-and-effect is generally known but individual decisions require additional data and analysis, perhaps with probabilistic attributes; different practices may achieve equal results. In the Complex space, cause-and-effect may only be identified after an event takes place so decision making must work on broad, flexible strategies that can be adjusted as a situation evolves; new practices emerge. In the Chaotic space, there are no applicable analysis methods so decision makers must try things, see what happens and attempt to stabilize the situation; a novel (one-off) practice obtains.
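For readers who like to see the framework laid out as a data structure, here is a small sketch of our own that captures the four spaces and the decision approaches described above. The structure, field names and wording are our illustrative choices, not Snowden's or French's formal definitions.

# A compact rendering of the four Cynefin decision contexts described above.
# The structure and wording are our own illustrative choices.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionContext:
    name: str
    cause_and_effect: str
    approach: str
    practice: str

CYNEFIN_SPACES = (
    DecisionContext("Known / Simple", "known in advance",
                    "apply established rules and processes", "best practice"),
    DecisionContext("Knowable / Complicated", "generally known, but needs data and analysis",
                    "analyze (perhaps probabilistically), then decide",
                    "several practices may work equally well"),
    DecisionContext("Complex", "only apparent after the fact",
                    "adopt broad, flexible strategies and adjust as events unfold",
                    "emergent practice"),
    DecisionContext("Chaotic", "no applicable analysis method",
                    "try something, observe, and act to stabilize",
                    "novel (one-off) practice"),
)

for ctx in CYNEFIN_SPACES:
    print(f"{ctx.name}: cause-and-effect {ctx.cause_and_effect}; {ctx.approach}; {ctx.practice}")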
The model in the 2008 French paper is not in complete accord with the Cynefin model currently described by Snowden but French’s description of the underlying considerations for decision makers remains useful. French’s paper also relates Cynefin to the views of other academics in the field of decision making.
For an overview of Cynefin in Snowden’s own words, check out “The Cynefin Framework” on YouTube. There he discusses a fifth space, Disorder, which is basically where a decision maker starts when confronted with a new decision situation. Importantly, a decision maker will instinctively try to frame the decision in the Cynefin decision space most familiar to the decision maker based on personal history, professional experience, values and preference for action.
In addition, Snowden describes the boundary between the Simple and Chaotic as the “complacent zone,” a potentially dangerous place. In the Simple space, the world appears well-understood but as near-misses and low-signal events are ignored, the system can drift toward the boundary and slip into the Chaotic space where a crisis can arise and decision makers risk being overwhelmed.
Both decision maker bias and complacency present challenges to maintaining a strong safety culture. The former can lead to faulty analysis of problems, forcing complex issues with multiple interactive causes through a one-size-fits-all solution protocol. The latter can lead to disasters, great and small. We have posted many times on the dangers of complacency. To access those posts, click “complacency” in the Labels box.
* S. French, “Cynefin: repeatability, science and values,” Newsletter of the European Working Group “Multiple Criteria Decision Aiding,” series 3, no. 17 (Spring 2008). Thanks to Bill Mullins for bringing this paper to our attention.
Posted by Lewis Conner
Labels: Cynefin, Decision Making, References
Thursday, August 30, 2012
Failure to Learn
In this post we call your attention to a current research paper* and a Wall Street Journal summary article** that shed some light on how people make decisions to protect against risk. The specific subject of the research is the response to imminent risk of house damage from hurricanes. As the author of the paper states, “The purpose of this paper is to attempt to resolve the question of whether there are, in fact, inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence, events.” (p.3) Also of interest is how the researchers used several simulations to gain insight into, and quantify, how participants’ decisions compared to optimal risk mitigation.
Are these results directly applicable to nuclear safety decisions? We think not. But they are far from irrelevant. They illustrate the value of careful and thoughtful research into the how and why of decisions, the impact of the decision environment and the opportunities for learning to produce better decisions. The work also raises the question, Where is the nuclear industry on this subject? Nuclear managers routinely make what are probably the most safety-significant decisions in any industry. But how good are these decisions, and what determines their quality? The industry might contend that the emphasis on safety culture (meaning values and traits) is the sine qua non for assuring decisions that adequately reflect safety. Bad decision? Must have been bad culture. Reiterate culture, assume better decisions will follow. Is this right, or is safety culture the wrong blanket - or just too small a blanket - to cover a decision process evolving from a complex adaptive system?
The basic construct for the first simulation was a contest among participants (college students) with the potential to earn a small cash bonus based on achieving certain performance results. Each participant was made the owner of a house in a coastal area subject to hurricane intrusion. During the simulation animation, a series of hurricanes would materialize in the ocean and approach land; the position, track and strength of each hurricane were continuously updated. Prior to landfall, participants had the choice of purchasing partial or full protection against damage for that specific storm. The objective was to maximize total net assets, i.e., the value of the house less any uncompensated damage and less the cost of any purchased protection.
While the first simulation focused on recurrent short-term mitigation decisions, in the second simulation participants had the option to purchase protection that would last at least for the full season but had to be purchased prior to a storm occurring. (A comprehensive description of the simulations and test data is provided in the referenced paper.)
The results indicated that participants significantly under-protected their homes, leading to actual losses higher than a “rational” approach to purchasing protection would have produced. While part of the losses was due to purchasing protection unnecessarily, most was due to under-protection. The main driver, according to the researchers, appeared to be that participants over-relied on their most recent experience instead of an objective assessment of current risk. In other words, if in a prior hurricane they experienced no damage, either because of the track of the hurricane or because they had purchased protection, they were less inclined to purchase protection for the next hurricane.
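To make that recency effect concrete, here is a small Monte Carlo sketch of our own devising. It is not the simulation used in the paper, and the storm probability, damage and protection cost below are invented numbers. It simply compares a "rational" owner, who buys protection whenever the expected loss exceeds its cost, with a recency-biased owner who buys protection only after experiencing damage in the previous storm.

import random

# Toy parameters -- invented for illustration, not taken from the Meyer (2012) paper.
P_HIT = 0.3          # chance an approaching storm actually damages the house
DAMAGE = 10_000      # uncompensated loss if the house is hit while unprotected
PROTECTION = 2_000   # cost of full protection for a single storm
N_STORMS = 20
N_SEASONS = 10_000

def rational(experienced_damage_last_time):
    """Buy protection whenever the expected loss exceeds the cost of protection."""
    return P_HIT * DAMAGE > PROTECTION

def recency_biased(experienced_damage_last_time):
    """Buy protection only if the most recent storm actually caused damage."""
    return experienced_damage_last_time

def season_loss(policy, rng):
    loss, damaged_last_time = 0.0, False
    for _ in range(N_STORMS):
        protect = policy(damaged_last_time)
        hit = rng.random() < P_HIT
        if protect:
            loss += PROTECTION
        elif hit:
            loss += DAMAGE
        # Damage is only *experienced* when the house was hit while unprotected.
        damaged_last_time = hit and not protect
    return loss

rng = random.Random(0)
for name, policy in (("rational", rational), ("recency-biased", recency_biased)):
    average = sum(season_loss(policy, rng) for _ in range(N_SEASONS)) / N_SEASONS
    print(f"{name:>15} policy, average loss per season: {average:,.0f}")

With these made-up numbers the biased owner pays noticeably more over a season, for essentially the reason the researchers describe: a storm that causes no damage - whether because it missed or because protection had been purchased - is read as evidence that protection is not needed.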
The simulations reveal limitations in the ability to achieve improved decisions in what was, in essence, a trial-and-error environment. Feedback occurred after each storm, but participants did not necessarily use it in an optimal manner “due to a tendency to excessively focus on the immediate disutility of cost outlays.” (p.10) In any event, it is clear that the nuclear safety decision making environment is “not ideal for learning—…[since] feedback is rare and noisy…” (p.5) In fact, most feedback in nuclear operations might appear to be affirming, since decisions to take short-term risks rarely result in bad outcomes. It is an environment susceptible to complacency more than learning.
The author closes with the question of whether the non-optimal decision making observed in the simulations can be overcome: “This may be difficult since the psychological mechanisms that lead to the biases may be hard-wired; as long as we remain present-focused, prone to chasing short-term rewards and avoiding short term punishment, it is unlikely that individuals and institutions will learn to undertake optimal levels of protective investment by experience alone. The key, therefore, is introducing decision architectures that allow individuals to overcome these biases through, for example, creative use of defaults…” (pp. 30-31)
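As a toy illustration of the "decision architecture" point (our own example, not something from the paper): an owner who intends to buy protection but, being present-focused, never gets around to making an active choice ends up protected or exposed depending solely on which way the default is set.

# Toy illustration of choice architecture -- our construction, not from the Meyer paper.
def ends_up_protected(default_protected: bool, owner_acts: bool,
                      owner_wants_protection: bool) -> bool:
    if owner_acts:
        return owner_wants_protection  # an active choice overrides the default
    return default_protected           # inaction falls back on the default

# An owner who wants protection but never acts:
for default in (False, True):
    protected = ends_up_protected(default_protected=default,
                                  owner_acts=False,
                                  owner_wants_protection=True)
    print(f"protection-by-default={default}: house protected? {protected}")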
* R.J. Meyer, “Failing to Learn from Experience about Catastrophes: The Case of Hurricane Preparedness,” The Wharton School, University of Pennsylvania Working Paper 2012-05 (March 2012).
** C. Shea, “Failing to Learn From Hurricane Experience, Again and Again,” Wall Street Journal (Aug. 17, 2012).
Posted by Bob Cudlin
Labels: Decision Making
Tuesday, August 28, 2012
Confusion of Properties and Qualities
[Image: Dave Snowden]
Snowden is a proponent of applying complexity science to inform managers’ decision making and actions. He is perhaps best known for developing the Cynefin framework, which is designed to help managers understand their operational context based on four archetypes: simple, complicated, complex and chaotic. In considering the archetypes one can see how various aspects of nuclear operations might fit within the simple or complicated frameworks - frameworks where tools such as best practices and root cause analysis are applicable. But one can also see the limitations of these frameworks in more complex situations, particularly those involving the nuanced safety decisions that are at the heart of nuclear safety culture. Snowden describes “complex adaptive systems” as ones where the system and its participants evolve together through ongoing interaction and influence, and system behavior is “emergent” from that process. Perhaps most provocative for nuclear managers is his contention that complex adaptive systems are “non-causal” in nature, meaning one shouldn’t think in terms of linear cause and effect and shouldn’t expect that root cause analysis will provide the needed insight into system failures.
With all that said, we want to focus on a quote from one of Snowden’s 2008 lectures, “Complexity Applied to Systems.”* At approximately the 15-minute mark, he comments on a “fundamental error of logic” he calls the “confusion of properties and qualities.” He says:
“...all of management science, they observe the behaviors of people who have desirable properties, then try to achieve those desirable properties by replicating the behaviors”.
By way of a pithy illustration Snowden says, “...if I go to France and the first ten people I see are wearing glasses, I shouldn’t conclude that all Frenchmen wear glasses. And I certainly shouldn’t conclude if I put on glasses, I will become French.”
For us Snowden’s observation generated an immediate connection to the approach being implemented around the nuclear enterprise. Think about the common definitions of safety culture adopted by the NRC and industry. The NRC definition specifies “... the core values and behaviors…” and “Experience has shown that certain personal and organizational traits are present in a positive safety culture. A trait, in this case, is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production, schedule, and the cost of the effort versus safety.”**
The INPO definition defines safety culture as “An organization's values and behaviors – modeled by its leaders and internalized by its members…”***
In keeping with these definitions the NRC and industry rely heavily on the results of safety culture surveys to ascertain areas in need of improvement. These surveys overwhelmingly focus on whether nuclear personnel are “modeling” the definitional traits, values and behaviors. This seems to fall squarely in the realm described by Snowden of looking to replicate behaviors in hopes of achieving the desired culture and results. Most often, identified deficiencies are subject to retraining to reinforce the desired safety culture traits. But what seems to be lacking is a determination of why the traits were not exhibited in the first place. Followup surveys may be conducted periodically, again to measure compliance with traits. This recipe is considered sufficient until the next time there are suspect decisions or actions by the licensee.
Bottom Line
The nuclear enterprise - NRC and industry - appears to be locked into a simplistic and linear view of safety culture. Values and traits produce desired behaviors; desired behaviors produce appropriate safety management. Bad results? Go back to values and traits and retrain. Have management reiterate that safety is their highest priority. Put up more posters.
But what if Snowden’s concept of complex adaptive systems is really the applicable model, and the safety management system is a much more complicated, continuously self-evolving process? It is a question well worth pondering - and one whose answer may have far more impact than many of the hardware-centric issues currently being pursued.
Footnote: Snowden is an immensely informative and entertaining lecturer and a large number of his lectures are available via podcasts on the Cognitive Edge website and through YouTube videos. They could easily provide a stimulating input to safety culture training sessions.
* Podcast available at http://cognitive-edge.com/library/more/podcasts/agile-conference-complexity-applied-to-systems-2008/.
** NRC Safety Culture Policy Statement (June 14, 2011).
*** INPO Definition of Safety Culture (2004).
Posted by Bob Cudlin
Labels: Cynefin, INPO, Management, NRC, References, Safety Culture
Tuesday, July 31, 2012
Regulatory Influence on Safety Culture
In September 2011 the Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA) held a workshop for regulators and industry on oversight of licensee management. “The principal aim of the workshop was to share experience and learning about the methods and approaches used by regulators to maintain oversight of, and influence, nuclear licensee leadership and management for safety, including safety culture.”*
However, we were very impressed by Prof. Richard Taylor’s keynote address. He is from the University of Bristol and has studied organizational and cultural factors in disasters and near-misses in both nuclear and non-nuclear contexts. His list of common contributors includes issues with leadership, attitudes, environmental factors, competence, risk assessment, oversight, organizational learning and regulation. He expounded on each factor with examples and additional detail.
We found his conclusion most encouraging: “Given the common precursors, we need to deepen our understanding of the complexity and interconnectedness of the socio-political systems at the root of organisational accidents.” He suggests using system dynamics modeling to study archetypes including “maintaining visible convincing leadership commitment in the presence of commercial pressures.” This is totally congruent with the approach we have been advocating for examining the effects of competing business and safety pressures on management.
Unfortunately, this was the intellectual high point of the proceedings. Topics that we believe are important to assessing and understanding safety culture got short shrift thereafter. In particular, goal conflict, the corrective action program (CAP) and management compensation were not mentioned by any of the other presenters.
Decision-making was mentioned by a few presenters but there was no substantive discussion of this topic (the U.K. presenter had a motherhood statement that “Decisions at all levels that affect safety should be rational, objective, transparent and prudent”; the Barnes/Kove presentation appeared to focus on operational decision making). A bright spot was in the meeting summary where better insight into licensees’ decision making process was mentioned as desirable and necessary by regulators. And one suggestion for future research was “decision making in the face of competing goals.” Perhaps there is hope after all.
(If this post seems familiar, last Dec 5 we reported on a Feb 2011 IAEA conference for regulators and industry that covered some of the same ground. Seven months later the bureaucrats had inched the football a bit down the field.)
* Proceedings of an NEA/IAEA Workshop, Chester, U.K. 26-28 Sept 2011, “Oversight and Influencing of Licensee Leadership and Management for Safety, Including Safety Culture – Regulatory Approaches and Methods,” NEA/CSNI/R(2012)13 (June 2012).
Posted by Lewis Conner
Labels: Decision Making, Goal Conflict, IAEA, NEA, References, System Dynamics
Friday, July 27, 2012
Modeling Safety Culture (Part 4): Simulation Results 2
As we introduced in our prior post on this subject (Results 1), we are presenting some safety culture simulation results based on a highly simplified model. In that post we illustrated how management might react to business pressure caused by a reduction in authorized budget dollars. The actions of management result in shifting of resources from safety to business and lead to changes in the state of safety culture.
In this post we continue with the same model and some other interesting scenarios. In each of the following charts three outputs are plotted: safety culture in red, management action level in blue and business pressure in dark green. The situation is an organization with a somewhat lower initial safety culture and confronted with a somewhat smaller budget reduction than the example in Results 1.
[Figures 1, 2 and 3: simulation charts for three management strategies]
Perhaps the most important takeaway from these three simulations is that the total changes in safety culture are not significantly different. A certain price is being paid for shifting priorities away from safety; however, the ability to reduce and maintain lower business pressure is much better with the last management strategy.
[Figure 4]
Posted by Bob Cudlin
Labels: Decision Making, Goal Conflict, Simulation, System Dynamics
Friday, July 20, 2012
Cognitive Dissonance at Palisades
“Cognitive dissonance” is the tension that arises from holding two conflicting thoughts in one’s mind at the same time. Here’s a candidate example, a single brief document that presents two different perspectives on safety culture issues at Palisades.
On June 26, 2012, the NRC requested information on Palisades’ safety culture issues, including the results of a 2012 safety culture assessment conducted by an outside firm, Conger & Elsea, Inc (CEI). In reply, on July 9, 2012 Entergy submitted a cover letter and the executive summary of the CEI assessment.* The cover letter says “Areas for Improvement (AFls) identified by CEI overlapped many of the issues already identified by station and corporate leadership in the Performance Recovery Plan. Because station and corporate management were implementing the Performance Recovery Plan in April 2012, many of the actions needed to address the nuclear safety culture assessment were already under way.”
Further, “Gaps identified between the station Performance Recovery Plan and the safety culture assessment are being addressed in a Safety Culture Action Plan. . . . [which is] a living document and a foundation for actively engaging station workers to identify, create and complete other actions deemed to be necessary to improve the nuclear safety culture at PNP.”
Seems like management has matters in hand. But let’s look at some of the issues identified in the CEI assessment.
“. . . important decision making processes are governed by corporate procedures. . . . However, several events have occurred in recent Palisades history in which deviation from those processes contributed to the occurrence or severity of an event.”
“. . . there is a lack of confidence and trust by the majority of employees (both staff and management) at the Plant in all levels of management to be open, to make the right decisions, and to really mean what they say. This is indicated by perceptions [of] the repeated emphasis of production over safety exhibited through decisions around resources.” [emphasis added]
“There is a lack in the belief that Palisades Management really wants problems or concerns reported or that the issues will be addressed. The way that CAP is currently being implemented is not perceived as a value added process for the Plant.”
The assessment also identifies issues related to Safety Conscious Work Environment and accountability throughout the organization.
So management is implying things are under control but the assessment identified serious issues. As our Bob Cudlin has been explaining in his series of posts on decision making, pressures associated with goal conflict permeate an entire organization and the problems that arise cannot be fixed overnight. In addition, there’s no reason for a plant to have an ineffective CAP but if the CAP isn’t working, that’s not going to be quickly fixed either.
* Letter, A.J. Vitale to NRC, “Reply to Request for Information” (July 9, 2012) ADAMS ML12193A111.
Posted by Lewis Conner
Labels: Assessment, Goal Conflict, NRC, Palisades, Safety Culture
Sunday, July 15, 2012
Modeling Safety Culture (Part 3): Simulation Results 1
As promised in our June 29, 2012 post, we are taking the next step of incorporating our mental models of safety culture and decision making in a simple simulation program. The performance dynamic we described treats safety culture as a “level,” and the level of safety culture determines the organization’s ability to resist pressure associated with competing business priorities. If business performance is not meeting goals, pressure on management is created, which can be offset by a sufficiently strong safety culture. However, if business pressure exceeds the threshold for a given safety culture level, management decision making can be affected, resulting in a shift of resources from safety to business needs. This may relieve some business pressure but creates a safety gap that can degrade safety culture, making it potentially even more vulnerable to business pressure.
It is worth expanding on the concept of safety culture as a “level” or in systems dynamics terms, a “stock” - an analogy might be the level of liquid in a reservoir which may increase or decrease due to flows into and out of the reservoir. This representation causes safety culture to respond less quickly to changes in system conditions than other factors. For example, an abrupt cut in an organization’s budget and its pressure on management to respond may occur quite rapidly - however its impact on organizational safety culture will play out more gradually. Thus “...stocks accumulate change. They are kind of a memory, storing the results of past actions...stocks cannot be adjusted instantaneously no matter how great the organizational pressures…This vital inertial characteristic of stock and flow networks distinguishes them from simple causal links.”*
Let’s see this in action in the following highly simplified model. The model considers just two competing priorities: safety and business. When performance in these categories differs from goals, pressure is created on management and may result in actions to ameliorate the pressure. In this model management action is limited to shifting resources from one priority to the other. Safety culture, per our June 29, 2012 post, is an organization’s ability to resist and then respond to competing priorities. At time zero, a reduction in authorized budget is imposed resulting in a gap (current spending versus authorized spending) and creating business pressure on management to respond.
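For readers who want to experiment, below is a minimal sketch of this kind of stock-and-flow dynamic written in plain Python rather than a system dynamics package. It is our rough approximation of the dynamic described above, not the model that produced the figures; every coefficient, threshold and time constant is an invented placeholder.

# Minimal stock-and-flow sketch of the dynamic described above (a rough
# approximation, not the model behind the figures; all parameters invented).
DT = 0.1                     # integration time step (arbitrary time units)
STEPS = 600

culture = 0.8                # safety culture "stock" (0..1), initially fairly strong
authorized_budget = 0.8      # budget reduction imposed at time zero
spending = 1.0               # current spending level
business_pressure = 0.0
safety_priority = 1.0        # share of resources devoted to safety

history = []
for step in range(STEPS):
    # Business pressure accumulates while spending exceeds the authorized budget.
    budget_gap = max(spending - authorized_budget, 0.0)
    business_pressure += DT * (budget_gap - 0.1 * business_pressure)

    # A stronger culture resists pressure longer: the action threshold rises with culture.
    threshold = 0.5 + 1.5 * culture
    if business_pressure > threshold:
        # Management shifts resources from safety to business needs,
        # relieving business pressure but opening a safety gap.
        safety_priority = max(safety_priority - DT * 0.2, 0.0)
        spending = max(spending - DT * 0.1, authorized_budget)
    else:
        # With pressure below the culture-dependent threshold, safety priority
        # slowly drifts back toward its nominal level.
        safety_priority = min(safety_priority + DT * 0.05, 1.0)

    # The culture stock changes slowly: it erodes while safety priority is
    # depressed and recovers only gradually afterward (the inertia of a stock).
    safety_gap = 1.0 - safety_priority
    culture += DT * (0.01 * (0.8 - culture) - 0.1 * safety_gap)
    culture = min(max(culture, 0.0), 1.0)

    history.append((step * DT, culture, business_pressure, safety_priority))

for t, c, p, s in history[::100]:
    print(f"t={t:5.1f}  culture={c:.2f}  pressure={p:.2f}  safety priority={s:.2f}")

Even in this crude form, the sketch shows the qualitative behavior discussed below: business pressure builds quickly, management actions follow, and the culture stock declines and recovers far more slowly than the pressures acting on it.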
[Figure 1]
[Figure 2]
The figures show the business pressure associated with the gap. Immediately following the budget reduction, business pressure rapidly increases and quickly reaches a level sufficient to cause management to start to shift priorities. The first set of management actions brings some pressure relief; the second set of actions further reduces pressure. As expected, there is some time lag in the response of business pressure to the actions of management.
[Figure 3]
Figure 3 shows the change accumulated in the safety culture. Note first the gradual changes that occur in culture versus the faster and sharper changes in management actions and business pressure. As management takes action there is a loss of safety priority and safety culture slowly degrades. When further escalation of management action occurs, it does so at a point where culture is already lower, making the organization more susceptible to compromising safety priorities. Safety culture declines further. This type of response is indicative of a feedback loop, which is an important dynamic feature of the system: business pressure causes management actions, those actions degrade safety culture, and degraded culture reduces resistance to further actions.
We invite comments and questions from our readers.
* John Morecroft, Strategic Modelling and Business Dynamics (John Wiley & Sons, 2007) pp. 59-61.
Posted by Bob Cudlin
Labels: Decision Making, Goal Conflict, Simulation, System Dynamics