Charlan Nemeth is a psychology professor at the University of California, Berkeley. Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions. This post presents highlights from her 2018 book* and provides our perspective on her views.
Going along to get along
Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong. Why? Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)
An organization’s leader(s) may create a culture of agreement, but consensus, aka the tyranny of the majority, gives the culture its power over members. People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion. The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)
Truth matters less than group cohesion. People will shape and distort reality to support the consensus—they are complicit in their own brainwashing. They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along. They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise. To get along, most employees don’t speak up when they see problems. (32,42,98,198)
“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible. The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action. Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)
Dissent adds value to the decision making process
Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., creates more alternatives and considers facts on all sides of the issue. Importantly, the decision making process is improved even when the dissenter is wrong because it increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180)
Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up. Anonymous dissent can help protect the dissenter from the group. (37,47)
Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter. To persuade others, the dissenter must remain consistent in their position, changing only in response to new or changing information. Only authentic, persistent dissent will force others to confront the possibility that they may be wrong. At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)
Alternatives to dissent
Other, less antagonistic, approaches to improving decision making have been promoted. Nemeth finds them lacking.
Training is the go-to solution in many organizations but is not very effective at addressing biases or getting people to speak up in the face of power and hierarchy. Dissent is superior to training because it prompts people to reconsider their positions and contemplate alternatives. (101,107)
Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth. However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated. In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)
Demographic diversity is promoted as a way to get more input into decisions. But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)
The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it. Group members believe the approach is fake or acting (even when the advocate really believes it) so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong. The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-04)
Dissent is not free for the individual or the group
Dissenters are disliked, ridiculed, punished, or worse. Dissent definitely increases conflict and sometimes lowers morale in the group. It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed. The culture should expect that everyone will be treated with respect. (197-98,209)
Our Perspective
We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered. Nemeth’s work strengthens and extends our belief in the value of different views.
If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers. Dissent may dissuade the group from linear cause-effect, path of least resistance thinking. We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles. In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions. His people have an obligation to speak up if they disagree. Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.
On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus. Let’s face it: nobody likes a rat.
We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making. Training can give people skills or techniques for better decision making but training does not address the underlying values that steer group decision making dynamics.
Much academic research of this sort is done using students as test subjects.*** They are readily available, willing to participate, and follow directions. Some folks think the results don’t apply to older adults in formal organizations. We disagree. It’s easier to form groups of strangers from students, who don’t have to worry about the power dynamics and personal relationships that people face in work situations; as a result, the underlying psychological mechanisms can be exposed clearly and cleanly.
Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read.
(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)
* C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).
** Kennedy learned from the Bay of Pigs fiasco. He used a much more open and inclusive decision making process during the Cuban Missile Crisis.
*** For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.
Monday, June 29, 2020
Tuesday, April 17, 2018
Nuclear Safety Culture: Insights from Principles by Ray Dalio
Decision Making
We’ll begin with Dalio’s mental model of reality. Reality is a system of universal cause-effect relationships that repeat and evolve like a perpetual motion machine. The system dynamic is driven by evolution (“the single greatest force in the universe” (p. 142)) which is the process of adaptation.
Because many situations repeat themselves, principles (policies or rules) advance the goal of making decisions in a systematic, repeatable way. Any decision situation has two major steps: learning (obtaining and synthesizing data about the current situation) and deciding what to do. Logic, reason and common sense are the primary decision making mechanisms, supported by applicable existing principles and tools, e.g., expected value calculations or evidence-based decision making tools. The lessons learned from each decision situation can be incorporated into existing or new principles. Practicing the principles develops good habits, i.e., automatic, reflexive behavior in the specified situations. Ultimately, the principles can be converted into algorithms that can be computerized and used to support the human decision makers.
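Dalio’s idea of converting principles into algorithms can be illustrated with a minimal sketch. The option names, probabilities, and payoffs below are entirely hypothetical; the point is simply to show one of the tools he mentions, an expected value calculation, expressed as a systematic, repeatable decision rule.

```python
# Minimal sketch of a decision principle encoded as a rule (hypothetical data).
# Each option maps to a list of (probability, payoff) outcome pairs.

def expected_value(outcomes):
    """Return the probability-weighted payoff for a list of (p, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    "upgrade equipment": [(0.7, 120_000), (0.3, -40_000)],
    "defer upgrade":     [(0.9, 10_000), (0.1, -200_000)],
}

# The "principle" here: choose the option with the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
```

In Dalio’s terms, once a rule like this has proven itself across many repeated situations, it can be computerized to support, not replace, the human decision makers.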
Believability weighting can be applied during the decision making process to obtain data or opinions about solutions. Believable people can be anyone in the organization but are limited to those “who 1) have repeatedly and successfully accomplished the thing in question, and 2) . . . can logically explain the cause-effect relationships behind their conclusions.” (p. 371) Believability weighting supplements and challenges responsible decision makers but does not overrule them. Decision makers can also make use of thoughtful disagreement where they seek out brilliant people who disagree with them to gain a deeper understanding of decision situations.
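Believability weighting can be sketched as a weighted average, with invented names and numbers: each opinion is weighted by the holder’s track record rather than their rank. The weighting scheme shown is our own illustration, not Bridgewater’s actual method.

```python
# Hypothetical sketch of believability weighting. Estimates and weights are
# invented; a weight stands in for a person's demonstrated track record.

def believability_weighted(opinions):
    """opinions: list of (estimate, believability_weight) pairs."""
    total = sum(w for _, w in opinions)
    return sum(est * w for est, w in opinions) / total

# Three estimates of a project's duration in weeks.
opinions = [
    (10, 3.0),  # has repeatedly done this kind of work successfully
    (16, 1.0),  # newer analyst, unproven
    (12, 2.0),  # moderate track record
]

consensus = believability_weighted(opinions)  # pulled toward proven performers
```

Consistent with Dalio’s rule that believability weighting supplements but does not overrule the responsible party, the weighted result would be treated as input to the decision maker, not as a binding vote.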
The organization needs a process to get beyond disagreement. After all discussion, the responsible party exercises his/her decision making authority. Ultimately, those who disagree have to get on board (“get in sync”) and support the decision or leave the organization.
The two biggest barriers to good decision making are ego and blind spots. Radical open-mindedness recognizes that the search for what’s true and the best answer is more important than the need for any specific person, no matter their position in the organization, to be right.
Culture
Organizations and the individuals who populate them should also be viewed as machines. Both are imperfect but capable of improving. The organization is a machine made up of culture and people that produces outcomes that provide feedback from which learning can occur. Mistakes are natural but it is unacceptable to not learn from them. Every problem is an opportunity to improve the machine.
People are generally imperfect machines. People are more emotional than logical. They suffer from ego (subconscious drivers of thoughts) and blind spots (failure to see weaknesses in themselves). They have different character attributes. In short, people are all “wired” differently. A strong culture with clear principles is needed to get and keep everyone in sync with each other and in pursuit of the organization’s goals.
Mutual adjustment takes place when people interact with culture. Because people are different and the potential to change their wiring is low** it is imperative to select new employees who will embrace the existing culture. If they can’t or won’t, or lack ability, they have to go. Even with its stringent hiring practices, about a third of Bridgewater’s new hires are gone by the end of eighteen months.
Human relations are built on meaningful relationships, radical truth and tough love. Meaningful relationships means people give more consideration to others than themselves and exhibit genuine caring for each other. Radical truth means you are “transparent with your thoughts and open-mindedly accepting the feedback of others.” (p. 268) Tough love recognizes that criticism is essential for improvement towards excellence; everyone in the organization is free to criticize any other member, no matter their position in the hierarchy. People have an obligation to speak up if they disagree.
“Great cultures bring problems and disagreements to the surface and solve them well . . .” (p. 299) The culture should support a five-step management approach: Have clear goals, don’t tolerate problems, diagnose problems when they occur, design plans to correct the problems, and do what’s necessary to implement the plans, even if the decisions are unpopular. The culture strives for excellence so it’s intolerant of folks who aren’t excellent and goal achievement is more important than pleasing others in the organization.
More on Management
Dalio’s vision for Bridgewater is “an idea meritocracy in which meaningful work and meaningful relationships are the goals and radical truth and radical transparency are the ways of achieving them . . .” (p. 539) An idea meritocracy is “a system that brings together smart, independent thinkers and has them productively disagree to come up with the best possible thinking and resolve their disagreements in a believability-weighted way . . .” (p. 308) Radical truth means “not filtering one’s thoughts and one’s questions, especially the critical ones.” (ibid.) Radical transparency means “giving most everyone the ability to see most everything.” (ibid.)
A person is a machine operating within a machine. One must be one’s own machine designer and manager. In managing people and oneself, take advantage of strengths and compensate for weaknesses via guardrails and soliciting help from others. An example of a guardrail is assigning a team member whose strengths balance another member’s weaknesses. People must learn from their own bad decisions, so self-reflection after making a mistake is essential. Managers must ascertain whether mistakes are evidence of a weakness, whether compensatory action is required, or, if the weakness is intolerable, whether termination is warranted. Because values, abilities and skills are the drivers of behavior, management should have a full profile for each employee.
Governance is the system of checks and balances in an organization. No one is above the system, including the founder-owner. In other words, senior managers like Dalio can be subject to the same criticism as any other employee.
Leadership in the traditional sense (“I say, you do”) is not so important in an idea meritocracy because the optimal decisions arise from a group process. Managers are seen as decision makers, system designers and shapers who can visualize a better future and then build it. Leaders “must be willing to recruit individuals who are willing to do the work that success requires.” (p. 520)
Our Perspective
We recognize international investment management is way different from nuclear power management, so some of Dalio’s principles can only be applied to the nuclear industry in a limited way, if at all. One obvious example of a lack of fit is the area of risk management. The investing environment is extremely competitive, with players evolving rapidly and searching for any edge. Timely bets (investments) must be made under conditions where the risk of failure is many orders of magnitude greater than what is acceptable in the nuclear industry. Other examples include the relentless, somewhat ruthless, pursuit of goals and a willingness to jettison people, both of which are foreign to the utility world.
But we shouldn’t throw the baby out with the bath water. While Dalio’s approach may be too extreme for wholesale application in your environment, it does provide a comparison (note we don’t say “standard”) for your organization’s performance. Does your decision making process measure up to Dalio’s in terms of robustness, transparency and the pursuit of truth? Does your culture really strive for excellence (and eliminate those who don’t share that vision) or is it an effort constrained by hierarchical, policy or political realities?
This is a long book but it’s easy to read and key points are repeated often. Not all of it is novel; many of the principles are based on observations or techniques that have been around for a while and should be familiar to you. For example, ideas about how human minds work are drawn, in part, from Daniel Kahneman; an integrated hierarchy of goals looks like Management by Objectives; and a culture that doesn’t automatically punish people for making mistakes or tolerable errors sounds like a “just culture,” albeit with some mandatory individual learning attached.
Bottom line: Give this book a quick look. It can’t hurt and might help you get a clearer picture of how your own organization actually operates.
* R. Dalio, Principles (New York: Simon & Schuster, 2017). This book was recommended to us by a Safetymatters reader. Please contact us if you have any material you would like us to review.
** A person’s basic values and abilities are relatively fixed, although skills may be improved through training.
Posted by Lewis Conner. Labels: Dalio, Decision Making, Just Culture, Kahneman, Management, Mental Model, References
Friday, December 1, 2017
Nuclear Safety Culture: Focus on Decision Making
The McKinsey Quarterly (MQ) has packaged a trio of articles* on decision making (DM). Their first purpose is identifying and countering the different biases that lead to sub-optimal, even disastrous, decisions. (When specific biases are widespread in an organization, they are part of its culture.) A second purpose is to describe the attributes of fairer, more robust and more effective DM processes. The articles’ specific topics are (1) the behavioral science that underlies DM, (2) a method for categorizing and processing decisions and (3) a case study of a major utility that changed its decision culture.
“The case for behavioral strategy” (MQ, March 2010)
This article covers the insights from psychology that can be used to fashion a robust DM process. The authors make the case for process improvement by reporting survey research results showing that over 50 percent of the variability in decision results (i.e., performance) was determined by the quality of the DM process, while less than 10 percent was caused by the quality of the underlying analysis.
There are plenty of cognitive biases that can affect human DM. The authors discuss several of them, and strategies for counteracting them, as summarized below.

- False pattern recognition (e.g., saliency (overweighting recent or memorable events), confirmation, inaccurate analogies): Require alternative explanations for the data in the analysis, articulate participants’ relevant experiences (which can reveal the basis for their biases), and identify similar situations for comparative analysis.
- Bias for action: Explicitly consider uncertainty in the input data and the possible outcomes.
- Stability (anchoring to an initial value, loss aversion): Establish stretch targets that can’t be achieved by business as usual.
- Silo thinking: Involve a diverse group in the DM process and define specific decision criteria before discussions begin.
- Social (conformance to group views): Create genuine debate through a diverse set of decision makers, a climate of trust, and depersonalized discussions.
The greatest problem arises from biases that create repeatable patterns that become undesirable cultural traits. DM process designers must identify the types of biases that arise in their organization’s DM, specify debiasing techniques that will work in their organization, and embed those techniques in formal DM procedures.
An attachment to the article identifies and defines 17 specific biases. Much of the seminal research on DM biases was performed by Daniel Kahneman who received a Nobel prize for his efforts. We have reviewed Prof. Kahneman’s work on Safetymatters; see our Nov. 4, 2011 and Dec. 18, 2013 posts or click on the Kahneman label.
“Untangling your organization’s decision making” (MQ, June 2017)
While this article is aimed at complex, global organizations, there are lessons here for nuclear organizations (typically large bureaucracies) because all organizations have become victims of over-abundant communication, with too many meetings and low-value e-mail threads distracting members from paying attention to making good decisions.
The authors posit four types of decisions an organization faces, plotted on a 2x2 matrix (the consultant’s best friend) with scope and impact (broad or narrow) on one axis and level of familiarity (infrequent or frequent) on the other. A different DM approach is proposed for each quadrant.
Big-bet decisions are infrequent and have broad impact. Recommendations include (1) ensure there’s an executive sponsor, (2) break down the mega-decision into manageable parts for analysis (and reassemble them later), (3) use a standard DM approach for all the parts and (4) establish a mechanism to track effectiveness during decision implementation.
The authors observe that some decisions turn out to be “bet the company” ones without being recognized as such. There are examples of this in the nuclear industry. For details, see our June 18, 2013 post on Kewaunee (had only an 8-year PPA), Crystal River (tried to cut through the containment using in-house expertise) and SONGS (installed replacement steam generators with an unproven design).
Cross-cutting decisions are more frequent and have broad impact. Some decisions at a nuclear power plant fall into this category. They need to have the concurrence and support of the Big 3 stakeholders (Operations, Engineering and Maintenance). Silo attitudes are an omnipresent threat to success in making these kinds of decisions. The key is to get the stakeholders to agree on the main process steps and define them in a plain-English procedure that defines the calendar, handoffs and decisions. Governing policy should establish the DM bodies and their authority, and define shared performance metrics to measure success.
Delegated decisions are frequent and low-risk. They can be effectively handled by an individual or working team, with limited input from others. The authors note “The role-modeling of senior leaders is invaluable, but they may be reluctant” to delegate. We agree. In our experience, many nuclear managers were hesitant to delegate as many decisions as they could have to subordinates. Their fear of being held accountable for a screw-up was just too great. However, their goal should have been to delegate all decisions except those for which they alone had the capabilities and accountability. Subordinates need appropriate training and explicit authority to make their decisions and they need to be held accountable by higher-level managers. The organization needs to establish a clear policy defining when and how a decision should be elevated to a more senior decision maker.
Ad hoc decisions are infrequent and low-risk; they were deliberately omitted from the article.
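The four decision types above can be sketched as a simple routing function. The category names come from the article; the boolean encoding of the two axes is our own illustration.

```python
# Sketch of the article's 2x2 decision typology. Category names are from the
# article; representing the axes as booleans is our illustration.

def decision_type(broad_impact: bool, frequent: bool) -> str:
    """Map the two axes (scope/impact, familiarity) to a decision category."""
    if broad_impact and not frequent:
        return "big bet"
    if broad_impact and frequent:
        return "cross-cutting"
    if not broad_impact and frequent:
        return "delegated"
    return "ad hoc"  # narrow impact, infrequent
```

Each category then gets its own DM approach, e.g., an executive sponsor and decomposition for big bets, or explicit delegation policy for frequent low-risk decisions.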
“A case study in combating bias” (MQ, May 2017)
This is an interview with a senior executive of a German utility that invested €10 billion in conventional power projects, investments that failed when the political-economic environment evolved in a direction opposite to their assumptions. In their postmortem, they realized they had succumbed to several cognitive biases, including status quo, confirmation, champion and sunflower. The sunflower bias (groups aligning with their leaders) stretched far down the organizational hierarchy so lower-level analysts didn’t dare to suggest contrary assumptions or outcomes.
The article describes how the utility changed its DM practices to promote awareness of biases and implement debiasing techniques; e.g., one key element is the official designation of “devil’s advocates” in DM groups. Importantly, training emphasizes that biases are not a personal defect but “just there,” i.e., part of the culture. The interviewee noted that the revised process is very time-intensive, so it is used only for the most important decisions facing each user group.
Our Perspective
The McKinsey content describes executive level, strategic DM but many of the takeaways are equally applicable to decisions made at the individual, department and inter-department level, where a consistent approach is perhaps even more important in maintaining or improving organizational performance.
The McKinsey articles come in one of their Five Fifty packages, with a summary you can review in five minutes and the complete articles that may take fifty minutes total. You should invest at least the smaller amount.
* “Better Decisions,” McKinsey Quarterly Five Fifty. Retrieved Nov. 28, 2017.
Posted by Lewis Conner. Labels: Decision Making, Decisions, Kahneman, Management
Monday, October 16, 2017
Nuclear Safety Culture: A Suggestion for Integrating “Just Culture” Concepts
All of you have heard of “Just Culture” (JC). At heart, it is an attitude toward investigating and explaining errors that occur in organizations in terms of “why” an error occurred, including systemic reasons, rather than focusing on identifying someone to blame. How might JC be applied in practice? A paper* by Shem Malmquist describes how JC concepts could be used in the early phases of an investigation to mitigate cognitive bias on the part of the investigators.
The author asserts that “cognitive bias has a high probability of occurring, and becoming integrated into the investigators subconscious during the early stages of an accident investigation.”
He recommends that, from the get-go, investigators categorize all pertinent actions that preceded the error as an error (unintentional act), at-risk behavior (intentional but for a good reason), or reckless (conscious disregard of a substantial risk or intentional rule violation). (p. 5) For errors or at-risk actions, the investigator should analyze the system, e.g., policies, procedures, training or equipment, for deficiencies; for reckless behavior, the investigator should determine what system components, if any, broke down and allowed the behavior to occur. (p. 12) Individuals should still be held responsible for deliberate actions that resulted in negative consequences.
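Malmquist’s triage step can be sketched as a small routing function. The three categories and their dispositions follow the paper’s description; the code itself is our own illustration, not something from the paper.

```python
# Sketch of the Just Culture triage step (category names per Malmquist;
# the function is our own illustration).

def jc_disposition(category: str) -> str:
    """category: 'error', 'at-risk', or 'reckless'."""
    if category in ("error", "at-risk"):
        # Unintentional or well-intentioned acts point at the system:
        # policies, procedures, training, equipment.
        return "analyze the system for deficiencies"
    if category == "reckless":
        # Conscious disregard of risk: find what let it happen, and hold
        # the individual responsible for the deliberate act.
        return "identify failed system components; hold individual responsible"
    raise ValueError(f"unknown category: {category}")
```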
Adding this step to a traditional event chain model will enrich the investigation and help keep investigators from going down the rabbit hole of following chains suggested by their own initial biases.
Because JC is added to traditional investigation techniques, Malmquist believes it might be more readily accepted than other approaches for conducting more systemic investigations, e.g., Leveson’s System Theoretic Accident Model and Processes (STAMP). Such approaches are complex, require lots of data and implementing them can be daunting for even experienced investigators. In our opinion, these models usually necessitate hiring model experts who may be the only ones who can interpret the ultimate findings—sort of like an ancient priest reading the entrails of a sacrificial animal. Snide comment aside, we admire Leveson’s work and reviewed it in our Nov. 11, 2013 post.
Our Perspective
This paper is not some great new insight into accident investigation but it does describe an incremental step that could make traditional investigation methods more expansive in outlook and robust in their findings.
The paper also provides a simple introduction to the works of authors who cover JC or decision-making biases. The former category includes Reason and Dekker and the latter one Kahneman, all of whom we have reviewed here at Safetymatters. For Reason, see our Nov. 3, 2014 post; for Dekker, see our Aug. 3, 2009 and Dec. 5, 2012 posts; for Kahneman, see our Nov. 4, 2011 and Dec. 18, 2013 posts.
Bottom line: The parts describing and justifying the author’s proposed approach are worth reading. You are already familiar with much of the contextual material he includes.
* S. Malmquist, “Just Culture Accident Model – JCAM” (June 2017).
Posted by Lewis Conner. Labels: Dekker, James Reason, Just Culture, Kahneman, Leveson, References, Systems View
Tuesday, June 7, 2016
The Criminalization of Safety (Part 3)
Our Perspective
The facts and circumstances of the events described in Table 1 in Part 1 point to a common driver: the collision of business and safety priorities, with safety being compromised. Culture is inferred as the “cause” in several of the events, but with little amplification or specifics.[1] The compromises in some cases were intentional; in others they were the product of a more complex rationalization. The events have been accompanied by increased criminal prosecutions, with varied success.
We think it is fair to say that, so far, criminalization of safety performance does not appear to be an effective remedy. Statutory limitations and proof issues are significant obstacles with no easy solution. The reality is that criminalization is at its core a "disincentive." To be effective it would have to deter actions or decisions that are inconsistent with safety without creating a minefield of culpability. It is also a blunt instrument, requiring rather egregious behavior to rise to the level of criminality. Its best use is probably as an ultimate boundary: deterring intentional misconduct without becoming an unintended trap for bad judgment or inadequate performance. In another vein, criminalization also seems incompatible with the concept of a "just culture" except for situations involving intentional misconduct or gross negligence.
Whether effective or not, criminalization reflects the urgency felt by government authorities to constrain excessive risk taking, intentional or not, and to enhance oversight. It is increasingly clear that current regulatory approaches are missing the mark. All of the events catalogued in Table 1 occurred in industries that are subject to detailed safety and environmental regulation. After-the-fact assessments highlight missed opportunities for more assertive regulatory intervention, and in the Flint cases criminal charges are actually being applied to regulators. The Fukushima event precipitated a complete overhaul of the nuclear regulatory structure in Japan, still a work in progress. Post hoc punishments, no matter how severe, are not a substitute for effective oversight.
Nuclear Regulation Initiatives
Looking specifically at nuclear regulation in the U.S., we believe several specific reforms should be considered. It is always difficult to reform without the impetus of a major safety event, but if there is ever an "O-ring" moment in the nuclear industry,[2] these are actions that could appear obvious in the post-event assessment.
1. The NRC should include the safety management system in its regulatory activities.
The NRC has effectively constructed a cordon sanitaire around safety management by decreeing that "management" is beyond the scope of regulation. The NRC relies on the fact that licensees bear the primary responsibility for safety and holds that the NRC should not intrude into that role. Given the recent trend of scrutinizing regulators' performance following safety events, this legalistic "defense" may not fare well in a situation where more intrusive regulation could have made the difference.
The NRC does monitor "safety culture" and often requires licensees to address weaknesses in culture following performance issues. In essence, safety culture has become an anodyne that avoids direct confrontation of safety management issues. Cynically, one could say it is the ultimate conspiracy: regulators and "stakeholders" come together to accept something non-contentious and conveniently abstract, preventing a necessary but (apparently to both sides) unwanted intrusion into safety management.
As readers of this blog know, our unyielding focus has been on the role of the complex socio-technical system that functions within a nuclear organization to operate nuclear plants effectively and safely. This management system includes many drivers, variables, feedbacks, cultural influences, and time delays in its processes, not all of which are explicit or linear. The outputs of the system are the actions and decisions that ultimately produce tangible outcomes for production and safety. Thus it is a safety system and a legitimate and necessary area for regulation.
NRC review of safety management need not focus on traditional management issues, which would remain the province of the licensee; organizational structure, personnel decisions, etc. need not be considered.[3] But here we should heed the view of Daniel Kahneman, who suggests we think of organizations as "factories for producing decisions" and therefore think of decisions as a product. (See our Nov. 4, 2011 post, A Factory for Producing Decisions.) Decisions are in fact the key product of the safety management system. Regulatory focus on how the management system functions and the decisions it produces could be an effective and proactive approach.
We suggest two areas of the management system that could be addressed as a first priority: (1) increased transparency of how the management system produces specific safety decisions, including the capture of objective data on each such decision, and (2) review of management compensation plans to minimize the potential for incentives to promote excessive risk taking in operations.
2. The NRC should require greater transparency in licensee management decisions with potential safety impacts.
Managing nuclear operations involves a continuum of decisions balancing a variety of factors, including production and safety. These decisions may be made by individuals or by larger groups in meetings or other forums; some may involve multiple reviews and concurrences. But in general the details of decision making, i.e., how the sausage is made, are rarely captured in detail during the process or preserved for later assessment.[4] The exceptions are decisions that happen to yield a bad outcome (e.g., prompt the issuance of an LER or similar), which become subject to more intensive review and post mortem, and actions that require specific, advance regulatory approval and an SER or equivalent.[5]
Transparency is key. Some say the true test of ethics is what people do when no one is looking. The converse may also be true: do people behave better when they know oversight is, or could be, occurring? We think much of the NRC's regulatory scheme is already built on this premise, relying as it does on auditing licensee activities and work products.
Thinking back to the Davis-Besse example, the criminal prosecutions of both the corporate entity and individuals were limited to providing false or incomplete information to the NRC. There was no attempt to bring charges on the basis of the actual decisions to propose, advocate for, and attempt to justify continued operation of the plant beyond the NRC's specified date for corrective actions. The case FirstEnergy made to the NRC was questionable as presented and simply unjustified when accounting for the real facts behind its vessel head inspections.
Transparency would be served by documenting and preserving the decision process on safety-significant issues. These data might include the safety significance and applicable criteria, the potential impact on business performance (plant output, cost, schedule, etc.), alternatives considered, the participants and their inputs to the decision making process, and how a final decision was reached. These are the specifics that are so hard or impossible to reproduce after the fact.[6] The not unexpected result: blaming someone or something without gaining insight into how the management system failed.
This approach would provide an opportunity for the NRC to audit decisions on a routine basis. Licensee self-assessment would also be served, through safety committee review and other oversight including INPO. Knowing that decisions will be subject to such scrutiny can also promote careful balancing of factors in safety decisions and serve to articulate how those balances are achieved and safety is served. Having such tangible information shared throughout the organization could be the strongest way to reinforce the desired safety culture.
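The kind of decision record described above could be sketched as a simple data structure. This is a minimal illustration only; the field names and example values are our hypothetical choices, not any NRC or industry format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SafetyDecisionRecord:
    """Hypothetical record of a safety-significant decision (illustrative only)."""
    issue: str
    safety_significance: str          # assessed significance and applicable criteria
    business_impact: str              # potential impact on output, cost, schedule
    alternatives_considered: list[str]
    participants: list[str]           # who contributed inputs to the decision
    decision: str                     # the final decision and its rationale
    decided_on: date = field(default_factory=date.today)

# Example: a record that could later be audited or self-assessed
record = SafetyDecisionRecord(
    issue="Defer valve maintenance to next outage",
    safety_significance="Low; redundant train available per applicable criterion",
    business_impact="Avoids a two-day schedule slip",
    alternatives_considered=["Perform now", "Defer with compensatory measures"],
    participants=["Maintenance manager", "Ops shift manager", "Safety review"],
    decision="Defer with compensatory measures; revisit if condition degrades",
)
```

Capturing even this much at decision time would give an auditor, a safety committee, or INPO something concrete to review, instead of reconstructing concurrence chains and meeting recollections after the fact.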
3. As part of its regulation of the safety management system, the NRC should restrict incentive compensation for nuclear management that is based on meeting business goals.
We started this series of posts focusing on criminalization of safety. One of the arguments for more aggressive criminalization is essentially to offset the powerful pull of business-based incentives with the fear of criminal sanctions. This has proved to be elusive. Similarly, attempting to balance business incentives with safety incentives is problematic. The Transocean experience illustrates that quite vividly.[7]
Our survey several years ago of nuclear executive compensation indicated that (1) the amounts of compensation are very significant for the top nuclear executives, (2) the compensation is heavily dependent on each year's performance, and (3) business performance measured by EPS is the key to compensation, while safety performance is a minor contributor. A corollary to the third point: in no case that we could identify was safety performance a condition precedent or qualification for earning the business-based incentives. (See our July 9, 2010 post, Nuclear Management Compensation (Part 2).) With 60-70% of total compensation at risk, executives can see their compensation, and that of the entire management team, impacted by as much as several million dollars in a year. Can this type of compensation structure impact safety? Intuition says it creates both risk and perception problems. Virtually every significant safety event in Table 1 includes reference to the undue influence of production priorities on safety. The issue was directly raised in at least one nuclear organization,[8] which revised its compensation system to avoid undermining safety culture.
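The leverage of that at-risk fraction can be shown with a back-of-the-envelope calculation. The total compensation figure below is a hypothetical round number, not drawn from our survey; only the 60-70% at-risk fraction comes from the discussion above.

```python
# Hypothetical illustration of at-risk compensation leverage.
total_comp = 3_000_000   # assumed total annual compensation ($) - illustrative only
at_risk_fraction = 0.65  # post cites 60-70% of total compensation at risk

at_risk = total_comp * at_risk_fraction
print(f"At-risk compensation: ${at_risk:,.0f}")

# A single decision that protects the schedule (and thus the business goals)
# can therefore be worth seven figures to the decision maker personally.
```

The point is not the specific numbers but the asymmetry of attention they induce: the at-risk dollars are concrete and near-term, while the safety downside is diffuse and probabilistic.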
We believe a more effective approach is to minimize the business pressures in the first place. There is a need for a regulatory policy that discourages or prohibits licensee organizations from utilizing significant incentives based on financial performance. Such incentives invariably target production and budget goals, as they are fundamental to business success. To the extent safety goals are included, they are a small factor or based on metrics that do not reflect fundamental safety. Assuring that safety is the highest priority is not captured by easily quantifiable and measurable metrics; it is judgmental and implicit in many actions and decisions taken on a day-to-day basis at all levels of the organization. Organizations should pay nuclear management competitively and generously and make informed judgments about their overall performance.
Others have recognized the problem and taken similar steps to address it. For example, in the aftermath of the financial crisis of 2008 the Federal Reserve Board has been doing some arm-twisting with U.S. financial services companies to adjust their executive compensation plans, and those plans are in fact being modified to cap bonuses associated with achieving performance goals. (See our April 25, 2013 post, Inhibiting Excessive Risk Taking by Executives.)
Nassim Taleb (of Black Swan fame) believes that bonuses provide an incentive to take risks. He states, "The asymmetric nature of the bonus (an incentive for success without a corresponding disincentive for failure) causes hidden risks to accumulate in the financial system and become a catalyst for disaster." Now just substitute "nuclear operations" for "the financial system."
Central to Taleb's thesis is his belief that management has a large informational advantage over outside regulators and will always know more about the risks being taken within their operation. (See our Nov. 9, 2011 post, Ultimate Bonuses.) Eliminating the force of incentives and providing greater transparency to safety management decisions could reduce risk and improve everybody's insight into those risks deemed acceptable.
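Taleb's asymmetry argument can be made concrete with a toy expected-value calculation. All of the probabilities and dollar amounts below are hypothetical; the point is only the structure of the payoff, in which the individual captures the upside while the enterprise bears the downside.

```python
# Toy model of an asymmetric bonus (all numbers hypothetical).
p_success = 0.9
bonus_if_success = 1_000_000         # personal upside for hitting the goal
personal_cost_if_failure = 0         # no corresponding personal downside
enterprise_loss_if_failure = 500_000_000  # assumed cost of a serious event

# Expected value of taking the risk, from each party's point of view
exec_ev = p_success * bonus_if_success - (1 - p_success) * personal_cost_if_failure
firm_ev = -(1 - p_success) * enterprise_loss_if_failure

print(f"Executive's expected value: ${exec_ev:,.0f}")   # positive: take the risk
print(f"Enterprise's expected value: ${firm_ev:,.0f}")  # deeply negative
```

Under this structure a rational bonus-maximizer takes the risk every time, which is exactly the hidden-risk accumulation Taleb describes.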
Conclusion
In industries outside the commercial nuclear space, criminal charges have been brought for bad outcomes that resulted, at least in part, from decisions that did not appropriately consider overall system safety (or, in the worst cases, simply ignored it). Our suggestions are intended to reduce the probability of such events occurring in the nuclear industry.
[1] This raises the question whether any time business priorities trump safety it is a case of deficient culture. We have argued in other blog posts that sufficiently high business or political pressure can compromise even a very strong safety culture. So reflexive resort to safety culture may be easy but not very helpful.
[2] Credit to Adam Steltzner, author of The Right Kind of Crazy, recounting his and other engineers' roles in the design of the Mars rovers. His reference is to the failure of O-ring seals on the space shuttle Challenger.
[3] We do recognize that there are regulatory criteria for general organizational matters such as the training and qualification of personnel.
[4] In essence this creates a "safe harbor" for most safety judgments, to which the NRC is effectively blind.
[5] In Davis-Besse, much of the "proof" relied on in the prosecutions of individuals was based on concurrence chains for key documents and NRC staff recollections of what was said in meetings. There was no contemporaneous documentation of how FirstEnergy made its threshold decision that postponing the outage was acceptable, who participated, and who made the ultimate decision. Much was made of the fact that management was putting great pressure on maintaining schedule, but there was no way to establish how that might have directly affected decision making.
[6] Kahneman describes "hindsight bias": hindsight is 20/20, and it supposedly shows what decision makers could (and should) have known and done instead of their actual decisions that led to an unfavorable outcome, incident, accident or worse. We now know that when the past was the present, things may not have been so clear-cut. See our Dec. 18, 2013 post, Thinking, Fast and Slow by Daniel Kahneman.
[7] Transocean, owner of the Deepwater Horizon oil rig, awarded millions of dollars in bonuses to its executives after "the best year in safety performance in our company's history," according to its annual report: "Notwithstanding the tragic loss of life in the Gulf of Mexico, we achieved an exemplary statistical safety record as measured by our total recordable incident rate and total potential severity rate." See our April 7, 2011 post for the original citation in Transocean's annual report and further discussion.
[8] "The reward and recognition system is perceived to be heavily weighted toward production over safety." The reward system was revised "to ensure consistent health of NSC." See our July 29, 2010 post, NRC Decision on FPL (Part 2).