
Monday, November 3, 2014

A Life In Error by James Reason



Most of us associate psychologist James Reason with the “Swiss Cheese Model” of defense in depth or possibly the notion of a “just culture.”  But his career has encompassed more than two ideas; he has literally spent his professional life studying errors, their causes and contexts.  A Life In Error* is an academic memoir recounting his study of errors, starting with the individual and ending up with the organization (the “system”), including its safety culture (SC).  This post summarizes relevant portions of the book and provides our perspective.  It will read like a subtitled movie on fast-forward because a lot of particulars are packed into this short (124 pp.) book.

Slips and Mistakes 

People make plans and take action; consequences follow.  Errors occur when the intended goals are not achieved.  The plan may be adequate but the execution faulty because of slips (absent-mindedness) or trips (clumsy actions).  A plan that was inadequate to begin with is a mistake, which is usually more subtle than a slip and may go undetected for long periods if no obviously bad consequences occur. (pp. 10-12)  A mistake is a creation of higher-level mental activity than a slip.  Both slips and mistakes can take “strong but wrong” forms, where schema** that were effective in prior situations are selected even though they are not appropriate in the current situation.

Absent-minded slips can occur from misapplied competence, where a planned routine is sidetracked into an unplanned one.  Such diversions can occur, for instance, when one’s attention is unexpectedly captured just as one reaches a decision point and multiple schema are available and actively vying to be chosen. (pp. 21-25)  Reason’s recipe for absent-minded errors is one part cognitive under-specification, e.g., insufficient knowledge, and one part an inappropriate response primed by prior, recent use and the situational conditions. (p. 49)

Planning Biases 

The planning activity is subject to multiple biases.  An individual planner’s database may be incomplete or shaped by past experiences rather than future uncertainties, with greater emphasis on past successes than failures.  Planners can underestimate the influence of chance, overweight data that is emotionally charged, be overly influenced by their theories, misinterpret sample data or miss covariations, suffer hindsight bias or be overconfident.***  Once a plan is prepared, planners may focus only on confirmatory data and are usually resistant to changing the plan.  Planning in a group is subject to “groupthink” problems including overconfidence, rationalization, self-censorship and an illusion of unanimity.  (pp. 56-62)

Errors and Violations 

Violations are deliberate acts to break rules or procedures, although bad outcomes are not generally intended.  Violations arise from various motivational factors including the organizational culture.  Types of violations include corner-cutting to avoid clumsy procedures, necessary violations to get the job done because the procedures are unworkable, adjustments to satisfy conflicting goals and one-off actions (such as turning off a safety system) when faced with exceptional circumstances.  Violators perform a type of cost-benefit analysis biased by the fact that benefits are likely immediate while costs, if they occur, usually lie somewhere in the future.  In Reason’s view, the proper course for the organization is to increase the perceived benefits of compliance, not to increase the costs (penalties) for violations.  (There is a hint of the “just culture” here.)

Organizational Accidents 

Major accidents (TMI, Chernobyl, Challenger) have three common characteristics: contributing factors that were latent in the system, multiple levels of defense, and an unforeseen combination of latent factors and active failures (errors and/or violations) that defeated the defenses.  This is the well-known Swiss Cheese Model with the active failures opening short-lived holes and latent failures creating longer-lasting but unperceived holes.
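The model invites a simple quantitative reading: an accident requires a trajectory that passes through an open hole in every layer at once.  Below is a minimal Monte Carlo sketch of that idea; the layer count and hole probabilities are entirely hypothetical, not taken from Reason.

```python
import random

# Hypothetical defense-in-depth layers and the chance that each has an open
# "hole" at a given moment. These probabilities are illustrative only and
# assume independent layers; a weak safety culture would correlate the holes
# and raise the joint probability.
LAYER_HOLE_PROB = [0.05, 0.10, 0.02, 0.08]

def accident_trajectory_passes() -> bool:
    """An accident occurs only if every layer's hole happens to be open."""
    return all(random.random() < p for p in LAYER_HOLE_PROB)

trials = 1_000_000
accidents = sum(accident_trajectory_passes() for _ in range(trials))
print(f"Simulated accident frequency: {accidents / trials:.6f}")
# Each layer alone is breached fairly often, but the joint event is rare --
# which is why organizational accidents are low frequency, high severity.
```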

Organizational accidents are low frequency, high severity events with causes that may date back years.  In contrast, individual accidents are more frequent but have limited consequences; they arise from slips, trips and lapses.  This is why organizations can have a good industrial accident record while they are on the road to a large-scale disaster, e.g., BP at Texas City. 

Organizational Culture 

Certain industries, including nuclear power, have defenses-in-depth distributed throughout the system but are vulnerable to something that is equally widespread.  According to Reason, “The most likely candidate is safety culture.  It can affect all elements in a system for good or ill.” (p. 81)  An inadequate SC can undermine the Swiss Cheese Model: there will be more active failures at the “sharp end”; more latent conditions created and sustained by management actions and policies, e.g., poor maintenance, inadequate equipment or downgraded training; and the organization will be reluctant to deal proactively with known problems. (pp. 82-83)

Reason describes a “cluster of organizational pathologies” that make an adverse system event more likely: blaming sharp-end operators, denying the existence of systemic inadequacies, and a narrow pursuit of production and financial goals.  He goes on to list some of the drivers of blame and denial.  The list includes: accepting human error as the root cause of an event; the hindsight bias; evaluating prior decisions based on their outcomes; shooting the messenger; belief in a bad apple but not a bad barrel (the system); failure to learn; a climate of silence; workarounds that compensate for systemic inadequacies; and normalization of deviance.  (pp. 86-92)  Whew!

Our Perspective 

Reason teaches us that the essence of understanding errors is nuance.  At one end of the spectrum, some errors are totally under the purview of the individual; at the other end, they reside in the realm of the system.  The biases and issues described by Reason are familiar to Safetymatters readers and echo in the work of Dekker, Hollnagel, Kahneman and others.  We have been pounding the drum for a long time on the inadequacies of safety analyses that ignore systemic issues and corrective actions that are limited to individuals (e.g., more training and oversight, improved procedures and clearer expectations).

The book is densely packed with the work of a career.  One could easily use the contents to develop an SC assessment or self-assessment.  We did not report on the chapters covering research into absent-mindedness, Freud and medical errors (Reason’s current interest) but they are certainly worth reading.

Reason says this book is his valedictory: “I have nothing new to say and I’m well past my prime.” (p. 122)  We hope not.


*  J. Reason, A Life In Error: From Little Slips to Big Disasters (Burlington, VT: Ashgate, 2013).

**  Knowledge structures in long-term memory. (p. 24)

***  This will ring familiar to readers of Daniel Kahneman.  See our Dec. 18, 2013 post on Kahneman’s Thinking, Fast and Slow.

Wednesday, February 12, 2014

Left Brain, Right Stuff: How Leaders Make Winning Decisions by Phil Rosenzweig

In this new book* Rosenzweig extends the work of Kahneman and other scholars to consider real-world decisions.  He examines how the content and context of such decisions are significantly different from controlled experiments in a decision lab.  Note that Rosenzweig’s advice is generally aimed at senior executives, who typically have greater latitude in making decisions and greater responsibility for achieving results than lower-level professionals, but all managers can benefit from his insights.  This review summarizes the book and explores its lessons for nuclear operations and safety culture.

Real-World Decisions

Decision situations in the real world can be more “complex, consequential and laden with uncertainty” than those described in laboratory experiments. (p. 6)  A combination of rigorous analysis (left brain) and ambition (the right stuff—high confidence and a willingness to take significant risks) is necessary to achieve success. (pp. 16-18)  The executive needs to identify the important characteristics of the decision he is facing.  Specifically,

Can the outcome following the decision be influenced or controlled?

Some real-world decisions cannot be controlled, e.g., the price of Apple stock after you buy 100 shares.  In those situations the traditional advice to decision makers, viz., be rational, detached, analyze the evidence and watch out for biases, is appropriate. (p. 32)

But for many decisions, the executive (or his team) can influence outcomes through high (but not excessive) confidence, positive illusions, calculated risks and direct action.  The knowledgeable executive understands that individuals perceived as good executives exhibit a bias for action and “The essence of management is to exercise control and influence events.” (p. 39)  Therefore, “As a rule of thumb, it's better to err on the side of thinking we can get things done rather than assuming we cannot.  The upside is greater and the downside less.” (p. 43)

Think about your senior managers.  Do they under- or over-estimate their ability to influence future performance through their decisions?

Is the performance based on the decision(s) absolute or relative?

Absolute performance is described using some system of measurement, e.g., how many free throws you make in ten attempts or your batting average over a season.  It is not related to what anyone else does. 

But in competition performance is relative to rivals.  Ten percent growth may not be sufficient if a rival grows fifty percent.**  In addition, payoffs for performance may be highly skewed: in the Olympics, there are three medals and the others get nothing; in many industries, the top two or three companies make money while the others struggle to survive; in the most extreme case, it's winner take all and everyone else gets nothing.  It is essential to take risks to succeed in highly skewed competitive situations.

Absolute and relative performance may be connected.  In some cases, “a small improvement in absolute performance can make an outsize difference in relative performance, . . .” (p. 66)  For example, if a well-performing nuclear plant can pick up a couple percentage points of annual capacity factor (CF), it can make a visible move up the CF rankings thus securing bragging rights (and possibly bonuses) for its senior managers.

For a larger example, remember when the electricity markets deregulated and many utilities rushed to buy or build merchant plants?  Note how many have crawled back under the blanket of regulation where they only have to demonstrate prudence (a type of absolute performance) to collect their guaranteed returns, and not compete with other sellers.  In addition, there is very little skew in the regulated performance curve; even mediocre plants earn enough to carry on their business.  Lack of direct competition also encourages sharing information, e.g., operating experience in the nuclear industry.  If competition is intense, sharing information is irresponsible and possibly dangerous to one's competitive position. (p. 61)

Do your senior managers compare their performance to some absolute scale, to other members of your fleet (if you're in one), to similar plants, to all plants, or to the company's management compensation plan?

Will the decision be repeated with rapid feedback, or is it a one-off, or will it take a long time to see results?


Repetitive decisions, e.g., putting at golf, can benefit from deliberate practice, where performance feedback is used to adjust future decisions (action, feedback, adjustment, action).  This is related to the extensive training in the nuclear industry and the familiar do, check and adjust cycle ingrained in all nuclear workers.

However, most strategic decisions are unique or have consequences that will only manifest in the long term.  In such cases, one has to make the soundest decision possible and then take the best shot.

Executives Make Decisions in a Social Setting

Senior managers depend on others to implement decisions and achieve results.  Leadership (exaggerated confidence, emphasizing certain data and beliefs over others, consistency, fairness and trust) is indispensable to inspire subordinates and shape culture.  Quoting Jack Welch, “As a leader, your job is to steer and inspire.” (p. 146)  “Effective leadership . . . means being sincere to a higher purpose and may call for something less than complete transparency.” (p. 158)

How about your senior managers?  Do they tell the whole truth when they are trying to motivate the organization to achieve performance goals?  If not, how does that impact trust over the long term?  
    
The Role of Confidence and Overconfidence

There is a good discussion of the overuse of the term “overconfidence,” which has multiple meanings but whose meaning in a specific application is often undefined.  For example, overconfidence can refer to being too certain that our judgment is correct, believing we can perform better than warranted by the facts (absolute performance) or believing we can outperform others (relative performance). 

Rosenzweig conducted some internet research on overconfidence.  The most common use in the business press was to explain, after the fact, why something had gone wrong. (p. 85)  “When we charge people with overconfidence, we suggest that they contributed to their own demise.” (p. 87)  This sounds similar to the search for the “bad apple” after an incident occurs at a nuclear plant.

But confidence is required to achieve high performance.  “What's the best level of confidence?  An amount that inspires us to do our best, but not so much that we become complacent, or take success for granted, or otherwise neglect what it takes to achieve high performance.” (p. 95)

Other Useful Nuggets

There is a good extension of the discussion (introduced in Kahneman) of base rates and conditional probabilities including the full calculations from two of the conditional probability examples in Kahneman's Thinking, Fast and Slow (reviewed here).
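For readers who want to see the mechanics, here is a minimal sketch of that kind of base-rate calculation, using the well-known cab problem from Thinking, Fast and Slow (85% Green cabs, 15% Blue cabs, a witness who is correct 80% of the time); it may or may not be one of the two examples Rosenzweig recalculates.

```python
# Bayes' rule applied to Kahneman's cab problem: a hit-and-run cab is
# identified as Blue by a witness who is correct 80% of the time.
p_blue, p_green = 0.15, 0.85        # base rates of cab colors in the city
p_say_blue_if_blue = 0.80           # witness correctly reports "Blue"
p_say_blue_if_green = 0.20          # witness mistakenly reports "Blue"

p_say_blue = p_blue * p_say_blue_if_blue + p_green * p_say_blue_if_green
p_blue_given_say_blue = p_blue * p_say_blue_if_blue / p_say_blue

print(f"P(cab is Blue | witness says Blue) = {p_blue_given_say_blue:.2f}")
# ~0.41, far lower than the 0.80 most people intuit, because the low base
# rate of Blue cabs drags the posterior down.
```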

The discussion on decision models notes that such models can be useful for overcoming common biases, analyzing large amounts of data and predicting elements of the future beyond our influence.  However, if we have direct influence, “Our task isn't to predict what will happen, but to make it happen.” (p. 189)

Other chapters cover decision making in a major corporate acquisition (focusing on bidding strategy) and in start-up businesses (focusing on a series of start-up decisions).

Our Perspective

Rosenzweig acknowledges that he is standing on the shoulders of Kahneman and other students of decision making.  But “An awareness of common errors and cognitive biases is only a start.” (p. 248)  The executive must consider the additional decision dimensions discussed above to properly frame his decision; in other words, he has to decide what he's deciding.

The direct applicability to nuclear safety culture may seem slight but we believe executives' values and beliefs, as expressed in the decisions they make over time, exert a powerful force on the shape and evolution of culture.  In other words, we choose to emphasize the transactional nature of leadership.  In contrast, Rosenzweig emphasizes its transformational nature: “At its core, however, leadership is not a series of discrete decisions, but calls for working through other people over long stretches of time.” (p. 164)  Effective leaders are good at both.

Of course, decision making and influence on culture is not the exclusive province of senior managers.  Think about your organization's middle managers—the department heads, program and project managers, and process owners.  How do they gauge their performance?  How open are they to new ideas and approaches?  How much confidence do they exhibit with respect to their own capabilities and the capabilities of those they influence? 

Bottom line, this is a useful book.  It's very readable, with many clear and engaging examples,  and has the scent of academic rigor and insight; I would not be surprised if it achieves commercial success.


*  P. Rosenzweig, Left Brain, Right Stuff: How Leaders Make Winning Decisions (New York: Public Affairs, 2014).

**  Referring to Lewis Carroll's Through the Looking Glass, this situation is sometimes called “Red Queen competition [which] means that a company can run faster but fall further behind at the same time.” (p. 57)

Wednesday, December 18, 2013

Thinking, Fast and Slow by Daniel Kahneman

Kahneman is a Nobel Prize winner in economics.  His focus is on personal decision making, especially the biases and heuristics used by the unconscious mind as it forms intuitive opinions.  Biases lead to regular (systematic) errors in decision making.  Kahneman and Amos Tversky developed prospect theory, a model of choice that helps explain why real people make decisions that are different from those of the rational man of economics.

Kahneman is a psychologist so his work focuses on the individual; many of his observations are not immediately linkable to safety culture (a group characteristic).  But even in a nominal group setting, individuals are often very important.  Think about the lawyers, inspectors, consultants and corporate types who show up after a plant incident.  What kind of biases do they bring to the table when they are evaluating your organization's performance leading up to the incident?

The book* has five parts, described below.  Kahneman reports on his own research and then adds the work of many other scholars.  Many of the experiments appear quite simple but provide insights into unconscious and conscious decision making.  There is a lot of content so this is a high level summary, punctuated by explicative or simply humorous quotes.

Part 1 describes two methods we use to make decisions: System 1 and System 2.  System 1 is impulsive, intuitive, fast and often unconscious; System 2 is more analytic, cautious, slow and controlled. (p. 48)  We often defer to System 1 because of its ease of use; we simply don't have the time, energy or desire to pore over every decision facing us.  Lack of desire is another term for lazy.

System 1 often operates below consciousness, utilizing associative memory to link a current stimulus to ideas or concepts stored in memory. (p. 51)  System 1's impressions become beliefs when accepted by System 2 and a mental model of the world takes shape.  System 1 forms impressions of familiarity and rapid, precise intuitions then passes them on to System 2 to accept/reject. (pp. 58-62)

System 2 activities take effort and require attention, which is a finite resource.  If we exceed the attention budget or become distracted then System 2 will fail to obtain correct answers.  System 2 is also responsible for self-control of thoughts and behaviors, another drain on mental resources. (pp. 41-42)

Biases include a readiness to infer causality, even where none exists; a willingness to believe and confirm in the absence of solid evidence; succumbing to the halo effect, where we project a coherent whole based on an initial impression; and problems caused by WYSIATI**, including basing conclusions on limited evidence, overconfidence, framing effects (where decisions differ depending on how information and questions are presented) and base-rate neglect (where we ignore widely-known data about a decision situation). (pp. 76-88)

Heuristics include substituting easier questions for the more difficult ones that have been asked, letting current mood affect answers on general happiness and allowing emotions to trump facts. (pp. 97-103) 

Part 2 explores decision heuristics in greater detail, with research and examples of how we think associatively, metaphorically and causally.  A major topic throughout this section is the errors people tend to make when handling questions that have a statistical dimension.  Such errors occur because statistics requires us to think of many things at once, which System 1 is not designed to do, and a lazy or busy System 2, which could handle this analysis, is prone to accept System 1's proposed answer.  Other errors occur because:

We make incorrect inferences from small samples and are prone to ascribe causality to chance events.  “We are far too willing to reject the belief that much of what we see in life is random.” (p. 117)  We are prone to attach “a causal interpretation to the inevitable fluctuations of a random process.” (p. 176)  “There is more luck in the outcomes of small samples.” (p. 194)

We fall for the anchoring effect, where we see a particular value for an unknown quantity (e.g., the asking price for a used car) before we develop our own value.  Even random anchors, which provide no relevant information, can influence decision making.

People search for relevant information when asked questions.  Information availability and ease of retrieval is a System 1 heuristic but only System 2 can judge the quality and relevance of retrieved content.  People are more strongly affected by ease of retrieval and go with their intuition when they are, for example, mentally busy or in a good mood. (p. 135)  However, “intuitive predictions tend to be overconfident and overly extreme.” (p. 192)

Unless we know the subject matter well, and have some statistical training, we have difficulty dealing with situations that require statistical reasoning.  One research finding “illustrates a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight—nothing in between.” (p. 143)  “There is one thing you can do when you have doubts about the quality of the evidence: let your judgments of probability stay close to the base rate.” (p. 153)  “. . . whenever the correlation between two scores is imperfect, there will be regression to the mean. . . . [a process that] has an explanation but does not have a cause.” (pp. 181-82)  (A short numerical illustration of this last point follows the list.)

Finally, and the PC folks may not appreciate this, but “neglecting valid stereotypes inevitably results in suboptimal judgments.” (p. 169)
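To make the regression-to-the-mean point concrete, here is a minimal sketch of the standard shrinkage rule; the correlation value and scores are assumed for illustration.

```python
# With an imperfect correlation r between two scores, the best prediction of
# the second score shrinks toward the mean (assuming equal variances).
r = 0.4            # assumed, illustrative correlation
mean_score = 100.0

def predicted_second_score(first_score: float) -> float:
    return mean_score + r * (first_score - mean_score)

print(predicted_second_score(130.0))  # 112.0: an unusually good first outing
# is, on average, followed by a less extreme one; no causal story is needed.
```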

Part 3 focuses on specific shortcomings of our thought processes: overconfidence, fed by the illusory certainty of hindsight, in what we think we know, and underappreciation of the role of chance in events.

“Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct.  Confidence is a feeling.” (p. 212)  Hindsight bias “leads observers to assess the quality of a decision not by whether the process was sound but by whether its outcome was good or bad. . . . a clear outcome bias.” (p. 203)  “. . . the optimistic bias may well be the most significant of the cognitive biases.” (p. 255)  “The optimistic style involves taking credit for success but little blame for failure.” (p. 263)

“The sense-making machinery of System 1 makes us see the world as more tidy, predictable, and coherent than it really is.” (p. 204)  “. . . reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable results.” (p. 220)  “An unbiased appreciation of uncertainty is a cornerstone of rationality—but it is not what people and organizations want. . . . Acting on pretended knowledge is often the preferred solution.” (p. 263)

And the best quote in the book: “Professional controversies bring out the worst in academics.” (p. 234)

Part 4 contrasts the rational people of economics with the more complex people of psychology, in other words, the Econs vs. the Humans.  Kahneman shows how prospect theory opened a door between the two disciplines and contributed to the start of the field of behavioral economics.

Economists adopted expected utility theory to prescribe how decisions should be made and describe how Econs make choices.  In contrast, prospect theory has three cognitive features: evaluation of choices is relative to a reference point, outcomes above that point are gains, below that point are losses; diminishing sensitivity to changes; and loss aversion, where losses loom larger than gains. (p. 282)  In practice, loss aversion leads to risk-averse choices when both gains and losses are possible, and diminishing sensitivity leads to risk taking when sure losses are compared to a possible larger loss.  “Decision makers tend to prefer the sure thing over the gamble (they are risk averse) when the outcomes are good.  They tend to reject the sure thing and accept the gamble (they are risk seeking) when both outcomes are negative.” (p. 368)

“The fundamental ideas of prospect theory are that reference points exist, and that losses loom larger than corresponding gains.” (p. 297)  “A reference point is sometimes the status quo, but it can also be a goal in the future; not achieving the goal is a loss, exceeding the goal is a gain.” (p. 303)  “Loss aversion is a powerful conservative force.” (p. 305)
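These features are often summarized in a simple value function.  The sketch below uses parameter estimates commonly cited in the prospect theory literature (curvature 0.88, loss aversion 2.25); they are illustrative and not taken from this book.

```python
# Prospect theory value function: outcomes are coded as gains or losses
# relative to a reference point, with diminishing sensitivity (exponent < 1)
# and loss aversion (losses weighted about 2.25x as heavily as gains).
ALPHA = 0.88    # curvature for gains
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss-aversion coefficient (illustrative literature values)

def value(outcome: float, reference: float = 0.0) -> float:
    x = outcome - reference
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

print(value(100.0), value(-100.0))
# A $100 loss "feels" more than twice as bad as a $100 gain feels good.
```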

When people do consider very rare events, e.g., a nuclear accident, they will almost certainly overweight the probability in their decision making.  “ . . . people are almost completely insensitive to variations of risk among small probabilities.” (p. 316)  “. . . low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in more abstract terms of . . . “probability” (how likely).” (p. 329)  The framing of a question evokes emotions, e.g., “losses evokes stronger negative feelings than costs.” (p. 364)  But “[r]eframing is effortful and System 2 is normally lazy.” (p. 367)  As an exercise, think about how anti-nuclear activists and NEI would frame the same question about the probability and consequences of a major nuclear accident.

There are some things an organization can do to improve its decision making.  It can use local centers of over-optimism (the Sales dept.) and loss aversion (the Finance dept.) to offset each other.  In addition, an organization's decision making practices can require the use of an outside view (i.e., a look at the probabilities of similar events in the larger world) and a formal risk policy to mitigate known decision biases. (p. 340)

Part 5 covers two different selves that exist in every human, the experiencing self and the remembering self.  The former lives through an experience and the latter creates a memory of it (for possible later recovery) using specific heuristics.  Our tendency to remember events as a sample or summary of actual experience is a factor that biases current and future decisions.  We end up favoring (fearing) a short period of intense joy (pain) over a long period of moderate happiness (pain). (p. 409) 

Our memory has evolved to represent past events in terms of peak pain/pleasure during the events and our feelings when the event is over.  Event duration does not impact our ultimate memory of an event.  For example, we choose future vacations based on our final evaluations of past vacations even if many of our experiences during the past vacations were poor. (p. 389)

In a possibly more significant area, the life satisfaction score you assign to yourself is based on a small sample of highly available ideas or memories. (p. 400)  Ponder that the next time you take or review responses from a safety culture survey.

Our Perspective

This is an important book.  Although not explicitly stated, the great explanatory themes of cause (mechanical), choice (intentional) and chance (statistical) run through it.  It is filled with nuggets that apply to the individual (psychological) and also the aggregate if the group shares similar beliefs.  Many System 1 characteristics, if unchecked and shared by a group, have cultural implications.*** 

We have discussed Kahneman's work before on this blog, e.g., his view that an organization is a factory for producing decisions and his suggestion to use a “premortem” as a partial antidote for overconfidence.  (A premortem is an exercise the group undertakes before committing to an important decision: Imagine being a year into the future, the decision's outcome is a disaster.  What happened?)  For more on these points, see our Nov. 4, 2011 post.

We have also discussed some of the topics he raises, e.g., the hindsight bias.  Hindsight is 20/20 and it supposedly shows what decision makers could (and should) have known and done instead of their actual decisions that led to an unfavorable outcome, incident, accident or worse.  We now know that when the past was the present, things may not have been so clear-cut.

Kahneman's observation that the ability to control attention predicts on-the-job performance (p. 37) is certainly consistent with our reports on the characteristics of high reliability organizations (HROs). 

“The premise of this book is that it is easier to recognize other people's mistakes than our own.” (p. 28)  Having observers at important, stressful decision making meetings is useful; they are less cognitively involved than the main actors and more likely to see any problems in the answers being proposed.

Critics' major knock on Kahneman's research is that it doesn't reflect real-world conditions.  His model is “overly concerned with failures and driven by artificial experiments than by the study of real people doing things that matter.” (p. 235)  He takes this on by collaborating with a critic in an investigation of intuitive decision making, specifically seeking to answer: “When can you trust a self-confident professional who claims to have an intuition?” (p. 239)  The answer is when the expert acquired skill in a predictable environment, and had sufficient practice with immediate, high-quality feedback.  For example, anesthesiologists are in a good position to develop predictive expertise; on the other hand, psychotherapists are not, primarily because a lot of time and external events can pass between their prognosis for a patient and ultimate results.  However, “System 1 takes over in emergencies . . .” (p. 35)  Because people tend to do what they've been trained to do in emergencies, training that leads to correct responses is vital.

Another problem is that most of Kahneman's research uses university students, both undergraduate and graduate, as subjects.  It's fair to say professionals have more training and life experience, and have probably made some hasty decisions they later regretted and (maybe) learned from.  On the other hand, we often see people who make sub-optimal, or just plain bad decisions even though they should know better.

There are lessons here for managers and other would-be culture shapers.  System 1's search for answers is mostly constrained to information consistent with existing beliefs (p. 103) which is an entry point for  culture.  We have seen how group members can have their internal biases influenced by the dominant culture.  But to the extent System 1 dominates employees' decision making, decision quality may suffer.

Not all appeals can be made to the rational man in System 2.  A customary, if tacit, assumption of managers is that they and their employees are rational and always operating consciously, so new experiences will lead to expected new values and beliefs, new decisions and improved safety culture.  But it may not be this straightforward.  System 1 may intervene, and managers should be alert to evidence of System 1 type thinking and adjust their interventions accordingly.  Kahneman suggests encouraging “a culture in which people look out for one another as they approach minefields.” (p. 418)

We should note Systems 1 and 2 are constructs and “do not really exist in the brain or anywhere else.” (p. 415)  System 1 is not Dr. Morbius' Id monster.****  System 1 can be trained to behave differently, but it is always ready to provide convenient answers for a lazy System 2.

The book is long, with small print, but the chapters are short so it's easy to invest 15-20 min. at a time.  One has to be on constant alert for useful nuggets that can pop up anywhere—which I guess promotes reader mindfulness.  It is better than Blink, which simply overwhelmed this reader with a cloudburst of data showing the informational value of thin slices and unintentionally over-promoted the value of intuition. (see pp. 235-36)  And it is much deeper than The Power of Habit, which we reviewed last February.

(Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen.  Attributed to Albert Einstein)

*  D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

**  WYSIATI – What You See Is All There Is.  Information that is not retrieved from memory, or otherwise ignored, may as well not exist. (pp. 85-88)  WYSIATI means we base decisions on the limited information that we are able or willing to retrieve before a decision is due.  

***  A few of these characteristics are mentioned in this report, e.g., impressions morphing into beliefs, a bias to believe and confirm, and WYSIATI errors.  Others include links of cognitive ease to illusions of truth and reduced vigilance (complacency), and narrow framing where decision problems are isolated from one another. (p. 105)

****  Dr. Edward Morbius is a character in the 1956 sci-fi movie Forbidden Planet.

Thursday, June 6, 2013

Implementing Safety Culture Policy Part 2

This post continues our discussion of the implementation of safety culture policy in day-to-day nuclear management decision making, started in our post dated April 9, 2013.  In that post we introduced several parameters for quantitatively scoring decisions: decision quality, safety significance and significance uncertainty.  We are now relabeling the first parameter “decision balance.”

To illustrate the application of the scoring method we used a set of twenty decisions based on issues taken from actual U.S. nuclear operating experience, typically those reported in LERs.  As a baseline, we scored each issue for safety significance and uncertainty.  Each issue identified 3 to 4 decision options for addressing the problem - and each option was annotated with the potential impacts of the decision on budgets, generation (e.g., potential outage time) and the corrective action program (CAP).  We scored each decision option for its decision balance (the extent to which the option accords safety a priority commensurate with the issue's significance) and then identified the preferred decision option for each issue.  This constitutes what we refer to as the “preferred decision set.”  A pdf file of one example issue with decision choices and scoring inputs is available here.
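For a concrete picture of the inputs, here is a minimal sketch of how one issue and its scored options might be represented; all field names, scales and values are hypothetical, not taken from the actual scoresheets or the linked example.

```python
# Hypothetical representation of one issue from the set of twenty.
# Significance, uncertainty and balance values use an illustrative 1-10 scale.
issue = {
    "id": 7,
    "description": "Degraded service water pump bearing found during rounds",
    "safety_significance": 6,        # baseline score for the issue
    "significance_uncertainty": 4,
    "options": [
        {"name": "Defer repair to next refueling outage",
         "balance": 3, "cost": 50_000, "lost_generation_days": 0},
        {"name": "Repair online within 30 days",
         "balance": 6, "cost": 120_000, "lost_generation_days": 2},
        {"name": "Shut down and repair now",
         "balance": 9, "cost": 250_000, "lost_generation_days": 5},
    ],
    # The option senior management designates for the preferred decision set.
    "preferred_option": "Repair online within 30 days",
}
```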

Our assumption is that the preferred decision set would be established/approved by senior management based on their interpretation of the issues and their expectations for how organizational decisions should reflect safety culture.  The set of issues would then be used in a training environment for appropriate personnel.  For purposes of this example, we incorporated the preferred decision set into our NuclearSafetySim* simulator to illustrate the possible training experience.  The sim provides an overall operational context, tracking performance for cost, plant generation and the CAP, and incorporating performance goals and policies.

Chart 1
In the sim application a trainee would be tasked with assessing an issue every three months over a 60 month operational period.  The trainee would do this while attempting to manage performance results to achieve specified goals.  For each issue the trainee would review the issue facts, assign values for significance and uncertainty, and select a decision option.  Chart 1 compares the actual decisions (those by the trainee) to those in the preferred set for our prototype session.   Note that approximately 40% of the time the actual decision matched the preferred decision (orange data points).  For the remainder of the issues the trainee’s selected decisions differed.  Determining and understanding why the differences occurred is one way to gain insight into how culture manifests in management actions.

As we indicated in the April 9 post, each decision is evaluated for its safety significance and uncertainty in accordance with quantified scales.  These serve as key inputs to determining the appropriate balance to be achieved in the decision.  In prior work in this area, reported in our posts dated July 15, 2011 and October 14, 2011, we solicited readers to score two issues for safety significance.  The reported scores ranged from 2 to 10 (most between 4 and 6) for one issue and from 5 to 10 (most between 6 and 8) for the other.  This reflects the reality that perceptions of safety significance are subject to individual differences.  In the current exercise, similar variations in scoring were expected and led to differences between the trainee’s scores and the preferred decision set.  The variation may be due to the inherent subjective nature of assessing these attributes and other factors such as experience, expertise, biases, and interpretations of the issue.  So this could be one source of difference in the trainee decision selections versus the preferred set, as the decision process attempts to match action to significance.

Another source could be in the decision options themselves.  The decision choice by a trainee could have focused on what the trainee felt was the “best” (i.e., most efficacious) decision versus an explicit consideration of safety priority commensurate with safety significance.  Additionally, decision choices may have been influenced by their potential impacts, particularly under conditions where performance was not on track to meet goals.


Chart 2
Taking this analysis a bit further, we looked at how decision balance varied over the course of the simulation.  As discussed in our April 9 post we use decision balance to create a quantitative measure of how well the goal of safety culture is being incorporated in a specific decision - the extent to which the decision accords the priority for safety commensurate with its safety significance.  In the instant exercise, each decision option for each issue has been assigned a balance value as part of the preferred scoresheet.**  Chart 2 shows a timeline of decision balances - one for the preferred decision set and the other for the actual decisions made by the trainee.  A smoothing function has been applied to the discrete values of balance to provide a continuous track. 

The plots illustrate how decision balance may vary over time, with specific decisions reflecting greater or lesser emphasis on safety.  During the first half of the sim the decision balances are in fairly close agreement, reflecting in part that in 5 of 8 cases the actual decisions matched the preferred decisions.  However in the second half of the sim significant differences emerge, primarily in the direction of weaker balances associated with the trainee decisions.  Again, understanding why these differences emerge could provide insight into how safety culture is actually being practiced within the organization. Chart 3 adds in some additional context.

Chart 3
The yellow line is a plot of “goal pressure,” which is simply the sum of the differences between actual performance in the sim and the goals for cost, generation and the CAP.  Higher values of pressure are associated with performance lagging the goals.  Inspection of the plot indicates that goal pressure was mostly modest in the first half of the sim before an initial spike up and further increases with time.  The blue line, the decision balance of the trainee, does not show any response to the initial spike, but later in the sim the high goal pressure could be seen as a possible contributor to decisions trending to lower balances.  A final note is that over the course of the entire sim, the average values of preferred and actual balance are fairly close for this player, perhaps suggesting reasonable overall alignment in safety priorities notwithstanding decision-to-decision variations.
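Here is a minimal sketch of how the goal pressure and smoothed balance tracks might be computed; the metric names, normalization, clipping and smoothing constant are our own illustrative choices, not NuclearSafetySim internals.

```python
# Goal pressure: summed shortfall of actual performance against goals for
# cost, generation and the CAP. Each metric is expressed as a performance
# index where higher is better (an assumption); higher pressure = lagging.
def goal_pressure(actual: dict, goals: dict) -> float:
    return sum(max(0.0, (goals[k] - actual[k]) / goals[k]) for k in goals)

# Exponential smoothing of the discrete decision-balance values to produce a
# continuous track like the one plotted in Chart 2.
def smooth(balances: list, alpha: float = 0.3) -> list:
    track, level = [], balances[0]
    for b in balances:
        level = alpha * b + (1 - alpha) * level
        track.append(level)
    return track

print(goal_pressure({"cost": 95, "generation": 88, "cap": 70},
                    {"cost": 100, "generation": 92, "cap": 80}))
print(smooth([6, 7, 5, 4, 6, 3, 2, 3]))
```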

A variety of training benefits can flow from the decision simulation.  Comparisons of actual to preferred decisions provide a baseline indication of how well expected safety balances are being achieved in realistic decisions.  Consideration of contributing factors such as goal pressure may illustrate challenges for decision makers.  Comparisons of results among and across groups of trainees could provide further insights.  In all cases the results would provide material for discussion, team building and alignment on safety culture.

In our post dated November 4, 2011 we quoted the work of Kahneman, that organizations are “factories for producing decisions”.  In nuclear safety, the decision factory is the mechanism to actualize safety culture into specific priorities and actions.  A critical element of achieving strong safety culture is to be able to identify differences between espoused values for safety (i.e., the traits typically associated with safety culture) and de facto values as revealed in actual decisions. We believe this can be achieved by capturing decision data explicitly, including the judgments on significance and uncertainty, and the operational context of the decisions.

The next step is synthesizing the decision and situational parameters into a useful systems-based measure of safety culture - a quantity that could be tracked in a simulation environment to illustrate safety culture response and provide feedback, and/or during nuclear operations to provide a real-time pulse of the organization’s culture.



* For more information on using system dynamics to model safety culture, please visit our companion website, nuclearsafetysim.com.

** It is possible for some decision options to have the same value of balance even though they incorporate different responses to the issue and different operational impacts. 

Friday, November 4, 2011

A Factory for Producing Decisions

The subject of this post is the compelling insights of Daniel Kahneman into issues of behavioral economics and how we think and make decisions.  Kahneman is one of the most influential thinkers of our time and a Nobel laureate.  Two links are provided for our readers who would like additional information.  One is via the McKinsey Quarterly, a video interview* done several years ago.  It runs about 17 minutes.  The second is a current review in The Atlantic** of Kahneman’s just-released book, Thinking, Fast and Slow.

Kahneman begins the McKinsey interview by suggesting that we think of organizations as “factories for producing decisions” and therefore, think of decisions as a product.  This seems to make a lot of sense when applied to nuclear operating organizations - they are the veritable “River Rouge” of decision factories.  What may be unusual for nuclear organizations is the large percentage of decisions that directly or indirectly include safety dimensions, dimensions that can be uncertain and/or significantly judgmental, and which often conflict with other business goals.  So nuclear organizations have to deliver two products: competitively priced megawatts and decisions that preserve adequate safety.

To Kahneman, treating decisions as a product logically raises the issue of quality control as a means to ensure the quality of decisions.  At one level quality control might focus on mistakes and ensuring that decisions avoid recurrence of mistakes.  But Kahneman sees the quality function going further into the psychology of the decision process to ensure, e.g., that the best information is available to decision makers, that the talents of the group surrounding the ultimate decision maker are being used effectively, and that the decision-making environment is unbiased.

He notes that there is an enormous amount of resistance within organizations to improving decision processes. People naturally feel threatened if their decisions are questioned or second guessed.  So it may be very difficult or even impossible to improve the quality of decisions if the leadership is threatened too much.  But, are there ways to avoid this?  Kahneman suggests the “premortem” (think of it as the analog to a post mortem).  When a decision is being formulated (not yet made), convene a group meeting with the following premise: It is a year from now, we have implemented the decision under consideration, it has been a complete disaster.  Have each individual write down “what happened?”

The objective of the premortem is to legitimize dissent and minimize the innate “bias toward optimism” in decision analysis.  It is based on the observation that as organizations converge toward a decision, dissent becomes progressively more difficult and costly and people who warn or dissent can be viewed as disloyal.  The premortem essentially sets up a competitive situation to see who can come up with the flaw in the plan.  In essence everyone takes on the role of dissenter.  Kahneman’s belief is that the process will yield some new insights - that may not change the decision but will lead to adjustments to make the decision more robust. 

Kahneman’s ideas about decisions resonate with our thinking that the most useful focus for nuclear safety culture is the quality of organizational decisions.  It also contrasts with a recent instance of a nuclear plant (Browns Ferry) that ran afoul of the NRC and is now tagged with a degraded cornerstone and increased inspections.  As usual in the nuclear industry, TVA has called on an outside contractor to come in and perform a safety culture survey, to “... find out if people feel empowered to raise safety concerns….”***  It may be interesting to see how people feel, but we believe it would be far more powerful and useful to analyze a significant sample of recent organizational decisions to determine if the decisions reflect an appropriate level of concern for safety.  Feelings (perceptions) are not a substitute for what is actually occurring in the decision process.

We have been working to develop ways to grade whether decisions support strong safety culture, including offering opportunities on this blog for readers to “score” actual plant decisions.  In addition we have highlighted the work of Constance Perin including her book, Shouldering Risks, which reveals the value of dissecting decision mechanics.  Perin’s observations about group and individual status and credibility and their implications for dissent and information sharing directly parallel Kahneman’s focus on the need to legitimize dissent.  We hope some of this thinking ultimately overcomes the current bias in nuclear organizations to reflexively turn to surveys and the inevitable retraining in safety culture principles.


*  "Daniel Kahneman on behavioral economics," McKinsey Quarterly video interview (May 2008).

** M. Popova, "The Anti-Gladwell: Kahneman's New Way to Think About Thinking," The Atlantic website (Nov. 1, 2011).

*** A. Smith, "Nuke plant inspections proceeding as planned," Athens [Ala.] News Courier website (Nov. 2, 2011).