
Friday, December 1, 2017

Nuclear Safety Culture: Focus on Decision Making



McKinsey Five Fifty cover
We have long held that decision making (DM) is a key artifact reflecting the state of a nuclear organization’s safety culture.

The McKinsey Quarterly (MQ) has packaged a trio of articles* on DM.  Their first purpose is identifying and countering the different biases that lead to sub-optimal, even disastrous decisions.  (When specific biases are widely spread in an organization, they are part of its culture.)  A second purpose is to describe the attributes of more fair, robust and effective DM processes.  The articles’ specific topics are (1) the behavioral science that underlies DM, (2) a method for categorizing and processing decisions and (3) a case study of a major utility that changed its decision culture. 

“The case for behavioral strategy” (MQ, March 2010)

This article covers insights from psychology that can be used to fashion a robust DM process.  The authors support the case for process improvement with survey results showing that more than 50 percent of the variability in decision outcomes (i.e., performance) was determined by the quality of the DM process, while less than 10 percent was explained by the quality of the underlying analysis.

There are plenty of cognitive biases that can affect human DM.  The authors discuss several of them and strategies for counteracting them, as summarized in the table below.


Type of bias: How to counteract

False pattern recognition (e.g., saliency (overweighting recent or memorable events), confirmation, inaccurate analogies): Require alternative explanations for the data in the analysis, articulate participants’ relevant experiences (which can reveal the basis for their biases), and identify similar situations for comparative analysis.

Bias for action: Explicitly consider uncertainty in the input data and the possible outcomes.

Stability (anchoring to an initial value, loss aversion): Establish stretch targets that can’t be achieved by business as usual.

Silo thinking: Involve a diverse group in the DM process and define specific decision criteria before discussions begin.

Social (conformance to group views): Create genuine debate through a diverse set of decision makers, a climate of trust and depersonalized discussions.


The greatest problem arises from biases that create repeatable patterns that become undesirable cultural traits.  DM process designers must identify the types of biases that arise in their organization, select debiasing techniques that will work there, and embed those techniques in formal DM procedures.

An attachment to the article identifies and defines 17 specific biases.  Much of the seminal research on DM biases was performed by Daniel Kahneman, who received a Nobel Prize for his efforts.  We have reviewed Prof. Kahneman’s work on Safetymatters; see our Nov. 4, 2011 and Dec. 18, 2013 posts or click on the Kahneman label. 

“Untangling your organization’s decision making” (MQ, June 2017)

While this article is aimed at complex, global organizations, it holds lessons for nuclear organizations (typically large bureaucracies).  All organizations have become victims of over-abundant communications: too many meetings and low-value e-mail threads distract members from paying attention to making good decisions.

The authors posit four types of decisions an organization faces, plotted on a 2x2 matrix (the consultant’s best friend) with scope and impact (broad or narrow) on one axis and level of familiarity (infrequent or frequent) on the other.  A different DM approach is proposed for each quadrant. 

Big-bet decisions are infrequent and have broad impact.  Recommendations include (1) ensure there’s an executive sponsor, (2) break down the mega-decision into manageable parts for analysis (and reassemble them later), (3) use a standard DM approach for all the parts and (4) establish a mechanism to track effectiveness during decision implementation.

The authors observe that some decisions turn out to be “bet the company” ones without being recognized as such.  There are examples of this in the nuclear industry.  For details, see our June 18, 2013 post on Kewaunee (had only an 8-year PPA), Crystal River (tried to cut through the containment using in-house expertise) and SONGS (installed replacement steam generators with an unproven design). 

Cross-cutting decisions are more frequent and have broad impact.  Some decisions at a nuclear power plant fall into this category.  They need the concurrence and support of the Big 3 stakeholders (Operations, Engineering and Maintenance).  Silo attitudes are an omnipresent threat to success in making these kinds of decisions.  The key is to get the stakeholders to agree on the main process steps and capture them in a plain-English procedure that defines the calendar, handoffs and decisions.  Governing policy should establish the DM bodies and their authority, and define shared performance metrics to measure success. 

Delegated decisions are frequent and low-risk.  They can be effectively handled by an individual or working team, with limited input from others.  The authors note “The role-modeling of senior leaders is invaluable, but they may be reluctant” to delegate.  We agree.  In our experience, many nuclear managers were hesitant to delegate as many decisions as they could have to subordinates.  Their fear of being held accountable for a screw-up was just too great.  However, their goal should have been to delegate all decisions except those for which they alone had the capabilities and accountability.  Subordinates need appropriate training and explicit authority to make their decisions and they need to be held accountable by higher-level managers.  The organization needs to establish a clear policy defining when and how a decision should be elevated to a more senior decision maker. 

Ad hoc decisions are infrequent and low-risk; they were deliberately omitted from the article. 
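The four-quadrant typology above can be sketched as a simple classifier.  This is our own illustrative sketch, not code from the McKinsey article; the function name and the example decisions in the comments are our assumptions.

```python
# Map a decision's two attributes -- scope/impact (broad vs. narrow) and
# familiarity (infrequent vs. frequent) -- to one of the four quadrants.

def classify_decision(broad_impact: bool, frequent: bool) -> str:
    """Return the quadrant for a decision in the 2x2 matrix."""
    if broad_impact and not frequent:
        return "big-bet"        # e.g., replacing steam generators
    if broad_impact and frequent:
        return "cross-cutting"  # e.g., decisions needing Ops/Eng/Maint concurrence
    if not broad_impact and frequent:
        return "delegated"      # e.g., routine approvals by a working team
    return "ad hoc"             # infrequent, low-risk one-offs

print(classify_decision(broad_impact=True, frequent=False))  # prints: big-bet
```

Each quadrant would then route to the corresponding DM approach (executive sponsor and decomposition for big bets, stakeholder governance for cross-cutting decisions, and so on).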

“A case study in combating bias” (MQ, May 2017)

This is an interview with a senior executive of a German utility that invested €10 billion in conventional power projects, investments that failed when the political-economic environment evolved in a direction opposite to their assumptions.  In their postmortem, they realized they had succumbed to several cognitive biases, including status quo, confirmation, champion and sunflower.  The sunflower bias (groups aligning with their leaders) stretched far down the organizational hierarchy so lower-level analysts didn’t dare to suggest contrary assumptions or outcomes.

The article describes how the utility changed its DM practices to promote awareness of biases and implement debiasing techniques, e.g., one key element is the official designation of “devil’s advocates” in DM groups.  Importantly, training emphasizes that biases are not some personal defect but “just there,” i.e., part of the culture.  The interviewee noted that the revised process is very time-intensive, so it is utilized only for the most important decisions facing each user group. 

Our Perspective 

The McKinsey content describes executive level, strategic DM but many of the takeaways are equally applicable to decisions made at the individual, department and inter-department level, where a consistent approach is perhaps even more important in maintaining or improving organizational performance.

The McKinsey articles come in one of their Five Fifty packages, with a summary you can review in five minutes and the complete articles that may take fifty minutes total.  You should invest at least the smaller amount.


*  “Better Decisions,” McKinsey Quarterly Five Fifty.  Retrieved Nov. 28, 2017.

Friday, January 27, 2017

Leadership, Decisions, Systems Thinking and Nuclear Safety Culture

AcciMap Excerpt
We recently read a paper* that echoes some of the themes we emphasize on Safetymatters, viz., leadership, decisions and a systems view.  Following is an excerpt from the abstract:

“Leadership is progressively being recognized as a key** factor in supporting successful performance across a range of domains. . . . the decisions and actions that characterize safety leadership thus become important emergent properties in the prevention of incidents, which should be considered within the context of the broader organizational system and not merely constrained to understanding events or conditions that shape performance at the ‘sharp end’.”  [emphasis added]

The authors go on to analyze decisions and actions after a mining incident (landslide) using a combination of three different schemes: Rasmussen’s Risk Management Framework (RMF) and corresponding AcciMap, and the Critical Decision Method (CDM).

The RMF describes work systems as comprised of various levels and argues that safety performance is affected by decisions and actions at all levels from politicians in the external environment down through company executives and managers and finally to individual workers.  Rasmussen’s AcciMap is an expansive causal diagram for an accident or incident that displays the contributions (or omissions) at each level in the RMF and their connections.

CDM uses semi-structured interviews to obtain information about how individuals formulate their decisions, including context such as background knowledge and immediate influencing factors.  Consistent with the RMF, case study interviews were conducted with individuals at different organizational levels.  CDM data were used to construct the AcciMap.

We won’t go into the details of the analysis but it identified over a dozen key decisions made at different organizational levels before and during the incident; most were connected to at least one other key decision.  The AcciMap illustrates decisions and communications across multiple levels and thus provides a useful picture of how an organization anticipates and responds to an unusual situation.
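An AcciMap is essentially a layered causal graph: decisions at each organizational level, with links showing how a decision at one level shapes decisions at others.  The sketch below is entirely our own illustration of that structure, not code from the paper; the level names and example decisions are assumptions.

```python
from collections import defaultdict

# Organizational levels in Rasmussen's Risk Management Framework,
# from the external environment down to the "sharp end."
LEVELS = ["government", "regulators", "company", "management", "staff"]

class AcciMap:
    """A layered causal diagram of decisions and their connections."""
    def __init__(self):
        self.decisions = {}              # id -> (level, description)
        self.links = defaultdict(list)   # cause id -> [effect ids]

    def add_decision(self, dec_id, level, description):
        assert level in LEVELS, f"unknown level: {level}"
        self.decisions[dec_id] = (level, description)

    def link(self, cause, effect):
        """Record that one decision contributed to another."""
        self.links[cause].append(effect)

    def by_level(self, level):
        """All decision ids made at a given organizational level."""
        return [d for d, (lvl, _) in self.decisions.items() if lvl == level]

# Hypothetical decisions around an incident, spanning two levels.
m = AcciMap()
m.add_decision("d1", "management", "approve continued work near unstable slope")
m.add_decision("d2", "staff", "evacuate area when ground movement detected")
m.link("d1", "d2")
```

Tracing `m.links` across levels is what lets the diagram show how choices at the blunt end shaped responses at the sharp end.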

Our Perspective

The authors argue, and we agree, that this type of analysis provides greater detail and insight into the performance of an organization’s safety management system than traditional accident investigations (especially those focused on finding someone to blame).

This article does not specifically discuss culture.  But the body of decisions an organization produces is the strongest evidence and most visible artifact of its culture.  Organizational decisions are far more important than responses to surveys or interviews where people can report what they believe (or hope) the culture is, or what they think their audience wants to hear.

We like that RMF and AcciMap are agnostic: they can be used to analyze either “what went wrong” or “what went right” scenarios.  (The case study was in the latter category because no one was hurt in the incident.)  If an assessor is looking at a sample of decisions to infer a nuclear organization’s culture, most of those decisions will have had positive (or at least no negative) consequences.

The authors are Australian academics but this short (8 pages total) paper is quite readable and a good introduction to CDM and Rasmussen’s constructs.  The references include people whose work we have positively reviewed on Safetymatters, including Dekker, Hollnagel, Leveson and Reason.

Bottom line: There is nothing about culture or nuclear here, but the overall message reinforces our beliefs about how to think about Nuclear Safety Culture.


*  S-L. Donovan, P.M. Salmon and M.G. Lenné, “The leading edge: A systems thinking methodology for assessing safety leadership,” Procedia Manufacturing 3 (2015), pp. 6644–6651.  Available at sciencedirect.com; retrieved Jan. 19, 2017.

**  Note they do not say “one and only” or even “most important.”

Monday, January 16, 2017

Nuclear Safety Culture and the Shrinking U.S. Nuclear Plant Population

In the last few years, nuclear plant owners have shut down or scheduled for shutdown 17 units totaling over 14,000 MW.  Over half of these units had (or have) nuclear safety culture (NSC) issues sufficiently noteworthy to warrant mention here on Safetymatters.  We are not saying that NSC issues alone have led to the permanent shutdown of any plant, but such issues often accompany poor decision-making that can hasten a plant’s demise.  Following is a roll call of the deceased or endangered plants.

Plants with NSC issues

NSC issues provide windows into organizational behavior; the sizes of issues range from isolated problems to systemic weaknesses.

FitzPatrick

This one doesn’t exactly belong on the list.  Entergy scheduled it for shutdown in Jan. 2017 but instead it will likely be purchased by a white knight, Exelon, in a transaction brokered by the governor of New York.  With respect to NSC, in 2012 FitzPatrick received a Confirmatory Order (CO) after the NRC discovered violations, the majority of which were willful, related to adherence to site radiation protection procedures. 

Fort Calhoun

This plant shut down on Oct. 24, 2016.  According to the owner, the reason was “market conditions.”  It’s hard for a plant to be economically viable when it was shut down for over two years because of scheduled maintenance, flooding, a fire and various safety violations.  The plant kept moving down the NRC Action Matrix which meant more inspections and a third-party NSC assessment.  A serious cultural issue was how the plant staff’s perception of the Corrective Action Program (CAP) had evolved to view the CAP as a work management system rather than the principal way for the plant to identify and fix its problems.  Click on the Fort Calhoun label to pull up our related posts.

Indian Point 2 and 3

Units 2 and 3 are scheduled to shut down in 2020 and 2021, respectively.  As the surrounding population grew, the political pressure to shut them down also increased.  A long history of technical and regulatory issues did not inspire confidence.  In NSC space, they had problems with making incomplete or false statements to the NRC, a cardinal sin for a regulated entity.  The plant received a Notice of Violation (NOV) in 2015 for providing information about a licensed operator's medical condition that was not complete and accurate; they received a NOV in 2014 because a chemistry manager falsified test results.  Our May 12, 2014 post on the latter event is a reader favorite. 

Palisades

This plant had a long history of technical and NSC issues.  It is scheduled for shutdown on Oct. 1, 2018.  In 2015 Palisades received a NOV because it provided information to the NRC that was not complete and accurate; in 2014 it received a CO because a security manager assigned a person to a role for which he was not qualified; in 2012 it received a CO after an operator left the control room without permission and without performing a turnover to another operator.  Click on the Palisades label to pull up our related posts.

Pilgrim

This plant is scheduled for shutdown on May 31, 2019.  It worked its way to column 4 of the Action Matrix in Sept. 2015 and is currently undergoing an IP 95003 inspection, including an in-depth evaluation of the plant’s CAP and an independent assessment of the plant’s NSC.  In 2013, Pilgrim received a NOV because it provided information to the NRC that was not complete and accurate; in 2005 it received a NOV after an on-duty supervisor was observed sleeping in the control room.

San Onofre 2 and 3

These units ceased operations on Jan. 1, 2012.  The proximate cause of death was management incompetence: management opted to replace the old steam generators (S/Gs) with a large, complex design that the vendor had never fabricated before.  The new S/Gs were unacceptable in operation when tube leakage occurred due to excessive vibrations.  NSC was never anything to write home about either: the plant was plagued for years by incidents, including willful violations, and employees claiming they feared retaliation if they reported or discussed such incidents.

Vermont Yankee

This plant shut down on Dec. 29, 2014 ostensibly for “economic reasons” but it had a vociferous group of critics calling for it to go.  The plant evidenced a significant NSC issue in 2009 when plant staff parsed an information request to the point where they made statements that were “incomplete and misleading” to state regulators about tritium leakage from plant piping.  Eleven employees, including the VP for operations, were subsequently put on leave or reprimanded.  Click on the Vermont Yankee label to pull up our related posts.

Plants with no serious or interesting NSC issues 


The following plants have not appeared on our NSC radar in the eight years we’ve been publishing Safetymatters.  We have singled out a couple of them for extremely poor management decisions.

Crystal River basically committed suicide when they tried to create a major containment penetration on their own and ended up with a delaminating containment.  It ceased operations on Sept. 26, 2009.

Kewaunee shut down on May 7, 2013 for economic reasons, viz., the plant owner apparently believed their initial 8-year PPA would be followed by equal or even higher prices in the electricity market.  The owner was wrong.

Rounding out the list, Clinton is scheduled to shut down June 1, 2017; Diablo Canyon 1 and 2 will shut down in 2024 and 2025, respectively; Oyster Creek is scheduled to shut down on June 1, 2019; and Quad Cities 1 and 2 are scheduled to shut down on June 1, 2018 — all for business reasons.

Our Perspective

Bad economics (low natural gas prices, no economies of scale for small units) were the key drivers of these shutdown decisions but NSC issues and management incompetence played important supporting roles.  NSC problems provide ammunition to zealous plant critics but, more importantly, also create questions about plant safety and viability in the minds of the larger public.

Monday, January 4, 2016

How Top Management Decisions Shape Culture

A brief article* in the December 2015 The Atlantic magazine asks “What was VW thinking?” then reviews a few classic business cases to show how top management, often CEO, decisions can percolate down through an organization, sometimes with appalling results.  The author also describes a couple of mechanisms by which bad decision making can be institutionalized.  We’ll start with the cases.

Johnson & Johnson had a long-standing credo that outlined its responsibilities to those who used its products.  In 1979, the CEO reinforced the credo’s relevance to J&J’s operations.  When poisoned Tylenol showed up in stores, J&J did not hesitate to recall product, warn people against taking Tylenol and absorb a $100 million hit.  This is often cited as an example of a corporation doing the right thing. 

B. F. Goodrich promised an Air Force contractor an aircraft brake that was ultralight and ultracheap.  The only problem was it didn’t work, in fact it melted.  Only by massively finagling the test procedures and falsifying test results did they get the brake qualified.  The Air Force discovered the truth when they reviewed the raw test data.  A Goodrich whistleblower announced his resignation over the incident but was quickly fired by the company.  

Ford President Lee Iacocca wanted the Pinto to be light, inexpensive and available in 25 months.  The gas tank’s position made the vehicle susceptible to fire when the car was rear-ended but repositioning the gas tank would have delayed the roll-out schedule.  Ford delayed addressing the problem, resulting in at least one costly lawsuit and bad publicity for the company.

With respect to institutional mechanisms, the author reviews Diane Vaughan’s normalization of deviance and how it led to the space shuttle Challenger disaster.  To promote efficiency, organizations adopt scripts that tell members how to handle various situations.  Scripts provide a rationale for decisions, which can sometimes be the wrong decisions.  In Vaughan’s view, scripts can “expand like an elastic waistband” to accommodate more and more deviation from standards or norms.  Scripts are important organizational culture artifacts.  We have often referred to Vaughan’s work on Safetymatters.

The author closes with a quote: “Culture starts at the top, . . . Employees will see through empty rhetoric and will emulate the nature of top-management decision making . . . ”  The speaker?  Andrew Fastow, Enron’s former CFO and former federal prison inmate.

Our Perspective

I used to use these cases when I was teaching ethics to business majors at a local university.  Students would say they would never do any of the bad stuff.  I said they probably would, especially once they had mortgages (today it’s student debt), families and career aspirations.  It’s hard to put up a fight when the organization has so thoroughly accepted the script that people actually believe they are doing the right thing.  And don’t even think about being a whistleblower unless you’ve got money set aside and a good lawyer lined up.

Bottom line: This is worth a quick read.  It illustrates the importance of senior management’s decisions as opposed to its sloganeering or other empty leadership behavior.


*  J. Useem, “What Was Volkswagen Thinking?  On the origins of corporate evil—and idiocy,” The Atlantic (Dec. 2015), pp. 26-28.

Thursday, January 29, 2015

Safety Culture at Chevron’s Richmond, CA Refinery



The U.S. Chemical Safety and Hazard Investigation Board (CSB) released its final report* on the August 2012 fire at the Chevron refinery in Richmond, CA caused by a leaking pipe.  In the discussion around the CSB’s interim incident report (see our April 16, 2013 post) the agency’s chairman said Chevron’s safety culture (SC) appeared to be a factor in the incident.  This post focuses on the final report findings related to the refinery’s SC.

During their investigation, the CSB learned that some personnel were uncomfortable working around the leaking pipe because of potential exposure to the flammable fluid.  “Some individuals even recommended that the Crude Unit be shut down, but they left the final decision to the management personnel present.  No one formally invoked their Stop Work Authority.  In addition, Chevron safety culture surveys indicate that between 2008 and 2010, personnel had become less willing to use their Stop Work Authority. . . . there are a number of reasons why such a program may fail related to the ‘human factors’ issue of decision-making; these reasons include belief that the Stop Work decision should be made by someone else higher in the organizational hierarchy, reluctance to speak up and delay work progress, and fear of reprisal for stopping the job.” (pp. 12-13) 

The report also mentioned decision making that favored continued production over safety. (p. 13)  In the report’s details, the CSB described the refinery organization’s decisions to keep operating under questionable safety conditions as “normalization of deviance,” a term popularized by Diane Vaughan and familiar to Safetymatters readers. (p. 105) 

The report included a detailed comparison of the refinery’s 2008 and 2010 SC surveys.  In addition to the decrease in employees’ willingness to use their Stop Work Authority, surveyed operators and mechanics reported an increased belief that using such authority could get them into trouble (p. 108) and that equipment was not properly cared for. (p. 109) 

Our Perspective

We like the CSB.  They’re straight shooters and don’t mince words.  While we are not big fans of SC surveys, the CSB’s analysis of Chevron’s SC surveys appears to show a deteriorating SC between 2008 and 2010. 

Chevron says it agrees with some CSB findings; however, Chevron believes “the CSB has presented an inaccurate depiction of the Richmond Refinery’s current process safety culture.”  Chevron says “In a third-party survey commissioned by Contra Costa County, when asked whether they feel free to use Stop Work Authority during any work activity, 93 percent of Chevron refinery workers responded favorably.  The overall results for the process safety survey exceeded the survey taker’s benchmark for North American refineries.”**  Who owns the truth here?  The CSB?  Chevron?  Both?    

In 2013, the city of Richmond adopted an Industrial Safety Ordinance (RISO) that requires Chevron to conduct SC assessments, preserve records and develop corrective actions.  The CSB recommendations include beefing up the RISO to evaluate the quality of Chevron’s action items and their actual impact on SC. (p. 116)

Chevron continues to receive blowback from the incident.  The refinery is the largest employer and taxpayer in Richmond.  It’s not a company town but Chevron has historically had a lot of political sway in the city.  That’s changed, at least for now.  In the recent city council election, none of the candidates backed by Chevron was elected.***

As an aside, the CSB report referenced a 2010 study**** that found a sample of oil and gas workers directly intervened in only about 2 out of 5 of the unsafe acts they observed on the job.  How diligent are you and your colleagues about calling out safety problems?


*  CSB, “Final Investigation Report Chevron Richmond Refinery Pipe Rupture and Fire,” Report No. 2012-03-I-CA (Jan. 2015).

**  M. Aldax, “Survey finds Richmond Refinery safety culture strong,” Richmond Standard (Jan. 29, 2015).  Retrieved Jan. 29, 2015.  The Richmond Standard is a website published by Chevron Richmond.

***  C. Jones, “Chevron’s $3 million backfires in Richmond election,” SFGate (Nov. 5, 2014).  Retrieved Jan. 29, 2015.

****  R.D. Ragain, P. Ragain, Mike Allen and Michael Allen, “Study: Employees intervene in only 2 of 5 observed unsafe acts,” Drilling Contractor (Jan./Feb. 2011).  Retrieved Jan. 29, 2015.

Wednesday, April 16, 2014

GM CEO’s Revealing Revelation

GM CEO Mary Barra
As most of our readers are aware General Motors has been much in the news of late regarding a safety issue associated with the ignition switches in the Chevy Cobalt.  At the beginning of April the new CEO of GM, Mary Barra, testified at Congressional hearings investigating the issue.  A principal focus of the hearings was the extent to which GM executives were aware of the ignition switch issues which were identified some ten years ago but did not result in recalls until this February.  Barra has initiated a comprehensive internal investigation of the issues to determine why it took so many years for a safety defect to be announced.

In a general sense this sounds all too familiar as the standard response to a significant safety issue.  Launch an independent investigation to gather the facts and figure out what happened, who knew what, who decided what and why.  The current estimate is that it will take almost two months for this process to be completed.  Also familiar is that accountability inevitably starts (and often ends) at the engineering and low level management levels.  To wit, GM has already announced that two engineers involved in the ignition switch issues have been suspended.

But somewhat buried in Barra’s Congressional testimony is an unusually revealing comment.  According to the Wall Street Journal, Barra said “senior executives in the past were intentionally not involved in details of recalls so as to not influence them.”*  Intentionally not involved in decisions regarding recalls - recalls which can involve safety defects and product liability issues and have significant public and financial liabilities.  Why would you not want the corporation's executives to be involved?  And if one is to believe the rest of Barra’s testimony, it appears executives were not even aware of these issues.

Well, what if executives were involved in these critical decisions - what influence could they have that GM would be afraid of?  Certainly if executive involvement would assure that technical assessments of potential safety defects were rigorous and conservative - that would not be undue influence.  So that leaves the other possibility - that involvement of executives could inhibit or constrain technical assessments from assuring an appropriate priority for safety.  This would be tantamount to the chilling effect popularized in the nuclear industry.  If management involvement creates an implicit pressure to minimize safety findings, there goes the safety conscious work environment and safety.


If keeping executives out of the decision process is believed to yield “better” decisions, it says some pretty bad things about either their competence or ethics.  Having executives involved should at least ensure that they are aware and knowledgeable of potential product safety issues and in a position to proactively assure that decisions and actions are appropriate.   What might be the most likely explanation is that executives don’t want the responsibility and accountability for these types of decisions.  They might prefer to remain protected at the safety policy level but leave the messy reality of comporting those dictates with real world business considerations to lower levels of the organization.  Inevitably accountability rolls downhill to somebody in the engineering or lower management ranks. 

One thing is certain: whatever the substance and process of GM’s decision, it is not transparent, probably not well documented, and now requires a major forensic effort to reconstruct what happened and why.  This is not unusual; it is the standard course in other industries, including nuclear generation.  Wouldn’t we be better off if decisions were routinely subject to the rigor of contemporaneous recording, including how complex and uncertain safety issues are decided in the context of other business priorities, and by whom?
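What might such a contemporaneous decision record look like?  The following is a minimal sketch, entirely our own; the field names and example entries are illustrative assumptions, not anything GM or any regulator prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A contemporaneous record of a safety-significant decision."""
    issue: str                    # the question being decided
    decision: str                 # what was decided
    decided_by: str               # the named, accountable decision maker
    safety_considerations: str    # how safety was weighed
    business_considerations: str  # competing business priorities
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # captured at decision time

# Hypothetical example entry.
rec = DecisionRecord(
    issue="Address reported ignition switch failures with a recall?",
    decision="Defer recall pending further engineering analysis",
    decided_by="(a named executive, not an anonymous committee)",
    safety_considerations="Known stalling risk at low speeds",
    business_considerations="Recall cost and production schedule impact",
)
```

Even a record this simple would make the who, what and why of a decision recoverable without a forensic investigation.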



*  J.B. White and J. Bennett, “Some at GM Brass Told of Cobalt Woe,” Wall Street Journal online (Apr. 11, 2014).

Wednesday, March 19, 2014

Safety Culture at Tohoku Electric vs. Tokyo Electric Power Co. (TEPCO)

Fukushima No. 1 (Daiichi)
An op-ed* in the Japan Times asserts that the root cause of the Fukushima No. 1 (Daiichi) plant’s failures following the March 11, 2011 earthquake and tsunami was TEPCO’s weak corporate safety culture (SC).  This post summarizes the op-ed then provides some background information and our perspective.

Op-Ed Summary 

According to the authors, Tohoku Electric had a stronger SC than TEPCO.  Tohoku had a senior manager who strongly advocated safety, company personnel participated in seminars and panel discussions about earthquake and tsunami disaster prevention, and the company had strict disaster response protocols in which all workers were trained.  Although their Onagawa plant was closer to the March 11, 2011 quake epicenter and experienced a higher tsunami, it managed to shut down safely.

SC-related initiatives like Tohoku’s were not part of TEPCO’s culture.  Fukushima No. 1’s problems date back to its original siting and early construction.  TEPCO removed 25 meters from the 35-meter natural seawall of the plant site and built its reactor buildings at a lower elevation of 10 meters (compared to 14.7 m for Onagawa).  Over the plant’s life, as research showed that tsunami levels had been underestimated, TEPCO “resorted to delaying tactics, such as presenting alternative scientific studies and lobbying”** rather than implementing countermeasures.

Background and Our Perspective

The op-ed is a condensed version of the authors’ longer paper***, which was adapted from a research paper for an engineering class, presumably written by Ms. Ryu.  The op-ed is basically a student paper based on public materials.  You should read the longer paper, review the references and judge for yourself if the authors have offered conclusions that go beyond the data they present.

I suggest you pay particular attention to the figure that purports to compare Tohoku and TEPCO using INPO’s ten traits of a healthy nuclear SC.  Not surprisingly, TEPCO doesn’t fare very well, but note the ratings were based on “the author’s personal interpretations and assumptions.” (p. 26)

Also note that the authors do not mention Fukushima No. 2 (Daini), a four-unit TEPCO plant about 15 km south of Fukushima No. 1.  Fukushima No. 2 also experienced damage and significant challenges after being hit by a 9m tsunami but managed to reach shutdown by March 18, 2011.  What could be inferred from that experience?  Same corporate culture but better luck?

Bottom line, by now it’s generally agreed that TEPCO’s SC was unacceptably weak, so the authors plow no new ground in that area.  However, their description of Tohoku Electric’s behavior is illuminating and useful.


*  A. Ryu and N. Meshkati, “Culture of safety can make or break nuclear power plants,” Japan Times (Mar. 14, 2014).  Retrieved Mar. 19, 2014.

**  Quoted in the op-ed but taken from “The official report of the Fukushima Nuclear Accident Independent Investigation Commission [NAIIC] Executive Summary” (The National Diet of Japan, 2012), p. 28.  The NAIIC report offers a longer Fukushima root cause explanation than the op-ed, viz., “the root causes were the organizational and regulatory systems that supported faulty rationales for decisions and actions, . . .” (p. 16) and “The underlying issue is the social structure that results in ‘regulatory capture,’ and the organizational, institutional, and legal framework that allows individuals to justify their own actions, hide them when inconvenient, and leave no records in order to avoid responsibility.” (p. 21)  IMHO, if this were boiled down, there wouldn’t be much SC left in the bottom of the pot.

***  A. Ryu and N. Meshkati, “Why You Haven’t Heard About Onagawa Nuclear Power Station after the Earthquake and Tsunami of March 11, 2011” (Rev. Feb. 26, 2014).

Wednesday, February 12, 2014

Left Brain, Right Stuff: How Leaders Make Winning Decisions by Phil Rosenzweig

In this new book* Rosenzweig extends the work of Kahneman and other scholars to consider real-world decisions.  He examines how the content and context of such decisions are significantly different from controlled experiments in a decision lab.  Note that Rosenzweig’s advice is generally aimed at senior executives, who typically have greater latitude in making decisions and greater responsibility for achieving results than lower-level professionals, but all managers can benefit from his insights.  This review summarizes the book and explores its lessons for nuclear operations and safety culture. 

Real-World Decisions

Decision situations in the real world can be more “complex, consequential and laden with uncertainty” than those described in laboratory experiments. (p. 6)  A combination of rigorous analysis (left brain) and ambition (the right stuff—high confidence and a willingness to take significant risks) is necessary to achieve success. (pp. 16-18)  The executive needs to identify the important characteristics of the decision he is facing.  Specifically,

Can the outcome following the decision be influenced or controlled?

Some real-world decisions cannot be controlled, e.g., the price of Apple stock after you buy 100 shares.  In those situations the traditional advice to decision makers, viz., be rational, detached, analyze the evidence and watch out for biases, is appropriate. (p. 32)

But for many decisions, the executive (or his team) can influence outcomes through high (but not excessive) confidence, positive illusions, calculated risks and direct action.  The knowledgeable executive understands that individuals perceived as good executives exhibit a bias for action and “The essence of management is to exercise control and influence events.” (p. 39)  Therefore, “As a rule of thumb, it's better to err on the side of thinking we can get things done rather than assuming we cannot.  The upside is greater and the downside less.” (p. 43)

Think about your senior managers.  Do they under or over-estimate their ability to influence future performance through their decisions?

Is the performance based on the decision(s) absolute or relative?

Absolute performance is described using some system of measurement, e.g., how many free throws you make in ten attempts or your batting average over a season.  It is not related to what anyone else does. 

But in competition, performance is relative to rivals.  Ten percent growth may not be sufficient if a rival grows fifty percent.**  In addition, payoffs for performance may be highly skewed: in the Olympics, there are three medals and the others get nothing; in many industries, the top two or three companies make money while the others struggle to survive; in the most extreme case, it’s winner take all and everyone else gets nothing.  It is essential to take risks to succeed in highly skewed competitive situations.

Absolute and relative performance may be connected.  In some cases, “a small improvement in absolute performance can make an outsize difference in relative performance, . . .” (p. 66)  For example, if a well-performing nuclear plant can pick up a couple percentage points of annual capacity factor (CF), it can make a visible move up the CF rankings thus securing bragging rights (and possibly bonuses) for its senior managers.

For a larger example, remember when the electricity markets deregulated and many utilities rushed to buy or build merchant plants?  Note how many have crawled back under the blanket of regulation where they only have to demonstrate prudence (a type of absolute performance) to collect their guaranteed returns, and not compete with other sellers.  In addition, there is very little skew in the regulated performance curve; even mediocre plants earn enough to carry on their business.  Lack of direct competition also encourages sharing information, e.g., operating experience in the nuclear industry.  If competition is intense, sharing information is irresponsible and possibly dangerous to one's competitive position. (p. 61)

Do your senior managers compare their performance to some absolute scale, to other members of your fleet (if you’re in one), to similar plants, to all plants, or to the company’s management compensation plan?

Will the decision be repeated with rapid feedback, or is it a one-off whose results will take a long time to appear? 


Repetitive decisions, e.g., putting in golf, can benefit from deliberate practice, where performance feedback is used to adjust future decisions (action, feedback, adjustment, action).  This is related to the extensive training in the nuclear industry and the familiar do, check and adjust cycle ingrained in all nuclear workers.

However, most strategic decisions are unique or have consequences that will only manifest in the long term.  In such cases, one has to make the soundest decision possible, then take one’s best shot. 

Executives Make Decisions in a Social Setting

Senior managers depend on others to implement decisions and achieve results.  Leadership (exaggerated confidence, emphasizing certain data and beliefs over others, consistency, fairness and trust) is indispensable for inspiring subordinates and shaping culture.  Quoting Jack Welch, “As a leader, your job is to steer and inspire.” (p. 146)  “Effective leadership . . . means being sincere to a higher purpose and may call for something less than complete transparency.” (p. 158)

How about your senior managers?  Do they tell the whole truth when they are trying to motivate the organization to achieve performance goals?  If not, how does that impact trust over the long term?  
    
The Role of Confidence and Overconfidence

There is a good discussion of the overuse of the term “overconfidence,” which has multiple meanings but whose meaning in a specific application is often undefined.  For example, overconfidence can refer to being too certain that our judgment is correct, believing we can perform better than warranted by the facts (absolute performance) or believing we can outperform others (relative performance). 

Rosenzweig conducted some internet research on overconfidence.  The most common use in the business press was to explain, after the fact, why something had gone wrong. (p. 85)  “When we charge people with overconfidence, we suggest that they contributed to their own demise.” (p. 87)  This sounds similar to the search for the “bad apple” after an incident occurs at a nuclear plant.

But confidence is required to achieve high performance.  “What's the best level of confidence?  An amount that inspires us to do our best, but not so much that we become complacent, or take success for granted, or otherwise neglect what it takes to achieve high performance.” (p. 95)

Other Useful Nuggets

There is a good extension of the discussion (introduced in Kahneman) of base rates and conditional probabilities including the full calculations from two of the conditional probability examples in Kahneman's Thinking, Fast and Slow (reviewed here).
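To give a flavor of those base-rate calculations, here is a minimal sketch using Kahneman’s well-known taxicab problem (the numbers below come from that classic puzzle, not from Rosenzweig’s text): 85 percent of a city’s cabs are Green and 15 percent are Blue, and a witness who identifies cab colors correctly 80 percent of the time says a Blue cab was involved in an accident.

```python
def posterior_blue(prior_blue, witness_accuracy):
    """P(cab is Blue | witness says Blue), via Bayes' rule.

    prior_blue: base rate of Blue cabs in the city
    witness_accuracy: probability the witness reports the color correctly
    """
    # Witness says "Blue" either correctly (cab is Blue) or
    # mistakenly (cab is Green, misidentified as Blue).
    p_blue_and_says_blue = prior_blue * witness_accuracy
    p_green_and_says_blue = (1 - prior_blue) * (1 - witness_accuracy)
    return p_blue_and_says_blue / (p_blue_and_says_blue + p_green_and_says_blue)

print(round(posterior_blue(0.15, 0.80), 3))  # prints 0.414
```

Despite the witness’s 80 percent accuracy, the low base rate of Blue cabs drags the probability down to about 41 percent: exactly the kind of counterintuitive result that makes ignoring base rates such a common decision-making error.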

The discussion on decision models notes that such models can be useful for overcoming common biases, analyzing large amounts of data and predicting elements of the future beyond our influence.  However, if we have direct influence, “Our task isn't to predict what will happen, but to make it happen.” (p. 189)

Other chapters cover decision making in a major corporate acquisition (focusing on bidding strategy) and in start-up businesses (focusing on a series of start-up decisions).

Our Perspective

Rosenzweig acknowledges that he is standing on the shoulders of Kahneman and other students of decision making.  But “An awareness of common errors and cognitive biases is only a start.” (p. 248)  The executive must consider the additional decision dimensions discussed above to properly frame his decision; in other words, he has to decide what he’s deciding.

The direct applicability to nuclear safety culture may seem slight, but we believe executives’ values and beliefs, as expressed in the decisions they make over time, exert a powerful influence on the shape and evolution of culture.  In other words, we choose to emphasize the transactional nature of leadership.  In contrast, Rosenzweig emphasizes its transformational nature: “At its core, however, leadership is not a series of discrete decisions, but calls for working through other people over long stretches of time.” (p. 164)  Effective leaders are good at both.

Of course, decision making and influence on culture is not the exclusive province of senior managers.  Think about your organization's middle managers—the department heads, program and project managers, and process owners.  How do they gauge their performance?  How open are they to new ideas and approaches?  How much confidence do they exhibit with respect to their own capabilities and the capabilities of those they influence? 

Bottom line, this is a useful book.  It’s very readable, with many clear and engaging examples, and has the scent of academic rigor and insight; I would not be surprised if it achieves commercial success.


*  P. Rosenzweig, Left Brain, Right Stuff: How Leaders Make Winning Decisions (New York: Public Affairs, 2014).

**  Referring to Lewis Carroll's Through the Looking Glass, this situation is sometimes called “Red Queen competition [which] means that a company can run faster but fall further behind at the same time.” (p. 57)