
Saturday, March 2, 2024

Boeing’s Safety Culture Under the FAA’s Microscope

The Federal Aviation Administration (FAA) recently released its report* on the safety culture (SC) at Boeing.  The FAA Expert Panel was tasked with reviewing SC after two crashes involving the latest models of Boeing’s 737 MAX airplanes.  The January 2024 door plug blowout happened as the report was nearing completion and reinforces the report’s findings.

737 MAX door plug

The report has been summarized and widely reported in mainstream media so we will not review all its findings and recommendations here.  We want to focus on two parts of the report that address topics we have long promoted as keys to understanding how strong (or weak) an organization’s SC is, viz., an organization’s decision-making processes and executive compensation.  In addition, we will discuss a topic that’s new to us: how to ensure the independence of employees whose work includes assessing company work products from the regulator’s perspective.

Decision-making

An organization’s decision-making processes create some of the most visible artifacts of the organization’s culture: a string of decisions (guided by policies, procedures, and priorities) and their consequences.

The report begins with a clear FAA description of decision-making’s important role in a Safety Management System (SMS) and an organization’s overall management.  In part, an “SMS is all about decision-making. Thus it has to be a decision-maker's tool, not a traditional safety program separate and distinct from business and operational decision making.” (p. 10)

However, the panel’s finding on Boeing’s SMS is a mixed bag.  “Boeing provided evidence that it is using its SMS to evaluate product safety decisions and some business decisions. The Expert Panel’s review of Boeing’s SMS documentation revealed detailed procedures on how to use SMS to evaluate product safety decisions, but there are no detailed procedures on how to determine which business decisions affect safety or how they should be evaluated under SMS.” (emphasis added) (p. 35)

The associated recommendation is “Develop detailed procedures to determine which business activities should be evaluated under SMS and how to evaluate those decisions.” (ibid.)  We think the recommendation addresses the specific problem identified in the finding.

One of the major inputs to a decision-making system is an organization’s priorities.  The FAA says safety should always be the top priority but Boeing’s commitment to safety has arguably weakened over time.

“Boeing provided the Expert Panel with a copy of the Boeing Safety Management System Policy, dated April 2022, which states, in part, ‘… we make safety our top priority.’ Boeing revised this policy in August 2023 with . . .  a change to the message ‘we make safety our top priority’ to ‘safety is our foundation.’” (p. 29)

Lowering the bar did not help.  “The [Expert] panel observed documentation, survey responses, and employee interviews that did not provide objective evidence of a foundational commitment to safety that matched Boeing’s descriptions of that objective.” (p. 22)

Boeing also created seeds of confusion for its safety decision makers.  Boeing implemented its SMS to operate alongside (and not replace or integrate with) its existing safety program.

“During interviews, Boeing employees highlighted that SMS implementation was not to disrupt existing safety program or systems.  SMS operating procedure documents spoke of SMS as the overarching safety program but then also provided segregation of SMS-focused activities from legacy safety activities . . .” (p. 24)

Executive compensation

We have long said that if safety performance is important to an organization, then its senior managers’ compensation should have a safety performance-related component. 

Boeing has included safety in its executive financial incentive program.  Safety is one of five factors comprising operational performance which, in turn, is combined with financial performance to determine company-level performance.  Because of the weights used in the incentive model, “The Product Safety measure comprised approximately 4% of the overall 2022 Annual Incentive Award.” (p. 28)
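The arithmetic behind a number like that is straightforward.  For illustration only (the specific weights below are our assumptions, not Boeing’s published ones): if operational performance counts for 25% of the company-level score and safety is one of five equally weighted operational factors, then

    safety’s share of the award ≈ 0.25 x 0.20 = 0.05, i.e., about 5%

which is in the same neighborhood as the roughly 4% the panel reports.  However the weights are actually set, a single factor nested two levels down cannot carry much of the award.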

Is 4% enough to influence executive behavior?  You be the judge.

Employee independence from undue management influence   

Boeing’s relationship with the FAA has an aspect that we don’t see in other industries. 

Boeing holds an Organization Designation Authorization (ODA) from the FAA. This allows Boeing to “make findings and issue certificates, i.e., perform discretionary functions in engineering, manufacturing, operations, airworthiness, or maintenance on behalf of the [FAA] Administrator.” (p. 12)

Basically, the FAA delegates some of its authority to Boeing employees, the ODA Unit Members (UMs), who then perform certain assessment and certification tasks.  “When acting as a representative of the Administrator, an individual is required to perform in a manner consistent with the policies, guidelines, and directives of the FAA. When performing a delegated function, an individual is legally distinct from, and must act independent of, the ODA holder.” (ibid.)  These employees are supposed to take the FAA’s view of situations and apply the FAA’s rules even if the FAA’s interests are in conflict with Boeing’s business interests. 

This might work in a perfect world but in Boeing’s world it has had, and still has, problems.  Primarily, “Boeing’s restructuring of the management of the ODA unit decreased opportunities for interference and retaliation against UMs, and provides effective organizational messaging regarding independence of UMs. However, the restructuring, while better, still allows opportunities for retaliation to occur, particularly with regards to salary and furlough ranking.” (emphasis added) (p. 5)  In addition, “The ability to comply with the ODA’s approved procedures is present; however, the integration of the SMS processes, procedures, and data collection requirements has not been accomplished.” (p. 26)

To an outsider, this looks like bad organizational design and practices. 

The U.S. commercial nuclear industry offers a useful contrast.  The regulator (Nuclear Regulatory Commission) expects its licensees to follow established procedures, perform required tests and inspections, and report any problems to the NRC.  Self-reporting is key to an effective relationship built on a base of trust.  However, it’s “trust but verify.”  The NRC has its own full-time employees in all the power plants, performing inspections, monitoring licensee operations, and interacting with licensee personnel.  The inspectors’ findings can lead, and have led, to increased oversight of licensee activities by the NRC.

Our perspective

It’s obvious that Boeing has emphasized production over safety.  The problems described above are evidence of broad systemic issues which are not amenable to quick fixes.  Integrating SC into everyday decision-making is hard work of the “continuous improvement” variety; it will not happen by management fiat.  Adjusting the compensation plan will require the Board to take safety more seriously.  Reworking the ODA program to eliminate all pressures and goal conflicts may not be possible; this is a big problem because the FAA has effectively deputized 1,000 people to perform FAA functions at Boeing. (p. 25)

The report only covers the most visible SC issues.  Complacency, normalization of deviance, the multitude of biases that can affect decision-making, and other corrosive factors are perennial threats to a strong SC and can affect “the natural drift in organizations.” (p. 40)  Such drift may lead to everything from process inefficiencies to tragic safety failures.

Boeing has taken one step: they fired the head of the 737 MAX program.**  Organizations often toss a high-level executive into a volcano to appease the regulatory gods and buy some time.  Boeing’s next challenge is that the FAA has given Boeing 90 days to fix its quality problems highlighted by the door plug blowout.***

Bottom line: Grab your popcorn, the show is just starting.  Boeing is probably too big to fail but it is definitely going to be pulled through the wringer. 


*  “Section 103 Organization Designation Authorizations (ODA) for Transport Airplanes Expert Panel Review Report,” Federal Aviation Administration (Feb. 26, 2024). 

**  N. Robertson, “Boeing fires head of 737 Max program,” The Hill (Feb. 21, 2024).

***  D. Shepardson and V. Insinna, “FAA gives Boeing 90 days to develop plan to address quality issues,” Reuters (Feb. 28, 2024).

Thursday, May 25, 2023

The National Academies on Behavioral Economics

Report cover
A National Academies of Sciences, Engineering, and Medicine (NASEM) committee recently published a report* on the contributions of behavioral economics (BE) to public policy.  BE is “an approach to understanding human behavior and decision making that integrates knowledge from psychology and other behavioral fields with economic analysis.” (p. Summ-1)

The report’s first section summarizes the history and development of the field of behavioral economics.  Classical economics envisions the individual person as a decision maker who has all relevant information available, and makes rational decisions that maximize his overall, i.e. short- and long-term, self-interest.  In contrast, BE recognizes that actual people making real decisions have many built-in biases, limitations, and constraints.  The following five principles apply to the decision making processes behavioral economists study:

Limited Attention and Cognition - People pay limited attention to relevant aspects of their environment and often make cognitive errors.

Inaccurate Beliefs - Individuals can have incorrect perceptions or information about situations, relevant incentives, their own abilities, and the beliefs of others.

Present Bias - People tend to disproportionately focus on issues that are in front of them in the present moment.

Reference Dependence and Framing - Individuals tend to consider how their decision options relate to a particular reference point, e.g., the status quo, rather than considering all available possibilities. People are also sensitive to the way decision problems are framed, i.e., how options are presented, and this affects what comes to their attention and can lead to different perceptions, reactions, and choices.

Social Preferences and Social Norms - Decision makers often consider how their decisions affect others, how they compare with others, and how their decisions imply values and conformance with social norms.

The task of policy makers is to acknowledge these limitations and present decision situations to people in ways they can comprehend, helping them make decisions that serve their own and society’s interests.  In practice this means decision situations “can be designed to modify the habitual and unconscious ways that people act and make decisions.” (p. Summ-3)

Decision situation designers use various interventions to inform and guide individuals’ decision making.  The NASEM committee mapped 23 possible interventions against the 5 principles.  It’s impractical to list all the interventions here but the more graspable ones include:

Defaults – The starting decision option is the designer’s preferred choice; the decision maker must actively choose a different option.

De-biasing – Attempt to correct inaccurate beliefs by presenting salient information related to past performance of the individual decision maker or a relevant reference group.

Mental Models – Update or change the decision maker’s mental representation of how the world works.

Reminders – Use reminders to cut through inattention, highlight desired behavior, and focus the decision maker on a future goal or desired state.

Framing – Focus the decision maker on a specific reference point, e.g., a default option or the negative consequences of inaction (not choosing any option).

Social Comparison and Feedback - Explicitly compare an individual’s performance with a relevant comparison or reference group, e.g., the individual’s professional peers.

Interventions can range from “nudges” that alter people’s behavior without forbidding any options to designs that are much stronger than nudges and are, in effect, efforts to enforce conformity.

The bulk of the report describes the theory, research, and application of BE in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice.  The NASEM committee reviewed current research and interventions in each domain and recommended areas for future research activity.  There is too much material to summarize so we’ll provide a single illustrative sample.

Because we have written about culture and safety practices in the healthcare industry, we will recap the report’s discussion of efforts to modify or support medical clinicians’ behavior.  Clinicians often work in busy, sometimes chaotic, settings that place multiple demands on their attention, and they must make frequent, critical decisions under time pressure.  On occasion, they provide more (or less) health care than a patient’s clinical condition warrants; they also make errors.  Research and interventions to date address present bias and limited attention by changing defaults, and invoke social norms by providing information on an individual’s performance relative to others.  An example of a default intervention is to change mandated checklists from opt-in (the response for each item must be specified) to opt-out (the most likely answer for each item is pre-loaded; the clinician can choose to change it).  An example of using social norms is to provide information on the behavior and performance of peers, e.g., in the quantity and type of prescriptions written.
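As a minimal sketch of the default idea, consider a hypothetical three-item checklist; the item names and defaults below are invented for illustration, not taken from the report.

import copy

# Opt-in: every item starts blank and must be actively completed.
opt_in_checklist = {"allergies_reviewed": None, "dvt_prophylaxis": None, "antibiotics_ordered": None}

# Opt-out: every item is pre-loaded with the most likely answer; the clinician
# changes only the items that differ for this patient.
opt_out_checklist = {"allergies_reviewed": True, "dvt_prophylaxis": True, "antibiotics_ordered": True}

def complete(checklist, clinician_answers):
    """Apply the clinician's explicit answers on top of whatever the form starts with."""
    completed = copy.deepcopy(checklist)
    completed.update(clinician_answers)
    return completed

# A rushed clinician who addresses only the one exceptional item still ends up
# with a fully answered checklist under the opt-out design...
print(complete(opt_out_checklist, {"antibiotics_ordered": False}))
# ...but leaves two items unanswered under the opt-in design.
print(complete(opt_in_checklist, {"antibiotics_ordered": False}))

The behavioral point is that the default carries most of the weight: under opt-out, inattention produces a complete (and usually appropriate) record; under opt-in, inattention produces gaps.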

Overall recommendations

The report’s recommendations are typical for this type of overview: improve the education of future policy makers, apply the key principles in public policy formulation, and fund and emphasize future research.  Such research should include better linkage of behavioral principles and insights to specific intervention and policy goals, and realize the potential for artificial intelligence and machine learning approaches to improve tailoring and targeting of interventions.

Our Perspective

We have written about decision making for years, mostly about how organizational culture (values and norms) affects decision making.  We’ve also reviewed the insights and principles highlighted in the subject report.  For example, our December 18, 2013 post on Daniel Kahneman’s work described people’s built-in decision making biases.  Our June 6, 2022 post on Thaler and Sunstein’s book Nudge discussed the application of behavioral economic principles in the design of ideal (and ethical) decision making processes.  These authors’ works are recognized as seminal in the subject report.

On the subject of ethics, the NASEM committee’s original mission included considering ethical issues related to the use of behavioral economics, but the mention of ethics in the report is not much more than a few cautionary notes.  This is thin gruel for a field that includes many public and private actors deciding what people should do instead of letting them decide for themselves.

As evidenced by the report, the application of behavioral economics is widespread and growing.  It’s easy to see its use being supercharged by artificial intelligence and machine learning.  “Behavioral economics” sounds academic and benign.  Maybe we should start calling it behavioral engineering.

Bottom line: Read this report.  You need to know about this stuff.


*  National Academies of Sciences, Engineering, and Medicine, “Behavioral Economics: Policy Impact and Future Directions,” (Washington, DC: The National Academies Press, 2023).

Monday, June 6, 2022

Guiding People to Better Decisions: Lessons from Nudge by Richard Thaler and Cass Sunstein

Safetymatters reports on organizational culture, the values and beliefs that underlie an organization’s essential activities.  One such activity is decision-making (DM) and we’ve said an organization’s DM processes should be robust and replicable.  DM must incorporate the organization’s priorities, allocate its resources, and handle the inevitable goal conflicts which arise.

In a related area, we’ve written about the biases that humans exhibit in their personal DM processes, described most notably in the work by Daniel Kahneman.*  These biases affect decisions people make, or contribute to, on behalf of their organizations, and personal decisions that only impact the decision maker himself.

Thaler and Sunstein also recognize that humans are not perfectly rational decision makers (citing Kahneman’s work, among others) and seek to help people make better decisions based on insights from behavioral science and applied economics.  Nudge** focuses on the presentation of decision situations and alternatives to decision makers on public and private sector websites.  It describes the nitty-gritty of identifying, analyzing, and manipulating decision factors, i.e., the architecture of choice. 

The authors examine the choice architecture for a specific class of decisions: where groups of people make individual choices from a set of alternatives.  Choice architecture consists of curation and navigation tools.  Curation refers to the set of alternatives presented to the decision maker.  Navigation tools are the means by which the decider moves through and acts on those alternatives; they sound neutral, but small details can have a significant effect on a decider’s behavior. 

The authors discuss many examples including choosing a healthcare or retirement plan, deciding whether or not to become an organ donor, addressing climate change, and selecting a home mortgage.  In each case, they describe different ways of presenting the decision choices, and their suggestions for an optimal approach.  Their recommendations are guided by their philosophy of “libertarian paternalism” which means decision makers should be free to choose, but should be guided to an alternative that would maximize the decider’s utility, as defined by the decision maker herself.

Nudge concentrates on which alternatives are presented to a decider and how they are presented.  Is the decision maker asked to opt-in or opt-out with respect to major decisions?  Are many alternatives presented or a subset of possibilities?  A major problem in the real world is that people can have difficulty in seeing how choices will end up affecting their lives.  What is the default if the decision maker doesn’t make a selection?  This is important: default options are powerful nudges; they can be welfare enhancing for the decider or self-serving for the organization.  Ideally, default choices should be “consistent with choices people would make if they had all the relevant information, were not subject to behavioral biases, and had the time to make a thoughtful choice.” (p. 261)

Another real world problem is that much choice architecture is bogged down with sludge, i.e., inefficiency in the choice system, including barriers, red tape, delays, opaque costs, and hidden or difficult-to-use off-ramps (e.g., finding the path to unsubscribe from a publication).

The authors show how private entities like social media companies and employers, and public ones like the DMV, present decision situations to users.  Some entities have the decider’s welfare and benefit in mind, others are more concerned with their own power and profits.  It’s no secret that markets give companies an incentive to exploit our DM frailties to increase profits.  The authors explicitly do not support the policy of “presumed consent” embedded in many choice situations where the designer has assumed a desirable answer and is trying to get more deciders to end up there. 

The authors’ view is that their work has led many governments around the world to establish “nudge” departments to identify better routes for implementing social policies.

Our Perspective

First, the authors have a construct that is totally consistent with our notion of a system.  A true teleological system includes a designer (the authors), a client (the individual deciders), and a measure of performance (utility as experienced by the decider).  Because we all agree, we’ll give them an A+ for conceptual clarity and completeness.

Second, they pull back the curtain to reveal the deliberate (or haphazard) architecture that underlies many of our on-line experiences where we are asked or required to interact with the source entities.  The authors make clear how often we are being prodded and nudged.  Even the most ostensibly benign sites can suggest what we should be doing through their selection of default choices.  (In fairness, some site operators, like one’s employer, are themselves under the gun to provide complete data to government agencies or insurance companies.  They simply can’t wait indefinitely for employees to make up their minds.)  We need to be alert to defaults that we accept without thinking and choices we make when we know what others have chosen; in both cases, we may end up with a sub-optimal choice for our particular circumstances. 

Thaler and Sunstein are respectable academics so they include lots of endnotes with references to books, journals, mainstream media, government publications, and other sources.  Sunstein was Kahneman’s co-author for Noise, which we reviewed on July 1, 2021.

Bottom line: Nudge is an easy read about how choice architects shape our everyday experiences in the on-line world where user choices exist. 

 

*  Click on the Kahneman label for all our posts related to his work.

**  R.H. Thaler and C.R. Sunstein, Nudge, final ed. (New Haven: Yale University Press, 2021).

Thursday, July 1, 2021

Making Better Decisions: Lessons from Noise by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein


The authors of Noise: A Flaw in Human Judgment* examine the random variations that occur in judgmental decisions and recommend ways to make more consistent judgments.  Variability is observed when two or more qualified decision makers review the same data or face the same situation and come to different judgments or conclusions.  (Variability can also occur when the same decision maker revisits a previous decision situation and arrives at a different judgment.)  The decision makers may be doctors making diagnoses, engineers designing structures, judges sentencing convicted criminals, or any other situation involving professional judgment.**  Judgments can vary because of two factors: bias and noise.

Bias is systematic, a consistent source of error in judgments.  It creates an observable average difference between actual judgments and theoretical judgments that would reflect a system’s actual or espoused goals and values.  Bias may be exhibited by an individual or a group, e.g., when the criminal justice system treats members of a certain race or class differently from others.

Noise is random scatter, a separate, independent cause of variability in decisions involving judgment.  It is similar to the residual error in a statistical equation, i.e., noise may have a zero average (because higher judgments are balanced by lower ones) but noise can create large variability in individual judgments.  Such inconsistency damages the credibility of the system.  Noise has three components: level, pattern, and occasion. 

Level refers to the difference in the average judgment made by different individuals, e.g., a magistrate may be tough or lenient. 

Pattern refers to the idiosyncrasies of individual judges, e.g., one magistrate may be severe with drunk drivers but easy on minor traffic offenses.  These idiosyncrasies include the internal values, principles, memories, and rules a judge brings to every case, consciously or not. 

Occasion refers to a random instability, e.g., where a fingerprint examiner looking at the same prints finds a match one day and no match on another day.  Occasion noise can be influenced by many factors including a judge’s mood, fatigue, and recent experience with other cases. 

Based on a review of the available literature and their own research, the authors suggest that noise can be a larger contributor to judgment variability than bias, with stable pattern noise larger than level noise or occasion noise.
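Here is a back-of-the-envelope simulation of how those three components can scatter professional judgments around a single true value.  This is our illustration, not the authors’ model, and all the numbers are made up.

import random

random.seed(1)

TRUE_VALUE = 100                 # the "correct" judgment for every case in this toy example
N_JUDGES, N_CASES = 50, 20

def make_judge(level, quirks):
    """One judge: true value + level noise + pattern noise + occasion noise."""
    def call(case_index):
        pattern = quirks[case_index]      # stable idiosyncrasy of this judge for this kind of case
        occasion = random.gauss(0, 2)     # mood, fatigue, recent cases, etc. on the day
        return TRUE_VALUE + level + pattern + occasion
    return call

judges = []
for _ in range(N_JUDGES):
    level = random.gauss(0, 5)            # tough vs. lenient on average
    quirks = [random.gauss(0, 8) for _ in range(N_CASES)]
    judges.append(make_judge(level, quirks))

# Every judge sees the same facts for case 0, yet the judgments scatter widely.
case_0_judgments = [judge(0) for judge in judges]
print(round(min(case_0_judgments), 1), round(max(case_0_judgments), 1))

Note that with these illustrative settings the judge-to-judge idiosyncrasies (pattern) contribute more spread than the level or occasion terms, which is the ordering the authors report finding in their data.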

Ways to reduce noise

Noise can be reduced through interventions at the individual or group level. 

For the individual, interventions include training to help people who make judgments realize how different psychological biases can influence decision making.  The long list of psychological biases in Noise builds on Kahneman’s work in Thinking, Fast and Slow which we reviewed on Dec. 18, 2013.  Such biases include overconfidence; denial of ignorance, which means not acknowledging that important relevant data isn’t known; base rate neglect, where outcomes in other similar cases are ignored; availability, which means the first solutions that come to mind are favored, with no further analysis; and anchoring of subsequent values to an initial offer.  Noise reduction techniques include active open-mindedness, which is the search for information that contradicts one’s initial hypothesis, or positing alternative interpretations of the available evidence; and the use of rankings and anchored scales rather than individual ratings based on vague, open-ended criteria.  Shared professional norms can also contribute to more consistent judgments.

At the group level, noise can be reduced through techniques the authors call decision hygiene.  The underlying belief is that obtaining multiple, independent judgments can increase accuracy, i.e., lead to an answer that is closer to the true or best answer.  For example, a complicated decision can be broken down into multiple dimensions, and each dimension assessed individually and independently.  Group members share their judgments for each dimension, then discuss them, and only then combine their findings (and their intuition) into a final decision.  Trained decision observers can be used to watch for signs that familiar biases are affecting someone’s decisions, or that group dynamics involving position, power, politics, ambition, and the like are contaminating the decision process and negating actual independence.
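One reason aggregation helps is a standard statistical result (our gloss, not unique to the book): if individual judgments scatter around the true answer with standard deviation sigma, the average of n genuinely independent judgments scatters with only

    sigma / sqrt(n)

so averaging, say, 16 independent judgments cuts the random scatter by a factor of 4.  The “genuinely independent” condition is why the authors insist that judgments be formed before, not during, the group discussion.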

Noise can also be reduced or eliminated by the use of rules, guidelines, or standards. 

Rules are inflexible, thus noiseless.  However, rules (or algorithms) may also have biases coded into them or only apply to their original data set.  They may also drive discretion underground, e.g., where decision makers game the process to obtain the results they prefer.

Guidelines, such as sentencing guidelines for convicted criminals or templates for diagnosing common health problems, are less rigid but still reduce noise.  Guidelines decompose complex decisions into easier sub-judgments on predefined dimensions.  However, judges and doctors push back against mandatory guidelines that reduce their ability to deal with the unique factors of individual cases before them.

Standards are the least rigid noise reduction technique; they delegate power to professionals and are inherently qualitative.  Standards generally require that professionals make decisions that are “reasonable” or “prudent” or “feasible.”  They are related to the shared professional norms previously mentioned.  Judgments based on standards can invite controversy, disagreement, confrontation, and lawsuits.

The authors recognize that in some areas, it is infeasible, too costly, or even undesirable to eliminate noise.  One particular fear is a noise-free system might freeze existing values.  Rules and guidelines need to be flexible to adapt to changing social values or new data.

Our Perspective

We have long promoted the view that decision making (the process) and decisions (the artifacts) are crucial components of a socio-technical system, and have a significant two-way influence relationship with the organization’s culture.  Decision making should be guided by an organization’s policies and priorities, and the process should be robust, i.e., different decision makers should arrive at acceptably similar decisions. 

Many organizations examine (and excoriate) bad decisions and the “bad apples” who made them.  Organizations also need to look at “good” decisions to appreciate how much their professionals disagree when making generally acceptable judgments.  Does the process for making judgments develop the answer best supported by the facts, and then adjust it for preferences (e.g., cost) and values (e.g., safety), or do the fingers of the judges go on the scale at earlier steps?

You may be surprised at the amount of noise in your organization’s professional judgments.  On the other hand, is your organization’s decision making too rigid in some areas?  Decisions made using rules can be quicker and cheaper than prolonged analysis, but may lead to costly errors.  Which approach has the higher cost of errors?  Operators (or nurses or whoever) may follow the rules punctiliously but sometimes the train may go off the tracks. 

Bottom line: This is an important book that provides a powerful mental model for considering the many factors that influence individual professional judgments.


*  D. Kahneman, O. Sibony, and C.R. Sunstein, Noise: A Flaw in Human Judgment (New York: Little, Brown Spark, 2021).

**  “Professional judgment” implies some uncertainty about the answer, and judges may disagree, but there is a limit on how much disagreement is tolerable.


Monday, December 14, 2020

Implications of Randomness: Lessons from Nassim Taleb

Most of us know Nassim Nicholas Taleb from his bestseller The Black Swan. However, he wrote an earlier book, Fooled by Randomness*, in which he laid out one of his seminal propositions: a lot of things in life that we believe have identifiable, deterministic causes, such as prescient decision making or exceptional skills, are actually the result of more random processes. Taleb focuses on financial markets but we believe his observations can refine our thinking about organizational decision making, mental models, and culture.

We'll begin with an example of how Taleb believes we misperceive reality. Consider a group of stockbrokers with successful 5-year track records. Most of us will assume they must be unusually skilled. However, we fail to consider how many other people started out as stockbrokers 5 years ago and fell by the wayside because of poor performance. Even if all the stockbrokers were less skilled than a simple coin flipper, some would still be successful over a 5 year period. The survivors are the result of an essentially random process and their track records mean very little going forward.
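The arithmetic behind that claim is simple; the numbers here are ours and purely illustrative.  If each broker’s yearly result were a pure coin flip,

    P(5 winning years in a row) = 0.5^5 ≈ 3.1%

so a starting cohort of 10,000 zero-skill brokers would still be expected to produce roughly 300 with unblemished 5-year records, and those survivors are the only ones we ever see.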

Taleb ascribes our failure to correctly see things (our inadequate mental models) to several biases. First is the hindsight bias where the past is always seen as deterministic and feeds our willingness to backfit theories or models to experience after it occurs. Causality can be very complex but we prefer to simplify it. Second, because of survivorship bias, we see and consider only the current survivors from an initial cohort; the losers do not show up in our assessment of the probability of success going forward. Our attribution bias tells us that successes are due to skills, and failures to randomness.

Taleb describes other factors that prevent us from being the rational thinkers postulated by classical economics or Cartesian philosophy. One set of factors arises from how our brains are hardwired and another set from the way we incorrectly process data presented to us.

The brain wiring issues include the work of Daniel Kahneman who describes how we use and rely on heuristics (mental shortcuts that we invoke automatically) to make day-to-day decisions. Thus, we make many decisions without really thinking or applying reason, and we are subject to other built-in biases, including our overconfidence in small samples and the role of emotions in driving our decisions. We reviewed Kahneman's work at length in our Dec. 18, 2013 post. Taleb notes that we also have a hard time recognizing and dealing with risk. Risk detection and risk avoidance are mediated in the emotional part of the brain, not the thinking part, so rational thinking has little to do with risk avoidance.

We also make errors when handling data in a more formal setting. For example, we ignore the mathematical truth that initial sample sizes matter greatly, much more than the sample size as a percentage of the overall population. We also ignore regression to the mean, which says that absent systemic changes, performance will eventually return to its average value. More perniciously, ignorant or unethical researchers will direct their computers to look for any significant relationship in a data set, a practice that can often produce a spurious relationship because all the individual tests have their own error rates. “Data snoops” will define some rule, then go looking for data that supports it. Why are researchers inclined to fudge their analyses? Because research with no significant result does not get published.
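A small simulation makes the data snooping point concrete.  This is purely illustrative (our sketch, not Taleb’s): run enough tests on pure noise and some of them will clear a conventional significance threshold by chance.

import random

random.seed(0)

def fake_study(n=30):
    """Compare two samples drawn from the SAME distribution and report whether
    the difference in means looks 'statistically significant' (roughly p < 0.05)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    std_err = (2 / n) ** 0.5          # approximate standard error of the difference in means
    return abs(diff) > 1.96 * std_err

spurious = sum(fake_study() for _ in range(1000))
print(spurious, "spurious 'findings' out of 1000 tests run on pure noise")   # expect roughly 50

A researcher who runs many such comparisons and reports only the “hits” will reliably publish relationships that do not exist.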

Our Perspective

We'll start with the obvious: Taleb has a large ego and is not shy about calling out people with whom he disagrees or does not respect. That said, his observations have useful implications for how we conceptualize the socio-technical systems in which we operate, i.e., our mental models, and present specific challenges for the culture of our organizations.

In our view, the three driving functions for any system's performance over time are determinism (cause and effect), choice (decision making), and probability. At heart, Taleb's world view is that the world functions more probabilistically than most people realize. A method he employs to illustrate alternative futures is Monte Carlo simulation, which we used to forecast nuclear power plant performance back in the 1990s. We wanted plant operators to see that certain low-probability events, i.e., Black Swans**, could occur in spite of the best efforts to eliminate them via plant design, improved equipment and procedures, and other means. Some unfortunate outcomes could occur because they were baked into the system from the get-go and eventually manifested. This is what Charles Perrow meant by “normal accidents” where normal system performance excursions go beyond system boundaries. For more on Perrow, see our Aug. 29, 2013 post.
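A bare-bones Monte Carlo sketch of that idea follows.  It is our illustration, far cruder than the models we actually used, and the daily upset probability is an arbitrary assumption; the point is that individually unlikely events still show up in a sizable fraction of simulated futures.

import random

random.seed(42)

P_UPSET_PER_DAY = 1 / 50000        # assumed daily probability of a serious upset (illustrative only)
DAYS = 40 * 365                    # one simulated 40-year operating history

def one_history():
    """Count serious upsets in one simulated operating history."""
    return sum(1 for _ in range(DAYS) if random.random() < P_UPSET_PER_DAY)

histories = [one_history() for _ in range(1000)]
share_with_upset = sum(1 for h in histories if h > 0) / len(histories)
print("share of simulated 40-year histories with at least one serious upset:", share_with_upset)

With these assumed numbers, roughly a quarter of the simulated futures contain an event that, on any given day, looked vanishingly unlikely.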

Of course, the probability distribution of system performance may not be stationary over time. In the most extreme case, when all system attributes change, it's called regime change. In addition, system performance may be nonlinear, where small inputs may lead to a disproportionate response, or poor performance can build slowly and suddenly cascade into failure. For some systems, no matter how specifically they are described, there will inherently be some possibility of errors, e.g., consider healthcare tests and diagnoses where both false positives and false negatives can be non-trivial occurrences.

What does this mean for organizational culture? For starters, the organization must acknowledge that many of its members are inherently somewhat irrational. It can try to force greater rationality on its members through policies, procedures, and practices, instilled by training and enforced by supervision, but there will always be leaks. A better approach would be to develop defense in depth designs, error-tolerant sub-systems with error correction capabilities, and a “just culture” that recognizes that honest mistakes will occur.

Bottom line: You should think awhile about how many aspects of your work environment have probabilistic attributes.

 

* N.N. Taleb, Fooled by Randomness, 2nd ed. (New York: Random House, 2004).

** Black swans are not always bad. For example, an actor can have one breakthrough role that leads to fame and fortune; far more actors will always be waiting tables and parking cars.

Tuesday, August 25, 2020

How to Consider Unknown Unknowns: Hints from McKinsey

Our July 31, 2020 post on medical errors discussed the importance of the “differential diagnosis” where a doctor thinks “I believe this patient has X but what else could it be?” We can usually consider that as a decision situation with known unknowns, i.e., looking for another needle in a haystack based on the available evidence. But what if you don’t know what you don’t know? How do you create other possibilities, threats or opportunities, or different futures out of thin air? A 2015 McKinsey article* provides some suggestions for getting started. There is nothing really new but it reiterates some important points we have been making here on Safetymatters.

The authors begin by noting executives’ decision making processes often coalesce around “managing the probable,” i.e., attempting to fit a current decision into a model that has worked before. The questions they ask and the data they seek tend to narrow, not expand, the decision and its context. This is an efficient way to approach many everyday decisions but excessively simple models are not appropriate for complicated decisions like how to approach a changing market or define a market that does not yet exist. All models constrain the eventual solution and simple models constrain it the most, perhaps leading to a totally wrong answer.

Decision situations that are dramatically different, complex, and uncertain require a more open-ended approach, which the authors call “leading the possible.” In such situations, decision makers should acknowledge they don’t know how uncertain environmental conditions will unfold or complex systems will evolve. The authors propose three non-traditional mental habits to identify and explore the possibilities.

Ask different questions

Ask questions that open up possibilities rather than narrowing the discussion and constraining the solution. Sample questions include: What do I expect not to find? How could I adjust to the unexpected? What might I be discounting or explaining away too quickly? What would happen if I changed one or more of my core assumptions? We would add: Is fear of missing out prodding me to move too rashly or complacency allowing me to not move at all?

As Hans Rosling said: “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (see our Dec. 3, 2018 post)

Take multiple perspectives

Decision makers, especially senior managers, need to escape the echo chamber of the sycophants who surround them. They should consider how people who are very different from themselves might view the same decision situation. They can consult people who are knowledgeable but frustrating or irritating, or outside their usual internal circle such as junior staff, or even dissatisfied customers. Such perspectives can be insightful and surprising.

Other thought leaders have suggested similar approaches. For example, Ray Dalio proposes thoughtful disagreement, where decision makers seek out brilliant people who disagree with them to gain a deeper understanding of decision situations (see our April 17, 2018 post), and Charlan Nemeth describes the usefulness of authentic dissent in decision situations (see our June 29, 2020 post).

Recognize systems

The authors’ appreciation for systems thinking mirrors what we’ve been saying for years. (For example, see our Jan. 6, 2017 post.) Decision makers should be looking at the evolution of the forest, not examining individual trees. We need to acknowledge and accept that “Elements in a system can be connected in ways that are not immediately apparent.” The widest view is the most powerful but people have “been trained to follow our natural inclination to examine the component parts. We assume a straightforward and linear connection between cause and effect. Finally, we look for root causes at the center of problems. In doing these things, we often fail to perceive the broader forces at work.”


The authors realize that leaders who can apply the new habits may have different attributes than earlier senior managers. Traditional leaders are clear, confident, and decisive. However, their preference for managing the probable leaves them more open to being blindsided. In contrast, new leaders need to exhibit “humility, a keen sense of their own limitations, an insatiable curiosity, and an orientation to learning and development.”

Our Perspective

This article promotes more expansive mental models for decision making in formal organizations, models that deemphasize reliance on reductionism and linear, cause-effect thinking. We applaud the authors’ intent.

McKinsey is pretty good at publishing small bite “news you can use” articles. However, they do not contain any of the secret sauce for which McKinsey charges its clients top dollar.

Bottom line: Some of you don’t want to read 300 page books on management so here’s an 8 page article with a few good points.


* Z. Achi and J.G. Berger, “Delighting in the Possible,” McKinsey Quarterly (March 2015).

Monday, June 29, 2020

A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth

Charlan Nemeth is a psychology professor at the University of California, Berkeley.  Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions.  This post presents highlights from her 2018 book* and provides our perspective on her views.

Going along to get along

Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong.  Why?  Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)

An organization’s leader(s) may create a culture of agreement but consensus, aka the tyranny of the majority, gives the culture its power over members.  People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion.  The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)

Truth matters less than group cohesion.  People will shape and distort reality to support the consensus—they are complicit in their own brainwashing.  They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along.  They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise.  To get along, most employees don’t speak up when they see problems. (32,42,98,198)

“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible.  The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action.  Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)

Dissent adds value to the decision making process

Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., creates more alternatives and considers facts on all sides of the issue.  Importantly, the decision making process is improved even when the dissenter is wrong because it increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180) 

Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up.  Anonymous dissent can help protect the dissenter from the group. (37,47) 

Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter.  To persuade others, the dissenter must remain consistent in his position.  He can only change because of new or changing information.  Only authentic, persistent dissent will force others to confront the possibility that they may be wrong.  At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)

Alternatives to dissent

Other, less antagonistic, approaches to improving decision making have been promoted.  Nemeth finds them lacking.

Training is the go-to solution in many organizations but is not very effective in addressing biases or getting people to speak up in the face of power and hierarchy.  Dissent is superior to training because it prompts reconsidering positions and contemplating alternatives. (101,107)

Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth.  However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated.   In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)

Demographic diversity is promoted as a way to get more input into decisions.  But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)

The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it.  Group members believe the approach is fake or acting (even when the advocate really believes it) so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong.  The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-04)

Dissent is not free for the individual or the group

Dissenters are disliked, ridiculed, punished, or worse.  Dissent definitely increases conflict and sometimes lowers morale in the group.  It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed.  The culture should expect that everyone will be treated with respect. (197-98,209)

Our Perspective

We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered.  Nemeth’s work strengthens and extends our belief in the value of different views.

If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers.  Dissent may dissuade the group from linear cause-effect, path of least resistance thinking.  We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles.  In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions.  His people have an obligation to speak up if they disagree.  Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.

On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus.  Let’s face it: nobody likes a rat.

We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making.  Training can give people skills or techniques for better decision making but training does not address the underlying values that steer group decision making dynamics. 

Much academic research of this sort is done using students as test subjects.***  They are readily available, willing to participate, and follow directions.  Some folks think the results don’t apply to older adults in formal organizations.  We disagree.  Groups of student strangers don’t carry the power and personal relationships that color work situations, so the underlying psychological mechanisms can be clearly and cleanly exposed. 

Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read. 


(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)


*  C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).

**  Kennedy learned from the Bay of Pigs fiasco.  He used a much more open and inclusive decision making process during the Cuban Missile Crisis.

***  For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.

Friday, March 8, 2019

Decision Making, Values, and Culture Change

Typical New Yorker cover
In the nuclear industry, most decisions are at least arguably “hard,” i.e., decision makers can agree on the facts and identify areas where there is risk or uncertainty.  A recent New Yorker article* on making an indisputably “soft” decision got us wondering if the methods and philosophy described in the article might provide some insight into qualitative personal decisions in the nuclear space.

Author Joshua Rothman’s interest in decision making was piqued by the impending birth of his first child.  When exactly did he decide that he wanted children (after not wanting them) and then participate with his wife to make it happen?  As he says, “If I made a decision, it wasn’t a very decisive one.”  Thus began his research into decision making methods and philosophy.

Rothman opens with a quick review of several decision making techniques.  He describes Benjamin Franklin’s “prudential algebra,” Charles Darwin’s lists of pros and cons, Leo Tolstoy’s expositions in War and Peace (where it appears the biggest decisions basically make themselves), and modern decision science processes that develop decisions through iterative activities performed by groups, scenario planning and war games. 

Eventually the author gets to decision theory, which holds that sound decisions flow from values.  Decision makers ask what they value and then seek to maximize it.  But what if “we’re unsure what we care about, or when we anticipate that what we care about might shift”?  What if we opt to change our values? 

The focus on values leads to philosophy.  Rothman draws heavily on the work of Agnes Callard, a philosopher at the University of Chicago, who believes that life-altering decisions are not made suddenly but through a more gradual process: “Old Person aspires to become New Person.”  Callard emphasizes that aspiration is different from ambition.  Ambitious people know exactly why they’re doing something, e.g., taking a class to get a good grade or modeling different behavior to satisfy regulatory scrutiny.  Aspirants, on the other hand, have a harder time because they have a less clear sense of their current activities’ value and can only hope their future selves can understand and appreciate it.  “To aspire, Callard writes, is to judge one’s present-day self by the standards of a future self who doesn’t yet exist.”

Our Perspective

We can consider the change of an organization’s culture as the integration over time of the changes in all its members’ behaviors and values.  We know that values underlie culture and significant cultural change requires shifting the actual (as opposed to the espoused) values of the organization.  This is not easy.  The organization’s more ambitious members will find it easier to get with the program; they know change is essential and are willing to adapt to keep their jobs or improve their standing.  The merely aspiring will have a harder time.  Because they lack a clear picture of the future organizational culture, they may be troubled by unexplored options, i.e., some different path or future that might be equally good or even better.  They may learn that no matter how deeply they study the experience of others, they still don’t really know what they’re getting into.  They don’t understand what the change experience will be like and how it will affect them.  They may be frustrated to discover that modeling desired new behaviors does not help because they still feel like the same people in the old culture.  Since personal change is not instantaneous, they may even get stuck somewhere between the old culture and the new culture.

Bottom line: Cultural change is harder for some people than others.  This article is an easy read that offers an introduction to the personal dynamics associated with changing one’s outlook or values.

*  J. Rothman, “The Art of Decision-Making,” The New Yorker (Jan. 21, 2019).  Retrieved March 1, 2019.

Monday, December 3, 2018

Nuclear Safety Culture: Lessons from Factfulness by Hans Rosling

This book* is about biases that prevent us from making fact-based decisions.  It is based on the author’s world-wide work as a doctor and public health researcher.  We saw it on Bill Gates’ 2018 summer reading list.

Rosling discusses ten instincts (or reasons) why our individual worldviews (or mental models) are systematically wrong and prevent us from seeing situations as they truly are and making fact-based decisions about them.

Rosling mostly addresses global issues but the same instincts can affect our approach to work-related decision making from the enterprise level down to the individual.  We briefly discuss each instinct and highlight how it may hinder us from making good decisions during everyday work and one-off investigations.

The gap instinct

This is “that irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between.” (p. 26)  This is reinforced by our “strong dramatic instinct toward binary thinking . . .” (p. 42)  The gap instinct can apply to our thinking about safety, e.g., in the Safety I mental model there is acceptable performance and intolerable performance, with no middle ground and no normal transitions back and forth.  Rosling notes that usually there is no clear cleavage between two groups, even if it seems like that from the averages.  We saw this in Dekker's analysis of health provider data (reviewed Oct. 29, 2018) where both favorable and unfavorable patient outcomes exhibited the same negative work process traits.

The negativity instinct

This is “our tendency to notice the bad more than the good.” (p. 51)  We do not perceive  improvements that are “too slow, too fragmented, or too small one-by-one to ever qualify as news.” (p. 54)  “There are three things going on here: the misremembering of the past [erroneously glorifying the “good old days”]; selective reporting by journalists and activists; and the feeling that as long as things are bad it’s heartless to say they are getting better.” (p. 70)  To tell the truth, we don’t see this instinct inside the nuclear world where facilities with long-standing cultural problems (i.e., bad) are constantly reporting progress (i.e., getting better) while their cultural conditions still remain unacceptable.

The straight line instinct

This is the expectation that a line of data will continue straight into the future.  Most of you have technical training or exposure and know that accurate extrapolations can take many shapes including straight, s-bends, asymptotes, humps or exponential growth. 

The fear instinct

“[F]ears are hardwired deep in our brains for obvious evolutionary reasons.” (p. 105)  “The media cannot resist tapping into our fear instinct. It is such an easy way to grab our attention.” (p. 106)  Rosling observes that hundreds of elderly people who fled Fukushima to escape radiation ended up dying “because of the mental and physical stresses of the evacuation itself or of life in evacuation shelters.” (p. 114)  In other words, they fled something frightening (a perceived risk) and ended up in danger (a real risk).  How often does fear, e.g., fear of bad press, enter into your organization’s decision making?

The size instinct 


We overweight things that look big to us.  “It is instinctive to look at a lonely number and misjudge its importance.  It is also instinctive . . . to misjudge the importance of a single instance or an identifiable victim.” (p. 125)  Does the nuclear industry overreact to some single instances?

The generalization instinct

“[T]he generalization instinct makes “us” think of “them” as all the same.” (p. 140)  At the macro level, this is where the bad “isms” exist: racism, sexism, ageism, classism, etc.  But your coworkers may practice generalization on a more subtle, micro level.  How many people do you work with who think the root cause of most incidents is human error?  Or, somewhat more generously, human error, inadequate procedures, and/or equipment malfunctions, but not the larger socio-technical system?  Do people jump to conclusions based on an inadequate or incorrect categorization of a problem?  Are categories, rather than facts, used as explanations?  Are vivid examples used to over-glamorize alleged progress or over-dramatize poor outcomes?

The destiny instinct

“The destiny instinct is the idea that innate characteristics determine the destinies of people, countries, religions, or cultures.” (p. 158)  Culture includes deep-seated beliefs, where feelings can be disguised as facts.  Does your work culture assume that some people are naturally bad apples?

The single perspective instinct

This is a preference for single causes and single solutions.  It is the fundamental weakness of Safety I, where the underlying attitude is that problems arise from individuals who need to be better controlled.  Rosling advises us to “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (p. 189)  We agree.

The blame instinct

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened. . . . when things go wrong, it must be because of some bad individual with bad intentions. . . . This undermines our ability to solve the problem, or prevent it from happening again, . . . To understand most of the world’s significant problems we have to look beyond a guilty individual and to the system.” (p. 192)  “Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to.  Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation.  Look for systems, not heroes.” (p. 204)  We totally agree with Rosling’s endorsement of a systems approach.

The urgency instinct

“The call to action makes you think less critically, decide more quickly, and act now.” (p. 209)  In a true emergency, people will fall back on their training (if any) and hope for the best.  However, in most situations, you should seek more information.  Beware of data that is relevant but inaccurate, or accurate but irrelevant.  Be wary of predictions that fail to acknowledge that the future is uncertain.

Our Perspective

The series of decisions an organization makes is a visible artifact of its culture, and its decision making process internalizes that culture.  Because of this linkage, we have long been interested in how organizations and individuals can make better decisions, where “better” means fact- and reality-based and consistent with the organization’s mission and espoused values.

We have reviewed many works that deal with decision making.  This book adds value because it is based on the author’s research and observations around the world; it is not based on controlled studies in a laboratory or observations in a single organization.  It uses very good graphics to illustrate various data sets, including changes, e.g., progress, over time.

Rosling believed “it has never been easier or more important for business leaders and employees to act on a fact-based worldview.” (p. 228)  His book is engagingly written and easy to read.  It is Rosling’s swan song; he died in 2017.

Bottom line: Rosling advocates for robust decision making, accurate mental models, and a systems approach.  We like it.


*  H. Rosling, O. Rosling and A.R. Rönnlund, Factfulness, 1st ed. ebook (New York: Flatiron, 2018).

Tuesday, April 17, 2018

Nuclear Safety Culture: Insights from Principles by Ray Dalio

Book cover
Ray Dalio is the billionaire founder/builder of Bridgewater Associates, an investment management firm.  Principles* catalogs his policies, practices, and lessons learned for understanding reality and making decisions for achieving goals in that reality.  The book appears to cover every possible aspect of managerial and organizational behavior.  Our plan is to focus on two topics near and dear to us—decision making and culture—for ideas that could help strengthen nuclear safety culture (NSC).  We will then briefly summarize some of Dalio’s other thoughts on management.  Key concepts are shown in italics.

Decision Making

We’ll begin with Dalio’s mental model of reality.  Reality is a system of universal cause-effect relationships that repeat and evolve like a perpetual motion machine.  The system dynamic is driven by evolution (“the single greatest force in the universe” (p. 142)) which is the process of adaptation.

Because many situations repeat themselves, principles (policies or rules) advance the goal of making decisions in a systematic, repeatable way.  Any decision situation has two major steps: learning (obtaining and synthesizing data about the current situation) and deciding what to do.  Logic, reason and common sense are the primary decision making mechanisms, supported by applicable existing principles and tools, e.g., expected value calculations or evidence-based decision making tools.  The lessons learned from each decision situation can be incorporated into existing or new principles.  Practicing the principles develops good habits, i.e., automatic, reflexive behavior in the specified situations.  Ultimately, the principles can be converted into algorithms that can be computerized and used to support the human decision makers.
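
To make the expected value idea concrete, here is a minimal sketch in Python.  It is our illustration, not something from the book; the option names, payoffs, and probabilities are entirely hypothetical.

options = {
    # option name: list of (probability, payoff in dollars) outcomes -- hypothetical numbers
    "repair_now":   [(0.9, -50_000), (0.1, -10_000)],
    "defer_repair": [(0.7, -5_000), (0.3, -500_000)],
}

def expected_value(outcomes):
    # sum of probability-weighted payoffs for one option
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(name, expected_value(outcomes))

On these made-up numbers, repairing now has the less costly expected value; in practice, of course, the calculation is only as good as the probability and payoff estimates fed into it.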

Believability weighting can be applied during the decision making process to obtain data or opinions about solutions.  Believable people can be anyone in the organization but are limited to those “who 1) have repeatedly and successfully accomplished the thing in question, and 2) . . . can logically explain the cause-effect relationships behind their conclusions.” (p. 371)  Believability weighting supplements and challenges responsible decision makers but does not overrule them.  Decision makers can also make use of thoughtful disagreement where they seek out brilliant people who disagree with them to gain a deeper understanding of decision situations.
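
As a rough illustration of how believability weighting might aggregate views, consider the following sketch.  This is our example, not Bridgewater’s actual tooling; the people, scores, and question are hypothetical.

votes = {
    # person: (believability score, answer to "is this design change safe?") -- hypothetical values
    "engineer_a": (0.9, "yes"),
    "engineer_b": (0.6, "no"),
    "manager_c":  (0.3, "yes"),
}

def weighted_tally(votes):
    # sum the believability weights behind each answer
    tally = {}
    for weight, answer in votes.values():
        tally[answer] = tally.get(answer, 0.0) + weight
    return tally

print(weighted_tally(votes))  # roughly {'yes': 1.2, 'no': 0.6}

The tally informs, but does not replace, the responsible decision maker, consistent with Dalio’s point that believability weighting supplements rather than overrules them.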

The organization needs a process to get beyond disagreement.  After all discussion, the responsible party exercises his/her decision making authority.  Ultimately, those who disagree have to get on board (“get in sync”) and support the decision or leave the organization.

The two biggest barriers to good decision making are ego and blind spots.  Radical open-mindedness recognizes that the search for what’s true and the best answer is more important than the need for any specific person, no matter their position in the organization, to be right.

Culture

Organizations and the individuals who populate them should also be viewed as machines.  Both are imperfect but capable of improving. The organization is a machine made up of culture and people that produces outcomes that provide feedback from which learning can occur.  Mistakes are natural but it is unacceptable to not learn from them.  Every problem is an opportunity to improve the machine.  

People are generally imperfect machines.  People are more emotional than logical.   They suffer from ego (subconscious drivers of thoughts) and blind spots (failure to see weaknesses in themselves).  They have different character attributes.  In short, people are all “wired” differently.  A strong culture with clear principles is needed to get and keep everyone in sync with each other and in pursuit of the organization’s goals.

Mutual adjustment takes place when people interact with culture.  Because people are different and the potential to change their wiring is low,** it is imperative to select new employees who will embrace the existing culture.  If they can’t or won’t, or lack ability, they have to go.  Even with its stringent hiring practices, about a third of Bridgewater’s new hires are gone within eighteen months.

Human relations are built on meaningful relationships, radical truth and tough love.  Meaningful relationships means people give more consideration to others than to themselves and exhibit genuine caring for each other.  Radical truth means you are “transparent with your thoughts and open-mindedly accepting the feedback of others.” (p. 268)  Tough love recognizes that criticism is essential for improvement towards excellence; everyone in the organization is free to criticize any other member, no matter their position in the hierarchy.  People have an obligation to speak up if they disagree.

“Great cultures bring problems and disagreements to the surface and solve them well . . .” (p. 299)  The culture should support a five-step management approach: Have clear goals, don’t tolerate problems, diagnose problems when they occur, design plans to correct the problems, and do what’s necessary to implement the plans, even if the decisions are unpopular.  The culture strives for excellence so it’s intolerant of folks who aren’t excellent and goal achievement is more important than pleasing others in the organization.

More on Management

Dalio’s vision for Bridgewater is “an idea meritocracy in which meaningful work and meaningful relationships are the goals and radical truth and radical transparency are the ways of achieving them . . .” (p. 539)  An idea meritocracy is “a system that brings together smart, independent thinkers and has them productively disagree to come up with the best possible thinking and resolve their disagreements in a believability-weighted way . . .” (p. 308)  Radical truth means “not filtering one’s thoughts and one’s questions, especially the critical ones.” (ibid.)  Radical transparency means “giving most everyone the ability to see most everything.” (ibid.)

A person is a machine operating within a machine.  One must be one’s own machine designer and manager.  In managing people and oneself, take advantage of strengths and compensate for weaknesses via guardrails and soliciting help from others.  An example of a guardrail is assigning a team member whose strengths balance another member’s weaknesses.  People must learn from their own bad decisions so self-reflection after making a mistake is essential.  Managers must ascertain whether mistakes are evidence of a weakness and whether compensatory action is required or, if the weakness is intolerable, termination.  Because values, abilities, and skills are the drivers of behavior, management should have a full profile for each employee.

Governance is the system of checks and balances in an organization.  No one is above the system, including the founder-owner.  In other words, senior managers like Dalio can be subject to the same criticism as any other employee.

Leadership in the traditional sense (“I say, you do”) is not so important in an idea meritocracy because the optimal decisions arise from a group process.  Managers are seen as decision makers, system designers and shapers who can visualize a better future and then build it.  Leaders “must be willing to recruit individuals who are willing to do the work that success requires.” (p. 520)

Our Perspective

We recognize international investment management is way different from nuclear power management, so some of Dalio’s principles can only be applied to the nuclear industry in a limited way, if at all.  One obvious example of a lack of fit is risk management.  The investing environment is extremely competitive, with players evolving rapidly and searching for any edge.  Timely bets (investments) must be made under conditions where the risk of failure is many orders of magnitude greater than what is acceptable in the nuclear industry.  Other examples include the relentless, somewhat ruthless pursuit of goals and a willingness to jettison people, both of which are foreign to the utility world.

But we shouldn’t throw the baby out with the bathwater.  While Dalio’s approach may be too extreme for wholesale application in your environment, it does provide a comparison (note we don’t say “standard”) for your organization’s performance.  Does your decision making process measure up to Dalio’s in terms of robustness, transparency, and the pursuit of truth?  Does your culture really strive for excellence (and eliminate those who don’t share that vision) or is it an effort constrained by hierarchical, policy, or political realities?

This is a long book but it’s easy to read and key points are repeated often.  Not all of it is novel; many of the principles are based on observations or techniques that have been around for a while and should be familiar to you.  For example, ideas about how human minds work are drawn, in part, from Daniel Kahneman; an integrated hierarchy of goals looks like Management by Objectives; and a culture that doesn’t automatically punish people for making mistakes or tolerable errors sounds like a “just culture,” albeit with some mandatory individual learning attached.

Bottom line: Give this book a quick look.  It can’t hurt and might help you get a clearer picture of how your own organization actually operates.



*  R. Dalio, Principles (New York: Simon & Schuster, 2017).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  A person’s basic values and abilities are relatively fixed, although skills may be improved through training.