
Thursday, May 25, 2023

The National Academies on Behavioral Economics

A National Academies of Sciences, Engineering, and Medicine (NASEM) committee recently published a report* on the contributions of behavioral economics (BE) to public policy.  BE is “an approach to understanding human behavior and decision making that integrates knowledge from psychology and other behavioral fields with economic analysis.” (p. Summ-1)

The report’s first section summarizes the history and development of the field of behavioral economics.  Classical economics envisions the individual person as a decision maker who has all relevant information available, and makes rational decisions that maximize his overall, i.e. short- and long-term, self-interest.  In contrast, BE recognizes that actual people making real decisions have many built-in biases, limitations, and constraints.  The following five principles apply to the decision making processes behavioral economists study:

Limited Attention and Cognition - The extent to which people pay limited attention to relevant aspects of their environment and often make cognitive errors.

Inaccurate Beliefs - Individuals can have incorrect perceptions or information about situations, relevant incentives, their own abilities, and the beliefs of others.

Present Bias - People tend to disproportionately focus on issues that are in front of them in the present moment.

Reference Dependence and Framing - Individuals tend to consider how their decision options relate to a particular reference point, e.g., the status quo, rather than considering all available possibilities. People are also sensitive to the way decision problems are framed, i.e., how options are presented, and this affects what comes to their attention and can lead to different perceptions, reactions, and choices.

Social Preferences and Social Norms - Decision makers often consider how their decisions affect others, how they compare with others, and how their decisions imply values and conformance with social norms.

The task of policy makers is to acknowledge these limitations and present decision situations in ways people can comprehend, helping them make decisions that serve their own and society's interests.  In practice this means decision situations "can be designed to modify the habitual and unconscious ways that people act and make decisions." (p. Summ-3)

Decision situation designers use various interventions to inform and guide individuals’ decision making.  The NASEM committee mapped 23 possible interventions against the 5 principles.  It’s impractical to list all the interventions here but the more graspable ones include:

Defaults – The starting decision option is the designer’s preferred choice; the decision maker must actively choose a different option.

De-biasing – Attempt to correct inaccurate beliefs by presenting salient information related to past performance of the individual decision maker or a relevant reference group.

Mental Models – Update or change the decision maker’s mental representation of how the world works.

Reminders – Use reminders to cut through inattention, highlight desired behavior, and focus the decision maker on a future goal or desired state.

Framing – Focus the decision maker on a specific reference point, e.g., a default option or the negative consequences of inaction (not choosing any option).

Social Comparison and Feedback – Explicitly compare an individual's performance with a relevant comparison or reference group, e.g., the individual's professional peers.

Interventions can range from “nudges” that alter people’s behavior without forbidding any options to designs that are much stronger than nudges and are, in effect, efforts to enforce conformity.

The bulk of the report describes the theory, research, and application of BE in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice.  The NASEM committee reviewed current research and interventions in each domain and recommended areas for future research activity.  There is too much material to summarize so we’ll provide a single illustrative sample.

Because we have written about culture and safety practices in the healthcare industry, we will recap the report’s discussion of efforts to modify or support medical clinicians’ behavior.  Clinicians often work in busy, sometimes chaotic, settings that place multiple demands on their attention and must make frequent, critical decisions under time pressure.  On occasion, they provide more (or less) health care than a patient’s clinical condition warrants; they also make errors.  Research and interventions to date address present bias and limited attention by changing defaults, and invoke social norms by providing information on an individual’s performance relative to others.  An example of a default intervention is to change mandated checklists from opt-in (the response for each item must be specified) to opt-out (the most likely answer for each item is pre-loaded; the clinician can choose to change it).  An example of using social norms is to provide information on the behavior and performance of peers, e.g., in the quantity and type of prescriptions written.

Overall recommendations

The report’s recommendations are typical for this type of overview: improve the education of future policy makers, apply the key principles in public policy formulation, and fund and emphasize future research.  Such research should include better linkage of behavioral principles and insights to specific intervention and policy goals, and realize the potential for artificial intelligence and machine learning approaches to improve tailoring and targeting of interventions.

Our Perspective

We have written about decision making for years, mostly about how organizational culture (values and norms) affects decision making.  We've also reviewed the insights and principles highlighted in the subject report.  For example, our December 18, 2013 post on Daniel Kahneman's work described people's built-in decision making biases.  Our June 6, 2022 post on Thaler and Sunstein's book Nudge discussed the application of behavioral economic principles in the design of ideal (and ethical) decision making processes.  These authors' works are recognized as seminal in the subject report.

On the subject of ethics, the NASEM committee's original mission included considering ethical issues related to the use of behavioral economics, but the report's treatment of ethics amounts to little more than a few cautionary notes.  This is thin gruel for a field that includes many public and private actors deciding what people should do instead of letting them decide for themselves.

As evidenced by the report, the application of behavioral economics is widespread and growing.  It’s easy to see its use being supercharged by artificial intelligence and machine learning.  “Behavioral economics” sounds academic and benign.  Maybe we should start calling it behavioral engineering.

Bottom line: Read this report.  You need to know about this stuff.


*  National Academies of Sciences, Engineering, and Medicine, “Behavioral Economics: Policy Impact and Future Directions,” (Washington, DC: The National Academies Press, 2023).

Monday, June 6, 2022

Guiding People to Better Decisions: Lessons from Nudge by Richard Thaler and Cass Sunstein

Safetymatters reports on organizational culture, the values and beliefs that underlie an organization’s essential activities.  One such activity is decision-making (DM) and we’ve said an organization’s DM processes should be robust and replicable.  DM must incorporate the organization’s priorities, allocate its resources, and handle the inevitable goal conflicts which arise.

In a related area, we've written about the biases that humans exhibit in their personal DM processes, described most notably in the work by Daniel Kahneman.*  These biases affect the decisions people make, or contribute to, on behalf of their organizations, as well as personal decisions that affect only the decision maker.

Thaler and Sunstein also recognize that humans are not perfectly rational decision makers (citing Kahneman’s work, among others) and seek to help people make better decisions based on insights from behavioral science and applied economics.  Nudge** focuses on the presentation of decision situations and alternatives to decision makers on public and private sector websites.  It describes the nitty-gritty of identifying, analyzing, and manipulating decision factors, i.e., the architecture of choice. 

The authors examine the choice architecture for a specific class of decisions: where groups of people make individual choices from a set of alternatives.  Choice architecture consists of curation and navigation tools.  Curation refers to the set of alternatives presented to the decision maker.  Navigation tools sound neutral but small details can have a significant effect on a decider’s behavior. 

The authors discuss many examples including choosing a healthcare or retirement plan, deciding whether or not to become an organ donor, addressing climate change, and selecting a home mortgage.  In each case, they describe different ways of presenting the decision choices, and their suggestions for an optimal approach.  Their recommendations are guided by their philosophy of “libertarian paternalism” which means decision makers should be free to choose, but should be guided to an alternative that would maximize the decider’s utility, as defined by the decision maker herself.

Nudge concentrates on which alternatives are presented to a decider and how they are presented.  Is the decision maker asked to opt-in or opt-out with respect to major decisions?  Are many alternatives presented or a subset of possibilities?  A major problem in the real world is that people can have difficulty in seeing how choices will end up affecting their lives.  What is the default if the decision maker doesn't make a selection?  This is important: default options are powerful nudges; they can be welfare enhancing for the decider or self-serving for the organization.  Ideally, default choices should be "consistent with choices people would make if they had all the relevant information, were not subject to behavioral biases, and had the time to make a thoughtful choice." (p. 261)

Another real world problem is that much choice architecture is bogged down with sludge, the inefficiency in the choice system: barriers, red tape, delays, opaque costs, and hidden or difficult-to-use off-ramps (e.g., finding the path to unsubscribe from a publication).

The authors show how private entities like social media companies and employers, and public ones like the DMV, present decision situations to users.  Some entities have the decider’s welfare and benefit in mind, others are more concerned with their own power and profits.  It’s no secret that markets give companies an incentive to exploit our DM frailties to increase profits.  The authors explicitly do not support the policy of “presumed consent” embedded in many choice situations where the designer has assumed a desirable answer and is trying to get more deciders to end up there. 

In the authors' view, their work has led many governments around the world to establish "nudge" departments to identify better routes for implementing social policies.

Our Perspective

First, the authors have a construct that is totally consistent with our notion of a system.  A true teleological system includes a designer (the authors), a client (the individual deciders), and a measure of performance (utility as experienced by the decider).  Because we all agree, we’ll give them an A+ for conceptual clarity and completeness.

Second, they pull back the curtain to reveal the deliberate (or haphazard) architecture that underlies many of our on-line experiences where we are asked or required to interact with the source entities.  The authors make clear how often we are being prodded and nudged.  Even the most ostensibly benign sites can suggest what we should be doing through their selection of default choices.  (In fairness, some site operators, like one’s employer, are themselves under the gun to provide complete data to government agencies or insurance companies.  They simply can’t wait indefinitely for employees to make up their minds.)  We need to be alert to defaults that we accept without thinking and choices we make when we know what others have chosen; in both cases, we may end up with a sub-optimal choice for our particular circumstances. 

Thaler and Sunstein are respectable academics so they include lots of endnotes with references to books, journals, mainstream media, government publications, and other sources.  Sunstein was Kahneman’s co-author for Noise, which we reviewed on July 1, 2021.

Bottom line: Nudge is an easy read about how choice architects shape our everyday experiences in the on-line world where user choices exist. 

 

*  Click on the Kahneman label for all our posts related to his work.

**  R.H. Thaler and C.R. Sunstein, Nudge, final ed. (New Haven: Yale University Press) 2021.

Friday, December 10, 2021

Prepping for Threats: Lessons from Risk: A User's Guide by Gen. Stanley McChrystal

Gen. McChrystal was a U.S. commander in Afghanistan; you may remember he was fired by President Obama for making, and allowing subordinates to make, disparaging comments about then-Vice President Biden.  However, McChrystal was widely respected as a soldier and leader, and his recent book* on strengthening an organization’s “risk immune system” caught our attention.  This post summarizes its key points, focusing on items relevant to formal civilian organizations.

McChrystal describes a system that can detect, assess, respond to, and learn from risks.**  His mental model consists of two major components: (1) ten Risk Control Factors, interrelated dimensions for dealing with risks and (2) eleven Solutions, strategies that can be used to identify and address weaknesses in the different factors.  His overall objective is to create a resilient organization that can successfully respond to challenges and threats. 

Risk Control Factors

These are things under the control of an organization and its leadership, including physical assets, processes, practices, policies, and culture.

Communication – The organization must have the physical ability and willingness to exchange clear, complete, and intelligible information, and identify and deal with propaganda or misinformation.

Narrative – An articulated organizational purpose and mission.  It describes Who we are, What we do, and Why we do it.  The narrative drives (and we’d say is informed by) values, beliefs, and action.

Structure – Organizational design defines decision spaces and communication networks, implies power (both actual and perceived authority), suggests responsibilities, and influences culture.

Technology – This is both the hardware/software and how the organization applies it.  It includes an awareness of how much authority is being transferred to machines, our level of dependence on them, our vulnerability to interruptions, and the unintended consequences of new technologies.

Diversity – Leaders must actively leverage different perspectives and abilities, inoculate the organization against groupthink, i.e., norms of consensus, and encourage productive conflict and a norm of skepticism.  (See our June 29, 2020 post on A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth.)

Bias – Biases are assumptions about the world that affect our outlook and decision making, and cause us to ignore or discount many risks.  In McChrystal's view "[B]ias is an invisible hand driven by self-interest."  (See our July 1, 2021 and Dec. 18, 2013 posts on Daniel Kahneman's work on identifying and handling biases.)

Action – Leaders have to proactively overcome organizational inertia, i.e., a bias against starting something new or changing course.  Inertia manifests in organizational norms that favor the status quo and tolerate internal resistance to change.

Timing – Getting the “when” of action right.  Leaders have to initiate action at the right time with the right speed to yield optimum impact.

Adaptability – Organizations have to respond to changing risks and environments.  Leaders need to develop their organization’s willingness and ability to change.

Leadership – Leaders have to direct and inspire the overall system, and stimulate and coordinate the other Risk Control Factors.  Leaders must communicate the vision and personify the narrative.  In practice, they need to focus on asking the right questions and sensing the context of a given situation, embracing the new before necessity is evident. (See our Nov. 9, 2018 post for an example of effective leadership.)

Solutions

The Solutions are strategies or methods to identify weaknesses in and strengthen the risk control factors.  In McChrystal’s view, each Solution is particularly applicable to certain factors, as shown in Table 1.

Assumptions check – Assessment of the reasonableness and relative importance of assumptions that underlie decisions.  It’s the qualitative and quantitative analyses of strengths and weaknesses of supporting arguments, modified by the judgment of thoughtful people.

Risk review – Assessment of when hazards may arrive and the adequacy of the organization’s preparations.

Risk alignment check – Leaders should recognize that different perspectives on risks exist and should be considered in the overall response.

Gap analysis – Identify the space between current actions and desired goals.

Snap assessment – Short-term, limited scope analyses of immediate hazards.  What’s happening?  How well are we responding?

Communications check – Ensure processes and physical systems are in place and working.

Tabletop exercise – A limited duration simulation that tests specific aspects of the organization’s risk response.

War game (functional exercise) – A pressure test in real time to show how the organization comprehensively reacts to a competitor’s action or unforeseen event.

Red teaming – Exercises involving third parties to identify organizational vulnerabilities and blind spots.

Pre-mortem – A discussion focusing on the things most likely to go wrong during the execution of a plan.

After-action review – A self-assessment that identifies things that went well and areas for improvement.


 


Table 1 (created by Safetymatters) maps each Solution to the Risk Control Factors it most directly addresses.

 

Our Perspective

McChrystal did not invent any of his Risk Control Factors and we have discussed many of these topics over the years.***  His value-add is organizing them as a system and recognizing their interrelatedness.  The entire system has to perform to identify, prepare for, and respond to risks, i.e., threats that can jeopardize the organization’s mission success.

This review emphasizes McChrystal’s overall risk management model.  The book also includes many examples of risks confronted, ignored, or misunderstood in the military, government, and commercial arenas.  Some, like Blockbuster’s failure to acquire Netflix when it had the opportunity, had poor outcomes; others, like the Cuban missile crisis or Apollo 13, worked out better.

The book appears aimed at senior leaders but all managers from department heads on up can benefit from thinking more systematically about how their organizations respond to threats from, or changes in, the external environment. 

There are hundreds of endnotes to document the text but the references are more Psychology Today than the primary sources we favor.

Bottom line: This is an easy-to-read example of the "management cookbook" genre.  It has a lot of familiar information in one place.

 

*  S. McChrystal and A. Butrico, Risk: A User’s Guide (New York: Portfolio) 2021.  Butrico is McChrystal’s speechwriter.

**  Risk to McChrystal is a combination of a threat and one’s vulnerability to the threat.  Threats are usually external to the organization while vulnerabilities exist because of internal aspects.

***  For example, click on the Management or Decision Making labels to pull up posts in related areas.

Thursday, July 1, 2021

Making Better Decisions: Lessons from Noise by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein


The authors of Noise: A Flaw in Human Judgment* examine the random variations that occur in judgmental decisions and recommend ways to make more consistent judgments.  Variability is observed when two or more qualified decision makers review the same data or face the same situation and come to different judgments or conclusions.  (Variability can also occur when the same decision maker revisits a previous decision situation and arrives at a different judgment.)  The decision makers may be doctors making diagnoses, engineers designing structures, judges sentencing convicted criminals, or any other situation involving professional judgment.**  Judgments can vary because of two factors: bias and noise.

Bias is systematic, a consistent source of error in judgments.  It creates an observable average difference between actual judgments and theoretical judgments that would reflect a system’s actual or espoused goals and values.  Bias may be exhibited by an individual or a group, e.g., when the criminal justice system treats members of a certain race or class differently from others.

Noise is random scatter, a separate, independent cause of variability in decisions involving judgment.  It is similar to the residual error in a statistical equation, i.e., noise may have a zero average (because higher judgments are balanced by lower ones) but noise can create large variability in individual judgments.  Such inconsistency damages the credibility of the system.  Noise has three components: level, pattern, and occasion. 

Level refers to the difference in the average judgment made by different individuals, e.g., a magistrate may be tough or lenient. 

Pattern refers to the idiosyncrasies of individual judges, e.g., one magistrate may be severe with drunk drivers but easy on minor traffic offenses.  These idiosyncrasies include the internal values, principles, memories, and rules a judge brings to every case, consciously or not. 

Occasion refers to a random instability, e.g., where a fingerprint examiner looking at the same prints finds a match one day and no match on another day.  Occasion noise can be influenced by many factors including a judge’s mood, fatigue, and recent experience with other cases. 

Based on a review of the available literature and their own research, the authors suggest that noise can be a larger contributor to judgment variability than bias, with stable pattern noise larger than level noise or occasion noise.
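
To make the level/pattern/occasion distinction concrete, here is a minimal simulation sketch (not from the book; the magnitudes are invented purely for illustration) in which each simulated judge adds a personal level offset, a stable judge-by-case pattern, and day-to-day occasion noise to the “true” sentence for each case:

    import numpy as np

    rng = np.random.default_rng(0)
    n_judges, n_cases = 50, 200

    true_sentence = rng.normal(24, 6, n_cases)        # hypothetical "correct" sentence per case, in months
    level = rng.normal(0, 4, n_judges)                # each judge's overall severity or leniency
    pattern = rng.normal(0, 3, (n_judges, n_cases))   # stable judge-by-case idiosyncrasies
    occasion = rng.normal(0, 2, (n_judges, n_cases))  # random day-to-day instability

    judgments = true_sentence + level[:, None] + pattern + occasion

    # The scatter of judgments around each case's true value decomposes into the three components
    noise = judgments - true_sentence
    print(f"total noise variance: {noise.var():.1f}")
    print(f"level: {level.var():.1f}  pattern: {pattern.var():.1f}  occasion: {occasion.var():.1f}")

With these made-up standard deviations, the total noise variance is roughly the sum of the three component variances, which is the sense in which the relative contributions of level, pattern, and occasion noise can be compared.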

Ways to reduce noise

Noise can be reduced through interventions at the individual or group level. 

For the individual, interventions include training to help people who make judgments realize how different psychological biases can influence decision making.  The long list of psychological biases in Noise builds on Kahneman’s work in Thinking, Fast and Slow which we reviewed on Dec. 18, 2013.  Such biases include overconfidence; denial of ignorance, which means not acknowledging that important relevant data isn’t known; base rate neglect, where outcomes in other similar cases are ignored; availability, which means the first solutions that come to mind are favored, with no further analysis; and anchoring of subsequent values to an initial offer.  Noise reduction techniques include active open-mindedness, which is the search for information that contradicts one’s initial hypothesis, or positing alternative interpretations of the available evidence; and the use of rankings and anchored scales rather than individual ratings based on vague, open-ended criteria.  Shared professional norms can also contribute to more consistent judgments.

At the group level, noise can be reduced through techniques the authors call decision hygiene.  The underlying belief is that obtaining multiple, independent judgments can increase accuracy, i.e., lead to an answer that is closer to the true or best answer.  For example, a complicated decision can be broken down into multiple dimensions, and each dimension assessed individually and independently.  Group members share their judgments for each dimension, then discuss them, and only then combine their findings (and their intuition) into a final decision.  Trained decision observers can be used to watch for signs that familiar biases are affecting someone's decisions, or that group dynamics involving position, power, politics, ambition, and the like are contaminating the decision process and negating actual independence.
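
The claim that pooling independent judgments increases accuracy is just the statistics of averaging.  A small sketch (again illustrative, with arbitrary numbers) shows the error of a group’s average judgment shrinking as more truly independent judgments are combined:

    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 100.0
    n_trials = 10_000

    for group_size in (1, 4, 16):
        # Each judgment = truth + independent noise (sd 20); no shared bias is modeled here
        judgments = true_value + rng.normal(0, 20, (n_trials, group_size))
        error = np.abs(judgments.mean(axis=1) - true_value).mean()
        print(f"{group_size:>2} independent judgments -> mean absolute error {error:.1f}")

Averaging helps with noise but does nothing for a bias shared by every judge, which is one reason the authors treat bias and noise as separate problems.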

Noise can also be reduced or eliminated by the use of rules, guidelines, or standards. 

Rules are inflexible, thus noiseless.  However, rules (or algorithms) may also have biases coded into them or only apply to their original data set.  They may also drive discretion underground, e.g., where decision makers game the process to obtain the results they prefer.

Guidelines, such as sentencing guidelines for convicted criminals or templates for diagnosing common health problems, are less rigid but still reduce noise.  Guidelines decompose complex decisions into easier sub-judgments on predefined dimensions.  However, judges and doctors push back against mandatory guidelines that reduce their ability to deal with the unique factors of individual cases before them.

Standards are the least rigid noise reduction technique; they delegate power to professionals and are inherently qualitative.  Standards generally require that professionals make decisions that are “reasonable” or “prudent” or “feasible.”  They are related to the shared professional norms previously mentioned.  Judgments based on standards can invite controversy, disagreement, confrontation, and lawsuits.

The authors recognize that in some areas, it is infeasible, too costly, or even undesirable to eliminate noise.  One particular fear is a noise-free system might freeze existing values.  Rules and guidelines need to be flexible to adapt to changing social values or new data.

Our Perspective

We have long promoted the view that decision making (the process) and decisions (the artifacts) are crucial components of a socio-technical system, and have a significant two-way influence relationship with the organization’s culture.  Decision making should be guided by an organization’s policies and priorities, and the process should be robust, i.e., different decision makers should arrive at acceptably similar decisions. 

Many organizations examine (and excoriate) bad decisions and the “bad apples” who made them.  Organizations also need to look at “good” decisions to appreciate how much their professionals disagree when making generally acceptable judgments.  Does the process for making judgments develop the answer best supported by the facts, and then adjust it for preferences (e.g., cost) and values (e.g., safety), or do the fingers of the judges go on the scale at earlier steps?

You may be surprised at the amount of noise in your organization's professional judgments.  On the other hand, is your organization's decision making too rigid in some areas?  Decisions made using rules can be quicker and cheaper than prolonged analysis, but may lead to costly errors.  Which approach has the higher cost of errors?  Operators (or nurses or whoever) may follow the rules punctiliously but sometimes the train may go off the tracks.

Bottom line: This is an important book that provides a powerful mental model for considering the many factors that influence individual professional judgments.


*  D. Kahneman, O. Sibony, and C.R. Sunstein, Noise: A Flaw in Human Judgment (New York: Little, Brown Spark) 2021.

**  “Professional judgment” implies some uncertainty about the answer, and judges may disagree, but there is a limit on how much disagreement is tolerable.


Monday, December 14, 2020

Implications of Randomness: Lessons from Nassim Taleb

Most of us know Nassim Nicholas Taleb from his bestseller The Black Swan. However, he wrote an earlier book, Fooled by Randomness*, in which he laid out one of his seminal propositions: a lot of things in life that we believe have identifiable, deterministic causes, such as prescient decision making or exceptional skills, are actually the result of more random processes. Taleb focuses on financial markets but we believe his observations can refine our thinking about organizational decision making, mental models, and culture.

We'll begin with an example of how Taleb believes we misperceive reality. Consider a group of stockbrokers with successful 5-year track records. Most of us will assume they must be unusually skilled. However, we fail to consider how many other people started out as stockbrokers 5 years ago and fell by the wayside because of poor performance. Even if all the stockbrokers were less skilled than a simple coin flipper, some would still be successful over a 5 year period. The survivors are the result of an essentially random process and their track records mean very little going forward.
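
Taleb’s point is easy to reproduce.  The sketch below (ours, with arbitrary numbers) gives 10,000 zero-skill “brokers” a 50/50 coin flip each year and counts how many beat the market five years running purely by chance:

    import numpy as np

    rng = np.random.default_rng(42)
    n_brokers, n_years = 10_000, 5

    # Each broker's annual result is a pure coin flip: 1 = beat the market, 0 = underperform
    results = rng.integers(0, 2, size=(n_brokers, n_years))
    survivors = (results.sum(axis=1) == n_years).sum()

    print(f"{survivors} of {n_brokers} skill-free brokers beat the market 5 years in a row")
    # Expect about 10,000 * 0.5**5 = roughly 312 such "stars" from chance alone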

Taleb ascribes our failure to correctly see things (our inadequate mental models) to several biases. First is the hindsight bias where the past is always seen as deterministic and feeds our willingness to backfit theories or models to experience after it occurs. Causality can be very complex but we prefer to simplify it. Second, because of survivorship bias, we see and consider only the current survivors from an initial cohort; the losers do not show up in our assessment of the probability of success going forward. Our attribution bias tells us that successes are due to skills, and failures to randomness.

Taleb describes other factors that prevent us from being the rational thinkers postulated by classical economics or Cartesian philosophy. One set of factors arises from how our brains are hardwired and another set from the way we incorrectly process data presented to us.

The brain wiring issues include the work of Daniel Kahneman who describes how we use and rely on heuristics (mental shortcuts that we invoke automatically) to make day-to-day decisions. Thus, we make many decisions without really thinking or applying reason, and we are subject to other built-in biases, including our overconfidence in small samples and the role of emotions in driving our decisions. We reviewed Kahneman's work at length in our Dec. 18, 2013 post. Taleb notes that we also have a hard time recognizing and dealing with risk. Risk detection and risk avoidance are mediated in the emotional part of the brain, not the thinking part, so rational thinking has little to do with risk avoidance.

We also make errors when handling data in a more formal setting. For example, we ignore the mathematical truth that initial sample sizes matter greatly, much more than the sample size as a percentage of the overall population. We also ignore regression to the mean, which says that absent systemic changes, performance will eventually return to its average value. More perniciously, ignorant or unethical researchers will direct their computers to look for any significant relationship in a data set, a practice that can often produce a spurious relationship because all the individual tests have their own error rates. “Data snoops” will define some rule, then go looking for data that supports it. Why are researchers inclined to fudge their analyses? Because research with no significant result does not get published.
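
The data snooping problem is easy to demonstrate: test enough unrelated variables against the same outcome and some will look “significant” from the error rate alone.  A minimal sketch, using purely random data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_obs, n_predictors = 100, 200

    outcome = rng.normal(size=n_obs)                     # pure noise, related to nothing
    predictors = rng.normal(size=(n_predictors, n_obs))  # 200 equally meaningless series

    p_values = [stats.pearsonr(x, outcome)[1] for x in predictors]
    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of {n_predictors} random predictors are 'significant' at p < 0.05")
    # Roughly 5% (about 10) false discoveries are expected even though nothing is related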

Our Perspective

We'll start with the obvious: Taleb has a large ego and is not shy about calling out people with whom he disagrees or does not respect. That said, his observations have useful implications for how we conceptualize the socio-technical systems in which we operate, i.e., our mental models, and present specific challenges for the culture of our organizations.

In our view, the three driving functions for any system's performance over time are determinism (cause and effect), choice (decision making), and probability. At heart, Taleb's world view is that the world functions more probabilistically than most people realize. A method he employs to illustrate alternative futures is Monte Carlo simulation, which we used to forecast nuclear power plant performance back in the 1990s. We wanted plant operators to see that certain low-probability events, i.e., Black Swans**, could occur in spite of the best efforts to eliminate them via plant design, improved equipment and procedures, and other means. Some unfortunate outcomes could occur because they were baked into the system from the get-go and eventually manifested. This is what Charles Perrow meant by “normal accidents” where normal system performance excursions go beyond system boundaries. For more on Perrow, see our Aug. 29, 2013 post.
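
Monte Carlo simulation is conceptually simple: run a probabilistic model of the system many times and look at the spread of outcomes.  The toy sketch below (with an invented annual event probability, not the 1990s plant model) shows how a low-probability event still turns up in a meaningful fraction of simulated operating lifetimes:

    import numpy as np

    rng = np.random.default_rng(3)
    n_runs, n_years = 100_000, 40      # 100,000 simulated 40-year operating histories
    p_event_per_year = 1e-3            # invented annual probability of a serious upset

    # For each simulated lifetime, did the rare event occur at least once?
    events = rng.random((n_runs, n_years)) < p_event_per_year
    hit_rate = events.any(axis=1).mean()

    print(f"fraction of simulated lifetimes with at least one event: {hit_rate:.3f}")
    print(f"analytic value: {1 - (1 - p_event_per_year)**n_years:.3f}")   # about 0.039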

Of course, the probability distribution of system performance may not be stationary over time. In the most extreme case, when all system attributes change, it's called regime change. In addition, system performance may be nonlinear, where small inputs may lead to a disproportionate response, or poor performance can build slowly and suddenly cascade into failure. For some systems, no matter how specifically they are described, there will inherently be some possibility of errors, e.g., consider healthcare tests and diagnoses where both false positives and false negatives can be non-trivial occurrences.

What does this mean for organizational culture? For starters, the organization must acknowledge that many of its members are inherently somewhat irrational. It can try to force greater rationality on its members through policies, procedures, and practices, instilled by training and enforced by supervision, but there will always be leaks. A better approach would be to develop defense in depth designs, error-tolerant sub-systems with error correction capabilities, and a “just culture” that recognizes that honest mistakes will occur.

Bottom line: You should think awhile about how many aspects of your work environment have probabilistic attributes.

 

* N.N. Taleb, Fooled by Randomness, 2nd ed. (New York: Random House) 2004.

** Black swans are not always bad. For example, an actor can have one breakthrough role that leads to fame and fortune; far more actors will always be waiting tables and parking cars.

Friday, July 31, 2020

Culture in Healthcare: Lessons from When We Do Harm by Danielle Ofri, MD

In her book*, Dr. Ofri takes a hard look at the prevalence of medical errors in the healthcare system.  She reports some familiar statistics** and fixes, but also includes highly detailed case studies where errors large and small cascaded over time and the patients died.  This post summarizes her main observations.  She does not provide a tight summary of a less error-prone healthcare culture but she drops enough crumbs that we can infer its desirable attributes.

Healthcare is provided by a system

The system includes the providers, the supporting infrastructure, and factors in the external environment.  Ofri observes that medical care is exceedingly complicated and some errors are inevitable.  Because errors are inevitable, the system should emphasize error recognition and faster recovery with a goal of harm reduction.

She shares our view that the system permits errors to occur so fixes should focus on the system and not on the individual who made an error.***  System failures will eventually trap the most conscientious provider.  She opines that most medical errors are the result of a cascade of actions that compound one another; we would say the system is tightly coupled.

System “improvements” intended to increase efficiency can actually reduce effectiveness.  For example, electronic medical records can end up dictating providers’ practices, fragmenting thoughts and interfering with the flow of information between doctor and patient.****  Data field defaults and copy and paste shortcuts can create new kinds of errors.  Diagnosis codes driven by insurance company billing requirements can distort the diagnostic process.  In short, patient care becomes subservient to documentation.

Other changes can have unforeseen consequences.  For example, scheduling fewer working hours for interns leads to fewer diagnostic and medication errors but also results in more patient handoffs (where half of adverse medical events are rooted).

Aviation-inspired checklists have limited applicability

Checklists have reduced error rates for certain procedures but can lead to unintended consequences, e.g., mindless check-off of the items (to achieve 100% completion in the limited time available) and provider focus on the checklist while ignoring other things that are going on, including emergent issues.

Ofri thinks the parallels between healthcare and aviation are limited because of the complexity of human physiology.  While checklists may be helpful for procedures, doctors ascribe limited value to process checklists that guide their thinking.

Malpractice suits do not meaningfully reduce the medical error rate

Doctors fear malpractice suits so they practice defensive medicine, prescribing extra tests and treatments which have their own risks of injury and false positives, and lead to extra cost.  Medical equipment manufacturers also fear lawsuits so they design machines that sound alarms for all matters great and small; alarms are so numerous they are often simply ignored by the staff.

Hospital management culture is concerned about protecting the hospital’s financial interests against threats, including lawsuits.  A Cone of Silence is dropped over anything that could be considered an error and no information is released to the public, including family members of the injured or dead patient.  As a consequence, it is estimated that fewer than 10% of medical errors ever come to light.  There is no national incident reporting system because of the resistance of providers, hospitals, and trial lawyers.

The reality is a malpractice suit is not practical in the vast majority of cases of possible medical error.  The bar is very high: your doctor must have provided sub-standard care that caused your injury/death and resulted in quantifiable damages.  Cases are very expensive and time-consuming to prepare and the legal system, like the medical system, is guided by money so an acceptable risk-reward ratio has to be there for the lawyers.***** 

Desirable cultural attributes for reducing medical errors

In Ofri’s view, culture includes hierarchy, communications skill, training traditions, work ethic, egos, socialization, and professional ideals.  The primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done amid a diffusion of responsibility.  This must be taught by example and individuals must demand comparable behavior from their colleagues.

Providing medical care is a team business

Effective collaboration among team members is key, as is the ability (or duty even) of lower-status members to point out problems and errors without fear of retribution.  Leaders must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true.  Where practical, training should be performed in groups who actually work together to build communication skills.

Doctors and nurses need time and space to think

Doctors need the time to develop a differential diagnosis, to ask and answer “What else could it be?”  The provider’s thought process is the source of most diagnostic error, and it is subject to explicit and implicit biases, emotions, and distraction.  However, stopping to think can cause delays which can be reported as shortcomings by the tracking system.  The culture must acknowledge uncertainty (fueled by false positives and negatives), address overconfidence, and promote feedback, especially from patients.

Errors and near misses need to be reported without liability or shame

The culture should regard reporting an adverse event as a routine and ordinary task.  This is a big lift for people steeped in the hierarchy of healthcare and the impunity of its highest ranked members.  Another factor to be overcome is the reluctance of doctors to report errors because of their feelings of personal and professional shame.

Ofri speaks favorably of a “just culture” that recognizes that unintentional error is possible, but risky behavior like taking shortcuts requires (system) intervention, and negligence should be disciplined.  In addition, there should not be any bias in how penalties are handed out, e.g., based on status.

In sum, Ofri says healthcare will always be an imperfect system.  Ultimately, what patients want is acknowledgement of errors and apology for them from doctors.

Our Perspective

Ofri’s major contribution is her review of the evidence showing how pervasive medical errors are and how the healthcare industry works overtime to deny and avoid responsibility for them.

Her suggestions for a safer healthcare culture echo what we’ve been saying for years about the attributes of a strong safety culture.  Reducing the error rates will be hard for many reasons.  For example, Ofri observes medical training forges a lifelong personal identity and reverence for tradition; in our view, it also builds in resistance to change.  The biases in decision making that she mentions are not trivial.  For one discussion of such biases, see our Dec. 18, 2013 review of Daniel Kahneman’s work.

Bottom line: After you read this, you will be clutching your rosary a little tighter if you have to go to a hospital for a major injury or illness.  You are more responsible for your own care than you think.


*  D. Ofri, When We Do Harm (Boston: Beacon Press, 2020).

**  For example, a study reporting that almost 4% of hospitalizations resulted in medical injury, of which 14% were fatal, and doctors’ diagnostic accuracy is estimated to be in the range of 90%.

***  It has been suggested that the term “error” be replaced with “adverse medical event” to reduce the implicit focus on individuals.

****  Ofri believes genuine conversation with a patient is the doctor’s single most important diagnostic tool.

***** As an example of the power of money, when Medicare started fining hospitals for shortcomings, the hospitals started cleaning up their problems.