Tuesday, November 12, 2024

The Failures Lurking Beneath Your Successes


Sidney Dekker recently published an essay* in which he argues that industrial accidents don’t just pop out of nowhere; rather, they may have been hiding all along “in the green,” which we take to mean behind acceptable (“green”) safety observations and indicators, and successful process outcomes. 

He begins by summarizing the current state of research on industrial accidents and their context in different organizations’ management and culture.  Looking at incidents with fatalities, he observes that the evidence shows that both failure-free performance AND a persistent string of minor failures can precede accidents.  His inference is that accidents arise from the same pool of activities that produce an organization’s successes. 

He goes on to describe two different types of organizations: one that is basically dishonest, where the absence of reported accidents is caused by active strategies to keep incidents from being reported, and a second where there is a genuine lack of incidents over a considerable period.  Both types are “in the green” but both are headed for an eventual fall.    

Active suppression of bad news

This model starts with significant differences between work as designed and work as it is actually performed.  Workers may comply with rules and procedures when under observation but revert to shortcuts to meet production goals when no one is looking.  The disconnect between the ideal and reality is amplified when the bosses don’t want to hear bad news, e.g., reports about non-conformances or minor incidents.  The organization has no interest in analyzing the overall system for connections and interactions that can cause process accidents.  Safety targets are apparently being met, which misleads top management into thinking safety is under control.

A string of operational success

This organization is actually running accident-free.  However, the ultimate failures lie in the processes that produce success.  These processes inevitably produce other consequences that can evolve into accidents or even threaten the organization’s existence.  Management is confident success will continue indefinitely because the string of successes is visible while the marginal erosion in the system and the increasing accident potential are invisible.

Recommended actions

In all organizations, top managers need to understand and question their key processes.  In their interactions with middle managers and workers, bosses need to ask: What sacrifices are being made to continue our string of successes?  Are middle managers keeping bad news from flowing upward?  What safety tradeoffs are being made to increase efficiency?  Are we at the top sending a signal downward that our quest for perfect safety performance (e.g., a Zero Lost Days program) has no room for honesty about problems and bad news?  Do we encourage diverse, divergent, and dissenting voices and listen for safety-related signals that we would otherwise miss?  Active questioning can raise sensitivity to signs of organizational brittleness and an increasing exposure to the risk of unfavorable outcomes.

Our perspective

Dekker is fairly well-known in the safety culture space and we have reviewed his work on several occasions.  (Click on the Dekker label for earlier posts.)  That said, there is nothing really new here.  Several years ago, Dekker was involved with research that showed accidents derive from the same processes as success.**  The notion that the seeds of failure are sown in a string of successes, for both safety-conscious organizations and their production-at-all-costs counterparts, is based on an important principle in Safety II thinking: normal system functioning leads to mostly good and occasionally bad results.

Dekker says the mental model for this dates back to Max Weber; we say it goes back at least to G.W.F. Hegel and his concept of the dialectic, with its thesis (operational success), growing antithesis (costs and externalities), and their fusion into a synthesis (a new model, or extinction).  For example, top management may deny that goal conflict (safety vs. production vs. cost) exists, or paper over it with doubletalk, but eventually the true priorities will become evident.  To see this in action, check out Boeing’s current travails.

One attribute he doesn’t mention is the lengths to which organizations will go to “prove” that their current system design – policies, procedures, practices, resources, etc. – is satisfactory and that it is individual workers who cause problems and accidents.  They simply need more training, more supervision, and/or more discipline.  And some are “bad apples” who need to be separated.  This is the Safety I mental model in practice.

This essay does not break new ground and you only need to read it if you are unfamiliar with Safety I or II type thinking.  However, it does remind us that future accidents are likely hiding in the activities top managers currently regard as successful in perpetuity – or at least for the number of quarters they will be receiving their incentive bonuses based on the organization’s financial performance.  

  

*  S. Dekker, “Safety Theater: Where your accidents hide in the green,” Oct. 26, 2024.

**  We reviewed that work on Oct. 29, 2018.

Thursday, August 22, 2024

Nuclear Regulator and Licensee Safety Culture Interrelationships


The Nuclear Energy Agency (NEA) has published an interesting report* on its research into how nuclear regulators and nuclear power plant operators influence each other’s safety culture (SC).  This is a topic that has not received much attention in the past.  We will summarize the report and then provide our perspective on it.


To begin, the report’s authors are aware of the power imbalance between the regulator and the licensee.  They recognize the regulator’s actions will be more directive and authoritative than the licensee’s.  In addition, one of the regulator’s goals is (or should be) to reinforce the licensee’s accountability for safety and strengthen its SC.

That said, there is still the potential for influence to flow in both directions.  The desired end state is “a reciprocal, co-operative style of interaction, characterised by respect, openness and trust, with a shared focus on safety and learning.” (p. 9)  

How the regulator influences licensee SC

The regulator exhibits certain characteristics (called “enablers”) that create the framework for interactions with the licensee.  The enablers are the regulatory regime (including structure, policies, rules, processes, and practices), the regulator’s technical capability, and its leadership and management.  The interactions with the licensee include communications, organizational and personal relationships, and the regulator’s behavior.  The key points here are that the regulatory regime should be predictable and consistent and the regulator’s behavior must exhibit safety as a core value.

An important aspect of the regulator-licensee relationship is that while the regulator may pull many different levers to influence the licensee’s SC, it must maintain sufficient distance from the licensee to preserve the confidence of the public and politicians in the regulator’s independence.  The regulator needs a “Goldilocks” approach: it must avoid an overly prescriptive role that demonstrates toughness but can diminish the licensee’s sense of accountability for safety, while also not appearing to be a victim of regulatory capture.

Another characteristic of the regulator is its own feedback loop, i.e., its ability to learn and improve its performance based on experience.  If it’s successful, then the regulator’s influence on the licensee, including its SC, may increase; if the regulator can’t or won’t learn and adapt, then its influence on the licensee may actually decrease over time.  

How the licensee influences the regulator’s SC

The licensee also has enablers and interactions, which appear as a mirror image of the regulator’s.  The licensee’s enablers are likewise the regulatory regime and the licensee’s own technical capability, leadership, and management.  A big difference is that the licensee is more of a “taker” of the regulatory regime, although it may have significant, meaningful input into the development of regulatory policies and rules.

The licensee’s interactions with the regulator also include communications, organizational and personal relationships, and the licensee’s behavior.  The licensee mainly influences the regulator through its actions – the way it communicates and reacts to the regulator’s inquiries, requests, orders, and other actions.  The extent to which those actions exhibit a strong SC (e.g., a questioning attitude, conservative decision making, and a commitment to safety) will most likely positively affect the regulator’s responses and attitudes.

The licensee also has the ability (or inability) to learn and improve its performance based on experience.  Self-assessments are a major way the licensee demonstrates to the regulator its commitment to continuous improvement.

One overarching objective of the licensee is to get the regulator to affirm the licensee’s commitment to safety.  This is vital for establishing and maintaining positive relationships with the licensee’s stakeholders, i.e., the customers, ratepayers, politicians, and investors (if any) in the external environment.  “The licensee benefits from the public being assured of the independent scrutiny applied by the regulatory body.” (p. 28)

Our perspective

While the regulator-licensee mutual influence relationship may not have been studied much, the process of mutual adaptation where one entity simultaneously adapts to and causes changes in another entity is well-known in many fields including business and the social sciences.

There is no surprise that the regulator seeks to influence/strengthen a licensee’s SC; it’s part of the regulator’s job.  What’s interesting in this report is the cataloging of the various types of interactions that may result in such influence.

The licensee’s influence on the regulator is more informal, i.e., it’s not backed by any actual authority.  On the other hand, if the licensee is more technically competent and squared away than the regulator, it would be difficult to ignore such a role model.  

Going to the heart of the matter, how much can/does the licensee affect the regulator’s SC?  Through its interactions, including constructive criticism, the licensee can affect the regulatory regime – the direction and content of policies and rules, and the relative harshness of the regulator’s oversight practices – but does that really move the needle on the regulator’s SC?  You be the judge.

At best, the regulator and the licensee serve as positive role models for one another.  

We applaud the authors for repeatedly asking readers to think holistically and systematically about the regulator-licensee relationship and the socio-legal-political environment in which that relationship exists.  They recognize that the larger system has many stakeholders, with competing as well as common interests.  We have been proponents of systems thinking since the inception of Safetymatters.

Bottom line: This is a good case study of an under-examined social phenomenon.  It also has descriptions of desirable SC characteristics sprinkled throughout the text.


*  Nuclear Energy Agency, “The Mutual Impact of Nuclear Regulatory Bodies and License Holders from a Safety Culture Perspective,” OECD Publishing, Paris (2024).  The NEA is a part of the Organization for Economic Cooperation and Development (OECD).

Tuesday, April 2, 2024

Systems Engineering’s Role in Addressing Society’s Problems

Guru Madhavan, a National Academy of Engineering senior scholar, has a new book about how engineering can contribute to solving society’s most complex and intractable problems.  He published a related article* on the National Academies website.  The author describes four different types of problems, i.e., decision situations.  Importantly, he advocates a systems engineering** perspective for addressing each type.  We will summarize his approach and provide our perspective on it.

He begins with a metaphor of clocks and clouds.  Clocks operate on logical principles and underlie much of our physical world.  Clouds form and reform; no two are alike; they defy logic; only their momentary appearance is real – a metaphor for many of our complex social problems.
 
Hard problems

Hard problems can be essentially bounded.  The systems engineer can identify components, interrelationships, processes, desired outcomes, and measures of performance.  The system can be optimized by applying mathematics, scientific knowledge, and experience.  The system designers’ underlying belief is that a best outcome exists and is achievable.  In our view, this is a world of clocks.

Soft problems

Soft problems arise in the field of human behavior, which is complicated by political and psychological factors.  Because goals may be unclear, and constraints complicate system design, soft problems cannot be solved like hard problems.

Soft problems involve technology, psychology, and sociology and resolving them may yield an outcome that’s not the best (optimal) but good enough.  Results are based on satisficing, an approach that satisfies and suffices.  We’d say clouds are forming overhead.
 
Messy problems

Messy problems emerge from divisions created by people’s differing value sets, belief systems, ideologies, and convictions.  An example would be trying to stop the spread of a pathogen while respecting a culture’s traditional burial practices.  In these situations, the system designer must try to transform the nature of the entity and/or its environment by dissolving the problem into manageable elements and moving them toward a desired state in which the problem no longer arises.  In the example above, this might mean creating dignified burial rituals and promoting safe public health practices.

Wicked problems

The cloudiest problems are the “wicked” ones.  A wicked problem emerges when hard, soft, and messy problems exist simultaneously.  This means optimal solutions, satisficing resolutions, and dissolution may also co-exist.  A comprehensive model of a wicked problem might show solution(s) within a resolution, and a dissolution might contain resolutions and solutions.  As a consequence, engineers need to possess “competency—and consciousness— . . . to develop a balanced blend of hard solutions, soft resolutions, and messy dissolutions to wicked problems.”

Our perspective

People form their mental models of the world based on their education, training, and lived experiences.  These mental models are representations of how the world works.  They are usually less than totally accurate because of people’s cognitive limitations and built-in biases.

We have long argued that technocrats who traditionally manage and operate complicated industrial facilities, e.g., nuclear power plants, have inadequate mental models, i.e., they are clock people.  Their models are limited to cause-effect thinking; their focus is on fixing the obvious hard problems in front of them.  As a result, their fixes are limited: change a procedure or component design, train harder, supervise more closely, and apply discipline, including getting rid of the bad apples, as necessary.  Rinse and repeat.

In contrast, we assert that problem solving must recognize the existence of complex socio-technical systems.  Fixes need to address both physical issues and psychological and social concerns.  Analysts must consider relationships between hard and soft system components.  Problem solvers need to be cloud people.  

Proper systems thinking understands that problems seldom exist in isolation.  They are surrounded by a task environment that may contain conflicting goals (e.g., production vs. safety) and a solution space limited by company policies, resource limitations, and organizational politics.  The external legal-political environment can also influence goals and further constrain the solution space.

Madhavan has provided some good illustrations of mental models for problem solving, starting with the (relatively) easiest “hard” physical problems and moving through more complicated models to the realm of wicked problems that may, in some cases, be effectively unsolvable.

Bottom line: this is a good refresher for people who are already systems thinkers and a good introduction for people who aren’t.


*  G. Madhavan, “Engineering Our Wicked Problems,” National Academy of Engineering Perspectives (March 6, 2024).  Online only.

**  In Madhavan’s view, systems engineering considers all facets of a problem, recognizes sensitivities, shapes synergies, and accounts for side effects.

Saturday, March 2, 2024

Boeing’s Safety Culture Under the FAA’s Microscope

The Federal Aviation Administration (FAA) recently released its report* on the safety culture (SC) at Boeing.  The FAA Expert Panel was tasked with reviewing SC after two crashes involving the latest models of Boeing’s 737 MAX airplanes.  The January 2024 door plug blowout happened as the report was nearing completion and reinforces the report’s findings.

737 MAX door plug

The report has been summarized and widely reported in mainstream media and we will not review all its findings and recommendations here.  We want to focus on two parts of the report that address topics we have long promoted as being keys to understanding how strong (or weak) an organization’s SC is, viz., an organization’s decision-making processes and executive compensation.  In addition, we will discuss a topic that’s new to us, how to ensure the independence of employees whose work includes assessing company work products from the regulator’s perspective.

Decision-making

An organization’s decision-making processes create some of the most visible artifacts of the organization’s culture: a string of decisions (guided by policies, procedures, and priorities) and their consequences.

The report begins with a clear FAA description of decision-making’s important role in a Safety Management System (SMS) and an organization’s overall management.  In part, an “SMS is all about decision-making. Thus it has to be a decision-maker's tool, not a traditional safety program separate and distinct from business and operational decision making.” (p. 10)

However, the panel’s finding on Boeing’s SMS is a mixed bag.  “Boeing provided evidence that it is using its SMS to evaluate product safety decisions and some business decisions. The Expert Panel’s review of Boeing’s SMS documentation revealed detailed procedures on how to use SMS to evaluate product safety decisions, but there are no detailed procedures on how to determine which business decisions affect safety or how they should be evaluated under SMS.” (emphasis added) (p. 35)

The associated recommendation is “Develop detailed procedures to determine which business activities should be evaluated under SMS and how to evaluate those decisions.” (ibid.)  We think the recommendation addresses the specific problem identified in the finding.

One of the major inputs to a decision-making system is an organization’s priorities.  The FAA says safety should always be the top priority but Boeing’s commitment to safety has arguably weakened over time.

“Boeing provided the Expert Panel with a copy of the Boeing Safety Management System Policy, dated April 2022, which states, in part, “… we make safety our top priority.” Boeing revised this policy in August 2023 with . . .  a change to the message “we make safety our top priority” to “safety is our foundation.”” (p. 29)

Lowering the bar did not help.  “The [Expert] panel observed documentation, survey responses, and employee interviews that did not provide objective evidence of a foundational commitment to safety that matched Boeing’s descriptions of that objective.” (p. 22)

Boeing also sowed seeds of confusion for its safety decision makers.  Boeing implemented its SMS to operate alongside (and not replace or integrate with) its existing safety program.

“During interviews, Boeing employees highlighted that SMS implementation was not to disrupt existing safety program or systems.  SMS operating procedure documents spoke of SMS as the overarching safety program but then also provided segregation of SMS-focused activities from legacy safety activities . . .” (p. 24)

Executive compensation

We have long said that if safety performance is important to an organization, then its senior managers’ compensation should have a safety performance-related component. 

Boeing has included safety in its executive financial incentive program.  Safety is one of five factors comprising operational performance which, in turn, is combined with financial performance to determine company-level performance.  Because of the weights used in the incentive model, “The Product Safety measure comprised approximately 4% of the overall 2022 Annual Incentive Award.” (p. 28)

Is 4% enough to influence executive behavior?  You be the judge.
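For readers who want to see how a nominal “top priority” can end up carrying so little weight, here is a minimal sketch of a nested incentive calculation.  The report gives only the structure (safety is one of five operational factors, and operational performance is combined with financial performance) and the ~4% result; the factor names and weights below are hypothetical, chosen solely to reproduce that arithmetic.

```python
# Hypothetical sketch of a nested incentive weighting scheme.
# The FAA report gives the structure and the ~4% outcome; the factor
# names and weights below are assumptions for illustration only.

operational_factors = {        # five factors assumed to be equally weighted
    "product_safety": 0.20,
    "quality": 0.20,
    "delivery": 0.20,
    "productivity": 0.20,
    "employee_safety": 0.20,
}

operational_share = 0.20       # assumed share of the company-level score
                               # (the remainder reflects financial performance)

# Effective weight of product safety in the overall award.
product_safety_weight = operational_factors["product_safety"] * operational_share
print(f"Product safety weight: {product_safety_weight:.0%}")   # -> 4%
```

Under these assumed weights, a factor nominally labeled a top priority moves only a few percent of the bonus calculation, which is the point the Expert Panel’s 4% figure makes.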

Employee independence from undue management influence   

Boeing’s relationship with the FAA has an aspect that we don’t see in other industries. 

Boeing holds an Organization Designation Authorization (ODA) from the FAA. This allows Boeing to “make findings and issue certificates, i.e., perform discretionary functions in engineering, manufacturing, operations, airworthiness, or maintenance on behalf of the [FAA] Administrator.” (p. 12)

Basically, the FAA delegates some of its authority to Boeing employees, the ODA Unit Members (UMs), who then perform certain assessment and certification tasks.  “When acting as a representative of the Administrator, an individual is required to perform in a manner consistent with the policies, guidelines, and directives of the FAA. When performing a delegated function, an individual is legally distinct from, and must act independent of, the ODA holder.” (ibid.)  These employees are supposed to take the FAA’s view of situations and apply the FAA’s rules even if the FAA’s interests are in conflict with Boeing’s business interests. 

This might work in a perfect world, but in Boeing’s world it has had, and still has, problems, primarily: “Boeing’s restructuring of the management of the ODA unit decreased opportunities for interference and retaliation against UMs, and provides effective organizational messaging regarding independence of UMs. However, the restructuring, while better, still allows opportunities for retaliation to occur, particularly with regards to salary and furlough ranking.” (emphasis added) (p. 5)  In addition, “The ability to comply with the ODA’s approved procedures is present; however, the integration of the SMS processes, procedures, and data collection requirements has not been accomplished.” (p. 26)

To an outsider, this looks like bad organizational design and practices. 

The U.S. commercial nuclear industry offers a useful contrast.  The regulator (the Nuclear Regulatory Commission, or NRC) expects its licensees to follow established procedures, perform required tests and inspections, and report any problems to the NRC.  Self-reporting is key to an effective relationship built on a base of trust.  However, it’s “trust but verify.”  The NRC has its own full-time employees in all the power plants, performing inspections, monitoring licensee operations, and interacting with licensee personnel.  The inspectors’ findings can lead, and have led, to increased NRC oversight of licensee activities.

Our perspective

It’s obvious that Boeing has emphasized production over safety.  The problems described above are evidence of broad systemic issues which are not amenable to quick fixes.  Integrating SC into everyday decision-making is hard work of the “continuous improvement” variety; it will not happen by management fiat.  Adjusting the compensation plan will require the Board to take safety more seriously.  Reworking the ODA program to eliminate all pressures and goal conflicts may not be possible; this is a big problem because the FAA has effectively deputized 1,000 people to perform FAA functions at Boeing. (p. 25)

The report only covers the most visible SC issues.  Complacency, normalization of deviation, the multitude of biases that can affect decision-making, and other corrosive factors are perennial threats to a strong SC and can affect “the natural drift in organizations.” (p. 40)  Such drift may lead to everything from process inefficiencies to tragic safety failures.

Boeing has taken one step: they fired the head of the 737 MAX program.**  Organizations often toss a high-level executive into a volcano to appease the regulatory gods and buy some time.  Boeing’s next challenge is that the FAA has given Boeing 90 days to fix its quality problems highlighted by the door plug blowout.***

Bottom line: Grab your popcorn, the show is just starting.  Boeing is probably too big to fail but it is definitely going to be pulled through the wringer. 


*  “Section 103 Organization Designation Authorizations (ODA) for Transport Airplanes Expert Panel Review Report,” Federal Aviation Administration (Feb. 26, 2024). 

**  N. Robertson, “Boeing fires head of 737 Max program,” The Hill (Feb. 21, 2024).

***  D. Shepardson and V. Insinna, “FAA gives Boeing 90 days to develop plan to address quality issues,” Reuters (Feb. 28, 2024).

Friday, October 6, 2023

A Straightforward Recipe for Changing Culture

Center for Open Science
Source: COS website


We recently came across a clear, easily communicated road map for implementing cultural change.*  We’ll provide some background information on the author’s motivation for developing the road map, a summary of it, and our perspective on it.

The author, Brian Nosek, is executive director of the Center for Open Science (COS).  The mission of COS is to increase the openness, integrity, and reproducibility of scientific research.  Specifically, they propose that researchers publish the initial description of their studies so that original plans can be compared with actual results.  In addition, researchers should “share the materials, protocols, and data that they produced in the research so that others could confirm, challenge, extend, or reuse the work.”  Overall, the COS proposes a major change from how much research is presently conducted.

Currently, a lot of research is done in private, i.e., more or less in secret, usually with the objective of getting results published, preferably in a prestigious journal.  Frequent publishing is fundamental to getting and keeping a job, being promoted, and obtaining future funding for more research, in other words, having a successful career.  Researchers know that publishers generally prefer findings that are novel, positive (e.g., a treatment is effective), and tidy (the evidence fits together).

Getting from the present to the future requires a significant change in the culture of scientific research.  Nosek describes the steps to implement such change using a pyramid, shown below, as his visual model.  Similar to Abraham Maslow’s Hierarchy of Needs, a higher level of the pyramid can only be achieved if the lower levels are adequately satisfied.


Source: "Strategy for Culture Change"

Each level represents a different step for changing a culture:

•    Infrastructure refers to an open source database where researchers can register their projects, share their data, and show their work.
•    The User Interface of the infrastructure must be easy to use and compatible with researchers' existing workflows.
•    New research Communities will be built around new norms (e.g., openness and sharing) and behavior, supported and publicized by the infrastructure.
•    Incentives refer to redesigned reward and recognition systems (e.g., research funding and prizes, and institutional hiring and promotion schemes) that motivate desired behaviors.
•    Public and private Policy changes codify and normalize the new system, i.e., specify the new requirements for conducting research.
     
Our Perspective

As long-time consultants to senior managers, we applaud Nosek’s change model.  It is straightforward and adequately complete, and can be easily visualized.  We used to spend a lot of time distilling complicated situations into simple graphics that communicated strategically important points.

We also totally support his call to change the reward system to motivate the new, desirable behaviors.  We have been promoting this viewpoint for years with respect to safety culture: If an organization or other entity values safety and wants safe activities and outcomes, then they should compensate the senior leadership accordingly, i.e., pay for safety performance, and stop promoting the nonsense that safety is intrinsic to the entity’s functioning and leaders should provide it basically for free.

All that said, implementing major cultural change is not as simple as Nosek makes it sound.

First off, the status quo can have enormous sticking power.  Nosek acknowledges it is defined by strong norms, incentives, and policies.  Participants know the rules and how the system works; in particular, they know what they must do to obtain the rewards and recognition.  Open research is anathema to many researchers and their sponsors; this is especially true when a project is aimed at creating some kind of competitive advantage for the researcher or the institution.  Secrecy is also valued when researchers may (or do) come up with the “wrong answer” – findings that show a product is not effective or has dangerous side effects, or that an entire industry’s functioning is hazardous for society.

Second, the research industry exists in a larger environment of social, political and legal factors.  Many elected officials, corporate and non-profit bosses, and other thought leaders may say they want and value a world of open research but in private, and in their actions, believe they are better served (and supported) by the existing regime.  The legal system in particular is set up to reinforce the current way of doing business, e.g., through patents.

Finally, systemic change means fiddling with the system dynamics, the physical and information flows, inter-component interfaces, and feedback loops that create system outcomes.  To the extent such outcomes are emergent properties, they are created by the functioning of the system itself and cannot be predicted by examining or adjusting separate system components.  Large-scale system change can be a minefield of unexpected or unintended consequences.

Bottom line: A clear model for change is essential but system redesigners need to tread carefully.  


*  B. Nosek, “Strategy for Culture Change,” blog post (June 11, 2019).

Friday, August 4, 2023

Real Systems Pursue Goals

System Model Control Panel
On March 10, 2023 we posted about a medical journal editorial that advocated for incorporating more systems thinking in hospital emergency rooms’ (ERs) diagnostic processes.  Consistent with Safetymatters’ core beliefs, we approved of using systems thinking in complicated decision situations such as those arising in the ER. 

The article prompted a letter to the editor in which the author said the approach described in the original editorial wasn’t a true systems approach because it wasn’t specifically goal-oriented.  We agree with that author’s viewpoint.  We often argue for more systems thinking and describe mental models of systems with components, dynamic relationships among the components, feedback loops, control functions such as rules and culture, and decision maker inputs.  What we haven’t emphasized as much, probably because we tend to take it for granted, is that a bona fide system is teleological, i.e., designed to achieve a goal. 

It’s important to understand what a system’s goal is.  This may be challenging because the system’s goal may contain multiple sub-goals.  For example, a medical clinician may order a certain test.  The lab has a goal: to produce accurate, timely, and reliable results for tests that have been ordered.  But the clinician’s goal is different: to develop a correct diagnosis of a patient’s condition.  The goal of the hospital of which the clinician and lab are components may be something else: to produce generally acceptable patient outcomes, at reasonable cost, without incurring undue legal problems or regulatory oversight.  System components (the clinician and the lab) may have goals which are hopefully supportive of, or at least consistent with, overall system goals.

The top-level system, e.g., a healthcare provider, may not have a single goal; it may have multiple, independent goals that can conflict with one another.  Achieving the best quality may conflict with keeping costs within budgets.  Achieving perfect safety may conflict with the need to make operational decisions under time pressure and with imperfect or incomplete information.  One of the most important responsibilities of top management is defining how the system recognizes and deals with goal conflict.

In addition to goals, we need to discuss two other characteristics of full-fledged systems: a measure of performance and a defined client.* 

The measure of performance shows the system designers, users, managers, and overseers how well the system’s goal(s) are being achieved through the functioning of system components as affected by the system’s decision makers.  Like goals, the measure of performance may have multiple dimensions or sub-measures.  In a well-designed system, the summation of the set of sub-measures should be sufficient to describe overall system performance.  
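To keep these characteristics straight, here is a minimal sketch of a system description that carries a goal, a client, and a measure of performance built as a weighted sum of sub-measures.  The lab’s goal is taken from the example above; the client assignment, field names, sub-measures, and weights are ours, offered only to illustrate the structure, not as anything from Churchman or the letter writer.

```python
from dataclasses import dataclass, field

@dataclass
class SystemDescription:
    """Minimal description of a purposeful system: its goal, its client,
    and a measure of performance composed of weighted sub-measures."""
    goal: str
    client: str
    sub_measure_weights: dict[str, float] = field(default_factory=dict)

    def overall_performance(self, scores: dict[str, float]) -> float:
        """Weighted sum of sub-measure scores (each assumed to be in [0, 1]);
        together the sub-measures should suffice to describe the system."""
        return sum(weight * scores.get(name, 0.0)
                   for name, weight in self.sub_measure_weights.items())

# Hypothetical example: the hospital lab discussed above.
lab = SystemDescription(
    goal="Produce accurate, timely, reliable results for ordered tests",
    client="Ordering clinician",
    sub_measure_weights={"accuracy": 0.5, "turnaround_time": 0.3, "reliability": 0.2},
)
print(lab.overall_performance({"accuracy": 0.95, "turnaround_time": 0.80, "reliability": 0.90}))
```

The design point is that the goal, the client, and the measure are declared explicitly; if any of the three is missing or fuzzy, you don’t really have a full-fledged system, just a collection of components.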

The client is the entity whose interests are served by the system.  Identifying the client can be tricky.  Consider a city’s system for serving its unhoused population.  The basic system consists of a public agency to oversee the services, entities (often nongovernmental organizations, or NGOs) that provide the services, suppliers (e.g., landlords who offer buildings for use as housing), and the unhoused population.  Who is the client of this system, i.e., who benefits from its functioning?  The politicians, running for re-election, who authorize and sustain the public agency?  The public agency bureaucrats angling for bigger budgets and more staff?  The NGOs who are looking for increased funding?  Landlords who want rent increases?  Or the unhoused who may be looking for a private room with a lockable door, or may be resistant to accepting any services because of their mental, behavioral, or social problems?  It’s easy to see that many system participants do better, i.e., get more pie, if the “homeless problem” is never fully resolved.

For another example, look at the average public school district in the U.S.  At first blush, the students are the client.  But what about the elected state commissioner of education and the associated bureaucracy that establish standards and curricula for the districts?  And the elected district directors and district bureaucracy?  And the parents’ rights organizations?  And the teachers’ unions?  All of them claim to be working to further the students’ interests but what do they really care about?  How about political or organizational power, job security, and money?  The students could be more of a secondary consideration.

We could go on.  The point is we are surrounded by many social-legal-political-technical systems and who and what they are actually serving may not be those they purport to serve.

  

*  These system characteristics are taken from the work of a systems pioneer, Prof. C. West Churchman of UC Berkeley.  For more information, see his The Design of Inquiring Systems (New York: Basic Books) 1971.

Thursday, May 25, 2023

The National Academies on Behavioral Economics

Report cover
A National Academies of Sciences, Engineering, and Medicine (NASEM) committee recently published a report* on the contributions of behavioral economics (BE) to public policy.  BE is “an approach to understanding human behavior and decision making that integrates knowledge from psychology and other behavioral fields with economic analysis.” (p. Summ-1)

The report’s first section summarizes the history and development of the field of behavioral economics.  Classical economics envisions the individual as a decision maker who has all relevant information available and makes rational decisions that maximize his or her overall, i.e., short- and long-term, self-interest.  In contrast, BE recognizes that actual people making real decisions have many built-in biases, limitations, and constraints.  The following five principles apply to the decision making processes behavioral economists study:

Limited Attention and Cognition - People pay only limited attention to relevant aspects of their environment and often make cognitive errors.

Inaccurate Beliefs - Individuals can have incorrect perceptions or information about situations, relevant incentives, their own abilities, and the beliefs of others.

Present Bias - People tend to disproportionately focus on issues that are in front of them in the present moment.

Reference Dependence and Framing - Individuals tend to consider how their decision options relate to a particular reference point, e.g., the status quo, rather than considering all available possibilities. People are also sensitive to the way decision problems are framed, i.e., how options are presented, and this affects what comes to their attention and can lead to different perceptions, reactions, and choices.

Social Preferences and Social Norms - Decision makers often consider how their decisions affect others, how they compare with others, and how their decisions imply values and conformance with social norms.

The task of policy makers is to acknowledge these limitations and present decision situations in ways people can comprehend, helping them make decisions that serve their own and society’s interests.  In practice this means decision situations “can be designed to modify the habitual and unconscious ways that people act and make decisions.” (p. Summ-3)

Decision situation designers use various interventions to inform and guide individuals’ decision making.  The NASEM committee mapped 23 possible interventions against the 5 principles.  It’s impractical to list all the interventions here but the more graspable ones include:

Defaults – The starting decision option is the designer’s preferred choice; the decision maker must actively choose a different option.

De-biasing – Attempt to correct inaccurate beliefs by presenting salient information related to past performance of the individual decision maker or a relevant reference group.

Mental Models – Update or change the decision maker’s mental representation of how the world works.

Reminders – Use reminders to cut through inattention, highlight desired behavior, and focus the decision maker on a future goal or desired state.

Framing – Focus the decision maker on a specific reference point, e.g., a default option or the negative consequences of inaction (not choosing any option).

Social Comparison and Feedback - Explicitly compare an individual’s performance with a relevant comparison or reference group, e.g., the individual’s professional peers.

Interventions can range from “nudges” that alter people’s behavior without forbidding any options to designs that are much stronger than nudges and are, in effect, efforts to enforce conformity.

The bulk of the report describes the theory, research, and application of BE in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice.  The NASEM committee reviewed current research and interventions in each domain and recommended areas for future research activity.  There is too much material to summarize so we’ll provide a single illustrative sample.

Because we have written about culture and safety practices in the healthcare industry, we will recap the report’s discussion of efforts to modify or support medical clinicians’ behavior.  Clinicians often work in busy, sometimes chaotic, settings that place multiple demands on their attention and must make frequent, critical decisions under time pressure.  On occasion, they provide more (or less) health care than a patient’s clinical condition warrants; they also make errors.  Research and interventions to date address present bias and limited attention by changing defaults, and invoke social norms by providing information on an individual’s performance relative to others.  An example of a default intervention is to change mandated checklists from opt-in (the response for each item must be specified) to opt-out (the most likely answer for each item is pre-loaded; the clinician can choose to change it).  An example of using social norms is to provide information on the behavior and performance of peers, e.g., in the quantity and type of prescriptions written.
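As a toy illustration of the default intervention described above, the sketch below contrasts an opt-in checklist (every item starts blank and must be answered) with an opt-out checklist (each item is pre-loaded with its most likely answer, which the clinician can override).  The item names and pre-loaded answers are hypothetical, not drawn from the report.

```python
# Toy contrast of opt-in vs. opt-out checklist defaults.
# Item names and pre-loaded answers are hypothetical.

ITEMS = ["allergy_review", "medication_reconciliation", "fall_risk_screen"]

def opt_in_checklist() -> dict:
    """Opt-in: the clinician must actively supply every response."""
    return {item: None for item in ITEMS}

def opt_out_checklist() -> dict:
    """Opt-out: each item is pre-loaded with its most likely response;
    the clinician changes an entry only when the default does not apply."""
    return {"allergy_review": "completed",
            "medication_reconciliation": "completed",
            "fall_risk_screen": "low risk"}

form = opt_out_checklist()
form["fall_risk_screen"] = "high risk"   # clinician overrides one default
print(form)
```

The behavioral point is that the default determines which response requires effort; under time pressure and limited attention, the pre-loaded answer is the one most clinicians will leave in place.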

Overall recommendations

The report’s recommendations are typical for this type of overview: improve the education of future policy makers, apply the key principles in public policy formulation, and fund and emphasize future research.  Such research should include better linkage of behavioral principles and insights to specific intervention and policy goals, and realize the potential for artificial intelligence and machine learning approaches to improve tailoring and targeting of interventions.

Our Perspective

We have written about decision making for years, mostly about how organizational culture (values and norms) affect decision making.  We’ve also reviewed the insights and principles highlighted in the subject report.  For example, our December 18, 2013 post on Daniel Kahneman’s work described people’s built-in decision making biases.  Our June 6, 2022 post on Thaler and Sunstein’s book Nudge discussed the application of behavioral economic principles in the design of ideal (and ethical) decision making processes.  These authors’ works are recognized as seminal in the subject report.

On the subject of ethics, the NASEM committee’s original mission included considering ethical issues related to the use of behavioral economics, but the report’s treatment of ethics amounts to little more than a few cautionary notes.  This is thin gruel for a field that includes many public and private actors deciding what people should do instead of letting them decide for themselves.

As evidenced by the report, the application of behavioral economics is widespread and growing.  It’s easy to see its use being supercharged by artificial intelligence and machine learning.  “Behavioral economics” sounds academic and benign.  Maybe we should start calling it behavioral engineering.

Bottom line: Read this report.  You need to know about this stuff.


*  National Academies of Sciences, Engineering, and Medicine, “Behavioral Economics: Policy Impact and Future Directions,” (Washington, DC: The National Academies Press, 2023).

Friday, March 10, 2023

A Systems Approach to Diagnosis in Healthcare Emergency Departments

JAMA logo

A recent op-ed* in JAMA advocated greater use of systems thinking to reduce diagnostic errors in emergency departments (EDs).  The authors describe the current situation – diagnostic errors occur at an estimated 5.7% rate – and offer three insights into why systems thinking may contribute to interventions that reduce this error rate.  We will summarize their observations and then provide our perspective.

First, they point out that diagnostic errors are not limited to the ED; in fact, such errors occur in all specialties and areas of health care.  Diagnosis is often complicated and practitioners are under time pressure to come up with an answer.  The focus of interventions should be on reducing incorrect diagnoses that result in harm to patients.  Fortunately, studies have shown that “just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms,” which should help narrow the focus for possible interventions.  However, simply doing more of the current approaches, e.g., more “testing,” is not going to be effective.  (We’ll explain why later.)

Second, diagnostic errors are often invisible; if they were visible, they would be recognized and corrected in the moment.  The system needs “practical value-added ways to define and measure diagnostic errors in real time, . . .”

Third, “Because of the perception of personal culpability associated with diagnostic errors, . . . health care professionals have relied on the heroism of individual clinicians . . . to prevent diagnostic errors.”  Because humans are not error-free, the system as it currently exists will inevitably produce some errors.  Possible interventions include checklists, cognitive aids, machine learning, and training modules aimed at the Top 15 problematic clinical conditions. “The paradigm of how we interpret diagnostic errors must shift from trying to “fix” individual clinicians to creating systems-level solutions to reverse system errors.”

Our Perspective

It will come as no surprise that we endorse the authors’ point of view: healthcare needs to utilize more systems thinking to increase the safety and effectiveness of its myriad diagnostic and treatment processes.  Stakeholders must acknowledge that the current system for delivering healthcare services has error rates consistent with its sub-optimal design.  Because of that, tinkering with incremental changes, e.g., the well-publicized effort to reduce infections from catheters, will yield only incremental improvements in safety.  At best, they will only expose the next stratum of issues that are limiting system performance.

Incremental improvements are based on fragmented mental models of the healthcare system.  Proper systems thinking starts with a complete mental model of a healthcare system and how it operates.  We have described a more complete mental model in other posts so we will only summarize it here.  A model has components, e.g., doctors, nurses, support staff, and facilities.  And the model is dynamic, which means components are not fixed entities but ones whose quality and quantity vary over time.  In addition, the inter-relationships between and among the components can also vary over time.  Component behavior is directed by both relatively visible factors – policies, procedures, and practices – and softer control functions such as the level of trust between individuals, different groups, and hierarchical levels, i.e., bosses and workers.  Importantly, component behavior is also influenced by feedback from other components.  These feedback loops can be positive or negative, i.e., they can reinforce certain behaviors or seek to reduce or eliminate them.  For more on mental models, see our May 21, 2021, Nov. 6, 2019, and Oct. 9, 2019 posts.
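To make the “dynamic model with feedback loops” idea concrete, here is a minimal sketch of a single component variable governed by one reinforcing loop and one balancing loop.  It is not a model of any actual healthcare process; the variable names and parameter values are arbitrary, chosen only to show how feedback shapes behavior over time.

```python
# Minimal system-dynamics sketch: one state variable (a work backlog)
# with a reinforcing loop (backlog breeds rework, so more backlog) and a
# balancing loop (extra capacity is added when a threshold is exceeded).
# All names and parameters are arbitrary illustrations.

backlog = 100.0          # starting backlog of work items
base_capacity = 20.0     # items cleared per period by regular staff
reinforcing_gain = 0.25  # extra inflow generated per unit of excess backlog
threshold = 120.0        # backlog level that triggers the balancing response
surge_capacity = 15.0    # additional capacity added above the threshold

for period in range(10):
    inflow = 25.0 + reinforcing_gain * max(backlog - 100.0, 0.0)                  # reinforcing loop
    capacity = base_capacity + (surge_capacity if backlog > threshold else 0.0)   # balancing loop
    backlog = max(backlog + inflow - capacity, 0.0)
    print(f"period {period}: backlog = {backlog:.1f}")
```

The numbers don’t matter; the structure does.  The reinforcing loop pushes the component away from its starting point, the balancing loop (a control function, analogous to policies or management intervention) pulls it back, and the component’s observed behavior emerges from their interaction.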

One key control factor is organizational culture, i.e., the values and assumptions about reality shared by members.  In the healthcare environment, the most important subset of culture is safety culture (SC).  Safety should be a primary consideration in all activities in a healthcare organization.  For example, in a strong SC, the reporting of an adverse event such as an error should be regarded as a routine and ordinary task.  The reluctance of doctors to report errors because of their feelings of personal and professional shame, or fear of malpractice allegations or discipline, must be overcome.  For more on SC, see our May 21, 2021 and July 31, 2020 posts.

Organizational structure is another control factor, one that basically defines the upper limit of organizational performance.  Does the existing structure facilitate communication, learning, and performance improvement or do silos create barriers?  Do professional organizations and unions create focal points the system designer can leverage to improve performance or are they separate power structures whose interests and goals may conflict with those of the larger system?  What is the quality of management’s behavior, especially their decision making processes, and how is management influenced by their goals, policy constraints, environmental pressures (e.g., to advance equity and diversity) and compensation scheme?

As noted earlier, the authors observe that EDs depend on individual doctors to arrive at correct diagnoses in spite of inadequate information and time pressure, and doctors who can do this well are regarded as heroes.  We note that doctors who are less effective may be shuffled off to the side or, in egregious cases, labeled “bad apples” and tossed out of the organization.  This is an incorrect viewpoint.  Competent, dedicated individuals are necessary, of course, but the system designer should focus on making the system more error tolerant (so any errors cause no or minimal harm) and resilient (so errors are recognized and corrective actions implemented).

Bottom line: more systems thinking is needed in healthcare and articles like this help move the needle in the correct direction.


*  J.A. Edlow and P.J. Pronovost, “Misdiagnosis in the Emergency Department: Time for a System Solution,” JAMA (Journal of the American Medical Association), Vol. 329, No. 8 (Feb. 28, 2023), pp. 631-632.