Friday, October 6, 2023

A Straightforward Recipe for Changing Culture


We recently came across a clear, easily communicated road map for implementing cultural change.*  We’ll provide some background information on the author’s motivation for developing the road map, a summary of it, and our perspective on it.

The author, Brian Nosek, is executive director of the Center for Open Science (COS).  The mission of COS is to increase the openness, integrity, and reproducibility of scientific research.  Specifically, they propose that researchers publish the initial description of their studies so that original plans can be compared with actual results.  In addition, researchers should “share the materials, protocols, and data that they produced in the research so that others could confirm, challenge, extend, or reuse the work.”  Overall, COS proposes a major departure from how much research is conducted today.

Currently, a lot of research is done in private, i.e., more or less in secret, usually with the objective of getting results published, preferably in a prestigious journal.  Frequent publishing is fundamental to getting and keeping a job, being promoted, and obtaining future funding for more research, in other words, having a successful career.  Researchers know that publishers generally prefer findings that are novel, positive (e.g., a treatment is effective), and tidy (the evidence fits together).

Getting from the present to the future requires a significant change in the culture of scientific research.  Nosek describes the steps to implement such change using a pyramid, shown below, as his visual model.  Similar to Abraham Maslow’s Hierarchy of Needs, a higher level of the pyramid can only be achieved if the lower levels are adequately satisfied.

Source: "Strategy for Culture Change"

Each level represents a different step for changing a culture:

•    Infrastructure refers to an open source database where researchers can register their projects, share their data, and show their work.
•    The User Interface of the infrastructure must be easy to use and compatible with researchers' existing workflows.
•    New research Communities will be built around new norms (e.g., openness and sharing) and behavior, supported and publicized by the infrastructure.
•    Incentives refer to redesigned reward and recognition systems (e.g., research funding and prizes, and institutional hiring and promotion schemes) that motivate desired behaviors.
•    Public and private Policy changes codify and normalize the new system, i.e., specify the new requirements for conducting research.
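The pyramid's bottom-up dependency can be sketched in code.  The following is a minimal illustration of our own, not from Nosek's post: the levels form an ordered list, and a level counts as achieved only when every level below it is adequately satisfied, mirroring the Maslow-style hierarchy.  The function name is our invention.

```python
# Illustrative sketch (our labels, not Nosek's code): the five pyramid
# levels in bottom-up order.  A level is reachable only if every level
# below it is satisfied.
LEVELS = ["Infrastructure", "User Interface", "Communities", "Incentives", "Policy"]

def highest_achievable(satisfied: set[str]) -> list[str]:
    """Return the levels achieved in order, stopping at the first unmet one."""
    achieved = []
    for level in LEVELS:
        if level not in satisfied:
            break  # higher levels cannot be reached past an unmet one
        achieved.append(level)
    return achieved

# With infrastructure and an easy user interface in place but no new
# communities yet, incentives and policy remain out of reach even if
# policy work has nominally been done:
print(highest_achievable({"Infrastructure", "User Interface", "Policy"}))
# ['Infrastructure', 'User Interface']
```

The point of the sketch is the `break`: satisfying a higher level in isolation (here, Policy) accomplishes nothing until the intermediate levels are in place.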

Our Perspective

As long-time consultants to senior managers, we applaud Nosek’s change model.  It is straightforward and adequately complete, and can be easily visualized.  We used to spend a lot of time distilling complicated situations into simple graphics that communicated strategically important points.

We also totally support his call to change the reward system to motivate the new, desirable behaviors.  We have been promoting this viewpoint for years with respect to safety culture: If an organization or other entity values safety and wants safe activities and outcomes, then it should compensate the senior leadership accordingly, i.e., pay for safety performance, and stop promoting the nonsense that safety is intrinsic to the entity’s functioning and that leaders should provide it basically for free.

All that said, implementing major cultural change is not as simple as Nosek makes it sound.

First off, the status quo can have enormous sticking power.  Nosek acknowledges it is defined by strong norms, incentives, and policies.  Participants know the rules and how the system works; in particular, they know what they must do to obtain the rewards and recognition.  Open research is anathema to many researchers and their sponsors; this is especially true when a project is aimed at creating some kind of competitive advantage for the researcher or the institution.  Secrecy is also valued when researchers may (or do) come up with the “wrong answer” – findings that show a product is not effective or has dangerous side effects, or that an entire industry’s functioning is hazardous for society.

Second, the research industry exists in a larger environment of social, political, and legal factors.  Many elected officials, corporate and non-profit bosses, and other thought leaders may say they want and value a world of open research but in private, and in their actions, believe they are better served (and supported) by the existing regime.  The legal system in particular is set up to reinforce the current way of doing business, e.g., through patents.

Finally, systemic change means fiddling with the system dynamics, the physical and information flows, inter-component interfaces, and feedback loops that create system outcomes.  To the extent such outcomes are emergent properties, they are created by the functioning of the system itself and cannot be predicted by examining or adjusting separate system components.  Large-scale system change can be a minefield of unexpected or unintended consequences.

Bottom line: A clear model for change is essential but system redesigners need to tread carefully.  

*  B. Nosek, “Strategy for Culture Change,” blog post (June 11, 2019).

Friday, August 4, 2023

Real Systems Pursue Goals

System Model Control Panel
On March 10, 2023 we posted about a medical journal editorial that advocated for incorporating more systems thinking in hospital emergency rooms’ (ERs) diagnostic processes.  Consistent with Safetymatters’ core beliefs, we approved of using systems thinking in complicated decision situations such as those arising in the ER. 

The article prompted a letter to the editor in which the author said the approach described in the original editorial wasn’t a true systems approach because it wasn’t specifically goal-oriented.  We agree with that author’s viewpoint.  We often argue for more systems thinking and describe mental models of systems with components, dynamic relationships among the components, feedback loops, control functions such as rules and culture, and decision maker inputs.  What we haven’t emphasized as much, probably because we tend to take it for granted, is that a bona fide system is teleological, i.e., designed to achieve a goal. 

It’s important to understand what a system’s goal is.  This may be challenging because the system’s goal may contain multiple sub-goals.  For example, a medical clinician may order a certain test.  The lab has a goal: to produce accurate, timely, and reliable results for tests that have been ordered.  But the clinician’s goal is different: to develop a correct diagnosis of a patient’s condition.  The goal of the hospital of which the clinician and lab are components may be something else: to produce generally acceptable patient outcomes, at reasonable cost, without incurring undue legal problems or regulatory oversight.  System components (the clinician and the lab) may have goals which are hopefully supportive of, or at least consistent with, overall system goals.

The top-level system, e.g., a healthcare provider, may not have a single goal; it may have multiple, independent goals that can conflict with one another.  Achieving the best quality may conflict with keeping costs within budgets.  Achieving perfect safety may conflict with the need to make operational decisions under time pressure and with imperfect or incomplete information.  One of the most important responsibilities of top management is defining how the system recognizes and deals with goal conflict.

In addition to goals, we need to discuss two other characteristics of full-fledged systems: a measure of performance and a defined client.* 

The measure of performance shows the system designers, users, managers, and overseers how well the system’s goal(s) are being achieved through the functioning of system components as affected by the system’s decision makers.  Like goals, the measure of performance may have multiple dimensions or sub-measures.  In a well-designed system, the summation of the set of sub-measures should be sufficient to describe overall system performance.  
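A composite measure of performance can be sketched as a weighted summation of sub-measures, one number per dimension.  This is our own hedged illustration of the idea above; the sub-measure names and weights are invented, not taken from Churchman or the text.

```python
# Hedged sketch: overall system performance as the weighted sum of
# sub-measure scores.  Sub-measure names and weights are invented
# for illustration.
def overall_performance(sub_measures: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Weighted sum of sub-measure scores, each scored on a 0.0-1.0 scale."""
    assert set(sub_measures) == set(weights), "every sub-measure needs a weight"
    return sum(weights[k] * sub_measures[k] for k in sub_measures)

# A hypothetical hospital scorecard: quality, cost control, and safety,
# with weights reflecting management's priorities (summing to 1.0).
scores = {"quality": 0.9, "cost": 0.6, "safety": 0.8}
weights = {"quality": 0.5, "cost": 0.2, "safety": 0.3}
print(round(overall_performance(scores, weights), 2))  # 0.81
```

The design choice worth noting: if the set of sub-measures is not sufficient to describe overall performance, the composite score will look fine while unmeasured dimensions deteriorate.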

The client is the entity whose interests are served by the system.  Identifying the client can be tricky.  Consider a city’s system for serving its unhoused population.  The basic system consists of a public agency to oversee the services, entities (often nongovernmental organizations, or NGOs) that provide the services, suppliers (e.g., landlords who offer buildings for use as housing), and the unhoused population.  Who is the client of this system, i.e., who benefits from its functioning?  The politicians, running for re-election, who authorize and sustain the public agency?  The public agency bureaucrats angling for bigger budgets and more staff?  The NGOs who are looking for increased funding?  Landlords who want rent increases?  Or the unhoused who may be looking for a private room with a lockable door, or may be resistant to accepting any services because of their mental, behavioral, or social problems?  It’s easy to see that many system participants do better, i.e., get more pie, if the “homeless problem” is never fully resolved.

For another example, look at the average public school district in the U.S.  At first blush, the students are the client.  But what about the elected state commissioner of education and the associated bureaucracy that establish standards and curricula for the districts?  And the elected district directors and district bureaucracy?  And the parents’ rights organizations?  And the teachers’ unions?  All of them claim to be working to further the students’ interests but what do they really care about?  How about political or organizational power, job security, and money?  The students could be more of a secondary consideration.

We could go on.  The point is we are surrounded by many social-legal-political-technical systems and who and what they are actually serving may not be those they purport to serve.


*  These system characteristics are taken from the work of a systems pioneer, Prof. C. West Churchman of UC Berkeley.  For more information, see his The Design of Inquiring Systems (New York: Basic Books, 1971).

Thursday, May 25, 2023

The National Academies on Behavioral Economics

A National Academies of Sciences, Engineering, and Medicine (NASEM) committee recently published a report* on the contributions of behavioral economics (BE) to public policy.  BE is “an approach to understanding human behavior and decision making that integrates knowledge from psychology and other behavioral fields with economic analysis.” (p. Summ-1)

The report’s first section summarizes the history and development of the field of behavioral economics.  Classical economics envisions the individual as a decision maker who has all relevant information available and makes rational decisions that maximize his overall, i.e., short- and long-term, self-interest.  In contrast, BE recognizes that actual people making real decisions have many built-in biases, limitations, and constraints.  The following five principles apply to the decision making processes behavioral economists study:

Limited Attention and Cognition - People pay limited attention to relevant aspects of their environment and often make cognitive errors.

Inaccurate Beliefs - Individuals can have incorrect perceptions or information about situations, relevant incentives, their own abilities, and the beliefs of others.

Present Bias - People tend to disproportionately focus on issues that are in front of them in the present moment.

Reference Dependence and Framing - Individuals tend to consider how their decision options relate to a particular reference point, e.g., the status quo, rather than considering all available possibilities. People are also sensitive to the way decision problems are framed, i.e., how options are presented, and this affects what comes to their attention and can lead to different perceptions, reactions, and choices.

Social Preferences and Social Norms - Decision makers often consider how their decisions affect others, how they compare with others, and how their decisions imply values and conformance with social norms.

The task of policy makers is to acknowledge these limitations and present decision situations in ways people can comprehend, helping them make decisions that serve their own and society’s interests.  In practice this means decision situations “can be designed to modify the habitual and unconscious ways that people act and make decisions.” (p. Summ-3)

Decision situation designers use various interventions to inform and guide individuals’ decision making.  The NASEM committee mapped 23 possible interventions against the 5 principles.  It’s impractical to list all the interventions here but the more graspable ones include:

Defaults – The starting decision option is the designer’s preferred choice; the decision maker must actively choose a different option.

De-biasing – Attempt to correct inaccurate beliefs by presenting salient information related to past performance of the individual decision maker or a relevant reference group.

Mental Models – Update or change the decision maker’s mental representation of how the world works.

Reminders – Use reminders to cut through inattention, highlight desired behavior, and focus the decision maker on a future goal or desired state.

Framing – Focus the decision maker on a specific reference point, e.g., a default option or the negative consequences of inaction (not choosing any option).

Social Comparison and Feedback - Explicitly compare an individual’s performance with a relevant comparison or reference group, e.g., the individual’s professional peers.

Interventions can range from “nudges” that alter people’s behavior without forbidding any options to designs that are much stronger than nudges and are, in effect, efforts to enforce conformity.
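The committee's mapping of interventions to principles is essentially a many-to-many table.  Below is a hedged sketch covering only the six interventions summarized above; the principle assignments are our reading of the summaries, not the report's full 23-by-5 table.

```python
# Illustrative intervention-to-principle mapping (our reading of the
# summaries above, NOT the report's complete table of 23 interventions).
INTERVENTION_TARGETS = {
    "Defaults": ["Limited Attention and Cognition", "Present Bias"],
    "De-biasing": ["Inaccurate Beliefs"],
    "Mental Models": ["Inaccurate Beliefs"],
    "Reminders": ["Limited Attention and Cognition", "Present Bias"],
    "Framing": ["Reference Dependence and Framing"],
    "Social Comparison and Feedback": ["Social Preferences and Social Norms"],
}

def interventions_for(principle: str) -> list[str]:
    """Invert the mapping: which interventions target a given principle?"""
    return [i for i, ps in INTERVENTION_TARGETS.items() if principle in ps]

print(interventions_for("Present Bias"))  # ['Defaults', 'Reminders']
```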

The bulk of the report describes the theory, research, and application of BE in six public policy domains: health, retirement benefits, social safety net benefits, climate change, education, and criminal justice.  The NASEM committee reviewed current research and interventions in each domain and recommended areas for future research activity.  There is too much material to summarize so we’ll provide a single illustrative sample.

Because we have written about culture and safety practices in the healthcare industry, we will recap the report’s discussion of efforts to modify or support medical clinicians’ behavior.  Clinicians often work in busy, sometimes chaotic, settings that place multiple demands on their attention, and they must make frequent, critical decisions under time pressure.  On occasion, they provide more (or less) health care than a patient’s clinical condition warrants; they also make errors.  Research and interventions to date address present bias and limited attention by changing defaults, and invoke social norms by providing information on an individual’s performance relative to others.  An example of a default intervention is to change mandated checklists from opt-in (the response for each item must be specified) to opt-out (the most likely answer for each item is pre-loaded; the clinician can choose to change it).  An example of using social norms is to provide information on the behavior and performance of peers, e.g., in the quantity and type of prescriptions written.

Overall recommendations

The report’s recommendations are typical for this type of overview: improve the education of future policy makers, apply the key principles in public policy formulation, and fund and emphasize future research.  Such research should include better linkage of behavioral principles and insights to specific intervention and policy goals, and realize the potential for artificial intelligence and machine learning approaches to improve tailoring and targeting of interventions.

Our Perspective

We have written about decision making for years, mostly about how organizational culture (values and norms) affects decision making.  We’ve also reviewed the insights and principles highlighted in the subject report.  For example, our December 18, 2013 post on Daniel Kahneman’s work described people’s built-in decision making biases.  Our June 6, 2022 post on Thaler and Sunstein’s book Nudge discussed the application of behavioral economic principles in the design of ideal (and ethical) decision making processes.  These authors’ works are recognized as seminal in the subject report.

On the subject of ethics, the NASEM committee’s original mission included considering ethical issues related to the use of behavioral economics, but the report’s treatment of ethics amounts to little more than a few cautionary notes.  This is thin gruel for a field that includes many public and private actors deciding what people should do instead of letting them decide for themselves.

As evidenced by the report, the application of behavioral economics is widespread and growing.  It’s easy to see its use being supercharged by artificial intelligence and machine learning.  “Behavioral economics” sounds academic and benign.  Maybe we should start calling it behavioral engineering.

Bottom line: Read this report.  You need to know about this stuff.

*  National Academies of Sciences, Engineering, and Medicine, “Behavioral Economics: Policy Impact and Future Directions,” (Washington, DC: The National Academies Press, 2023).

Friday, March 10, 2023

A Systems Approach to Diagnosis in Healthcare Emergency Departments


A recent op-ed* in JAMA advocated greater use of systems thinking to reduce diagnostic errors in emergency departments (EDs).  The authors describe the current situation – diagnostic errors occur at an estimated 5.7% rate – and offer three insights into why systems thinking may contribute to interventions that reduce this error rate.  We will summarize their observations and then provide our perspective.

First, they point out that diagnostic errors are not limited to the ED, in fact, such errors occur in all specialties and areas of health care.  Diagnosis is often complicated and practitioners are under time pressure to come up with an answer.  The focus of interventions should be on reducing incorrect diagnoses that result in harm to patients.  Fortunately, studies have shown that “just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms,” which should help narrow the focus for possible interventions.  However, simply doing more of the current approaches, e.g., more “testing,” is not going to be effective.  (We’ll explain why later.)

Second, diagnostic errors are often invisible; if they were visible, they would be recognized and corrected in the moment.  The system needs “practical value-added ways to define and measure diagnostic errors in real time, . . .”

Third, “Because of the perception of personal culpability associated with diagnostic errors, . . . health care professionals have relied on the heroism of individual clinicians . . . to prevent diagnostic errors.”  Because humans are not error-free, the system as it currently exists will inevitably produce some errors.  Possible interventions include checklists, cognitive aids, machine learning, and training modules aimed at the Top 15 problematic clinical conditions.  “The paradigm of how we interpret diagnostic errors must shift from trying to ‘fix’ individual clinicians to creating systems-level solutions to reverse system errors.”

Our Perspective

It will come as no surprise that we endorse the authors’ point of view: healthcare needs to utilize more systems thinking to increase the safety and effectiveness of its myriad diagnostic and treatment processes.  Stakeholders must acknowledge that the current system for delivering healthcare services has error rates consistent with its sub-optimal design.  Because of that, tinkering with incremental changes, e.g., the well-publicized effort to reduce infections from catheters, will yield only incremental improvements in safety.  At best, they will only expose the next stratum of issues that are limiting system performance.

Incremental improvements are based on fragmented mental models of the healthcare system.  Proper systems thinking starts with a complete mental model of a healthcare system and how it operates.  We have described a more complete mental model in other posts so we will only summarize it here.  A model has components, e.g., doctors, nurses, support staff, and facilities.  And the model is dynamic, which means components are not fixed entities but ones whose quality and quantity varies over time.  In addition, the inter-relationships between and among the components can also vary over time.  Component behavior is directed by both relatively visible factors – policies, procedures, and practices – and softer control functions such as the level of trust between individuals, different groups, and hierarchical levels, i.e., bosses and workers.  Importantly, component behavior is also influenced by feedback from other components.  These feedback loops can be positive or negative, i.e., they can reinforce certain behaviors or seek to reduce or eliminate them.  For more on mental models, see our May 21, 2021, Nov. 6, 2019, and Oct. 9, 2019 posts.
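The feedback loops described above can be sketched in a few lines of code.  The following is our own minimal illustration, not anything from the op-ed: a component's state evolves under a balancing (negative) feedback loop that pushes it toward a target, the way error-reporting feedback is meant to damp error rates over time.  The parameter names and numbers are invented.

```python
# Minimal sketch (our own, not from the op-ed) of a balancing feedback
# loop in a dynamic system model: each step, the feedback signal
# corrects a fraction (`gain`) of the gap between state and target.
def simulate(state: float, target: float, gain: float, steps: int) -> float:
    """Evolve `state` toward `target` under negative feedback."""
    for _ in range(steps):
        state += gain * (target - state)  # feedback signal shrinks the gap
    return state

# A hypothetical 10% error rate drifts toward a 2% target as feedback
# accumulates; the gap shrinks geometrically by (1 - gain) each step.
rate = simulate(state=10.0, target=2.0, gain=0.3, steps=10)
print(round(rate, 2))  # 2.23
```

A positive (reinforcing) loop would simply flip the sign of the correction, amplifying rather than damping deviations; that distinction is the mechanical core of the behavior the paragraph describes.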

One key control factor is organizational culture, i.e., the values and assumptions about reality shared by members.  In the healthcare environment, the most important subset of culture is safety culture (SC).  Safety should be a primary consideration in all activities in a healthcare organization.  For example, in a strong SC, the reporting of an adverse event such as an error should be regarded as a routine and ordinary task.  The reluctance of doctors to report errors because of their feelings of personal and professional shame, or fear of malpractice allegations or discipline, must be overcome.  For more on SC, see our May 21, 2021 and July 31, 2020 posts.

Organizational structure is another control factor, one that basically defines the upper limit of organizational performance.  Does the existing structure facilitate communication, learning, and performance improvement or do silos create barriers?  Do professional organizations and unions create focal points the system designer can leverage to improve performance or are they separate power structures whose interests and goals may conflict with those of the larger system?  What is the quality of management’s behavior, especially their decision making processes, and how is management influenced by their goals, policy constraints, environmental pressures (e.g., to advance equity and diversity) and compensation scheme?

As noted earlier, the authors observe that EDs depend on individual doctors to arrive at correct diagnoses in spite of inadequate information or time pressure, and doctors who can do this well are regarded as heroes.  We note that doctors who are less effective may be shuffled off to the side or, in egregious cases, labeled “bad apples” and tossed out of the organization.  This is an incorrect viewpoint.  Competent, dedicated individuals are necessary, of course, but the system designer should focus on making the system more error tolerant (so any errors cause no or minimal harm) and resilient (so errors are recognized and corrective actions implemented).

Bottom line: More systems thinking is needed in healthcare, and articles like this help move the needle in the correct direction.

*  J.A. Edlow and P.J. Pronovost, “Misdiagnosis in the Emergency Department: Time for a System Solution,” JAMA (Journal of the American Medical Association), Vol. 329, No. 8 (Feb. 28, 2023), pp. 631-632.