
Tuesday, April 2, 2024

Systems Engineering’s Role in Addressing Society’s Problems

Guru Madhavan, a National Academy of Engineering senior scholar, has a new book about how engineering can contribute to solving society’s most complex and intractable problems.  He published a related article* on the National Academies website.  The author describes four different types of problems, i.e., decision situations.  Importantly, he advocates a systems engineering** perspective for addressing each type.  We will summarize his approach and provide our perspective on it.

He begins with a metaphor of clocks and clouds.  Clocks operate on logical principles and underlie much of our physical world.  Clouds form and reform, no two are alike, they defy logic, only the instant appearance is real – a metaphor for many of our complex social problems.
 
Hard problems

Hard problems can be essentially bounded.  The systems engineer can identify components, interrelationships, processes, desired outcomes, and measures of performance.  The system can be optimized by applying mathematics, scientific knowledge, and experience.  The system designers’ underlying belief is that a best outcome exists and is achievable.  In our view, this is a world of clocks.

Soft problems

Soft problems arise in the field of human behavior, which is complicated by political and psychological factors.  Because goals may be unclear, and constraints complicate system design, soft problems cannot be solved like hard problems.

Soft problems involve technology, psychology, and sociology and resolving them may yield an outcome that’s not the best (optimal) but good enough.  Results are based on satisficing, an approach that satisfies and suffices.  We’d say clouds are forming overhead.
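
To make the distinction concrete, here is a minimal Python sketch of the two mindsets; it is our illustration, not Madhavan’s, and the candidate designs, scores, and “good enough” threshold are invented.  An optimizer examines every option and returns the best; a satisficer accepts the first option that clears the threshold.

```python
# Illustrative sketch: optimizing vs. satisficing over a list of candidate designs.
# The candidates, scores, and "good enough" threshold are hypothetical.

def optimize(candidates, score):
    """Hard-problem mindset: examine every option and return the best one."""
    return max(candidates, key=score)

def satisfice(candidates, score, good_enough):
    """Soft-problem mindset: accept the first option that satisfies and suffices."""
    for c in candidates:
        if score(c) >= good_enough:
            return c
    return None  # nothing acceptable was found

if __name__ == "__main__":
    designs = ["A", "B", "C", "D"]
    ratings = {"A": 0.62, "B": 0.78, "C": 0.91, "D": 0.85}
    score = ratings.get

    print("Optimal choice:    ", optimize(designs, score))        # C (best score, 0.91)
    print("Satisficing choice:", satisfice(designs, score, 0.75)) # B (first score >= 0.75)
```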
 
Messy problems

Messy problems emerge from divisions created by people’s differing value sets, belief systems, ideologies, and convictions.  An example would be trying to stop the spread of a pathogen while respecting a culture’s traditional burial practices.  In these situations, the system designer must try to transform the nature of the entity and/or its environment by dissolving the problem into manageable elements and moving them toward a desired state in which the problem no longer arises.  In the example above, this might mean creating dignified burial rituals and promoting safe public health practices.

Wicked problems

The cloudiest problems are the “wicked” ones.  A wicked problem emerges when hard, soft, and messy problems exist simultaneously.  This means optimal solutions, satisficing resolutions, and dissolution may also co-exist.  A comprehensive model of a wicked problem might show solution(s) within a resolution, and a dissolution might contain resolutions and solutions.  As a consequence, engineers need to possess “competency—and consciousness— . . . to develop a balanced blend of hard solutions, soft resolutions, and messy dissolutions to wicked problems.”

Our Perspective

People form their mental models of the world based on their education, training, and lived experiences.  These mental models are representations of how the world works.  They are usually less than totally accurate because of people’s cognitive limitations and built-in biases.

We have long argued that technocrats who traditionally manage and operate complicated industrial facilities, e.g., nuclear power plants, have inadequate mental models, i.e., they are clock people.  Their models are limited to cause-effect thinking; their focus is on fixing the obvious hard problems in front of them.  As a result, their fixes are limited: change a procedure or component design, train harder, supervise more closely, and apply discipline, including getting rid of the bad apples, as necessary.  Rinse and repeat.

In contrast, we assert that problem solving must recognize the existence of complex socio-technical systems.  Fixes need to address both physical issues and psychological and social concerns.  Analysts must consider relationships between hard and soft system components.  Problem solvers need to be cloud people.  

Proper systems thinking understands that problems seldom exist in isolation.  They are surrounded by a task environment that may contain conflicting goals (e.g., production vs. safety) and a solution space limited by company policies, resource limitations, and organizational politics.  The external legal-political environment can also influence goals and further constrain the solution space.

Madhavan has provided some good illustrations of mental models for problem solving, starting with the (relatively) easiest “hard” physical problems and moving through more complicated models to the realm of wicked problems that may, in some cases, be effectively unsolvable.

Bottom line: this is a good refresher for people who are already systems thinkers and a good introduction for people who aren’t.


*  G. Madhavan, “Engineering Our Wicked Problems,” National Academy of Engineering Perspectives (March 6, 2024).  Online only.

**  In Madhavan’s view, systems engineering considers all facets of a problem, recognizes sensitivities, shapes synergies, and accounts for side effects.

Friday, October 6, 2023

A Straightforward Recipe for Changing Culture

[Image: Center for Open Science logo.  Source: COS website]


We recently came across a clear, easily communicated road map for implementing cultural change.*  We’ll provide some background information on the author’s motivation for developing the road map, a summary of it, and our perspective on it.

The author, Brian Nosek, is executive director of the Center for Open Science (COS).  The mission of COS is to increase the openness, integrity, and reproducibility of scientific research.  Specifically, they propose that researchers publish the initial description of their studies so that original plans can be compared with actual results.  In addition, researchers should “share the materials, protocols, and data that they produced in the research so that others could confirm, challenge, extend, or reuse the work.”  Overall, COS proposes a major change from the way much research is presently conducted.

Currently, a lot of research is done in private, i.e., more or less in secret, usually with the objective of getting results published, preferably in a prestigious journal.  Frequent publishing is fundamental to getting and keeping a job, being promoted, and obtaining future funding for more research, in other words, having a successful career.  Researchers know that publishers generally prefer findings that are novel, positive (e.g., a treatment is effective), and tidy (the evidence fits together).

Getting from the present to the future requires a significant change in the culture of scientific research.  Nosek describes the steps to implement such change using a pyramid, shown below, as his visual model.  Similar to Abraham Maslow’s Hierarchy of Needs, a higher level of the pyramid can only be achieved if the lower levels are adequately satisfied.


[Figure: Culture change pyramid.  Source: "Strategy for Culture Change"]

Each level represents a different step for changing a culture:

•    Infrastructure refers to an open source database where researchers can register their projects, share their data, and show their work.
•    The User Interface of the infrastructure must be easy to use and compatible with researchers' existing workflows.
•    New research Communities will be built around new norms (e.g., openness and sharing) and behavior, supported and publicized by the infrastructure.
•    Incentives refer to redesigned reward and recognition systems (e.g., research funding and prizes, and institutional hiring and promotion schemes) that motivate desired behaviors.
•    Public and private Policy changes codify and normalize the new system, i.e., specify the new requirements for conducting research.
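
As an illustration only, the pyramid’s prerequisite logic can be sketched in a few lines of Python; the level names come from Nosek’s model, but the maturity scores and the adequacy threshold are our assumptions.

```python
# Sketch of the "lower levels must be satisfied first" rule in the pyramid.
# Maturity scores (0-1) and the 0.7 adequacy threshold are hypothetical.

PYRAMID = ["Infrastructure", "User Interface", "Communities", "Incentives", "Policy"]

def highest_achievable(maturity, threshold=0.7):
    """Return the highest pyramid level whose own and lower levels are all adequate."""
    achieved = None
    for level in PYRAMID:
        if maturity.get(level, 0.0) < threshold:
            break              # this level is not yet adequate; stop climbing
        achieved = level
    return achieved

if __name__ == "__main__":
    scores = {"Infrastructure": 0.9, "User Interface": 0.8,
              "Communities": 0.6, "Incentives": 0.9, "Policy": 0.4}
    # Prints "User Interface": strong Incentives cannot compensate for weak Communities.
    print(highest_achievable(scores))
```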
     
Our Perspective

As long-time consultants to senior managers, we applaud Nosek’s change model.  It is straightforward and adequately complete, and can be easily visualized.  We used to spend a lot of time distilling complicated situations into simple graphics that communicated strategically important points.

We also totally support his call to change the reward system to motivate the new, desirable behaviors.  We have been promoting this viewpoint for years with respect to safety culture: If an organization or other entity values safety and wants safe activities and outcomes, then they should compensate the senior leadership accordingly, i.e., pay for safety performance, and stop promoting the nonsense that safety is intrinsic to the entity’s functioning and leaders should provide it basically for free.

All that said, implementing major cultural change is not as simple as Nosek makes it sound.

First off, the status quo can have enormous sticking power.  Nosek acknowledges it is defined by strong norms, incentives, and policies.  Participants know the rules and how the system works; in particular, they know what they must do to obtain the rewards and recognition.  Open research is anathema to many researchers and their sponsors; this is especially true when a project is aimed at creating some kind of competitive advantage for the researcher or the institution.  Secrecy is also valued when researchers may (or do) come up with the “wrong answer” – findings that show a product is not effective or has dangerous side effects, or an entire industry’s functioning is hazardous for society.

Second, the research industry exists in a larger environment of social, political and legal factors.  Many elected officials, corporate and non-profit bosses, and other thought leaders may say they want and value a world of open research but in private, and in their actions, believe they are better served (and supported) by the existing regime.  The legal system in particular is set up to reinforce the current way of doing business, e.g., through patents.

Finally, systemic change means fiddling with the system dynamics, the physical and information flows, inter-component interfaces, and feedback loops that create system outcomes.  To the extent such outcomes are emergent properties, they are created by the functioning of the system itself and cannot be predicted by examining or adjusting separate system components.  Large-scale system change can be a minefield of unexpected or unintended consequences.

Bottom line: A clear model for change is essential but system redesigners need to tread carefully.  


*  B. Nosek, “Strategy for Culture Change,” blog post (June 11th, 2019).

Friday, August 4, 2023

Real Systems Pursue Goals

[Image: System Model Control Panel]
On March 10, 2023 we posted about a medical journal editorial that advocated for incorporating more systems thinking in hospital emergency rooms’ (ERs) diagnostic processes.  Consistent with Safetymatters’ core beliefs, we approved of using systems thinking in complicated decision situations such as those arising in the ER. 

The article prompted a letter to the editor in which the author said the approach described in the original editorial wasn’t a true systems approach because it wasn’t specifically goal-oriented.  We agree with that author’s viewpoint.  We often argue for more systems thinking and describe mental models of systems with components, dynamic relationships among the components, feedback loops, control functions such as rules and culture, and decision maker inputs.  What we haven’t emphasized as much, probably because we tend to take it for granted, is that a bona fide system is teleological, i.e., designed to achieve a goal. 

It’s important to understand what a system’s goal is.  This may be challenging because the system’s goal may contain multiple sub-goals.  For example, a medical clinician may order a certain test.  The lab has a goal: to produce accurate, timely, and reliable results for tests that have been ordered.  But the clinician’s goal is different: to develop a correct diagnosis of a patient’s condition.  The goal of the hospital of which the clinician and lab are components may be something else: to produce generally acceptable patient outcomes, at reasonable cost, without incurring undue legal problems or regulatory oversight.  System components (the clinician and the lab) may have goals which are hopefully supportive of, or at least consistent with, overall system goals.

The top-level system, e.g., a healthcare provider, may not have a single goal; it may have multiple, independent goals that can conflict with one another.  Achieving the best quality may conflict with keeping costs within budgets.  Achieving perfect safety may conflict with the need to make operational decisions under time pressure and with imperfect or incomplete information.  One of the most important responsibilities of top management is defining how the system recognizes and deals with goal conflict.

In addition to goals, we need to discuss two other characteristics of full-fledged systems: a measure of performance and a defined client.* 

The measure of performance shows the system designers, users, managers, and overseers how well the system’s goal(s) are being achieved through the functioning of system components as affected by the system’s decision makers.  Like goals, the measure of performance may have multiple dimensions or sub-measures.  In a well-designed system, the summation of the set of sub-measures should be sufficient to describe overall system performance.  
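
A minimal sketch of how sub-measures might roll up into a single measure of performance follows; the sub-measure names, weights, and scores are our assumptions, offered only to show the arithmetic.

```python
# Illustrative roll-up of sub-measures into one overall measure of performance.
# The sub-measure names, weights, and scores are hypothetical.

def overall_performance(sub_measures, weights):
    """Weighted sum of sub-measures, each scored on a 0-1 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * value for name, value in sub_measures.items())

if __name__ == "__main__":
    scores = {"patient_outcomes": 0.8, "cost_control": 0.7, "regulatory_compliance": 0.9}
    weights = {"patient_outcomes": 0.5, "cost_control": 0.3, "regulatory_compliance": 0.2}
    print(f"Overall performance: {overall_performance(scores, weights):.2f}")  # 0.79
```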

The client is the entity whose interests are served by the system.  Identifying the client can be tricky.  Consider a city’s system for serving its unhoused population.  The basic system consists of a public agency to oversee the services, entities (often nongovernmental organizations, or NGOs) that provide the services, suppliers (e.g., landlords who offer buildings for use as housing), and the unhoused population.  Who is the client of this system, i.e., who benefits from its functioning?  The politicians, running for re-election, who authorize and sustain the public agency?  The public agency bureaucrats angling for bigger budgets and more staff?  The NGOs who are looking for increased funding?  Landlords who want rent increases?  Or the unhoused who may be looking for a private room with a lockable door, or may be resistant to accepting any services because of their mental, behavioral, or social problems?  It’s easy to see that many system participants do better, i.e., get more pie, if the “homeless problem” is never fully resolved.

For another example, look at the average public school district in the U.S.  At first blush, the students are the client.  But what about the elected state commissioner of education and the associated bureaucracy that establish standards and curricula for the districts?  And the elected district directors and district bureaucracy?  And the parents’ rights organizations?  And the teachers’ unions?  All of them claim to be working to further the students’ interests but what do they really care about?  How about political or organizational power, job security, and money?  The students could be more of a secondary consideration.

We could go on.  The point is we are surrounded by many social-legal-political-technical systems and who and what they are actually serving may not be those they purport to serve.

  

*  These system characteristics are taken from the work of a systems pioneer, Prof. C. West Churchman of UC Berkeley.  For more information, see his The Design of Inquiring Systems (New York: Basic Books) 1971.

Friday, March 10, 2023

A Systems Approach to Diagnosis in Healthcare Emergency Departments


A recent op-ed* in JAMA advocated greater use of systems thinking to reduce diagnostic errors in emergency departments (EDs).  The authors describe the current situation – diagnostic errors occur at an estimated 5.7% rate – and offer three insights into why systems thinking may contribute to interventions that reduce this error rate.  We will summarize their observations and then provide our perspective.

First, they point out that diagnostic errors are not limited to the ED, in fact, such errors occur in all specialties and areas of health care.  Diagnosis is often complicated and practitioners are under time pressure to come up with an answer.  The focus of interventions should be on reducing incorrect diagnoses that result in harm to patients.  Fortunately, studies have shown that “just 15 clinical conditions accounted for 68% of diagnostic errors associated with high-severity harms,” which should help narrow the focus for possible interventions.  However, simply doing more of the current approaches, e.g., more “testing,” is not going to be effective.  (We’ll explain why later.)

Second, diagnostic errors are often invisible; if they were visible, they would be recognized and corrected in the moment.  The system needs “practical value-added ways to define and measure diagnostic errors in real time, . . .”

Third, “Because of the perception of personal culpability associated with diagnostic errors, . . . health care professionals have relied on the heroism of individual clinicians . . . to prevent diagnostic errors.”  Because humans are not error-free, the system as it currently exists will inevitably produce some errors.  Possible interventions include checklists, cognitive aids, machine learning, and training modules aimed at the Top 15 problematic clinical conditions. “The paradigm of how we interpret diagnostic errors must shift from trying to “fix” individual clinicians to creating systems-level solutions to reverse system errors.”

Our Perspective

It will come as no surprise that we endorse the authors’ point of view: healthcare needs to utilize more systems thinking to increase the safety and effectiveness of its myriad diagnostic and treatment processes.  Stakeholders must acknowledge that the current system for delivering healthcare services has error rates consistent with its sub-optimal design.  Because of that, tinkering with incremental changes, e.g., the well-publicized effort to reduce infections from catheters, will yield only incremental improvements in safety.  At best, they will only expose the next stratum of issues that are limiting system performance.

Incremental improvements are based on fragmented mental models of the healthcare system.  Proper systems thinking starts with a complete mental model of a healthcare system and how it operates.  We have described a more complete mental model in other posts so we will only summarize it here.  A model has components, e.g., doctors, nurses, support staff, and facilities.  And the model is dynamic, which means components are not fixed entities but ones whose quality and quantity varies over time.  In addition, the inter-relationships between and among the components can also vary over time.  Component behavior is directed by both relatively visible factors – policies, procedures, and practices – and softer control functions such as the level of trust between individuals, different groups, and hierarchical levels, i.e., bosses and workers.  Importantly, component behavior is also influenced by feedback from other components.  These feedback loops can be positive or negative, i.e., they can reinforce certain behaviors or seek to reduce or eliminate them.  For more on mental models, see our May 21, 2021, Nov. 6, 2019, and Oct. 9, 2019 posts.
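
For readers who want something concrete, the following toy simulation sketches one reinforcing loop (a growing backlog of unresolved issues breeds more errors) and one balancing loop (a corrective action program resolves a fraction of the backlog each period).  All names and parameter values are our own illustrative assumptions, not data from any healthcare system.

```python
# Toy system-dynamics sketch: one reinforcing and one balancing feedback loop.
# All parameter values are hypothetical and chosen only for illustration.

def simulate(periods=12, new_issues=10.0, error_factor=0.05, fix_rate=0.3):
    """Track a backlog of unresolved issues over time.

    Reinforcing loop: a larger backlog breeds additional errors (error_factor).
    Balancing loop: the corrective action program resolves a fraction (fix_rate).
    """
    backlog = 0.0
    history = []
    for month in range(1, periods + 1):
        generated = new_issues + error_factor * backlog   # reinforcing feedback
        resolved = fix_rate * backlog                     # balancing feedback
        backlog = max(backlog + generated - resolved, 0.0)
        history.append((month, round(backlog, 1)))
    return history

if __name__ == "__main__":
    for month, backlog in simulate():
        print(f"month {month:2d}: backlog {backlog}")
```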

One key control factor is organizational culture, i.e., the values and assumptions about reality shared by members.  In the healthcare environment, the most important subset of culture is safety culture (SC).  Safety should be a primary consideration in all activities in a healthcare organization.  For example, in a strong SC, the reporting of an adverse event such as an error should be regarded as a routine and ordinary task.  The reluctance of doctors to report errors because of their feelings of personal and professional shame, or fear of malpractice allegations or discipline, must be overcome.  For more on SC, see our May 21, 2021 and July 31, 2020 posts.

Organizational structure is another control factor, one that basically defines the upper limit of organizational performance.  Does the existing structure facilitate communication, learning, and performance improvement or do silos create barriers?  Do professional organizations and unions create focal points the system designer can leverage to improve performance or are they separate power structures whose interests and goals may conflict with those of the larger system?  What is the quality of management’s behavior, especially their decision making processes, and how is management influenced by their goals, policy constraints, environmental pressures (e.g., to advance equity and diversity) and compensation scheme?

As noted earlier, the authors observe that EDs depend on individual doctors to arrive at correct diagnoses in spite of inadequate information or time pressure, and doctors who can do this well are regarded as heroes.  We note that doctors who are less effective may be shuffled off to the side or, in egregious cases, labeled “bad apples” and tossed out of the organization.  This is an incorrect viewpoint.  Competent, dedicated individuals are necessary, of course, but the system designer should focus on making the system more error tolerant (so any errors cause no or minimal harm) and resilient (so errors are recognized and corrective actions implemented).

Bottom line: more systems thinking is needed in healthcare and articles like this help move the needle in the correct direction.


*  J.A. Edlow and P.J. Pronovost, “Misdiagnosis in the Emergency Department: Time for a System Solution,” JAMA (Journal of the American Medical Association), Vol. 329, No. 8 (Feb. 28, 2023), pp. 631-632.

Thursday, November 17, 2022

A Road Map for Reducing Diagnostic Errors in Healthcare

A recent article* about how to reduce diagnostic errors in healthcare caught our attention, for a couple of reasons.  First, it describes a fairly comprehensive checklist of specific practices to address diagnostic errors, and second, the practices include organizational culture and reflect systems thinking, both subjects dear to us.  The checklist’s purpose is to help an organization rate its current performance and identify areas for improvement.

The authors used a reasonable method to develop the checklist: they convened an anonymous Delphi group, identified and ranked initial lists of practices, shared the information among the group, then collected and organized the updated rankings.  The authors then sent the draft checklist to several hospital managers, i.e., the kind of people who would have to implement the approach, for their input on feasibility and clarity.  The final checklist was then published.
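
For readers unfamiliar with the mechanics, the rank-aggregation step of a Delphi round can be sketched as follows; the panelists, items, and rankings are invented, and the authors’ actual procedure was more elaborate.

```python
# Minimal Delphi-style aggregation: average each item's rank across panelists,
# then re-rank.  The panelists, items, and rankings are hypothetical.

from statistics import mean

def aggregate_ranks(rankings):
    """rankings: {panelist: [items ordered best to worst]} -> items ordered by mean rank."""
    items = next(iter(rankings.values()))
    mean_rank = {item: mean(r.index(item) + 1 for r in rankings.values()) for item in items}
    return sorted(items, key=lambda item: mean_rank[item])

if __name__ == "__main__":
    round_one = {
        "panelist_1": ["feedback loops", "just culture", "leadership accountability"],
        "panelist_2": ["just culture", "leadership accountability", "feedback loops"],
        "panelist_3": ["just culture", "feedback loops", "leadership accountability"],
    }
    # The aggregated order would be shared with the group to seed the next round.
    print(aggregate_ranks(round_one))
```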

The checklist focuses on diagnostic errors, i.e., missed, delayed, or wrong diagnoses.  It does not address other major types of healthcare errors, e.g., botched procedures, drug mix-ups, or provider hygiene practices.

The authors propose 10 practices, summarized below, to assess current performance and direct interventions with respect to diagnostic errors:

1.    Senior leadership builds a “board-to-bedside” accountability framework to measure and improve diagnostic safety.

2.    Promote a just culture and create a psychologically safe environment that encourages clinicians and staff to share opportunities to improve diagnostic safety without fear of retribution.

3.    Create feedback loops to increase information flow about patients’ diagnostic and treatment-related outcomes after handoffs from one provider/department to another.

4.    Develop multidisciplinary perspectives to understand and address contributory factors in the analysis of diagnostic safety events.

5.    Seek patient and family feedback to identify and understand diagnostic safety concerns.

6.    Encourage patients to review their health records and ask questions about their diagnoses and related treatments.

7.    Prioritize equity in diagnostic safety efforts.

8-10.    Establish standardized systems and processes to (1) encourage direct, collaborative interactions between treating clinical teams and diagnostic specialties; (2) ensure reliable communication of diagnostic information between care providers and with patients and families; and (3) close the loop on communication and follow up on abnormal test results and referrals.
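
Because the checklist’s stated purpose is self-assessment, a minimal scoring sketch might look like the following; the practice labels are abbreviations of the items above, and the ratings and 0–2 scale are our assumptions, not part of the published checklist.

```python
# Illustrative self-assessment scorer for the ten practices summarized above.
# The 0-2 ratings (0 = absent, 1 = partial, 2 = fully implemented) are hypothetical.

PRACTICES = [
    "Board-to-bedside accountability", "Just culture / psychological safety",
    "Feedback loops after handoffs", "Multidisciplinary event analysis",
    "Patient and family feedback", "Patients review their records",
    "Equity in diagnostic safety", "Clinician-diagnostic specialty collaboration",
    "Reliable communication of results", "Closed-loop follow-up",
]

def weakest_areas(ratings, fully_implemented=2):
    """Return practices rated below 'fully implemented' as candidates for improvement."""
    return [p for p, r in zip(PRACTICES, ratings) if r < fully_implemented]

if __name__ == "__main__":
    example_ratings = [2, 1, 0, 2, 1, 2, 1, 0, 2, 1]
    print(f"Total score: {sum(example_ratings)} / {2 * len(PRACTICES)}")
    for practice in weakest_areas(example_ratings):
        print("Improve:", practice)
```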

Our Perspective

We support the authors’ recognition that diagnostic errors are difficult to analyze; they can involve clinical uncertainty, the natural evolution of diagnosis as more information becomes available, and cognitive errors, all exacerbated by system vulnerabilities.  Addressing such errors requires a systems approach.

The emphasis on a just culture and establishing feedback loops is good.  We would add the importance of management commitment to fixing and learning from identified problems, and a management compensation plan that includes monetary incentives for doing this.

However, we believe the probability of a healthcare organization establishing dedicated infrastructure to address diagnostic errors is very low.  First, the authors recognize there is no existing business case to address such errors.  In addition, we suspect there is some uncertainty around how often such errors occur.  The authors say these errors affect at least 5% of US adult outpatients annually but that number is based on a single mini-meta study.**

As a consequence, senior management is not currently motivated by either fear (e.g., higher costs, excessive losses to lawsuits, regulatory sanctions or fines, or reputational loss) or greed (e.g., professional recognition or monetary incentives) to take action.  So our recommended first step is to determine which types of medical errors present the greatest threats to an institution and how often they occur, and then what can be done to prevent them or minimize their consequences.  (See our July 31, 2020 post on Dr. Danielle Ofri’s book When We Do Harm for more on medical errors.)

Second, the organization has other competing goals demanding attention and resources so management’s inclination will be to minimize costs by simply extending any existing error identification and resolution program to include diagnostic errors.

Third, diagnosis is not a cut-and-dried process, like inserting a catheter, double-checking patients’ names, or hand washing.  The diagnostic process is essentially probabilistic, with different diagnoses possible from the same data, and to some degree, subjective.  Management probably does not want a stand-alone system that second guesses and retrospectively judges doctors’ decisions and opinions.  Such an approach could be perceived as intruding on doctors’ freedom to exercise professional judgment and is bad for morale.
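
To illustrate what “essentially probabilistic” means in practice, here is a small Bayes’ rule sketch showing how the same positive test result can leave very different diagnostic probabilities depending on prior prevalence; the prevalence and test characteristics are invented numbers, not data from the article.

```python
# Bayes' rule sketch: the same positive test result yields very different
# post-test probabilities at different prevalences.  All numbers are hypothetical.

def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    true_positive = sensitivity * prior
    false_positive = (1 - specificity) * (1 - prior)
    return true_positive / (true_positive + false_positive)

if __name__ == "__main__":
    for prevalence in (0.01, 0.10, 0.30):
        p = posterior(prior=prevalence, sensitivity=0.9, specificity=0.9)
        print(f"prevalence {prevalence:.0%}: P(condition | positive) = {p:.0%}")
```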

Bottom line: The checklist is well-intentioned but a bit naïve.  It is a good guide for identifying weak spots and hazards in a healthcare organization, and the overall approach is not necessarily limited to diagnostic errors.   


*  Singh, H., Mushtaq, U., Marinez, A., Shahid, U., Huebner, J., McGaffigan, P., and Upadhyay, D.K., “Developing the Safer Dx Checklist of Ten Safety Recommendations for Health Care Organizations to Address Diagnostic Errors,” The Joint Commission Journal on Quality and Patient Safety, No. 48, Aug. 10, 2022, pp. 581–590.  The Joint Commission is an entity that inspects and accredits healthcare providers, mainly hospitals.

**  Singh, H., Meyer, A.N.D., and Thomas, E.J., “The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations,” BMJ Quality and Safety, Vol. 23, No. 9, April 2014, pp. 727–731.


Monday, June 6, 2022

Guiding People to Better Decisions: Lessons from Nudge by Richard Thaler and Cass Sunstein

Safetymatters reports on organizational culture, the values and beliefs that underlie an organization’s essential activities.  One such activity is decision-making (DM) and we’ve said an organization’s DM processes should be robust and replicable.  DM must incorporate the organization’s priorities, allocate its resources, and handle the inevitable goal conflicts which arise.

In a related area, we’ve written about the biases that humans exhibit in their personal DM processes, described most notably in the work by Daniel Kahneman.*  These biases affect decisions people make, or contribute to, on behalf of their organizations, and personal decisions that only impact the decision maker himself.

Thaler and Sunstein also recognize that humans are not perfectly rational decision makers (citing Kahneman’s work, among others) and seek to help people make better decisions based on insights from behavioral science and applied economics.  Nudge** focuses on the presentation of decision situations and alternatives to decision makers on public and private sector websites.  It describes the nitty-gritty of identifying, analyzing, and manipulating decision factors, i.e., the architecture of choice. 

The authors examine the choice architecture for a specific class of decisions: where groups of people make individual choices from a set of alternatives.  Choice architecture consists of curation and navigation tools.  Curation refers to the set of alternatives presented to the decision maker.  Navigation tools sound neutral but small details can have a significant effect on a decider’s behavior. 

The authors discuss many examples including choosing a healthcare or retirement plan, deciding whether or not to become an organ donor, addressing climate change, and selecting a home mortgage.  In each case, they describe different ways of presenting the decision choices, and their suggestions for an optimal approach.  Their recommendations are guided by their philosophy of “libertarian paternalism” which means decision makers should be free to choose, but should be guided to an alternative that would maximize the decider’s utility, as defined by the decision maker herself.

Nudge concentrates on which alternatives are presented to a decider and how they are presented.  Is the decision maker asked to opt-in or opt-out with respect to major decisions?  Are many alternatives presented or a subset of possibilities?  A major problem in the real world is that people can have difficulty in seeing how choices will end up affecting their lives.  What is the default if the decision maker doesn’t make a selection?  This is important: default options are powerful nudges; they can be welfare enhancing for the decider or self-serving for the organization.  Ideally, default choices should be “consistent with choices people would make if they had all the relevant information, were not subject to behavioral biases, and had the time to make a thoughtful choice.” (p. 261)
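
A tiny simulation shows why defaults are such powerful nudges: if most deciders never make an active choice, the default largely determines the outcome.  The share of passive deciders and the active opt-in rate below are purely illustrative assumptions.

```python
# Illustrative effect of the default option on enrollment outcomes.
# The share of passive deciders and the active opt-in rate are hypothetical.

def enrollment_rate(default_enrolled, passive_share=0.7, active_opt_in=0.6):
    """Fraction enrolled: passive deciders keep the default, active deciders choose."""
    passive = passive_share * (1.0 if default_enrolled else 0.0)
    active = (1.0 - passive_share) * active_opt_in
    return passive + active

if __name__ == "__main__":
    print(f"Opt-out design (default = enrolled):     {enrollment_rate(True):.0%}")   # 88%
    print(f"Opt-in design  (default = not enrolled): {enrollment_rate(False):.0%}")  # 18%
```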

Another real world problem is that much choice architecture is bogged down with sludge – the inefficiency in the choice system – including barriers, red tape, delays, opaque costs, and hidden or difficult to use off-ramps (e.g., finding the path to unsubscribe from a publication).

The authors show how private entities like social media companies and employers, and public ones like the DMV, present decision situations to users.  Some entities have the decider’s welfare and benefit in mind, others are more concerned with their own power and profits.  It’s no secret that markets give companies an incentive to exploit our DM frailties to increase profits.  The authors explicitly do not support the policy of “presumed consent” embedded in many choice situations where the designer has assumed a desirable answer and is trying to get more deciders to end up there. 

The authors note that their work has led many governments around the world to establish “nudge” departments to identify better routes for implementing social policies.

Our Perspective

First, the authors have a construct that is totally consistent with our notion of a system.  A true teleological system includes a designer (the authors), a client (the individual deciders), and a measure of performance (utility as experienced by the decider).  Because we all agree, we’ll give them an A+ for conceptual clarity and completeness.

Second, they pull back the curtain to reveal the deliberate (or haphazard) architecture that underlies many of our on-line experiences where we are asked or required to interact with the source entities.  The authors make clear how often we are being prodded and nudged.  Even the most ostensibly benign sites can suggest what we should be doing through their selection of default choices.  (In fairness, some site operators, like one’s employer, are themselves under the gun to provide complete data to government agencies or insurance companies.  They simply can’t wait indefinitely for employees to make up their minds.)  We need to be alert to defaults that we accept without thinking and choices we make when we know what others have chosen; in both cases, we may end up with a sub-optimal choice for our particular circumstances. 

Thaler and Sunstein are respectable academics so they include lots of endnotes with references to books, journals, mainstream media, government publications, and other sources.  Sunstein was Kahneman’s co-author for Noise, which we reviewed on July 1, 2021.

Bottom line: Nudge is an easy read about how choice architects shape our everyday experiences in the on-line world where user choices exist. 

 

*  Click on the Kahneman label for all our posts related to his work.

**  R.H. Thaler and C.R. Sunstein, Nudge, final ed. (New Haven: Yale University Press) 2021.

Thursday, March 31, 2022

The Criminalization of Safety in Healthcare?


On March 25, 2022 a former nurse at Vanderbilt University Medical Center (VUMC) was convicted of gross neglect of an impaired adult and negligent homicide as a consequence of a fatal drug error in 2017.* 

Criminal prosecutions for medical errors are rare, and healthcare stakeholders are concerned about what this conviction may mean for medical practice going forward.  A major concern is practitioners will be less likely to self-report errors for fear of incriminating themselves.

We have previously written about the intersection of criminal charges and safety management and practices.  In 2016 Safetymatters’ Bob Cudlin authored a 3-part series on this topic.  (See his May 24, May 31, and June 7 posts.)  Consistent with our historical focus on systems thinking, Bob reviewed examples in different industries and asked “where does culpability really lie - with individuals? culture? the corporation? or the complex socio-technical systems within which individuals act?”

“Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.”

We are confident this is true in hospital nurses’ working environment.  They are often short-staffed, working overtime, and under pressure from their immediate task environments and larger circumstances such as the ongoing COVID pandemic.  The ceaseless evolution of medical technology means they have to adapt to constantly changing equipment, some of which is problematic.  Many/most healthcare professionals believe errors are inevitable.  See our August 6, 2019 and July 31, 2020 posts for more information about the extent, nature, and consequences of healthcare errors.

At VUMC, medicines are dispensed from locked cabinets after a nurse enters various codes.  The hospital had been having technical problems with the cabinets in early 2017 prior to the nurse’s error.  The nurse could not obtain the proper drug because she was searching using its brand name instead of its generic name.  She entered an override that allowed her to access additional medications and selected the wrong one, a powerful paralyzing agent.  The nurse and other medical personnel noted that entering overrides on the cabinets was a common practice.

VUMC’s problems extended well beyond troublesome medicine cabinets.  An investigator said VUMC had “a heavy burden of responsibility in this matter.”  VUMC did not report the medication error as required by law and told the local medical examiner’s office that the patient died of “natural” causes.  VUMC avoided criminal charges because prosecutors didn’t think they could prove gross negligence. 

Our Perspective

As Bob observed in 2016, “The reality is that criminalization is at its core a “disincentive.”  To be effective it would have to deter actions or decisions that are not consistent with safety but not create a minefield of culpability. . . .  Its best use is probably as an ultimate boundary, to deter intentional misconduct but not be an unintended trap for bad judgment or inadequate performance.”

In the instant case, the nurse did not intend to cause harm but her conduct definitely reflected bad judgment and unacceptable performance.  She probably sealed her own fate when she told law enforcement she “probably just killed a patient” and the licensing board that she had been “complacent” and “distracted.”   

But we see plenty of faults in the larger system, mainly that VUMC used cabinets that held dangerous substances and had a history of technical glitches but allowed users to routinely override cabinet controls to obtain needed medicines.  As far we can tell, VUMC did not implement any compensating safety measures, such as requiring double checking by a colleague or a supervisor’s presence when overrides were performed or “dangerous” medications were withdrawn.
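
A compensating control of the kind we describe could be as simple as the following sketch: overrides for high-alert medications require a documented second check by a different clinician.  The drug list, roles, and rule are our own illustration, not VUMC’s actual dispensing system.

```python
# Sketch of a compensating control: cabinet overrides for high-alert medications
# require a second clinician's co-sign.  The drug list and rule are hypothetical.

HIGH_ALERT = {"paralytic-x", "sedative-y", "anticoagulant-z"}

def allow_override(drug, requesting_nurse, second_checker=None):
    """Permit an override for low-risk drugs, or with a different clinician's co-sign."""
    if drug.lower() not in HIGH_ALERT:
        return True
    return second_checker is not None and second_checker != requesting_nurse

if __name__ == "__main__":
    print(allow_override("paralytic-x", "nurse_a"))             # False - no co-signer
    print(allow_override("paralytic-x", "nurse_a", "nurse_b"))  # True  - double-checked
    print(allow_override("saline", "nurse_a"))                  # True  - not high-alert
```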

In addition, VUMC’s organizational culture was on full display with their inadequate and misleading reporting of the patient’s death.  VUMC has made no comment on the nurse’s case.  In our view, their overall strategy was to circle the wagons, seal off the wound, and dispose of the bad apple.  Nothing to see here, folks.

Going forward, the remaining VUMC nurses will be on high alert for a while but their day-to-day task demands will eventually force them to employ risky behaviors in an environment that requires such behavior to accomplish the mission but lacks defense in depth to catch errors before they have drastic consequences.  The nurses will/should be demanding a safer work environment.

Bottom line: Will this event mark a significant moment for accountability in healthcare akin to the George Floyd incident’s impact on U.S. police practices?  You be the judge.

For additional Safetymatters insights click the healthcare label below.

 

*  All discussion of the VUMC incident is based on reporting by National Public Radio (NPR).  See B. Kelman, “As a nurse faces prison for a deadly error, her colleagues worry: Could I be next?” NPR, March 22, 2022; “In Nurse’s Trial, Investigator Says Hospital Bears ‘Heavy’ Responsibility for Patient Death,” NPR, March 24, 2022; “Former nurse found guilty in accidental injection death of 75-year-old patient,” NPR, March 25, 2022.

Friday, December 10, 2021

Prepping for Threats: Lessons from Risk: A User’s Guide by Gen. Stanley McChrystal.

Gen. McChrystal was a U.S. commander in Afghanistan; you may remember he was fired by President Obama for making, and allowing subordinates to make, disparaging comments about then-Vice President Biden.  However, McChrystal was widely respected as a soldier and leader, and his recent book* on strengthening an organization’s “risk immune system” caught our attention.  This post summarizes its key points, focusing on items relevant to formal civilian organizations.

McChrystal describes a system that can detect, assess, respond to, and learn from risks.**  His mental model consists of two major components: (1) ten Risk Control Factors, interrelated dimensions for dealing with risks and (2) eleven Solutions, strategies that can be used to identify and address weaknesses in the different factors.  His overall objective is to create a resilient organization that can successfully respond to challenges and threats. 

Risk Control Factors

These are things under the control of an organization and its leadership, including physical assets, processes, practices, policies, and culture.

Communication – The organization must have the physical ability and willingness to exchange clear, complete, and intelligible information, and identify and deal with propaganda or misinformation.

Narrative – An articulated organizational purpose and mission.  It describes Who we are, What we do, and Why we do it.  The narrative drives (and we’d say is informed by) values, beliefs, and action.

Structure – Organizational design defines decision spaces and communication networks, implies power (both actual and perceived authority), suggests responsibilities, and influences culture.

Technology – This is both the hardware/software and how the organization applies it.  It includes an awareness of how much authority is being transferred to machines, our level of dependence on them, our vulnerability to interruptions, and the unintended consequences of new technologies.

Diversity – Leaders must actively leverage different perspectives and abilities, inoculate the organization against groupthink, i.e., norms of consensus, and encourage productive conflict and a norm of skepticism.  (See our June 29, 2020 post on A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth.)

Bias – Biases are assumptions about the world that affect our outlook and decision making, and cause us to ignore or discount many risks.  In McChrystal’s view “[B]ias is an invisible hand driven by self-interest.” (See our July 1, 2021 and Dec. 18, 2013 posts on Daniel Kahneman’s work on identifying and handling biases.)

Action – Leaders have to proactively overcome organizational inertia, i.e., a bias against starting something new or changing course.  Inertia manifests in organizational norms that favor the status quo and tolerate internal resistance to change.

Timing – Getting the “when” of action right.  Leaders have to initiate action at the right time with the right speed to yield optimum impact.

Adaptability – Organizations have to respond to changing risks and environments.  Leaders need to develop their organization’s willingness and ability to change.

Leadership – Leaders have to direct and inspire the overall system, and stimulate and coordinate the other Risk Control Factors.  Leaders must communicate the vision and personify the narrative.  In practice, they need to focus on asking the right questions and sense the context of a given situation, embracing the new before necessity is evident. (See our Nov. 9, 2018 post for an example of effective leadership.)

Solutions

The Solutions are strategies or methods to identify weaknesses in and strengthen the risk control factors.  In McChrystal’s view, each Solution is particularly applicable to certain factors, as shown in Table 1.

Assumptions check – Assessment of the reasonableness and relative importance of assumptions that underlie decisions.  It’s the qualitative and quantitative analyses of strengths and weaknesses of supporting arguments, modified by the judgment of thoughtful people.

Risk review – Assessment of when hazards may arrive and the adequacy of the organization’s preparations.

Risk alignment check – Leaders should recognize that different perspectives on risks exist and should be considered in the overall response.

Gap analysis – Identify the space between current actions and desired goals.

Snap assessment – Short-term, limited scope analyses of immediate hazards.  What’s happening?  How well are we responding?

Communications check – Ensure processes and physical systems are in place and working.

Tabletop exercise – A limited duration simulation that tests specific aspects of the organization’s risk response.

War game (functional exercise) – A pressure test in real time to show how the organization comprehensively reacts to a competitor’s action or unforeseen event.

Red teaming – Exercises involving third parties to identify organizational vulnerabilities and blind spots.

Pre-mortem – A discussion focusing on the things most likely to go wrong during the execution of a plan.

After-action review – A self-assessment that identifies things that went well and areas for improvement.


 


[Table 1: Solutions mapped to the Risk Control Factors they most directly strengthen.  Created by Safetymatters]

 

Our Perspective

McChrystal did not invent any of his Risk Control Factors and we have discussed many of these topics over the years.***  His value-add is organizing them as a system and recognizing their interrelatedness.  The entire system has to perform to identify, prepare for, and respond to risks, i.e., threats that can jeopardize the organization’s mission success.

This review emphasizes McChrystal’s overall risk management model.  The book also includes many examples of risks confronted, ignored, or misunderstood in the military, government, and commercial arenas.  Some, like Blockbuster’s failure to acquire Netflix when it had the opportunity, had poor outcomes; others, like the Cuban missile crisis or Apollo 13, worked out better.

The book appears aimed at senior leaders but all managers from department heads on up can benefit from thinking more systematically about how their organizations respond to threats from, or changes in, the external environment. 

There are hundreds of endnotes to document the text but the references are more Psychology Today than the primary sources we favor.

Bottom line: This is an easy to read example of the “management cookbook” genre.  It has a lot of familiar information in one place.

 

*  S. McChrystal and A. Butrico, Risk: A User’s Guide (New York: Portfolio) 2021.  Butrico is McChrystal’s speechwriter.

**  Risk to McChrystal is a combination of a threat and one’s vulnerability to the threat.  Threats are usually external to the organization while vulnerabilities exist because of internal aspects.

***  For example, click on the Management or Decision Making labels to pull up posts in related areas.