Monday, December 14, 2020

Implications of Randomness: Lessons from Nassim Taleb

Most of us know Nassim Nicholas Taleb from his bestseller The Black Swan. However, he wrote an earlier book, Fooled by Randomness*, in which he laid out one of his seminal propositions: many things in life that we believe have identifiable, deterministic causes, such as prescient decision making or exceptional skill, are actually the result of more random processes. Taleb focuses on financial markets but we believe his observations can refine our thinking about organizational decision making, mental models, and culture.

We'll begin with an example of how Taleb believes we misperceive reality. Consider a group of stockbrokers with successful 5-year track records. Most of us will assume they must be unusually skilled. However, we fail to consider how many other people started out as stockbrokers 5 years ago and fell by the wayside because of poor performance. Even if all the stockbrokers were less skilled than a simple coin flipper, some would still be successful over a 5-year period. The survivors are the result of an essentially random process and their track records mean very little going forward.
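
To make the survivorship point concrete, here is a minimal Python sketch of our own; the cohort size and the coin-flip success rate are illustrative assumptions, not Taleb's numbers. Even with zero skill, a sizable number of brokers will post five straight winning years.

```python
import random

random.seed(42)

def count_survivors(n_brokers=10_000, years=5, p_good_year=0.5):
    """Count brokers who post a winning year every year for `years` straight,
    when each year's result is a pure coin flip (i.e., zero skill)."""
    survivors = 0
    for _ in range(n_brokers):
        if all(random.random() < p_good_year for _ in range(years)):
            survivors += 1
    return survivors

# Expect roughly 0.5**5, or about 3%, of the cohort -- around 300 "proven
# winners" whose track records reflect nothing but luck.
print(count_survivors())
```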

Taleb ascribes our failure to correctly see things (our inadequate mental models) to several biases. First is hindsight bias, in which the past always looks deterministic, feeding our willingness to backfit theories or models to experience after the fact. Causality can be very complex but we prefer to simplify it. Second, because of survivorship bias, we see and consider only the current survivors from an initial cohort; the losers do not show up in our assessment of the probability of success going forward. Third, attribution bias tells us that successes are due to skill, and failures to randomness.

Taleb describes other factors that prevent us from being the rational thinkers postulated by classical economics or Cartesian philosophy. One set of factors arises from how our brains are hardwired and another set from the way we incorrectly process data presented to us.

The brain wiring issues are illustrated by the work of Daniel Kahneman, who describes how we use and rely on heuristics (mental shortcuts that we invoke automatically) to make day-to-day decisions. Thus, we make many decisions without really thinking or applying reason, and we are subject to other built-in biases, including our overconfidence in small samples and the role of emotions in driving our decisions. We reviewed Kahneman's work at length in our Dec. 18, 2013 post. Taleb notes that we also have a hard time recognizing and dealing with risk. Risk detection and risk avoidance are mediated in the emotional part of the brain, not the thinking part, so rational thinking has little to do with risk avoidance.

We also make errors when handling data in a more formal setting. For example, we ignore the mathematical truth that initial sample sizes matter greatly, much more than the sample size as a percentage of the overall population. We also ignore regression to the mean, which says that absent systemic changes, performance will eventually return to its average value. More perniciously, ignorant or unethical researchers will direct their computers to look for any significant relationship in a data set, a practice that will often turn up spurious relationships because each individual test carries its own error rate. “Data snoops” will define some rule, then go looking for data that supports it. Why are researchers inclined to fudge their analyses? Because research with no significant result does not get published.
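
The multiple-comparisons trap behind data snooping is easy to demonstrate. The sketch below is our own illustration; the sample sizes and the 5% significance level are assumptions. Test enough purely random "predictors" against a purely random outcome and some will clear the bar by chance alone.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_obs, n_predictors, alpha = 100, 200, 0.05
outcome = rng.normal(size=n_obs)                      # pure-noise "returns"
predictors = rng.normal(size=(n_predictors, n_obs))   # pure-noise "signals"

# Test every predictor against the outcome. With a 5% error rate per test,
# roughly 10 of the 200 meaningless predictors will look "significant."
false_hits = sum(pearsonr(p, outcome)[1] < alpha for p in predictors)
print(f"{false_hits} of {n_predictors} random predictors passed at alpha = {alpha}")
```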

Our Perspective

We'll start with the obvious: Taleb has a large ego and is not shy about calling out people with whom he disagrees or does not respect. That said, his observations have useful implications for how we conceptualize the socio-technical systems in which we operate, i.e., our mental models, and present specific challenges for the culture of our organizations.

In our view, the three driving functions for any system's performance over time are determinism (cause and effect), choice (decision making), and probability. At heart, Taleb's world view is that the world functions more probabilistically than most people realize. A method he employs to illustrate alternative futures is Monte Carlo simulation, which we used to forecast nuclear power plant performance back in the 1990s. We wanted plant operators to see that certain low-probability events, i.e., Black Swans**, could occur in spite of the best efforts to eliminate them via plant design, improved equipment and procedures, and other means. Some unfortunate outcomes could occur because they were baked into the system from the get-go and eventually manifested. This is what Charles Perrow meant by “normal accidents,” in which normal system performance excursions go beyond system boundaries. For more on Perrow, see our Aug. 29, 2013 post.
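
For readers unfamiliar with the technique, a Monte Carlo simulation simply runs a model many times with randomly drawn inputs and examines the distribution of outcomes. The sketch below is our own toy illustration, not the plant-performance model referenced above; the outage probability and capacity factors are made-up numbers chosen only to show how rare events appear in the tail of the results.

```python
import random

random.seed(1)

def simulate_operating_years(n_trials=10_000, p_severe_outage=0.002):
    """Toy Monte Carlo: each trial is one simulated operating year.
    A rare, severe outage occurs with small probability; otherwise the
    capacity factor falls in a routine range. All figures are illustrative."""
    results = []
    for _ in range(n_trials):
        if random.random() < p_severe_outage:
            results.append(random.uniform(0.10, 0.40))   # rare bad year
        else:
            results.append(random.uniform(0.85, 0.98))   # routine year
    return results

runs = simulate_operating_years()
print(f"mean capacity factor: {sum(runs) / len(runs):.3f}")
print(f"trials with a severe outage: {sum(r < 0.5 for r in runs)}")
```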

Of course, the probability distribution of system performance may not be stationary over time. In the most extreme case, when all system attributes change, it's called regime change. In addition, system performance may be nonlinear, where small inputs may lead to a disproportionate response, or poor performance can build slowly and suddenly cascade into failure. For some systems, no matter how specifically they are described, there will inherently be some possibility of errors, e.g., consider healthcare tests and diagnoses where both false positives and false negatives can be non-trivial occurrences.
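
The false positive/negative point lends itself to a standard base-rate calculation. The numbers below are assumptions for illustration, not data from any particular test: even a reasonably accurate test yields mostly false positives when the underlying condition is rare.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive test result), via Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assumed figures: 1% prevalence, 90% sensitivity, 90% specificity.
# Only about 8% of positive results reflect a true case.
print(f"{positive_predictive_value(0.01, 0.90, 0.90):.1%}")
```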

What does this mean for organizational culture? For starters, the organization must acknowledge that many of its members are inherently somewhat irrational. It can try to force greater rationality on its members through policies, procedures, and practices, instilled by training and enforced by supervision, but there will always be leaks. A better approach would be to develop defense-in-depth designs, error-tolerant sub-systems with error correction capabilities, and a “just culture” that recognizes that honest mistakes will occur.

Bottom line: You should think awhile about how many aspects of your work environment have probabilistic attributes.

 

* N.N. Taleb, Fooled by Randomness, 2nd ed. (New York: Random House, 2004).

** Black swans are not always bad. For example, an actor can have one breakthrough role that leads to fame and fortune; far more actors will always be waiting tables and parking cars.

Monday, November 9, 2020

Setting the Bar for Healthcare: Patient Care Goals from the Joint Commission

[Photo: Joint Commission HQ]
The need for a more effective safety culture (SC) in the field of healthcare is acute: every year tens of thousands of patients are injured or unnecessarily die while in U.S. hospitals. The scope of the problem became widely known with the publication of “To Err is Human: Building a Safer Health System”* in 2000. This report included two key observations: (1) the cause of the injuries and deaths is not bad people in health care; rather, the people are working in bad systems that need to be made safer and (2) legitimate liability concerns discourage the reporting of errors, which means less feedback to the system and less learning from mistakes.

It's 20 years later. Is the healthcare system safer than it was in 2000? Yes. Is safety performance at a satisfactory level? No.

For evidence, we need look no further than a Nov. 18, 2019 blog post** by Dr. Mark Chassin, president and CEO of the Joint Commission (JC), the entity responsible for establishing standards for healthcare functions and patient care, and evaluating, accrediting, and certifying healthcare organizations based on their compliance with the standards.

Dr. Chassin summarized the current situation as follows: “The health care industry has directed a substantial amount of time, effort, and resources at solving the problems, and we have seen some progress. That progress has typically occurred one project at a time, with hard-working quality professionals applying a “one-size-fits-all” best practice to address each problem. The resulting improvements have been pretty modest, difficult to sustain, and even more difficult to spread.”

Going forward, he says the industry can make substantial progress by committing to zero harm, overhauling the organizational culture, and utilizing proven process improvement techniques. He singles out the aviation and nuclear power industries for having similar commitments.

But achieving substantial, sustained improvement is a big lift. To get a feel for how big, let's look at the 2020 goals and strategies the JC has established for patient care in hospitals, in other words, where the performance bar is set today.*** We will try to inform your own judgment about their scope and sufficiency by comparing them with corresponding activities in the nuclear power industry.

1. Identify patients correctly by using at least two ways to identify them.

This is a major challenge in a hospital where many patients are entering and leaving the system every day, being transferred to and from different departments, and being treated by multiple individuals who have different roles and ranks, and are treating patients at different levels of intensity for different periods of time. There is really no analogue in the closed, controlled personnel environment of a power plant.

2. Improve staff communication by getting important test results to the right staff person on time.

This should be a familiar challenge to people in any organization, including a power plant, where functions may exist in different organizational silos with their own procedures, vocabulary, and priorities.

3. Use medicines safely by labeling medicines that are not labeled, taking extra care with patients on blood thinners, and managing patients' medicine records for accuracy, completeness, and possible interactions.

This is similar to requirements to accurately label, control, and manage the use of all chemicals used in an industrial facility.

4. Use alarms safely by ensuring that alarms on medical equipment are heard and responded to on time.

In a hospital, it is a problem when multiple alarms are going off at the same time, with differing degrees of urgency for personnel attention and response. In power plants, operators have been known to turn off alarms that are reporting too many false positives. These situations call out for operating and maintenance standards and practices that ensure all activated alarms are valid and deserving of a response.

5. Prevent infection by adhering to Centers for Disease Control or World Health Organization hand cleaning guidelines.

The aim is to keep bad bugs from circulating. Compare this practice to the myriad procedures, personnel, and equipment dedicated to ensuring nuclear power plant radioactivity is kept in an identified, controlled, and secure environment.

6. Identify patient safety risks by reducing the risk for suicide.

Compare this with the wellness, fitness for duty, and behavioral observation programs at every nuclear power plant.

7. Prevent mistakes in surgery by making sure that the correct surgery is done on the correct patient and at the correct place on the patient’s body, and pausing before the surgery to make sure that a mistake is not being made.

This is similar to tailgate meetings before maintenance activities and using the STAR (Stop-Think-Act-Review) approach before and during work. Think of the potential for error in mirror-image plants; people are bilateral but subject to similar risks.

Our Perspective

The JC's set of goals is thin gruel to show after 20 years. In our view, efforts to date reflect two major shortcomings: a lack of progress in defining and strengthening SC, and a lack of any shared understanding of what the relevant system consists of, how it functions, and how to improve it.

Safety Culture

Our July 31, 2020 post on When We Do Harm by Dr. Danielle Ofri discussed the key attributes for a strong healthcare SC, i.e., one where the probability of errors is much lower than it is today. In Ofri's view, the primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done, even if it's not in their specific job description, amid a diffusion of responsibility in their task environment. Secondly, all members of the organization, regardless of status, should have the ability (or duty even) to point out problems and errors without fear of retribution. The culture should regard reporting an adverse event as a routine and ordinary task. Third, organizational leaders, including but not limited to senior managers, must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true. There should be deference to proven expertise and widely held authority to say “stop” when problems become apparent.

The Healthcare System

The healthcare system includes the providers, the supporting infrastructure, external environmental factors, e.g., regulators and insurance companies, the patients and their families, and all the interrelationships and dynamics between these components. An important dynamic is feedback, where the quality and quantity of output from one component influences performance in other system components. System dynamics create homeostasis, fluctuations, and all levels of performance from superior to failure. Other organizational variables, e.g., management decision-making practices and priorities, and the compensation scheme, provide context for system functioning. For more on system attributes, please see our Oct. 9, 2019 post or click the healthcare label.
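
As a rough illustration of the feedback idea, the sketch below is our own toy model with assumed parameters rather than healthcare data. It shows how a backlog of unresolved problems can feed back on the capacity available to resolve them and grow over time.

```python
def simulate_backlog(weeks=52, capacity=30.0, workload=250.0, error_rate=0.15):
    """Toy feedback loop: new problems arrive each week from routine workload;
    staff work them off, but a growing backlog erodes effective capacity.
    All parameters are illustrative assumptions."""
    backlog = 0.0
    history = []
    for _ in range(weeks):
        new_problems = workload * error_rate
        effective_capacity = capacity / (1 + backlog / 100)   # feedback term
        backlog = max(0.0, backlog + new_problems - effective_capacity)
        history.append(backlog)
    return history

trajectory = simulate_backlog()
print(f"open problems after one year: {trajectory[-1]:.0f}")
```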

Bottom line: Compare the JC's efforts with the vast array of safety and SC-related policies, procedures, practices, activities, and dedicated personnel in your workplace. Healthcare has a long way to go.


* Institute of Medicine (L.T. Kohn et al.), “To Err Is Human: Building a Safer Health System” (Washington, D.C.: The National Academies Press, 2000). Retrieved Nov. 5, 2020.

** M. Chassin, “To Err is Human: The Next 20 Years,” blog post (Nov. 18, 2019).  Retrieved Nov. 1, 2020.

*** The Joint Commission, “2020 Hospital National Patient Safety Goals,” simplified version (July 2020). Retrieved Nov. 1, 2020.


Tuesday, August 25, 2020

How to Consider Unknown Unknowns: Hints from McKinsey

Our July 31, 2020 post on medical errors discussed the importance of the “differential diagnosis” where a doctor thinks “I believe this patient has X but what else could it be?” We can usually consider that as a decision situation with known unknowns, i.e., looking for another needle in a haystack based on the available evidence. But what if you don’t know what you don’t know? How do you create other possibilities, threats or opportunities, or different futures out of thin air? A 2015 McKinsey article* provides some suggestions for getting started. There is nothing really new but it reiterates some important points we have been making here on Safetymatters.

The authors begin by noting executives’ decision making processes often coalesce around “managing the probable,” i.e., attempting to fit a current decision into a model that has worked before. The questions they ask and the data they seek tend to narrow, not expand, the decision and its context. This is an efficient way to approach many everyday decisions but excessively simple models are not appropriate for complicated decisions like how to approach a changing market or define a market that does not yet exist. All models constrain the eventual solution and simple models constrain it the most, perhaps leading to a totally wrong answer.

Decision situations that are dramatically different, complex, and uncertain require a more open-ended approach, the authors call it “leading the possible.” In such situations, decision makers should acknowledge they don’t know how uncertain environmental conditions will unfold or complex systems will evolve. The authors propose three non-traditional mental habits to identify and explore the possibilities.

Ask different questions

Ask questions that open up possibilities rather than narrowing the discussion and constraining the solution. Sample questions include: What do I expect not to find? How could I adjust to the unexpected? What might I be discounting or explaining away too quickly? What would happen if I changed one or more of my core assumptions? We would add: Is fear of missing out prodding me to move too rashly or complacency allowing me to not move at all?

As Hans Rosling said: “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (see our Dec. 3, 2018 post)

Take multiple perspectives

Decision makers, especially senior managers, need to escape the echo chamber of the sycophants who surround them. They should consider how people who are very different from themselves might view the same decision situation. They can consult people who are knowledgeable but frustrating or irritating, or outside their usual internal circle such as junior staff, or even dissatisfied customers. Such perspectives can be insightful and surprising.

Other thought leaders have suggested similar approaches. For example, Ray Dalio proposes thoughtful disagreement where decision makers seek out brilliant people who disagree with them to gain a deeper understanding of decision situations (see our April 17, 2018 post) or Charlan Nemeth on the usefulness of authentic dissent in decision situations (see our June 29, 2020 post).

Recognize systems

The authors’ appreciation for systems thinking mirrors what we’ve been saying for years. (For example, see our Jan. 6, 2017 post.) Decision makers should be looking at the evolution of the forest, not examining individual trees. We need to acknowledge and accept that “Elements in a system can be connected in ways that are not immediately apparent.” The widest view is the most powerful but people have “been trained to follow our natural inclination to examine the component parts. We assume a straightforward and linear connection between cause and effect. Finally, we look for root causes at the center of problems. In doing these things, we often fail to perceive the broader forces at work.”


The authors realize that leaders who can apply the new habits may have different attributes than earlier senior managers. Traditional leaders are clear, confident, and decisive. However, their preference for managing the probable leaves them more open to being blindsided. In contrast, new leaders need to exhibit “humility, a keen sense of their own limitations, an insatiable curiosity, and an orientation to learning and development.”

Our Perspective

This article promotes more expansive mental models for decision making in formal organizations, models that deemphasize reliance on reductionism and linear, cause-effect thinking. We applaud the authors’ intent.

McKinsey is pretty good at publishing small bite “news you can use” articles. However, they do not contain any of the secret sauce for which McKinsey charges its clients top dollar.

Bottom line: Some of you don’t want to read 300 page books on management so here’s an 8 page article with a few good points.


* Z. Achi and J.G. Berger, “Delighting in the Possible,” McKinsey Quarterly (March 2015).

Friday, July 31, 2020

Culture in Healthcare: Lessons from When We Do Harm by Danielle Ofri, MD

In her book*, Dr. Ofri takes a hard look at the prevalence of medical errors in the healthcare system.  She reports some familiar statistics** and fixes, but also includes highly detailed case studies where errors large and small cascaded over time and the patients died.  This post summarizes her main observations.  She does not provide a tight summary of a less error-prone healthcare culture but she drops enough crumbs that we can infer its desirable attributes.

Healthcare is provided by a system

The system includes the providers, the supporting infrastructure, and factors in the external environment.  Ofri observes that medical care is exceedingly complicated and some errors are inevitable.  Because errors are inevitable, the system should emphasize error recognition and faster recovery with a goal of harm reduction.

She shares our view that the system permits errors to occur so fixes should focus on the system and not on the individual who made an error.***  System failures will eventually trap the most conscientious provider.  She opines that most medical errors are the result of a cascade of actions that compound one another; we would say the system is tightly coupled.

System “improvements” intended to increase efficiency can actually reduce effectiveness.  For example, electronic medical records can end up dictating providers’ practices, fragmenting thoughts and interfering with the flow of information between doctor and patient.****  Data field defaults and copy and paste shortcuts can create new kinds of errors.  Diagnosis codes driven by insurance company billing requirements can distort the diagnostic process.  In short, patient care becomes subservient to documentation.

Other changes can have unforeseen consequences.  For example, scheduling fewer working hours for interns leads to fewer diagnostic and medication errors but also results in more patient handoffs (where half of adverse medical events are rooted).

Aviation-inspired checklists have limited applicability

Checklists have reduced error rates for certain procedures but can lead to unintended consequences, e.g., mindless check-off of the items (to achieve 100% completion in the limited time available) and provider focus on the checklist while ignoring other things that are going on, including emergent issues.

Ofri thinks the parallels between healthcare and aviation are limited because of the complexity of human physiology.  While checklists may be helpful for procedures, doctors ascribe limited value to process checklists that guide their thinking.

Malpractice suits do not meaningfully reduce the medical error rate

Doctors fear malpractice suits so they practice defensive medicine, prescribing extra tests and treatments which have their own risks of injury and false positives, and lead to extra cost.  Medical equipment manufacturers also fear lawsuits so they design machines that sound alarms for all matters great and small; alarms are so numerous they are often simply ignored by the staff.

Hospital management culture is concerned about protecting the hospital’s financial interests against threats, including lawsuits.  A Cone of Silence is dropped over anything that could be considered an error and no information is released to the public, including family members of the injured or dead patient.  As a consequence, it is estimated that fewer than 10% of medical errors ever come to light.  There is no national incident reporting system because of the resistance of providers, hospitals, and trial lawyers.

The reality is a malpractice suit is not practical in the vast majority of cases of possible medical error.  The bar is very high: your doctor must have provided sub-standard care that caused your injury/death and resulted in quantifiable damages.  Cases are very expensive and time-consuming to prepare and the legal system, like the medical system, is guided by money so an acceptable risk-reward ratio has to be there for the lawyers.***** 

Desirable cultural attributes for reducing medical errors

In Ofri’s view, culture includes hierarchy, communications skill, training traditions, work ethic, egos, socialization, and professional ideals.  The primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done amid a diffusion of responsibility.  This must be taught by example and individuals must demand comparable behavior from their colleagues.

Providing medical care is a team business

Effective collaboration among team members is key, as is the ability (or duty even) of lower-status members to point out problems and errors without fear of retribution.  Leaders must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true.  Where practical, training should be performed in groups who actually work together to build communication skills.

Doctors and nurses need time and space to think

Doctors need the time to develop differential diagnosis, to ask and answer “What else could it be?”  The provider’s thought process is the source of most diagnostic error, and subject to explicit and implicit biases, emotions, and distraction.  However, stopping to think can cause delays which can be reported as shortcomings by the tracking system.  The culture must acknowledge uncertainty (fueled by false positives and negatives), address overconfidence, and promote feedback, especially from patients.

Errors and near misses need to be reported without liability or shame.

The culture should regard reporting an adverse event as a routine and ordinary task.  This is a big lift for people steeped in the hierarchy of healthcare and the impunity of its highest ranked members.  Another factor to be overcome is the reluctance of doctors to report errors because of their feelings of personal and professional shame.

Ofri speaks favorably of a “just culture” that recognizes that unintentional error is possible, but risky behavior like taking shortcuts requires (system) intervention, and negligence should be disciplined.  In addition, there should not be any bias in how penalties are handed out, e.g., based on status.

In sum, Ofri says healthcare will always be an imperfect system.  Ultimately, what patients want is acknowledgement of errors and apology for them from doctors.

Our Perspective

Ofri’s major contribution is her review of the evidence showing how pervasive medical errors are and how the healthcare industry works overtime to deny and avoid responsibility for them.

Her suggestions for a safer healthcare culture echo what we’ve been saying for years about the attributes of a strong safety culture.  Reducing the error rates will be hard for many reasons.  For example, Ofri observes medical training forges a lifelong personal identity and reverence for tradition; in our view, it also builds in resistance to change.  The biases in decision making that she mentions are not trivial.  For one discussion of such biases, see our Dec. 18, 2013 review of Daniel Kahneman’s work.

Bottom line: After you read this, you will be clutching your rosary a little tighter if you have to go to a hospital for a major injury or illness.  You are more responsible for your own care than you think.


*  D. Ofri, When We Do Harm (Boston: Beacon Press, 2020).

**  For example, a study reporting that almost 4% of hospitalizations resulted in medical injury, of which 14% were fatal, and doctors’ diagnostic accuracy is estimated to be in the range of 90%.

***  It has been suggested that the term “error” be replaced with “adverse medical event” to reduce the implicit focus on individuals.

****  Ofri believes genuine conversation with a patient is the doctor’s single most important diagnostic tool.

***** As an example of the power of money, when Medicare started fining hospitals for shortcomings, the hospitals started cleaning up their problems.

Monday, June 29, 2020

A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth

Charlan Nemeth is a psychology professor at the University of California, Berkeley.  Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions.  This post presents highlights from her 2018 book* and provides our perspective on her views.

Going along to get along

Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong.  Why?  Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)

An organization’s leader(s) may create a culture of agreement but consensus, aka the tyranny of the majority, gives the culture its power over members.  People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion.  The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)

Truth matters less than group cohesion.  People will shape and distort reality to support the consensus—they are complicit in their own brainwashing.  They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along.  They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise.  To get along, most employees don’t speak up when they see problems. (32,42,98,198)

“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible.  The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action.  Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)

Dissent adds value to the decision making process

Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., creates more alternatives and considers facts on all sides of the issue.  Importantly, the decision making process is improved even when the dissenter is wrong because it increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180) 

Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up.  Anonymous dissent can help protect the dissenter from the group. (37,47) 

Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter.  To persuade others, the dissenter must remain consistent in his position.  He can only change because of new or changing information.  Only authentic, persistent dissent will force others to confront the possibility that they may be wrong.  At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)

Alternatives to dissent

Other, less antagonistic, approaches to improving decision making have been promoted.  Nemeth finds them lacking.

Training is the go-to solution in many organizations but is not very effective at addressing biases or getting people to speak up in the face of power and hierarchy.  Dissent is superior to training because it prompts reconsidering positions and contemplating alternatives. (101,107)

Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth.  However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated.   In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)

Demographic diversity is promoted as a way to get more input into decisions.  But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)

The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it.  Group members believe the approach is fake or acting (even when the advocate really believes it) so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong.  The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-04)

Dissent is not free for the individual or the group

Dissenters are disliked, ridiculed, punished, or worse.  Dissent definitely increases conflict and sometimes lowers morale in the group.  It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed.  The culture should expect that everyone will be treated with respect. (197-98,209)

Our Perspective

We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered.  Nemeth’s work strengthens and extends our belief in the value of different views.

If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers.  Dissent may dissuade the group from linear cause-effect, path of least resistance thinking.  We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles.  In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions.  His people have an obligation to speak up if they disagree.  Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.

On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus.  Let’s face it: nobody likes a rat.

We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making.  Training can give people skills or techniques for better decision making but training does not address the underlying values that steer group decision making dynamics. 

Much academic research of this sort is done using students as test subjects.***  They are readily available, willing to participate, and follow directions.  Some folks think the results don’t apply to older adults in formal organizations.  We disagree.  It’s easier to form groups of strangers with students, who don’t have to worry about power and personal relationships the way people in work situations do, so the underlying psychological mechanisms can be clearly and cleanly exposed.

Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read. 


(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)


*  C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).

**  Kennedy learned from the Bay of Pigs fiasco.  He used a much more open and inclusive decision making process during the Cuban Missile Crisis.

***  For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.

Monday, June 15, 2020

IAEA Working Paper on Safety Culture Traits and Attributes

[Image: Working paper cover]
The International Atomic Energy Agency (IAEA) has released a working paper* that attempts to integrate (“harmonize”) the efforts by several different entities** to identify and describe desirable safety culture (SC) traits and attributes.  The authors have also tried to make the language of SC less nuclear power specific, i.e., more general and thus helpful to other fields that deal with ionizing radiation, such as healthcare.  Below we list the 10 traits and highlight the associated attributes that we believe are most vital for a strong SC.  We also offer our suggestions for enhancing the attributes to broaden and strengthen the associated trait’s presence in the organization.

Individual Responsibility 


All individuals associated with an organization know and adhere to its standards and expectations.  Individuals promote safe behaviors in all situations, collaborate with other individuals and groups to ensure safety, and “accept the value of diverse thinking in optimizing safety.”

We applaud the positive mention of “diverse thinking.”  We also believe each individual should have the duty to report unsafe situations or behavior to the appropriate authority and this duty should be specified in the attributes.

Questioning Attitude 


Individuals watch for anomalies, conditions, behaviors or activities that can adversely impact safety.  They stop when they are uncertain and get advice or help.  They try to avoid complacency.  “They understand that the technologies are complex and may fail in unforeseen ways . . .” and speak up when they believe something is incorrect.

Acknowledging that technology may “fail in unforeseen ways” is important.  Probabilistic Risk Assessments and similar analyses do not identify all the possible ways bad things can happen. 

Communication

Individuals communicate openly and candidly throughout the organization.  Communication with external organizations and the public is accurate.  The reasons for decisions are communicated.  The expectation that safety is emphasized over competing goals is regularly reinforced.

Leader Responsibility

Leaders place safety above competing goals, model desired safety behaviors, frequently visit work areas, involve individuals at all levels in identifying and resolving issues, and ensure that resources are available and adequate.

“Leaders ensure rewards and sanctions encourage attitudes and behaviors that promote safety.”  An organization’s reward system is a hot button issue for us.  Previous SC framework documents have never addressed management compensation and this one doesn’t either.  If SC and safety performance are important then people from top executives to individual workers should be rewarded (by which we mean paid money) for doing it well.

Leaders should also address work backlogs.  Backlogs send a signal to the organization that sub-optimal conditions are tolerated and, if such conditions continue long enough,  are implicitly acceptable.  Backlogs encourage workarounds and lack of attention to detail, which will eventually create challenges to the safety management system.  

Decision-Making

“Individuals use a consistent, systematic approach to evaluate relevant factors, including risk, when making decisions.”  Organizations develop the ability to adapt in anticipation of unforeseen situations where no procedure or plan applies.

We believe the decision making process should be robust, i.e., different individuals or groups facing the same issue should come up with the same or an equally effective solution.  The organization’s approach to decision making (goals, priorities, steps, etc.) should be documented to the extent practical.  Robustness and transparency support efficient, effective communication of the reasons for decisions.

Work Environment 


“Trust and respect permeate the organization. . . . Differing opinions are encouraged, discussed, and thoughtfully considered.”

In addition, senior managers need to be trusted to tell the truth, do the right things, and not sacrifice subordinates to evade the managers’ own responsibilities.

Continuous Learning 


The organization uses multiple approaches to learn including independent and self-assessments, lessons learned from their own experience, and benchmarking other organizations.

Problem Identification and Resolution

“Issues are thoroughly evaluated to determine underlying causes and whether the issue exists in other areas. . . . The effectiveness of the actions is assessed to ensure issues are adequately addressed. . . . Issues are analysed to identify possible patterns and trends. A broad range of information is evaluated to obtain a holistic view of causes and results.”

This is good but could be stronger.  Leaders should ensure the most knowledgeable individuals, regardless of their role or rank, are involved in addressing an issue. Problem solvers should think about the systemic relationships of issues, e.g., is an issue caused by activity in or feedback from some other sub-system, the result of a built-in time delay, or performance drift that exceeded the system’s capacities?  Will the proposed fix permanently address the issue or is it just a band-aid?

Raising Concerns

The organization encourages personnel to raise safety concerns and does not tolerate harassment, intimidation, retaliation or discrimination for raising safety concerns. 

This is the essence of a Safety Conscious Work Environment and is a sine qua non for any high hazard undertaking.

Work Planning 


“Work is planned and conducted such that safety margins are preserved.”

Our Perspective

We have never been shy about criticizing IAEA for some of its feckless efforts to get out in front of the SC parade and pretend to be the drum major.***  However, in this case the agency has been content, so far, to build on the work of others.  It’s difficult for any organization to develop, implement, and maintain a strong, robust SC and the existence of many different SC guidebooks has never been helpful.  This is one step in the right direction.  We’d like to see other high hazard industries, in particular healthcare organizations such as hospitals, take to heart SC lessons learned from the nuclear industry.

Bottom line: This concise paper is worth checking out.


*  IAEA Working Document, “A Harmonized Safety Culture Model” (May 5, 2020).  This document is not an official IAEA publication.

**  Including IAEA, WANO, INPO, and government institutions from the United States, Japan, and Finland.

***  See, for example, our August 1, 2016 post on IAEA’s document describing how to perform safety culture self-assessments.  Click on the IAEA label to see all posts related to IAEA.