Monday, June 29, 2020

A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth

Charlan Nemeth is a psychology professor at the University of California, Berkeley.  Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions.  This post presents highlights from her 2018 book* and provides our perspective on her views.

Going along to get along

Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong.  Why?  Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)

An organization’s leader(s) may create a culture of agreement, but it is consensus, aka the tyranny of the majority, that gives the culture its power over members.  People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion.  The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)

Truth matters less than group cohesion.  People will shape and distort reality to support the consensus—they are complicit in their own brainwashing.  They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along.  They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise.  To get along, most employees don’t speak up when they see problems. (32,42,98,198)

“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible.  The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action.  Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)

Dissent adds value to the decision making process

Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., it creates more alternatives and considers facts on all sides of the issue.  Importantly, the decision making process is improved even when the dissenter is wrong, because dissent increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180)

Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up.  Anonymous dissent can help protect the dissenter from the group. (37,47) 

Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter.  To persuade others, the dissenter must remain consistent in his position.  He can change his position only in response to new or changing information.  Only authentic, persistent dissent will force others to confront the possibility that they may be wrong.  At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)

Alternatives to dissent

Other, less antagonistic, approaches to improving decision making have been promoted.  Nemeth finds them lacking.

Training is the go-to solution in many organizations but it is not very effective at addressing biases or at getting people to speak up in the face of power and hierarchy.  Dissent is superior to training because it prompts people to reconsider positions and contemplate alternatives. (101,107)

Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth.  However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated.   In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)

Demographic diversity is promoted as a way to get more input into decisions.  But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)

The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it.  Group members see the approach as an act (even when the advocate really believes the position), so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong.  The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-204)

Dissent is not free for the individual or the group

Dissenters are disliked, ridiculed, punished, or worse.  Dissent definitely increases conflict and sometimes lowers morale in the group.  It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed.  The culture should expect that everyone will be treated with respect. (197-98,209)

Our Perspective

We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered.  Nemeth’s work strengthens and extends our belief in the value of different views.

If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers.  Dissent may dissuade the group from linear cause-effect, path of least resistance thinking.  We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles.  In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions.  His people have an obligation to speak up if they disagree.  Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.

On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus.  Let’s face it: nobody likes a rat.

We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making.  Training can give people skills or techniques for better decision making, but it does not address the underlying values that steer group decision making dynamics.

Much academic research of this sort is done using students as test subjects.***  They are readily available, willing to participate, and follow directions.  Some folks think the results don’t apply to older adults in formal organizations.  We disagree.  It’s easier to form groups of strangers with students, who don’t have to worry about the power dynamics and personal relationships that constrain people in work situations; the underlying psychological mechanisms can therefore be exposed clearly and cleanly.

Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read. 


(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)


*  C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).

**  Kennedy learned from the Bay of Pigs fiasco.  He used a much more open and inclusive decision making process during the Cuban Missile Crisis.

***  For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.

Monday, June 15, 2020

IAEA Working Paper on Safety Culture Traits and Attributes

Working paper cover
The International Atomic Energy Agency (IAEA) has released a working paper* that attempts to integrate (“harmonize”) the efforts by several different entities** to identify and describe desirable safety culture (SC) traits and attributes.  The authors have also tried to make the language of SC less nuclear power specific, i.e., more general and thus helpful to other fields that deal with ionizing radiation, such as healthcare.  Below we list the 10 traits and highlight the associated attributes that we believe are most vital for a strong SC.  We also offer our suggestions for enhancing the attributes to broaden and strengthen the associated trait’s presence in the organization.

Individual Responsibility 


All individuals associated with an organization know and adhere to its standards and expectations.  Individuals promote safe behaviors in all situations, collaborate with other individuals and groups to ensure safety, and “accept the value of diverse thinking in optimizing safety.”

We applaud the positive mention of “diverse thinking.”  We also believe each individual should have the duty to report unsafe situations or behavior to the appropriate authority and this duty should be specified in the attributes.

Questioning Attitude 


Individuals watch for anomalies, conditions, behaviors or activities that can adversely impact safety.  They stop when they are uncertain and get advice or help.  They try to avoid complacency.  “They understand that the technologies are complex and may fail in unforeseen ways . . .” and speak up when they believe something is incorrect.

Acknowledging that technology may “fail in unforeseen ways” is important.  Probabilistic Risk Assessments and similar analyses do not identify all the possible ways bad things can happen. 

Communication

Individuals communicate openly and candidly throughout the organization.  Communication with external organizations and the public is accurate.  The reasons for decisions are communicated.  The expectation that safety is emphasized over competing goals is regularly reinforced.

Leader Responsibility

Leaders place safety above competing goals, model desired safety behaviors, frequently visit work areas, involve individuals at all levels in identifying and resolving issues, and ensure that resources are available and adequate.

“Leaders ensure rewards and sanctions encourage attitudes and behaviors that promote safety.”  An organization’s reward system is a hot-button issue for us.  Previous SC framework documents have never addressed management compensation and this one doesn’t either.  If SC and safety performance are important, then people from top executives to individual workers should be rewarded (by which we mean paid money) for achieving them.

Leaders should also address work backlogs.  Backlogs signal to the organization that sub-optimal conditions are tolerated and, if they persist long enough, implicitly acceptable.  Backlogs encourage workarounds and inattention to detail, which will eventually create challenges to the safety management system.

Decision-Making

“Individuals use a consistent, systematic approach to evaluate relevant factors, including risk, when making decisions.”  Organizations develop the ability to adapt in anticipation of unforeseen situations where no procedure or plan applies.

We believe the decision making process should be robust, i.e., different individuals or groups facing the same issue should come up with the same or an equally effective solution.  The organization’s approach to decision making (goals, priorities, steps, etc.) should be documented to the extent practical.  Robustness and transparency support efficient, effective communication of the reasons for decisions.

Work Environment 


“Trust and respect permeate the organization. . . . Differing opinions are encouraged, discussed, and thoughtfully considered.”

In addition, senior managers need to be trusted to tell the truth, do the right things, and not sacrifice subordinates to evade the managers’ own responsibilities.

Continuous Learning 


The organization uses multiple approaches to learn, including independent and self-assessments, lessons learned from its own experience, and benchmarking of other organizations.

Problem Identification and Resolution

“Issues are thoroughly evaluated to determine underlying causes and whether the issue exists in other areas. . . . The effectiveness of the actions is assessed to ensure issues are adequately addressed. . . . Issues are analysed to identify possible patterns and trends. A broad range of information is evaluated to obtain a holistic view of causes and results.”

This is good but could be stronger.  Leaders should ensure the most knowledgeable individuals, regardless of their role or rank, are involved in addressing an issue. Problem solvers should think about the systemic relationships of issues, e.g., is an issue caused by activity in or feedback from some other sub-system, the result of a built-in time delay, or performance drift that exceeded the system’s capacities?  Will the proposed fix permanently address the issue or is it just a band-aid?

Raising Concerns

The organization encourages personnel to raise safety concerns and does not tolerate harassment, intimidation, retaliation or discrimination for raising safety concerns. 

This is the essence of a Safety Conscious Work Environment and is the sine qua non for any high hazard undertaking.

Work Planning 


“Work is planned and conducted such that safety margins are preserved.”

Our Perspective

We have never been shy about criticizing IAEA for some of its feckless efforts to get out in front of the SC parade and pretend to be the drum major.***  However, in this case the agency has been content, so far, to build on the work of others.  It’s difficult for any organization to develop, implement, and maintain a strong, robust SC and the existence of many different SC guidebooks has never been helpful.  This is one step in the right direction.  We’d like to see other high hazard industries, in particular healthcare organizations such as hospitals, take to heart SC lessons learned from the nuclear industry.

Bottom line: This concise paper is worth checking out.


*  IAEA Working Document, “A Harmonized Safety Culture Model” (May 5, 2020).  This document is not an official IAEA publication.

**  Including IAEA, WANO, INPO, and government institutions from the United States, Japan, and Finland.

***  See, for example, our August 1, 2016 post on IAEA’s document describing how to perform safety culture self-assessments.  Click on the IAEA label to see all posts related to IAEA.

Thursday, December 19, 2019

Requiescat in pace – Bob Cudlin

Robert L. Cudlin passed away on Nov. 23, 2019. Bob was a co-founder of Safetymatters and a life-long contributor to the nuclear industry. He started at the Nuclear Regulatory Commission where he was a member of the NRC response team at Three Mile Island after the 1979 accident. He later worked on Capitol Hill as the nuclear safety expert for a Senate committee. He spent the bulk of his career consulting to nuclear plant owners, board members, and senior managers. His consulting practice focused on helping clients improve their plants’ safety and reliability performance. Bob was a systems thinker who was constantly looking for new insights into organizational performance and evolution. He will be missed.

Wednesday, November 6, 2019

National Academies of Sciences, Engineering, and Medicine Systems Model of Medical Clinician Burnout, Including Culture Aspects

Source: Medical Academic S. Africa
We have been posting about preventable harm to health care patients, emphasizing how improved organizational mental models and attention to cultural attributes might reduce the incidence of such harm.  A new National Academies of Sciences, Engineering, and Medicine (NASEM) committee report* looks at one likely contributor to the patient harm problem: clinician burnout.**  The NASEM committee purports to use a systems model to analyze burnout and develop strategies for reducing burnout while fostering professional well-being and enhancing patient care.  This post summarizes the 300+ page report and offers our perspective on it.

The Burnout Problem and the Systems Model 


Clinician burnout is caused by stressors in the work environment; burnout can lead to behavioral and health issues for clinicians, clinicians prematurely leaving the healthcare field, and poorer treatment and outcomes for patients.  This widespread problem requires a “systemic approach to burnout that focuses on the structure, organization, and culture of health care.” (p. 3)

The NASEM committee’s systems model has three levels: frontline care delivery, the health care organization, and the external environment.  Frontline care delivery is the environment in which care is provided.  The health care organization includes the organizational culture, payment and reward systems, processes for managing human capital and human resources, the leadership and management style, and organizational policies. The external environment includes political, market, professional, and societal factors.

All three levels contribute to an individual clinician’s work environment, and ultimately boil down to a set of job demands and job resources for the clinician.

Recommendations

The report identifies multiple factors that need to be considered when developing interventions, including organizational values and leadership; a work system that provides adequate resources and facilitates teamwork, collaboration, communication, and professionalism; and an implementation approach that builds a learning organization, aligns reward systems with organizational values, nurtures organizational culture, and uses human-centered design processes. (p. 7)

The report presents six recommendations for reducing clinician burnout and fostering professional well-being:

1. Create positive work environments,
2. Create positive learning environments,
3. Reduce administrative burdens,
4. Optimize the use of health information technologies,
5. Provide support to clinicians to prevent and alleviate burnout, and foster professional well-being, and
6. Invest in research on clinician professional well-being.

Our Perspective

We’ll ask and answer a few questions about this report.

Did the committee design an actual and satisfactory systems model?

We have promoted systems thinking since the inception of Safetymatters so we have some clear notions of what should be included in a systems model.  We see both positives and missing pieces in the NASEM committee’s approach.***

On the plus side, the tri-level model provides a useful and clear depiction of the health care system and leads naturally to an image of the work world each clinician faces.  We believe a model should address certain organizational realities—goal conflict, decision making, and compensation—and this model is minimally satisfactory in these areas.  A clinician’s potential goal conflicts, primarily maintaining a patient focus while satisfying the organization’s quality measures, managing limited resources, achieving economic goals, and complying with regulations, are mentioned once. (p. 54)  Decision making (DM) specifics are discussed in several areas, including evidence-based DM (p. 25), the patient’s role in DM (p. 53), the burnout threat when clinicians lack input to DM (p. 101), the importance of participatory DM (pp. 134, 157, 288), and information technology as a contributor to DM (p. 201).  Compensation, which includes incentives, should align with organizational values (pp. 10, 278, 288), and should not be a stressor on the individual (p. 153).  Non-financial incentives such as awards and recognition are not mentioned.

On the downside, the model is static and two-dimensional.  The interrelationships and dynamics among model components are not discussed at all.  For example, the importance of trust in management is mentioned (p. 132) but the dynamics of trust are not discussed.  In our experience, “trust” is a multivariate function of, among other things, management’s decisions, follow-through, promise keeping, role modeling, and support of subordinates—all integrated over time.  In addition, model components feed back into one another, both positively and negatively.  In the report, the use of feedback is limited to clinicians’ experiences being fed back to the work designers (pp. 6, 82), continuous learning and improvement in the overall system (pp. 30, 47, 51, 157), and individual work performance recognition (pp. 103, 148).  It is the system dynamics that create homeostasis, fluctuations, and all levels of performance from superior to failure.
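To make the “integrated over time” idea concrete, here is a minimal sketch in Python (our own illustration; the weights and functional form are assumptions, not anything from the NASEM report) of trust as a stock that accumulates slowly from positive management behavior, erodes quickly from violations, and feeds back on itself because low trust discounts new positive actions:

def update_trust(trust, promise_keeping, follow_through, support, broken_promises, dt=1.0):
    """One step of a toy trust model; trust is a stock bounded in [0, 1].

    Positive behaviors (each scored 0-1) build trust slowly; broken promises
    erode it quickly; existing distrust dampens the credit given to new
    positive behavior, creating a reinforcing feedback loop.
    """
    positive = (promise_keeping + follow_through + support) / 3.0
    gain = 0.05 * positive * trust     # low trust discounts new positive actions (assumed weight)
    loss = 0.30 * broken_promises      # violations weigh more than gains (assumed weight)
    return max(0.0, min(1.0, trust + dt * (gain - loss)))

# Example: one broken promise, then ten periods of consistently good behavior.
trust = 0.6
trust = update_trust(trust, 0.9, 0.8, 0.9, broken_promises=1.0)      # sharp drop
for _ in range(10):
    trust = update_trust(trust, 0.9, 0.8, 0.9, broken_promises=0.0)  # slow recovery
print(round(trust, 2))

The specific numbers are arbitrary; the point is the structure: trust responds to behavior integrated over time, and it recovers far more slowly than it is lost.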

Does culture play an appropriate role in the model and recommendations?

We know that organizational culture affects performance.  And culture is mentioned throughout this report as a system component with the implication that it is an important factor, but it is not defined until a third of the way through the report.****  The NASEM committee apparently assumes everyone knows what culture is, and that’s a problem because groups, even in the same field, often do not share a common definition of culture.

But the lack of a definition doesn’t stop the authors from hanging all sorts of attributes on the culture tree.  For example, the recommendation details include “Nurture (establish and sustain) organizational culture that supports change management, psychological safety, vulnerability, and peer support.” (p. 7)  This is mostly related to getting clinicians to recognize their own burnout and seek help, and removing the social stigma associated with getting help.  There are a lot of moving parts in this recommendation, not the least of which is overcoming the long-held cultural ideal of the physician as a tough, all-knowing, powerful authority figure. 

Teamwork and participatory decision making are promoted (pp. 10, 51) but this can be a major change for organizations that traditionally have strong silos and value adherence to established procedures and protocols. 

There are bromides sprinkled through the report.  For example, “Leadership, policy, culture, and incentives are aligned at all system levels to achieve quality aims and promote integrity, stewardship, and accountability.” (p. 25)  That sounds worthy but is a huge task to specify and implement.  Same with calling for a culture of continuous learning and improvement, or in the committee’s words a “Leadership-instilled culture of learning—is stewarded by leadership committed to a culture of teamwork, collaboration, and adaptability in support of continuous learning as a core aim” (p. 51)

Are the recommendations useful?

We hope so.  We are not behavioral scientists but the recommendations appear to represent sensible actions.  They may help and probably won’t hurt—unless a health care organization makes promises that it cannot or will not keep.  That said, the recommendations are pretty vanilla and the NASEM committee cannot be accused of going out on any limbs.

Bottom line: Clinician burnout undoubtedly has a negative impact on patient care and outcomes.  Anything that can reduce burnout will improve the performance of the health care system.  However, this report does not appreciate the totality of cultural change required to implement the modest recommendations.


*  National Academies of Sciences, Engineering, and Medicine, “Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being,” (Washington, DC: The National Academies Press, 2019). 


**  “Burnout is a syndrome characterized by high emotional exhaustion, high depersonalization (i.e., cynicism), and a low sense of personal accomplishment from work.” (p. 1)  “Clinician burnout is associated with an increased risk of patient safety incidents . . .” (p. 2)

***  As an aside, the word “systems” is mentioned over 700 times in the report.

****  “Organizational culture is defined by the fundamental artifacts, values, beliefs, and assumptions held by employees of an organization (Schein, 1992). An organization’s culture is manifested in its actions (e.g., decisions, resource allocation) and relayed through organizational structure, focus, mission and value alignment, and leadership behaviors” (p. 99)  This is good but it should have been presented earlier in the report.

Wednesday, October 9, 2019

More on Mental Models in Healthcare

Source: Clipart Panda
Our August 6, 2019 post discussed the appalling incidence of preventable harm in healthcare settings.  We suggested that a better mental model of healthcare delivery could contribute to reducing the incidence of preventable harm.  It will come as no surprise to Safetymatters readers that we are referring to a systems-oriented model.

We’ll use a 2014 article* by Nancy Leveson and Sidney Dekker to describe how a systems approach can lead to better understanding of why accidents and other negative outcomes occur.  The authors begin by noting that 70-90% of industrial accidents are blamed on individual workers.**  As a consequence, proposed fixes focus on disciplining, firing, or retraining individuals or, for groups, specifying their work practices in ever greater detail (the authors call this “rigidifying” work).  This is the Safety I mental model in a nutshell, limiting its view to the “what” and “who” of incidents.   

In contrast, systems thinking posits the behavior of individuals can only be understood by examining the context in which their behavior occurs.  The context includes management decision-making and priorities, regulatory requirements and deficiencies, and of course, organizational culture, especially safety culture.  Fixes that don’t consider the overall process almost guarantee that similar problems will arise in the future.  “. . . human error is a symptom of a system that needs to be redesigned.”  Systems thinking adds the “why” to incident analysis.

Every system has a designer, although they may not be identified as such and may not even be aware they’re “designing” when they specify work steps or flows, or define support processes, e.g., procurement or quality control.  Importantly, designers deal with an ideal system, not with the actual constructed system.  The actual system may differ from the designer's original specification because of inherent process variances, the need to address unforeseen conditions, or evolution over time.  Official procedures may be incomplete, e.g., they may omit unlikely but possible conditions or assume that certain conditions cannot occur.  However, the people doing the work must deal with the constructed system, however imperfect, and the conditions that actually occur.

The official procedures present a double-edged threat to employees.  If they adapt procedures in the face of unanticipated conditions, and the adaptation turns out to be ineffective or leads to negative outcomes, employees can be blamed for not following the procedures.  On the other hand, if they stick to the procedures when conditions suggest they should be adapted and negative outcomes occur, the employees can be blamed for too rigidly following them.

Personal blame is a major problem in Safety I.  “Blame is the enemy of safety . . . it creates a culture where people are afraid to report mistakes . . . A safety culture that focuses on blame will never be very effective in preventing accidents.”

Our Perspective

How does the above relate to reducing preventable harm in healthcare?  We believe that structural and cultural factors impede the application of systems thinking in the healthcare field.  These factors keep healthcare stuck in a Safety I worldview no matter how much it pretends otherwise.

The hospital as formal bureaucracy

When we say “healthcare” we are referring to a large organization that provides medical care; a hospital is the smallest unit of analysis.  A hospital is literally a textbook example of what organizational theorists call a formal bureaucracy.  It has specialized departments with an official division of authority among them—silos are deliberately created and maintained.  An administrative hierarchy mediates among the silos and attempts to guide them toward overall goals.  The organization is deliberately impersonal to avoid favoritism and behavior is prescribed, proscribed and guided by formal rules and procedures.  It appears hospitals were deliberately designed to promote Safety I thinking and its inherent bias for blaming the individual for negative outcomes.

Employees have two major strategies for avoiding blame: strong occupational associations and plausible deniability. 

Powerful guilds and unions 


Medical personnel are protected by their silo and tribe.  Department heads defend their employees (and their turf) from outsiders.  The doctors effectively belong to a guild that jealously guards their professional authority; the nurses and other technical fields have their unions.  These unofficial and official organizations exist to protect their members and promote their interests.  They do not exist to protect patients although they certainly tout such interest when they are pushing for increased employee headcounts.  A key cultural value is that members do not rat on other members of their tribe, so problems may be observed but go unreported.

Hiding behind the procedures

In this environment, the actual primary goal is to conform to the rules, not to serve clients.  The safest course for the individual employee is to follow the rules and procedures, independent of the effect this may have on a patient.  The culture espouses a value of patient safety but what gets a higher value is plausible deniability, the ability to avoid personal responsibility, i.e., blame, by hiding behind the established practices and rules when negative outcomes occur.

An enabling environment 


The environment surrounding healthcare allows providers to continue delivering a level of service that literally kills patients.  Data opacity means it’s very difficult to get reliable information on patient outcomes.  Hospitals with high failure rates simply claim they are stuck with or choose to serve the sickest patients.  Weak malpractice laws are promoted by the doctors’ guild and maintained by the politicians they support.  Society in general is overly tolerant of bad medical outcomes.  Some families may make a fuss when a relative dies from inadequate care but settlements are paid, non-disclosure agreements are signed, and the enterprise moves on.

Bottom line: It will take powerful forces to get the healthcare industry to adopt true systems-oriented thinking and identify the real reasons why preventable harm occurs and what corrective actions could be effective.  Healthcare claims to promote evidence-based medicine; the industry needs to add evidence-based harm reduction strategies.  Industry-wide adoption of the aviation industry’s confidential reporting system for errors would be a big step forward.


*  N. Leveson and S. Dekker, “Get To The Root Of Accidents,” ChemicalProcessing.com (Feb 27, 2014).  Retrieved Oct. 7, 2019.  Leveson is an MIT professor and long-standing champion of systems thinking; Dekker has written extensively on Just Culture and Safety II concepts.  Click on their respective labels to pull up our other posts on their work.

**  The article is tailored for the process industry but the same thinking can be applied to service industries.

Tuesday, August 6, 2019

Safety II Lessons for Healthcare

Rod of Asclepius  Source: Wikipedia
We recently saw a journal article* about the incidence of preventable patient harm in medical care settings.  The rate of occurrence of harm is shocking, at least to someone new to the topic.  We wondered if healthcare providers and researchers being constrained by Safety I thinking could be part of the problem.  Below we provide a summary of the article, followed by our perspective on how Safety II thinking and practices might add value.

Incidence of preventable patient harm

The meta-analysis reviewed 70 studies and over 300,000 patients.  The overall incidence of patient harm (e.g., injury, suffering, disability or death) was 12% and half of that was deemed preventable.**  In other words, “Around one in 20 patients are exposed to preventable harm in medical care.”  12% of the preventable patient harm was severe or led to death.  25% of the preventable incidents were related to drugs and 24% to other treatments.  The authors did not observe any change in the preventable harm rate over the 19 years of data they reviewed.
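As a quick check on how the headline numbers fit together (assuming, as a simplification, that the preventable fraction applies directly to the pooled harm rate):

overall_harm_rate = 0.12       # pooled incidence of patient harm reported in the meta-analysis
preventable_fraction = 0.5     # about half of the harm was deemed preventable

preventable_harm_rate = overall_harm_rate * preventable_fraction
print(round(preventable_harm_rate, 2), round(1 / preventable_harm_rate))   # 0.06 17

Six percent is roughly one patient in 17, which the authors round to “around one in 20.”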

Possible interventions

In fairness, the article’s focus was on calculating the incidence of preventable harm, not on identifying or fixing specific problems.  However, the authors do make several observations about possible ways to reduce the incidence rate.  The article had 11 authors so we assume these observations are not just one person’s to-do list but rather represent the collective thoughts of the author group.

The authors note “Key sources of preventable patient harm could include the actions of healthcare professionals (errors of omission or commission), healthcare system failures, or involve a combination of errors made by individuals, system failures, and patient characteristics.”  They believe occurrences could be avoided “by reasonable adaptation to a process, or adherence to guidelines, . . .” 

The authors suggest “A combination of individual-level measures (eg, educational interventions for practitioners), system-level*** measures (eg, human-centred design of healthcare tasks and work environments), and organisational-level measures (eg, introducing quality monitoring and improvement processes) are likely to be a promising strategy for mitigating preventable patient harm, . . .”

Our Perspective

Let’s get one thing out of the way: no other industry on the planet would be allowed to operate if it unnecessarily harmed people at the rate presented in this article.  As a global society, we accept, or at least tolerate, a surprising incidence of preventable harm to the people the healthcare system is supposed to be trying to serve.

We see a direct connection between this article and our Oct. 29, 2018 post where we reviewed Sidney Dekker’s analysis of patient harm in a health care facility.  Dekker’s report also highlighted the differences between the traditional Safety I approach to safety management and the more current Safety II approach.

As we stated in that post, in Safety I the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In addition, the design of the work is subject to constant refinement (or “continuous improvement”).  In the preventable harm article, the authors’ observations look a lot like Safety I to us, with their emphasis on getting the individual to conform with work as designed, e.g., educational interventions (i.e., training), adherence to guidelines and quality monitoring, and improved design (i.e., specification) of healthcare tasks.

In contrast, in Safety II normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors inevitably occur.  When Dekker’s team reviewed cases with harm vs. cases with good outcomes, they observed that the good outcome cases “had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.”  We don’t see any evidence of this approach in the subject article.

Could Safety II thinking reduce the incidence of preventable harm in healthcare?  Possibly.  But what’s clear is that doing more of the same thing (more training, task specification and monitoring) has not improved the preventable harm rate over 19 years.  Maybe it’s time to think about the problems using a different mental model.

Afterword

In a subsequent interview,**** the lead author of the study said providers and health-care systems need to “train and empower patients to be active partners” in their own care.  This is a significant change in the model of the health care system, from the patient being the client of the system to an active component.  Such empowerment is especially important where the patient’s individual characteristics may make him/her more susceptible to harm.  The author’s advice to patients is tantamount to admitting that current approaches to diagnosing and treating patients are producing sub-standard results.


*  M. Panagioti, K. Khan, R.N. Keers,  A. Abuzour, D. Phipps, E. Kontopantelis et al. “Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis,” BMJ 2019; 366:l4185.  Retrieved July 30, 2019.

**  The goal for patient harm is not zero.  The authors accept that “some harms cannot be avoided in clinical practice.”

***  When the authors say “system” they are not referring to the term as we use it in Safetymatters, i.e., a complex collection of components, feedback loops and environmental interactions.  The authors appear to limit the “system” to the immediate context in which healthcare is provided.  They do offer a hint of a larger system when they comment about the “need to gain better insight about the systemic and cultural circumstances under which preventable patient harm occurs”.

****  M. Jagannathan, “In a review of 337,000 patient cases, this was the No. 1 most common preventable medical error,” MarketWatch (July 28, 2019).  Retrieved July 30, 2019.  This article included a list of specific steps patients can take to be more active, informed, and effective partners in obtaining health care.

Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
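To make the consensus/intensity framing concrete, here is a toy sketch in Python (our own illustration, not from the paper) that scores hypothetical norms on both attributes and asks which one wins when they conflict:

from dataclasses import dataclass

@dataclass
class Norm:
    content: str
    consensus: float   # 0-1: how widely the norm is held
    intensity: float   # 0-1: how deeply it is held

def strength(norm: Norm) -> float:
    """Crude proxy for a norm's grip on behavior: it needs both breadth and depth."""
    return norm.consensus * norm.intensity

norms = [
    Norm("safety first", consensus=0.9, intensity=0.3),              # widely agreed, weakly held
    Norm("meet production targets", consensus=0.7, intensity=0.8),
    Norm("protect coworker relationships", consensus=0.8, intensity=0.9),
]

# The safety norm has broad consensus but loses to more intensely held norms,
# i.e., the "weak safety culture" case described above.
print(max(norms, key=strength).content)   # -> protect coworker relationships

The multiplication is an assumption on our part; the takeaway is simply that predicting behavior requires knowing both how widely and how deeply each competing norm is held.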
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate.


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.
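For readers unfamiliar with the distinction, the R-versus-Q difference comes down to which axis of the data matrix gets correlated before factors are extracted.  A minimal illustration in Python with numpy (hypothetical data, and omitting the factor-extraction step itself):

import numpy as np

# Hypothetical data: 30 subjects (rows) rating 5 attributes (columns).
rng = np.random.default_rng(0)
data = rng.normal(size=(30, 5))

r_corr = np.corrcoef(data, rowvar=False)   # R method: 5x5 variable-by-variable correlations
q_corr = np.corrcoef(data, rowvar=True)    # Q method: 30x30 subject-by-subject correlations

print(r_corr.shape, q_corr.shape)   # (5, 5) (30, 30); Q factors are then extracted from q_corr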

Monday, April 1, 2019

Culture Insights from The Speed of Trust by Stephen M.R. Covey

In The Speed of Trust,* Stephen M.R. Covey posits that trust is the key competency that allows individuals (especially leaders), groups, organizations, and societies to work at optimum speed and cost.  In his view, “Leadership is getting results in a way that inspires trust.” (p. 40)  We saw the book mentioned in an NRC personnel development memo** and figured it was worth a look. 

Covey presents a model of trust made up of a framework, language to describe the framework’s components, and a set of recommended behaviors.  The framework consists of self trust, relationship trust and stakeholder trust.  Self trust is about building personal credibility; relationship trust is built on one’s behavior with others; and stakeholder trust is built within organizations, in markets (i.e., with customers), and over the larger society.  His model is not overly complicated but it has a lot of parts, as shown in the following figure.


Figure by Safetymatters

4 Cores of credibility 


Covey begins by describing how the individual can learn to trust him or herself.  This is basically an internal process of developing the 4 Cores of credibility: character attributes (integrity and intent) and competence attributes (capabilities and results).  Improvement in these areas increases self-confidence and one’s ability to project a trust-inspiring strength of character.  Integrity includes clarifying values and following them.  Intent includes a transparent, as opposed to hidden, agenda that drives one’s behavior.  Capabilities include the talents, skills, and knowledge, coupled with continuous improvement, that enable excellent performance.  Results, e.g., achieving goals and keeping commitments, are sine qua non for establishing and maintaining credibility and trust.

13 Behaviors  

The next step is learning how to trust and be trusted by others.  This is a social process, i.e., it is created through individual behavior and interaction with others.  Covey details 13 types of behavior to which the individual must attend.  Some types flow primarily, but not exclusively, from character, others from competence, and still others from a combination of the two.  He notes that “. . . the quickest way to decrease trust is to violate a behavior of character, while the quickest way to increase trust is to demonstrate a behavior of competence.” (p. 133)  Covey provides examples of each desired behavior, its opposite, and its “counterfeit” version, i.e., where people are espousing the desired behavior but actually avoiding doing it.  He describes the problems associated with underdoing and overdoing each behavior (an illustration of the Goldilocks Principle).  Behavioral change is possible if the individual has a compelling sense of purpose.  Each behavior type is guided by a set of principles, different for each behavior, as shown in the following figure.


Figure by Safetymatters

Organizational alignment

The third step is establishing trust throughout an organization.  The primary mechanism for accomplishing this is alignment of the organization’s visible symbols, underlying structures, and systems with the ideals expressed in the 4 Cores and 13 Behaviors, e.g., making and keeping commitments and accounting for results.  He describes the “taxes” associated with a low-trust organization and the “dividends” associated with a high-trust organization.  Beyond that, there is nothing new in this section.

Market and societal trust

We’ll briefly address the final topics.  Market trust is about an entity’s brand or reputation in the outside world.  Building a strong brand involves using the 4 Cores to establish, maintain or strengthen one’s reputation.  Societal trust is built on contribution, the value an entity creates in the world through ethical behavior, win-win business dealings, philanthropy and other forms of corporate social responsibility.     

Our Perspective 


Covey provides a comprehensive model of how trust is integral to relationships at every level of complexity, from the self to global relations.
 
The fundamental importance of trust is not new news.  We have long said organization-wide trust is vital to a strong safety culture.  Trust is a lubricant that reduces organizational friction, which, like physical friction, slows down activities and makes them more expensive.  In our Safetysim*** management simulator, trust was an input variable that affected the speed and effectiveness of problem resolution and overall cost performance.
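As an illustration of the kind of relationship Safetysim captured (the functional form and numbers below are invented for this post; they are not the simulator’s actual code), think of trust as an input that scales how quickly and cheaply problems get resolved:

def resolution_estimate(base_days, base_cost, trust):
    """Toy model: higher trust (0-1) speeds problem resolution and lowers its cost.

    At trust = 1.0 the nominal estimates apply; as trust falls, friction
    (second-guessing, withheld information, rework) inflates both.
    """
    friction = 1.0 + 2.0 * (1.0 - trust)   # assumed linear penalty for distrust
    return round(base_days * friction, 1), round(base_cost * friction)

print(resolution_estimate(10, 50_000, trust=0.9))   # roughly (12.0, 60000)
print(resolution_estimate(10, 50_000, trust=0.4))   # roughly (22.0, 110000)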

Covey’s treatment of culture is incomplete.  While he connects some of his behaviors or principles to organizational culture,**** he never actually defines culture.  It appears he thinks culture is something that “just is” or, perhaps, a consequence or artifact of performing the behaviors he prescribes.  It’s reasonable to assume Covey believes motivated individuals can behave their way to a better culture, saying “. . . behave your way into the person you want to be.” (pp. 87, 130)  His view is consistent with culture change theorists who believe people will eventually develop desired values if they model desired behavior long enough.  His recipe for cultural change boils down to “Just do it.”  We prefer a more explicit definition of culture, something along the spectrum from the straightforward notion of culture as an underlying set of values to the idea of culture as an emergent property of a complex socio-technical system. 

Trust is not the only candidate for the primary leadership or organizational competence.  The same or similar arguments could also be made about respect.  (Covey mentions respect but only as one of his 13 behaviors.)  Two-way respect is also essential for organizational success.  This leads to an interesting question: Could you respect a leader without trusting him/her?  How about some of the famous hard-ass bosses of management lore, like Harold Geneen?  Or General Patton? 

Covey is obviously a true believer in his message and his presentation has a fervor one normally associates with religious zeal.  He also includes many examples of family situations and describes how his prescriptions can be applied to families.  (Helpful if you want to manage your family like a little factory.)  Covey is a devout Mormon and his faith comes through in his writing. 

The book is an easy read.  Like many books written by successful consultants, it is interspersed with endorsements and quotes from business and political notables.  Covey includes a couple of useful self-assessment surveys.  He also offers a valuable observation: “. . . people tend to judge others based on behavior and judge themselves based on intent.” (p. 301)

Bottom line: This book is worth your time if lack of trust is a problem in your organization.


*  Stephen M. R. Covey, The Speed of Trust (New York: Free Press, 2016).  If the author’s name sounds familiar, it may be because his father, Stephen R. Covey, wrote The 7 Habits of Highly Effective People, a popular self-help book.

**  “Fiscal Year (FY) 2018 FEORP Plan Accomplishments and Successful/Promising Practices at the U.S. Nuclear Regulatory Commission (NRC),” Dec. 17, 2018.  ADAMS ML18351A243.  The agency uses The Speed of Trust concepts in manager and employee training. 

***  Safetysim is a management training simulation tool developed by Safetymatters’ Bob Cudlin.

****  For example, “A transparent culture of learning and growing will generally create credibility and trust, . . .” (p. 117)