Thursday, December 19, 2019

Requiescat in pace – Bob Cudlin

Robert L. Cudlin passed away on Nov. 23, 2019. Bob was a co-founder of Safetymatters and a life-long contributor to the nuclear industry. He started at the Nuclear Regulatory Commission where he was a member of the NRC response team at Three Mile Island after the 1979 accident. He later worked on Capitol Hill as the nuclear safety expert for a Senate committee. He spent the bulk of his career consulting to nuclear plant owners, board members, and senior managers. His consulting practice focused on helping clients improve their plants’ safety and reliability performance. Bob was a systems thinker who was constantly looking for new insights into organizational performance and evolution. He will be missed.

Wednesday, November 6, 2019

National Academies of Sciences, Engineering, and Medicine Systems Model of Medical Clinician Burnout, Including Culture Aspects

We have been posting about preventable harm to health care patients, emphasizing how improved organizational mental models and attention to cultural attributes might reduce the incidence of such harm.  A new National Academies of Sciences, Engineering, and Medicine (NASEM) committee report* looks at one likely contributor to the patient harm problem: clinician burnout.**  The NASEM committee purports to use a systems model to analyze burnout and develop strategies for reducing burnout while fostering professional well-being and enhancing patient care.  This post summarizes the 300+ page report and offers our perspective on it.

The Burnout Problem and the Systems Model 


Clinician burnout is caused by stressors in the work environment; burnout can lead to behavioral and health issues for clinicians, clinicians prematurely leaving the healthcare field, and poorer treatment and outcomes for patients.  This widespread problem requires a “systemic approach to burnout that focuses on the structure, organization, and culture of health care.” (p. 3)

The NASEM committee’s systems model has three levels: frontline care delivery, the health care organization, and the external environment.  Frontline care delivery is the environment in which care is provided.  The health care organization includes the organizational culture, payment and reward systems, processes for managing human capital and human resources, the leadership and management style, and organizational policies. The external environment includes political, market, professional, and societal factors.

All three levels contribute to an individual clinician’s work environment, and ultimately boil down to a set of job demands and job resources for the clinician.
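The committee's three-level structure lends itself to a simple sketch. The level names below come from the report; the specific demand and resource factors, and the idea of pooling them into one environment, are our own illustrative assumptions.

```python
# Illustrative sketch of the NASEM three-level systems model. Level names
# follow the report; the example factors and the pooling logic are our own
# simplifying assumptions, not the committee's formulation.

def clinician_work_environment(frontline, organization, external):
    """Combine factors from all three levels into job demands and resources."""
    demands, resources = [], []
    for level in (frontline, organization, external):
        demands.extend(level.get("demands", []))
        resources.extend(level.get("resources", []))
    return {"job_demands": demands, "job_resources": resources}

env = clinician_work_environment(
    frontline={"demands": ["patient load"], "resources": ["team support"]},
    organization={"demands": ["administrative burden"],
                  "resources": ["aligned rewards"]},
    external={"demands": ["regulatory reporting"], "resources": []},
)
print(env["job_demands"])   # every level contributes to the clinician's demands
```

The point of the sketch is that a stressor can originate at any level yet lands in the same place: the individual clinician's set of job demands and job resources.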

Recommendations

The report identifies multiple factors that need to be considered when developing interventions, including organizational values and leadership; a work system that provides adequate resources and facilitates teamwork, collaboration, communication, and professionalism; and an implementation approach that builds a learning organization, aligns reward systems with organizational values, nurtures organizational culture, and uses human-centered design processes. (p. 7)

The report presents six recommendations for reducing clinician burnout and fostering professional well-being:

1. Create positive work environments,
2. Create positive learning environments,
3. Reduce administrative burdens,
4. Optimize the use of health information technologies,
5. Provide support to clinicians to prevent and alleviate burnout, and foster professional well-being, and
6. Invest in research on clinician professional well-being.

Our Perspective

We’ll ask and answer a few questions about this report.

Did the committee design an actual and satisfactory systems model?

We have promoted systems thinking since the inception of Safetymatters so we have some clear notions of what should be included in a systems model.  We see both positives and missing pieces in the NASEM committee’s approach.***

On the plus side, the tri-level model provides a useful and clear depiction of the health care system and leads naturally to an image of the work world each clinician faces.  We believe a model should address certain organizational realities—goal conflict, decision making, and compensation—and this model is minimally satisfactory in these areas.  A clinician’s potential goal conflicts (primarily maintaining a patient focus while satisfying the organization’s quality measures, managing limited resources, achieving economic goals, and complying with regulations) are mentioned only once. (p. 54)  Decision making (DM) specifics are discussed in several areas, including evidence-based DM (p. 25), the patient’s role in DM (p. 53), the burnout threat when clinicians lack input to DM (p. 101), the importance of participatory DM (pp. 134, 157, 288), and information technology as a contributor to DM (p. 201).  Compensation, which includes incentives, should align with organizational values (pp. 10, 278, 288), and should not be a stressor on the individual (p. 153).  Non-financial incentives such as awards and recognition are not mentioned.

On the downside, the model is static and two-dimensional.  The interrelationships and dynamics among model components are not discussed at all.  For example, the importance of trust in management is mentioned (p. 132) but the dynamics of trust are not discussed.  In our experience, “trust” is a multivariate function of, among other things, management’s decisions, follow-through, promise keeping, role modeling, and support of subordinates—all integrated over time.  In addition, model components feed back into one another, both positively and negatively.  In the report, the use of feedback is limited to clinicians’ experiences being fed back to the work designers (pp. 6, 82), continuous learning and improvement in the overall system (pp. 30, 47, 51, 157), and individual work performance recognition (pp. 103, 148).  It is the system dynamics that create homeostasis, fluctuations, and all levels of performance from superior to failure.
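Our claim that trust is a multivariate function integrated over time can be sketched as a toy dynamic model. Everything here, including the weights, the drift toward a neutral level, and the update rule, is an illustrative assumption of ours, not anything from the report.

```python
# Toy dynamic model of "trust in management": each period, observed
# management behaviors (promise keeping, follow-through, role modeling,
# support of subordinates) nudge trust up or down, while trust also
# drifts slowly back toward a neutral 0.5. All weights and rates are
# illustrative assumptions, not empirical values.

def update_trust(trust, behaviors, weights, drift=0.05):
    """One time step: integrate weighted behavior signals into trust in [0, 1]."""
    signal = sum(weights[k] * behaviors.get(k, 0.0) for k in weights)
    trust = trust + signal - drift * (trust - 0.5)
    return max(0.0, min(1.0, trust))

weights = {"promise_keeping": 0.04, "follow_through": 0.03,
           "role_modeling": 0.02, "support": 0.03}

trust = 0.5  # start neutral
for _ in range(4):  # four periods of consistently good behavior
    trust = update_trust(trust, {"promise_keeping": 1.0, "follow_through": 1.0,
                                 "role_modeling": 1.0, "support": 1.0}, weights)

# a single broken promise (negative signal) erodes the accumulated trust
trust_after_breach = update_trust(trust, {"promise_keeping": -1.0}, weights)
print(round(trust, 2), round(trust_after_breach, 2))
```

Even this toy version shows the feedback behavior the report omits: trust accumulates only through repeated consistent behavior, and a single breach undoes part of that accumulation.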

Does culture play an appropriate role in the model and recommendations?

We know that organizational culture affects performance.  And culture is mentioned throughout this report as a system component with the implication that it is an important factor, but it is not defined until a third of the way through the report.****  The NASEM committee apparently assumes everyone knows what culture is, and that’s a problem because groups, even in the same field, often do not share a common definition of culture.

But the lack of a definition doesn’t stop the authors from hanging all sorts of attributes on the culture tree.  For example, the recommendation details include “Nurture (establish and sustain) organizational culture that supports change management, psychological safety, vulnerability, and peer support.” (p. 7)  This is mostly related to getting clinicians to recognize their own burnout and seek help, and removing the social stigma associated with getting help.  There are a lot of moving parts in this recommendation, not the least of which is overcoming the long-held cultural ideal of the physician as a tough, all-knowing, powerful authority figure. 

Teamwork and participatory decision making are promoted (pp. 10, 51) but this can be a major change for organizations that traditionally have strong silos and value adherence to established procedures and protocols. 

There are bromides sprinkled through the report.  For example, “Leadership, policy, culture, and incentives are aligned at all system levels to achieve quality aims and promote integrity, stewardship, and accountability.” (p. 25)  That sounds worthy but is a huge task to specify and implement.  Same with calling for a culture of continuous learning and improvement, or in the committee’s words a “Leadership-instilled culture of learning—is stewarded by leadership committed to a culture of teamwork, collaboration, and adaptability in support of continuous learning as a core aim” (p. 51).

Are the recommendations useful?

We hope so.  We are not behavioral scientists but the recommendations appear to represent sensible actions.  They may help and probably won’t hurt—unless a health care organization makes promises that it cannot or will not keep.  That said, the recommendations are pretty vanilla and the NASEM committee cannot be accused of going out on any limbs.

Bottom line: Clinician burnout undoubtedly has a negative impact on patient care and outcomes.  Anything that can reduce burnout will improve the performance of the health care system.  However, this report does not appreciate the totality of cultural change required to implement the modest recommendations.


*  National Academies of Sciences, Engineering, and Medicine, “Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being,” (Washington, DC: The National Academies Press, 2019). 


**  “Burnout is a syndrome characterized by high emotional exhaustion, high depersonalization (i.e., cynicism), and a low sense of personal accomplishment from work.” (p. 1)  “Clinician burnout is associated with an increased risk of patient safety incidents . . .” (p. 2)

***  As an aside, the word “systems” is mentioned over 700 times in the report.

****  “Organizational culture is defined by the fundamental artifacts, values, beliefs, and assumptions held by employees of an organization (Schein, 1992). An organization’s culture is manifested in its actions (e.g., decisions, resource allocation) and relayed through organizational structure, focus, mission and value alignment, and leadership behaviors” (p. 99).  This is good but it should have been presented earlier in the report.

Wednesday, October 9, 2019

More on Mental Models in Healthcare

Our August 6, 2019 post discussed the appalling incidence of preventable harm in healthcare settings.  We suggested that a better mental model of healthcare delivery could contribute to reducing the incidence of preventable harm.  It will come as no surprise to Safetymatters readers that we are referring to a systems-oriented model.

We’ll use a 2014 article* by Nancy Leveson and Sidney Dekker to describe how a systems approach can lead to better understanding of why accidents and other negative outcomes occur.  The authors begin by noting that 70-90% of industrial accidents are blamed on individual workers.**  As a consequence, proposed fixes focus on disciplining, firing, or retraining individuals or, for groups, specifying their work practices in ever greater detail (the authors call this “rigidifying” work).  This is the Safety I mental model in a nutshell, limiting its view to the “what” and “who” of incidents.   

In contrast, systems thinking posits the behavior of individuals can only be understood by examining the context in which their behavior occurs.  The context includes management decision-making and priorities, regulatory requirements and deficiencies, and of course, organizational culture, especially safety culture.  Fixes that don’t consider the overall process almost guarantee that similar problems will arise in the future.  “. . . human error is a symptom of a system that needs to be redesigned.”  Systems thinking adds the “why” to incident analysis.

Every system has a designer, although they may not be identified as such and may not even be aware they’re “designing” when they specify work steps or flows, or define support processes, e.g., procurement or quality control.  Importantly, designers deal with an ideal system, not with the actual constructed system.  The actual system may differ from the designer's original specification because of inherent process variances, the need to address unforeseen conditions, or evolution over time.  Official procedures may be incomplete, e.g., missing unlikely but possible conditions or assuming that certain conditions cannot occur.  However, the people doing the work must deal with the constructed system, however imperfect, and the conditions that actually occur.

The official procedures present a double-edged threat to employees.  If they adapt procedures in the face of unanticipated conditions, and the adaptation turns out to be ineffective or leads to negative outcomes, employees can be blamed for not following the procedures.  On the other hand, if they stick to the procedures when conditions suggest they should be adapted and negative outcomes occur, the employees can be blamed for too rigidly following them.

Personal blame is a major problem in Safety I.  “Blame is the enemy of safety . . . it creates a culture where people are afraid to report mistakes . . . A safety culture that focuses on blame will never be very effective in preventing accidents.”

Our Perspective

How does the above relate to reducing preventable harm in healthcare?  We believe that structural and cultural factors impede the application of systems thinking in the healthcare field, keeping the industry stuck in a Safety I worldview no matter how much its members pretend otherwise.

The hospital as formal bureaucracy

When we say “healthcare” we are referring to a large organization that provides medical care; the hospital is our smallest unit of analysis.  A hospital is literally a textbook example of what organizational theorists call a formal bureaucracy.  It has specialized departments with an official division of authority among them—silos are deliberately created and maintained.  An administrative hierarchy mediates among the silos and attempts to guide them toward overall goals.  The organization is deliberately impersonal to avoid favoritism, and behavior is prescribed, proscribed, and guided by formal rules and procedures.  It appears hospitals were deliberately designed to promote Safety I thinking and its inherent bias for blaming the individual for negative outcomes.

Employees have two major strategies for avoiding blame: strong occupational associations and plausible deniability. 

Powerful guilds and unions 


Medical personnel are protected by their silo and tribe.  Department heads defend their employees (and their turf) from outsiders.  The doctors effectively belong to a guild that jealously guards their professional authority; the nurses and other technical fields have their unions.  These unofficial and official organizations exist to protect their members and promote their interests.  They do not exist to protect patients, although they certainly tout such interest when they are pushing for increased employee headcounts.  A key cultural value is that members do not rat on other members of their tribe, so problems may be observed but go unreported.

Hiding behind the procedures

In this environment, the actual primary goal is to conform to the rules, not to serve clients.  The safest course for the individual employee is to follow the rules and procedures, independent of the effect this may have on a patient.  The culture espouses a value of patient safety but what gets a higher value is plausible deniability, the ability to avoid personal responsibility, i.e., blame, by hiding behind the established practices and rules when negative outcomes occur.

An enabling environment 


The environment surrounding healthcare allows providers to continue delivering a level of service that literally kills patients.  Data opacity means it’s very difficult to get reliable information on patient outcomes.  Hospitals with high failure rates simply claim they are stuck with, or choose to serve, the sickest patients.  Weak malpractice laws are promoted by the doctors’ guild and maintained by the politicians they support.  Society in general is overly tolerant of bad medical outcomes.  Some families may make a fuss when a relative dies from inadequate care, but settlements are paid, non-disclosure agreements are signed, and the enterprise moves on.

Bottom line: It will take powerful forces to get the healthcare industry to adopt true systems-oriented thinking and identify the real reasons why preventable harm occurs and what corrective actions could be effective.  Healthcare claims to promote evidence-based medicine; the industry needs to add evidence-based harm reduction strategies.  Industry-wide adoption of the aviation industry’s confidential reporting system for errors would be a big step forward.


*  N. Leveson and S. Dekker, “Get To The Root Of Accidents,” ChemicalProcessing.com (Feb 27, 2014).  Retrieved Oct. 7, 2019.  Leveson is an MIT professor and long-standing champion of systems thinking; Dekker has written extensively on Just Culture and Safety II concepts.  Click on their respective labels to pull up our other posts on their work.

**  The article is tailored for the process industry but the same thinking can be applied to service industries.

Tuesday, August 6, 2019

Safety II Lessons for Healthcare

Rod of Asclepius  Source: Wikipedia
We recently saw a journal article* about the incidence of preventable patient harm in medical care settings.  The rate of occurrence of harm is shocking, at least to someone new to the topic.  We wondered if healthcare providers and researchers being constrained by Safety I thinking could be part of the problem.  Below we provide a summary of the article, followed by our perspective on how Safety II thinking and practices might add value.

Incidence of preventable patient harm

The meta-analysis reviewed 70 studies and over 300,000 patients.  The overall incidence of patient harm (e.g., injury, suffering, disability or death) was 12% and half of that was deemed preventable.**  In other words, “Around one in 20 patients are exposed to preventable harm in medical care.”  12% of the preventable patient harm was severe or led to death.  25% of the preventable incidents were related to drugs and 24% to other treatments.  The authors did not observe any change in the preventable harm rate over the 19 years of data they reviewed.
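The headline rates can be checked with simple arithmetic; the percentages below are the article's, and the calculation is only an illustration.

```python
# Check the article's headline rates with simple arithmetic.
overall_harm_rate = 0.12      # 12% of patients experienced some harm
preventable_fraction = 0.5    # about half of that harm was deemed preventable

preventable_rate = overall_harm_rate * preventable_fraction
print(preventable_rate)  # 0.06, i.e., roughly "one in 20" as the article puts it

severe_share = 0.12           # 12% of preventable harm was severe or led to death
print(round(preventable_rate * severe_share, 4))  # share of ALL patients: 0.0072
```

In other words, severe or fatal preventable harm touches roughly 7 of every 1,000 patients, which makes the flat trend over 19 years all the more striking.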

Possible interventions

In fairness, the article’s focus was on calculating the incidence of preventable harm, not on identifying or fixing specific problems.  However, the authors do make several observations about possible ways to reduce the incidence rate.  The article had 11 authors so we assume these observations are not just one person’s to-do list but rather represent the collective thoughts of the author group.

The authors note “Key sources of preventable patient harm could include the actions of healthcare professionals (errors of omission or commission), healthcare system failures, or involve a combination of errors made by individuals, system failures, and patient characteristics.”  They believe occurrences could be avoided “by reasonable adaptation to a process, or adherence to guidelines, . . .” 

The authors suggest “A combination of individual-level measures (eg, educational interventions for practitioners), system-level*** measures (eg, human-centred design of healthcare tasks and work environments), and organisational-level measures (eg, introducing quality monitoring and improvement processes) are likely to be a promising strategy for mitigating preventable patient harm, . . .”

Our Perspective

Let’s get one thing out of the way: no other industry on the planet would be allowed to operate if it unnecessarily harmed people at the rate presented in this article.  As a global society, we accept, or at least tolerate, a surprising incidence of preventable harm to the people the healthcare system is supposed to be trying to serve.

We see a direct connection between this article and our Oct. 29, 2018 post where we reviewed Sidney Dekker’s analysis of patient harm in a health care facility.  Dekker’s report also highlighted the differences between the traditional Safety I approach to safety management and the more current Safety II approach.

As we stated in that post, in Safety I the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In addition, the design of the work is subject to constant refinement (or “continuous improvement”).  In the preventable harm article, the authors’ observations look a lot like Safety I to us, with their emphasis on getting the individual to conform with work as designed, e.g., educational interventions (i.e., training), adherence to guidelines and quality monitoring, and improved design (i.e., specification) of healthcare tasks.

In contrast, in Safety II normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors inevitably occur.  When Dekker’s team reviewed cases with harm vs. cases with good outcomes, they observed that the good outcome cases “had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.”  We don’t see any evidence of this approach in the subject article.

Could Safety II thinking reduce the incidence of preventable harm in healthcare?  Possibly.  But what’s clear is that doing more of the same thing (more training, task specification and monitoring) has not improved the preventable harm rate over 19 years.  Maybe it’s time to think about the problems using a different mental model.

Afterword

In a subsequent interview,**** the lead author of the study said providers and health-care systems need to “train and empower patients to be active partners” in their own care.  This is a significant change in the model of the health care system, from the patient being the client of the system to an active component.  Such empowerment is especially important where the patient’s individual characteristics may make him/her more susceptible to harm.  The author’s advice to patients is tantamount to admitting that current approaches to diagnosing and treating patients are producing sub-standard results.


*  M. Panagioti, K. Khan, R.N. Keers,  A. Abuzour, D. Phipps, E. Kontopantelis et al. “Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis,” BMJ 2019; 366:l4185.  Retrieved July 30, 2019.

**  The goal for patient harm is not zero.  The authors accept that “some harms cannot be avoided in clinical practice.”

***  When the authors say “system” they are not referring to the term as we use it in Safetymatters, i.e., a complex collection of components, feedback loops and environmental interactions.  The authors appear to limit the “system” to the immediate context in which healthcare is provided.  They do offer a hint of a larger system when they comment about the “need to gain better insight about the systemic and cultural circumstances under which preventable patient harm occurs”.

****  M. Jagannathan, “In a review of 337,000 patient cases, this was the No. 1 most common preventable medical error,” MarketWatch (July 28, 2019).  Retrieved July 30, 2019.  This article included a list of specific steps patients can take to be more active, informed, and effective partners in obtaining health care.

Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).
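The authors' norm attributes lend themselves to a simple scoring sketch. The two attributes (consensus and intensity) are theirs; the 0-1 scale and the idea of multiplying them into a single strength score are our illustrative assumptions.

```python
# Toy scoring of a norm's strength as a social control, using the authors'
# attributes: consensus (how widely the norm is held) and intensity (how
# deeply it is held), each on a 0-1 scale. Multiplying them is our
# illustrative assumption, not the authors' formula.

def norm_strength(consensus, intensity):
    """A norm exerts strong control only when it is both widely and deeply held."""
    return consensus * intensity

# A safety norm that is widely agreed upon but not intensely held
weak_safety = norm_strength(consensus=0.9, intensity=0.3)

# A coworker-relations norm that is intensely held by most of the group
coworker = norm_strength(consensus=0.7, intensity=0.9)

print(round(weak_safety, 2), round(coworker, 2))  # 0.27 0.63
```

The sketch makes the later point concrete: a widely held but weakly felt safety norm can be outmuscled by a less universal norm that people hold intensely.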

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate.


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.

Monday, April 1, 2019

Culture Insights from The Speed of Trust by Stephen M.R. Covey

In The Speed of Trust,* Stephen M.R. Covey posits that trust is the key competency that allows individuals (especially leaders), groups, organizations, and societies to work at optimum speed and cost.  In his view, “Leadership is getting results in a way that inspires trust.” (p. 40)  We saw the book mentioned in an NRC personnel development memo** and figured it was worth a look. 

Covey presents a model of trust made up of a framework, language to describe the framework’s components, and a set of recommended behaviors.  The framework consists of self trust, relationship trust and stakeholder trust.  Self trust is about building personal credibility; relationship trust is built on one’s behavior with others; and stakeholder trust is built within organizations, in markets (i.e., with customers), and over the larger society.  His model is not overly complicated but it has a lot of parts, as shown in the following figure.


Figure by Safetymatters

4 Cores of credibility 


Covey begins by describing how the individual can learn to trust him or herself.  This is basically an internal process of developing the 4 Cores of credibility: character attributes (integrity and intent) and competence attributes (capabilities and results).  Improvement in these areas increases self-confidence and one’s ability to project a trust-inspiring strength of character.  Integrity includes clarifying one’s values and following them.  Intent includes a transparent, as opposed to hidden, agenda that drives one’s behavior.  Capabilities include the talents, skills, and knowledge, coupled with continuous improvement, that enable excellent performance.  Results, e.g., achieving goals and keeping commitments, are the sine qua non for establishing and maintaining credibility and trust.

13 Behaviors  

The next step is learning how to trust and be trusted by others.  This is a social process, i.e., it is created through individual behavior and interaction with others.  Covey details 13 types of behavior to which the individual must attend.  Some types flow primarily, but not exclusively, from character, others from competence, and still others from a combination of the two.  He notes that “. . . the quickest way to decrease trust is to violate a behavior of character, while the quickest way to increase trust is to demonstrate a behavior of competence.” (p. 133)  Covey provides examples of each desired behavior, its opposite, and its “counterfeit” version, i.e., where people are espousing the desired behavior but actually avoiding doing it.  He describes the problems associated with underdoing and overdoing each behavior (an illustration of the Goldilocks Principle).  Behavioral change is possible if the individual has a compelling sense of purpose.  Each behavior type is guided by a set of principles, different for each behavior, as shown in the following figure.


Figure by Safetymatters

Organizational alignment

The third step is establishing trust throughout an organization.  The primary mechanism for accomplishing this is alignment of the organization’s visible symbols, underlying structures, and systems with the ideals expressed in the 4 Cores and 13 Behaviors, e.g., making and keeping commitments and accounting for results.  He describes the “taxes” associated with a low-trust organization and the “dividends” associated with a high-trust organization.  Beyond that, there is nothing new in this section.

Market and societal trust

We’ll briefly address the final topics.  Market trust is about an entity’s brand or reputation in the outside world.  Building a strong brand involves using the 4 Cores to establish, maintain or strengthen one’s reputation.  Societal trust is built on contribution, the value an entity creates in the world through ethical behavior, win-win business dealings, philanthropy and other forms of corporate social responsibility.     

Our Perspective 


Covey provides a comprehensive model of how trust is integral to relationships at every level of complexity, from the self to global relations.
 
The fundamental importance of trust is not news.  We have long said organization-wide trust is vital to a strong safety culture.  Lack of trust creates organizational friction which, like physical friction, slows down activities and makes them more expensive; trust acts as the lubricant that reduces it.  In our Safetysim*** management simulator, trust was an input variable that affected the speed and effectiveness of problem resolution and overall cost performance. 

Covey’s treatment of culture is incomplete.  While he connects some of his behaviors or principles to organizational culture,**** he never actually defines culture.  It appears he thinks culture is something that “just is” or, perhaps, a consequence or artifact of performing the behaviors he prescribes.  It’s reasonable to assume Covey believes motivated individuals can behave their way to a better culture, saying “. . . behave your way into the person you want to be.” (pp. 87, 130)  His view is consistent with culture change theorists who believe people will eventually develop desired values if they model desired behavior long enough.  His recipe for cultural change boils down to “Just do it.”  We prefer a more explicit definition of culture, something along the spectrum from the straightforward notion of culture as an underlying set of values to the idea of culture as an emergent property of a complex socio-technical system. 

Trust is not the only candidate for the primary leadership or organizational competence.  The same or similar arguments could be made for respect.  (Covey mentions respect but only as one of his 13 behaviors.)  Two-way respect is also essential for organizational success.  This leads to an interesting question: Could you respect a leader without trusting him or her?  How about some of the famous hard-ass bosses of management lore, like Harold Geneen?  Or General Patton? 

Covey is obviously a true believer in his message and his presentation has a fervor one normally associates with religious zeal.  He also includes many examples of family situations and describes how his prescriptions can be applied to families.  (Helpful if you want to manage your family like a little factory.)  Covey is a devout Mormon and his faith comes through in his writing. 

The book is an easy read.  Like many books written by successful consultants, it is interspersed with endorsements and quotes from business and political notables.  Covey includes a couple of useful self-assessment surveys.  He also offers a valuable observation: “. . . people tend to judge others based on behavior and judge themselves based on intent.” (p. 301)

Bottom line: This book is worth your time if lack of trust is a problem in your organization.


*  Stephen M. R. Covey, The Speed of Trust (New York: Free Press, 2016).  If the author’s name sounds familiar, it may be because his father, Stephen R. Covey, wrote The 7 Habits of Highly Effective People, a popular self-help book.

**  “Fiscal Year (FY) 2018 FEORP Plan Accomplishments and Successful/Promising Practices at the U.S. Nuclear Regulatory Commission (NRC),” Dec. 17, 2018.  ADAMS ML18351A243.  The agency uses The Speed of Trust concepts in manager and employee training. 

***  Safetysim is a management training simulation tool developed by Safetymatters’ Bob Cudlin.

****  For example, “A transparent culture of learning and growing will generally create credibility and trust, . . .” (p. 117)

Friday, March 8, 2019

Decision Making, Values, and Culture Change

Typical New Yorker cover
In the nuclear industry, most decisions are at least arguably “hard,” i.e., decision makers can agree on the facts and identify areas where there is risk or uncertainty.  A recent New Yorker article* on making an indisputably “soft” decision got us wondering if the methods and philosophy described in the article might provide some insight into qualitative personal decisions in the nuclear space.

Author Joshua Rothman’s interest in decision making was piqued by the impending birth of his first child.  When exactly did he decide that he wanted children (after not wanting them) and then participate with his wife to make it happen?  As he says, “If I made a decision, it wasn’t a very decisive one.”  Thus began his research into decision making methods and philosophy.

Rothman opens with a quick review of several decision making techniques.  He describes Benjamin Franklin’s “prudential algebra,” Charles Darwin’s lists of pros and cons, Leo Tolstoy’s expositions in War and Peace (where it appears the biggest decisions basically make themselves), and modern decision science processes that develop decisions through iterative activities performed by groups, scenario planning and war games. 

Eventually the author gets to decision theory, which holds that sound decisions flow from values.  Decision makers ask what they value and then seek to maximize it.  But what if “we’re unsure what we care about, or when we anticipate that what we care about might shift”?  What if we opt to change our values? 

The focus on values leads to philosophy.  Rothman draws heavily on the work of Agnes Callard, a philosopher at the University of Chicago, who believes that life-altering decisions are not made suddenly but through a more gradual process: “Old Person aspires to become New Person.”  Callard emphasizes that aspiration is different from ambition.  Ambitious people know exactly why they’re doing something, e.g., taking a class to get a good grade or modeling different behavior to satisfy regulatory scrutiny.  Aspirants, on the other hand, have a harder time because they have a less clear sense of their current activities’ value and can only hope their future selves can understand and appreciate it.  “To aspire, Callard writes, is to judge one’s present-day self by the standards of a future self who doesn’t yet exist.”

Our Perspective

We can consider the change of an organization’s culture as the integration over time of the changes in all its members’ behaviors and values.  We know that values underlie culture and that significant cultural change requires shifting the actual (as opposed to the espoused) values of the organization.  This is not easy.  The organization’s more ambitious members will find it easier to get with the program; they know change is essential and are willing to adapt to keep their jobs or improve their standing.  The merely aspiring will have a harder time.  Because they lack a clear picture of the future organizational culture, they may be troubled by unexplored options, i.e., some different path or future that might be equally good or even better.  They may learn that no matter how deeply they study the experience of others, they still don’t really know what they’re getting into.  They don’t understand what the change experience will be like or how it will affect them.  They may be frustrated to discover that modeling desired new behaviors does not help because they still feel like the same people in the old culture.  Since personal change is not instantaneous, they may even get stuck somewhere between the old culture and the new culture.

Bottom line: Cultural change is harder for some people than others.  This article is an easy read that offers an introduction to the personal dynamics associated with changing one’s outlook or values.

*  J. Rothman, “The Art of Decision-Making,” The New Yorker (Jan. 21, 2019).  Retrieved March 1, 2019.