
Wednesday, November 6, 2019

National Academies of Sciences, Engineering, and Medicine Systems Model of Medical Clinician Burnout, Including Cultural Aspects

Source: Medical Academic S. Africa
We have been posting about preventable harm to health care patients, emphasizing how improved organizational mental models and attention to cultural attributes might reduce the incidence of such harm.  A new National Academies of Sciences, Engineering, and Medicine (NASEM) committee report* looks at one likely contributor to the patient harm problem: clinician burnout.**  The NASEM committee purports to use a systems model to analyze burnout and develop strategies for reducing burnout while fostering professional well-being and enhancing patient care.  This post summarizes the 300+ page report and offers our perspective on it.

The Burnout Problem and the Systems Model 


Clinician burnout is caused by stressors in the work environment; burnout can lead to behavioral and health issues for clinicians, clinicians prematurely leaving the healthcare field, and poorer treatment and outcomes for patients.  This widespread problem requires a “systemic approach to burnout that focuses on the structure, organization, and culture of health care.” (p. 3)

The NASEM committee’s systems model has three levels: frontline care delivery, the health care organization, and the external environment.  Frontline care delivery is the environment in which care is provided.  The health care organization includes the organizational culture, payment and reward systems, processes for managing human capital and human resources, the leadership and management style, and organizational policies. The external environment includes political, market, professional, and societal factors.

All three levels contribute to an individual clinician’s work environment, and ultimately boil down to a set of job demands and job resources for the clinician.

Recommendations

The report identifies multiple factors that need to be considered when developing interventions, including organizational values and leadership; a work system that provides adequate resources and facilitates teamwork, collaboration, communication, and professionalism; and an implementation approach that builds a learning organization, aligns reward systems with organizational values, nurtures organizational culture, and uses human-centered design processes. (p. 7)

The report presents six recommendations for reducing clinician burnout and fostering professional well-being:

1. Create positive work environments,
2. Create positive learning environments,
3. Reduce administrative burdens,
4. Optimize the use of health information technologies,
5. Provide support to clinicians to prevent and alleviate burnout, and foster professional well-being, and
6. Invest in research on clinician professional well-being.

Our Perspective

We’ll ask and answer a few questions about this report.

Did the committee design an actual and satisfactory systems model?

We have promoted systems thinking since the inception of Safetymatters so we have some clear notions of what should be included in a systems model.  We see both positives and missing pieces in the NASEM committee’s approach.***

On the plus side, the tri-level model provides a useful and clear depiction of the health care system and leads naturally to an image of the work world each clinician faces.  We believe a model should address certain organizational realities—goal conflict, decision making, and compensation—and this model is minimally satisfactory in these areas.  A clinician’s potential goal conflicts, primarily maintaining a patient focus while satisfying the organization’s quality measures, managing limited resources, achieving economic goals, and complying with regulations, are mentioned once. (p. 54)  Decision making (DM) specifics are discussed in several areas, including evidence-based DM (p. 25), the patient’s role in DM (p. 53), the burnout threat when clinicians lack input to DM (p. 101), the importance of participatory DM (pp. 134, 157, 288), and information technology as a contributor to DM (p. 201).  Compensation, which includes incentives, should align with organizational values (pp. 10, 278, 288) and should not be a stressor on the individual (p. 153).  Non-financial incentives such as awards and recognition are not mentioned.

On the downside, the model is static and two-dimensional.  The interrelationships and dynamics among model components are not discussed at all.  For example, the importance of trust in management is mentioned (p. 132) but the dynamics of trust are not discussed.  In our experience, “trust” is a multivariate function of, among other things, management’s decisions, follow-through, promise keeping, role modeling, and support of subordinates—all integrated over time.  In addition, model components feed back into one another, both positively and negatively.  In the report, the use of feedback is limited to clinicians’ experiences being fed back to the work designers (pp. 6, 82), continuous learning and improvement in the overall system (pp. 30, 47, 51, 157), and individual work performance recognition (pp. 103, 148).  It is the system dynamics that create homeostasis, fluctuations, and all levels of performance from superior to failure.
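To make the dynamics point concrete, here is a minimal system-dynamics sketch, entirely our own illustration rather than anything in the NASEM report, of trust as a stock that is built up or drawn down by management behavior over time; the coefficients and the broken-promise penalty are assumed values chosen only to show feedback and integration over time.

```python
# Toy system-dynamics sketch: trust as a stock integrated over time.
# All coefficients are illustrative assumptions, not values from the report.

def update_trust(trust, kept_promise, support_shown, decay=0.02):
    """Advance the trust 'stock' one period based on management behavior."""
    if kept_promise:
        trust += 0.05          # follow-through slowly builds trust
    else:
        trust -= 0.20          # a broken promise erodes trust much faster
    if support_shown:
        trust += 0.03          # visible support of subordinates helps
    trust -= decay * trust     # trust fades if not actively maintained
    return max(0.0, min(1.0, trust))

trust = 0.5  # start at a neutral level
history = [("keep", True), ("keep", True), ("break", False), ("keep", True)]
for label, kept in history:
    trust = update_trust(trust, kept_promise=kept, support_shown=True)
    print(f"after {label}-promise period: trust = {trust:.2f}")
```

Even a toy like this shows an asymmetry a static model cannot represent: trust accumulates slowly through repeated follow-through but can be lost in a single step.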

Does culture play an appropriate role in the model and recommendations?

We know that organizational culture affects performance.  And culture is mentioned throughout this report as a system component with the implication that it is an important factor, but it is not defined until a third of the way through the report.****  The NASEM committee apparently assumes everyone knows what culture is, and that’s a problem because groups, even in the same field, often do not share a common definition of culture.

But the lack of a definition doesn’t stop the authors from hanging all sorts of attributes on the culture tree.  For example, the recommendation details include “Nurture (establish and sustain) organizational culture that supports change management, psychological safety, vulnerability, and peer support.” (p. 7)  This is mostly related to getting clinicians to recognize their own burnout and seek help, and removing the social stigma associated with getting help.  There are a lot of moving parts in this recommendation, not the least of which is overcoming the long-held cultural ideal of the physician as a tough, all-knowing, powerful authority figure. 

Teamwork and participatory decision making are promoted (pp. 10, 51) but this can be a major change for organizations that traditionally have strong silos and value adherence to established procedures and protocols. 

There are bromides sprinkled through the report.  For example, “Leadership, policy, culture, and incentives are aligned at all system levels to achieve quality aims and promote integrity, stewardship, and accountability.” (p. 25)  That sounds worthy but is a huge task to specify and implement.  Same with calling for a culture of continuous learning and improvement, or in the committee’s words a “Leadership-instilled culture of learning—is stewarded by leadership committed to a culture of teamwork, collaboration, and adaptability in support of continuous learning as a core aim” (p. 51).

Are the recommendations useful?

We hope so.  We are not behavioral scientists but the recommendations appear to represent sensible actions.  They may help and probably won’t hurt—unless a health care organization makes promises that it cannot or will not keep.  That said, the recommendations are pretty vanilla and the NASEM committee cannot be accused of going out on any limbs.

Bottom line: Clinician burnout undoubtedly has a negative impact on patient care and outcomes.  Anything that can reduce burnout will improve the performance of the health care system.  However, this report does not appreciate the totality of cultural change required to implement the modest recommendations.


*  National Academies of Sciences, Engineering, and Medicine, “Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being,” (Washington, DC: The National Academies Press, 2019). 


**  “Burnout is a syndrome characterized by high emotional exhaustion, high depersonalization (i.e., cynicism), and a low sense of personal accomplishment from work.” (p. 1)  “Clinician burnout is associated with an increased risk of patient safety incidents . . .” (p. 2)

***  As an aside, the word “systems” is mentioned over 700 times in the report.

****  “Organizational culture is defined by the fundamental artifacts, values, beliefs, and assumptions held by employees of an organization (Schein, 1992). An organization’s culture is manifested in its actions (e.g., decisions, resource allocation) and relayed through organizational structure, focus, mission and value alignment, and leadership behaviors” (p. 99)  This is good but it should have been presented earlier in the report.

Wednesday, October 9, 2019

More on Mental Models in Healthcare

Source: Clipart Panda
Our August 6, 2019 post discussed the appalling incidence of preventable harm in healthcare settings.  We suggested that a better mental model of healthcare delivery could contribute to reducing the incidence of preventable harm.  It will come as no surprise to Safetymatters readers that we are referring to a systems-oriented model.

We’ll use a 2014 article* by Nancy Leveson and Sidney Dekker to describe how a systems approach can lead to better understanding of why accidents and other negative outcomes occur.  The authors begin by noting that 70-90% of industrial accidents are blamed on individual workers.**  As a consequence, proposed fixes focus on disciplining, firing, or retraining individuals or, for groups, specifying their work practices in ever greater detail (the authors call this “rigidifying” work).  This is the Safety I mental model in a nutshell, limiting its view to the “what” and “who” of incidents.   

In contrast, systems thinking posits the behavior of individuals can only be understood by examining the context in which their behavior occurs.  The context includes management decision-making and priorities, regulatory requirements and deficiencies, and of course, organizational culture, especially safety culture.  Fixes that don’t consider the overall process almost guarantee that similar problems will arise in the future.  “. . . human error is a symptom of a system that needs to be redesigned.”  Systems thinking adds the “why” to incident analysis.

Every system has a designer, although they may not be identified as such and may not even be aware they’re “designing” when they specify work steps or flows, or define support processes, e.g., procurement or quality control.  Importantly, designers deal with an ideal system, not with the actual constructed system.  The actual system may differ from the designer's original specification because of inherent process variances, the need to address unforeseen conditions, or evolution over time.  Official procedures may be incomplete, e.g., missing unlikely but possible conditions or assuming that certain conditions cannot occur.  However, the people doing the work must deal with the constructed system, however imperfect, and the conditions that actually occur.

The official procedures present a double-edged threat to employees.  If they adapt procedures in the face of unanticipated conditions, and the adaptation turns out to be ineffective or leads to negative outcomes, employees can be blamed for not following the procedures.  On the other hand, if they stick to the procedures when conditions suggest they should be adapted and negative outcomes occur, the employees can be blamed for following them too rigidly.

Personal blame is a major problem in Safety I.  “Blame is the enemy of safety . . . it creates a culture where people are afraid to report mistakes . . . A safety culture that focuses on blame will never be very effective in preventing accidents.”

Our Perspective

How does the above relate to reducing preventable harm in healthcare?  We believe that structural and cultural factors impede the application of systems thinking in the healthcare field.  These factors keep healthcare stuck in a Safety I worldview no matter how much the industry pretends otherwise.

The hospital as formal bureaucracy

When we say “healthcare” we are referring to a large organization that provides medical care; the hospital is the smallest unit of analysis.  A hospital is literally a textbook example of what organizational theorists call a formal bureaucracy.  It has specialized departments with an official division of authority among them—silos are deliberately created and maintained.  An administrative hierarchy mediates among the silos and attempts to guide them toward overall goals.  The organization is deliberately impersonal to avoid favoritism, and behavior is prescribed, proscribed, and guided by formal rules and procedures.  It appears hospitals were deliberately designed to promote Safety I thinking and its inherent bias for blaming the individual for negative outcomes.

Employees have two major strategies for avoiding blame: strong occupational associations and plausible deniability. 

Powerful guilds and unions 


Medical personnel are protected by their silo and tribe.  Department heads defend their employees (and their turf) from outsiders.  The doctors effectively belong to a guild that jealously guards their professional authority; the nurses and other technical fields have their unions.  These unofficial and official organizations exist to protect their members and promote their interests.  They do not exist to protect patients, although they certainly tout such an interest when pushing for increased employee headcounts.  A key cultural value is that members do not rat on other members of their tribe, so problems may be observed but go unreported.

Hiding behind the procedures

In this environment, the actual primary goal is to conform to the rules, not to serve clients.  The safest course for the individual employee is to follow the rules and procedures, independent of the effect this may have on a patient.  The culture espouses a value of patient safety but what gets a higher value is plausible deniability, the ability to avoid personal responsibility, i.e., blame, by hiding behind the established practices and rules when negative outcomes occur.

An enabling environment 


The environment surrounding healthcare allows providers to continue delivering a level of service that literally kills patients.  Data opacity means it’s very difficult to get reliable information on patient outcomes.  Hospitals with high failure rates simply claim they are stuck with, or choose to serve, the sickest patients.  Weak malpractice laws are promoted by the doctors’ guild and maintained by the politicians they support.  Society in general is overly tolerant of bad medical outcomes.  Some families may make a fuss when a relative dies from inadequate care, but settlements are paid, non-disclosure agreements are signed, and the enterprise moves on.

Bottom line: It will take powerful forces to get the healthcare industry to adopt true systems-oriented thinking and identify the real reasons why preventable harm occurs and what corrective actions could be effective.  Healthcare claims to promote evidence-based medicine; it needs to add evidence-based harm reduction strategies.  Industry-wide adoption of the aviation industry’s confidential reporting system for errors would be a big step forward.


*  N. Leveson and S. Dekker, “Get To The Root Of Accidents,” ChemicalProcessing.com (Feb 27, 2014).  Retrieved Oct. 7, 2019.  Leveson is an MIT professor and long-standing champion of systems thinking; Dekker has written extensively on Just Culture and Safety II concepts.  Click on their respective labels to pull up our other posts on their work.

**  The article is tailored for the process industry but the same thinking can be applied to service industries.

Tuesday, August 6, 2019

Safety II Lessons for Healthcare

Rod of Asclepius  Source: Wikipedia
We recently saw a journal article* about the incidence of preventable patient harm in medical care settings.  The rate of occurrence of harm is shocking, at least to someone new to the topic.  We wondered if healthcare providers and researchers being constrained by Safety I thinking could be part of the problem.  Below we provide a summary of the article, followed by our perspective on how Safety II thinking and practices might add value.

Incidence of preventable patient harm

The meta-analysis reviewed 70 studies and over 300,000 patients.  The overall incidence of patient harm (e.g., injury, suffering, disability or death) was 12% and half of that was deemed preventable.**  In other words, “Around one in 20 patients are exposed to preventable harm in medical care.”  12% of the preventable patient harm was severe or led to death.  25% of the preventable incidents were related to drugs and 24% to other treatments.  The authors did not observe any change in the preventable harm rate over the 19 years of data they reviewed.
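To see how the quoted percentages fit together, here is a minimal arithmetic sketch; it is our own back-of-the-envelope calculation using only the figures cited above, not an analysis from the paper.

```python
# Arithmetic sketch using only the figures quoted from the meta-analysis.
overall_harm = 0.12          # 12% of patients experienced harm
preventable_share = 0.5      # about half of that harm was deemed preventable
severe_share = 0.12          # 12% of the preventable harm was severe or fatal

preventable_harm = overall_harm * preventable_share   # ~0.06, the basis for the article's "around one in 20"
severe_preventable = preventable_harm * severe_share  # ~0.007, roughly 7 per 1,000 patients

print(f"preventable harm rate: {preventable_harm:.1%}")
print(f"severe or fatal preventable harm rate: {severe_preventable:.2%}")
```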

Possible interventions

In fairness, the article’s focus was on calculating the incidence of preventable harm, not on identifying or fixing specific problems.  However, the authors do make several observations about possible ways to reduce the incidence rate.  The article had 11 authors so we assume these observations are not just one person’s to-do list but rather represent the collective thoughts of the author group.

The authors note “Key sources of preventable patient harm could include the actions of healthcare professionals (errors of omission or commission), healthcare system failures, or involve a combination of errors made by individuals, system failures, and patient characteristics.”  They believe occurrences could be avoided “by reasonable adaptation to a process, or adherence to guidelines, . . .” 

The authors suggest “A combination of individual-level measures (eg, educational interventions for practitioners), system-level*** measures (eg, human-centred design of healthcare tasks and work environments), and organisational-level measures (eg, introducing quality monitoring and improvement processes) are likely to be a promising strategy for mitigating preventable patient harm, . . .”

Our Perspective

Let’s get one thing out of the way: no other industry on the planet would be allowed to operate if it unnecessarily harmed people at the rate presented in this article.  As a global society, we accept, or at least tolerate, a surprising incidence of preventable harm to the people the healthcare system is supposed to be trying to serve.

We see a direct connection between this article and our Oct. 29, 2018 post where we reviewed Sidney Dekker’s analysis of patient harm in a health care facility.  Dekker’s report also highlighted the differences between the traditional Safety I approach to safety management and the more current Safety II approach.

As we stated in that post, in Safety I the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In addition, the design of the work is subject to constant refinement (or “continuous improvement”).  In the preventable harm article, the authors’ observations look a lot like Safety I to us, with their emphasis on getting the individual to conform with work as designed, e.g., educational interventions (i.e., training), adherence to guidelines and quality monitoring, and improved design (i.e., specification) of healthcare tasks.

In contrast, in Safety II normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors inevitably occur.  When Dekker’s team reviewed cases with harm vs. cases with good outcomes, they observed that the good outcome cases “had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.”  We don’t see any evidence of this approach in the subject article.

Could Safety II thinking reduce the incidence of preventable harm in healthcare?  Possibly.  But what’s clear is that doing more of the same thing (more training, task specification and monitoring) has not improved the preventable harm rate over 19 years.  Maybe it’s time to think about the problems using a different mental model.

Afterword

In a subsequent interview,**** the lead author of the study said providers and health-care systems need to “train and empower patients to be active partners” in their own care.  This is a significant change in the model of the health care system, from the patient being the client of the system to being an active component.  Such empowerment is especially important where the patient’s individual characteristics may make him/her more susceptible to harm.  The author’s advice to patients is tantamount to admitting that current approaches to diagnosing and treating patients are producing sub-standard results.


*  M. Panagioti, K. Khan, R.N. Keers,  A. Abuzour, D. Phipps, E. Kontopantelis et al. “Prevalence, severity, and nature of preventable patient harm across medical care settings: systematic review and meta-analysis,” BMJ 2019; 366:l4185.  Retrieved July 30, 2019.

**  The goal for patient harm is not zero.  The authors accept that “some harms cannot be avoided in clinical practice.”

***  When the authors say “system” they are not referring to the term as we use it in Safetymatters, i.e., a complex collection of components, feedback loops and environmental interactions.  The authors appear to limit the “system” to the immediate context in which healthcare is provided.  They do offer a hint of a larger system when they comment about the “need to gain better insight about the systemic and cultural circumstances under which preventable patient harm occurs”.

****  M. Jagannathan, “In a review of 337,000 patient cases, this was the No. 1 most common preventable medical error,” MarketWatch (July 28, 2019).  Retrieved July 30, 2019.  This article included a list of specific steps patients can take to be more active, informed, and effective partners in obtaining health care.

Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
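To make the consensus and intensity framing concrete, here is a minimal sketch, our own toy illustration rather than the authors' instrument, that scores a few hypothetical competing norms on assumed consensus and intensity values; the norms, the numbers, and the scoring rule are all made up for illustration.

```python
# Toy illustration of norms characterized by consensus (how widely held)
# and intensity (how deeply held).  All values and the scoring rule are
# hypothetical, chosen only to illustrate the framework.

norms = {
    "patient safety":        {"consensus": 0.9, "intensity": 0.4},
    "meet production goals": {"consensus": 0.6, "intensity": 0.7},
    "protect coworker ties": {"consensus": 0.8, "intensity": 0.9},
}

def influence(norm):
    """Crude proxy for a norm's pull on behavior when norms conflict."""
    return norm["consensus"] * norm["intensity"]

ranked = sorted(norms.items(), key=lambda kv: influence(kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{name:<24} influence = {influence(attrs):.2f}")
# A widely shared but weakly held safety norm can lose out to an intensely
# held norm about coworker relations, which is one way a weak safety culture shows up.
```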
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate. 


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.

Monday, April 1, 2019

Culture Insights from The Speed of Trust by Stephen M.R. Covey

In The Speed of Trust,* Stephen M.R. Covey posits that trust is the key competency that allows individuals (especially leaders), groups, organizations, and societies to work at optimum speed and cost.  In his view, “Leadership is getting results in a way that inspires trust.” (p. 40)  We saw the book mentioned in an NRC personnel development memo** and figured it was worth a look. 

Covey presents a model of trust made up of a framework, language to describe the framework’s components, and a set of recommended behaviors.  The framework consists of self trust, relationship trust and stakeholder trust.  Self trust is about building personal credibility; relationship trust is built on one’s behavior with others; and stakeholder trust is built within organizations, in markets (i.e., with customers), and over the larger society.  His model is not overly complicated but it has a lot of parts, as shown in the following figure.


Figure by Safetymatters

4 Cores of credibility 


Covey begins by describing how the individual can learn to trust him or herself.  This is basically an internal process of developing the 4 Cores of credibility: character attributes (integrity and intent) and competence attributes (capabilities and results).  Improvement in these areas increases self-confidence and one’s ability to project a trust-inspiring strength of character.  Integrity includes clarifying values and following them.  Intent includes a transparent, as opposed to hidden, agenda that drives one’s behavior.  Capabilities include the talents, skills, and knowledge, coupled with continuous improvement, that enable excellent performance.  Results, e.g., achieving goals and keeping commitments, are sine qua non for establishing and maintaining credibility and trust.

13 Behaviors  

The next step is learning how to trust and be trusted by others.  This is a social process, i.e., it is created through individual behavior and interaction with others.  Covey details 13 types of behavior to which the individual must attend.  Some types flow primarily, but not exclusively, from character, others from competence, and still others from a combination of the two.  He notes that “. . . the quickest way to decrease trust is to violate a behavior of character, while the quickest way to increase trust is to demonstrate a behavior of competence.” (p. 133)  Covey provides examples of each desired behavior, its opposite, and its “counterfeit” version, i.e., where people are espousing the desired behavior but actually avoiding doing it.  He describes the problems associated with underdoing and overdoing each behavior (an illustration of the Goldilocks Principle).  Behavioral change is possible if the individual has a compelling sense of purpose.  Each behavior type is guided by a set of principles, different for each behavior, as shown in the following figure.


Figure by Safetymatters

Organizational alignment

The third step is establishing trust throughout an organization.  The primary mechanism for accomplishing this is alignment of the organization’s visible symbols, underlying structures, and systems with the ideals expressed in the 4 Cores and 13 Behaviors, e.g., making and keeping commitments and accounting for results.  He describes the “taxes” associated with a low-trust organization and the “dividends” associated with a high-trust organization.  Beyond that, there is nothing new in this section.

Market and societal trust

We’ll briefly address the final topics.  Market trust is about an entity’s brand or reputation in the outside world.  Building a strong brand involves using the 4 Cores to establish, maintain or strengthen one’s reputation.  Societal trust is built on contribution, the value an entity creates in the world through ethical behavior, win-win business dealings, philanthropy and other forms of corporate social responsibility.     

Our Perspective 


Covey provides a comprehensive model of how trust is integral to relationships at every level of complexity, from the self to global relations.
 
The fundamental importance of trust is not new news.  We have long said organization-wide trust is vital to a strong safety culture.  Trust is a lubricant against organizational friction, which, like physical friction, slows down activities and makes them more expensive.  In our Safetysim*** management simulator, trust was an input variable that affected the speed and effectiveness of problem resolution and overall cost performance. 

Covey’s treatment of culture is incomplete.  While he connects some of his behaviors or principles to organizational culture,**** he never actually defines culture.  It appears he thinks culture is something that “just is” or, perhaps, a consequence or artifact of performing the behaviors he prescribes.  It’s reasonable to assume Covey believes motivated individuals can behave their way to a better culture, saying “. . . behave your way into the person you want to be.” (pp. 87, 130)  His view is consistent with culture change theorists who believe people will eventually develop desired values if they model desired behavior long enough.  His recipe for cultural change boils down to “Just do it.”  We prefer a more explicit definition of culture, something along the spectrum from the straightforward notion of culture as an underlying set of values to the idea of culture as an emergent property of a complex socio-technical system. 

Trust is not the only candidate for the primary leadership or organizational competence.  The same or similar arguments could also be made about respect.  (Covey mentions respect but only as one of his 13 behaviors.)  Two-way respect is also essential for organizational success.  This leads to an interesting question: Could you respect a leader without trusting him/her?  How about some of the famous hard-ass bosses of management lore, like Harold Geneen?  Or General Patton? 

Covey is obviously a true believer in his message and his presentation has a fervor one normally associates with religious zeal.  He also includes many examples of family situations and describes how his prescriptions can be applied to families.  (Helpful if you want to manage your family like a little factory.)  Covey is a devout Mormon and his faith comes through in his writing. 

The book is an easy read.  Like many books written by successful consultants, it is interspersed with endorsements and quotes from business and political notables.  Covey includes a couple of useful self-assessment surveys.  He also offers a valuable observation: “. . . people tend to judge others based on behavior and judge themselves based on intent.” (p. 301)

Bottom line: This book is worth your time if lack of trust is a problem in your organization.


*  Stephen M. R. Covey, The Speed of Trust (New York: Free Press, 2016).  If the author’s name sounds familiar, it may be because his father, Stephen R. Covey, wrote The 7 Habits of Highly Effective People, a popular self-help book.

**  “Fiscal Year (FY) 2018 FEORP Plan Accomplishments and Successful/Promising Practices at the U.S. Nuclear Regulatory Commission (NRC),” Dec. 17, 2018.  ADAMS ML18351A243.  The agency uses The Speed of Trust concepts in manager and employee training. 

***  Safetysim is a management training simulation tool developed by Safetymatters’ Bob Cudlin.

****  For example, “A transparent culture of learning and growing will generally create credibility and trust, . . .” (p. 117)

Monday, December 3, 2018

Nuclear Safety Culture: Lessons from Factfulness by Hans Rosling

This book* is about biases that prevent us from making fact-based decisions.  It is based on the author’s world-wide work as a doctor and public health researcher.  We saw it on Bill Gates’ 2018 summer reading list.

Rosling discusses ten instincts (or reasons) why our individual worldviews (or mental models) are systematically wrong and prevent us from seeing situations as they truly are and making fact-based decisions about them.

Rosling mostly addresses global issues but the same instincts can affect our approach to work-related decision making from the enterprise level down to the individual.  We briefly discuss each instinct and highlight how it may hinder us from making good decisions during everyday work and one-off investigations.

The gap instinct

This is “that irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between.” (p. 26)  This is reinforced by our “strong dramatic instinct toward binary thinking . . .” (p. 42)  The gap instinct can apply to our thinking about safety, e.g., in the Safety I mental model there is acceptable performance and intolerable performance, with no middle ground and no normal transitions back and forth.  Rosling notes that usually there is no clear cleavage between two groups, even if it seems like that from the averages.  We saw this in Dekker's analysis of health provider data (reviewed Oct. 29, 2018) where both favorable and unfavorable patient outcomes exhibited the same negative work process traits.

The negativity instinct

This is “our tendency to notice the bad more than the good.” (p. 51)  We do not perceive  improvements that are “too slow, too fragmented, or too small one-by-one to ever qualify as news.” (p. 54)  “There are three things going on here: the misremembering of the past [erroneously glorifying the “good old days”]; selective reporting by journalists and activists; and the feeling that as long as things are bad it’s heartless to say they are getting better.” (p. 70)  To tell the truth, we don’t see this instinct inside the nuclear world where facilities with long-standing cultural problems (i.e., bad) are constantly reporting progress (i.e., getting better) while their cultural conditions still remain unacceptable.

The straight line instinct

This is the expectation that a line of data will continue straight into the future.  Most of you have technical training or exposure and know that accurate extrapolations can take many shapes including straight, s-bends, asymptotes, humps or exponential growth. 
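To illustrate the trap, here is a minimal sketch, our own rather than Rosling's: data from an S-shaped (logistic) process look nearly linear over the middle stretch of the curve, and a straight line fitted to that stretch extrapolates far above the true value once the curve flattens.

```python
import numpy as np

# "True" process is S-shaped (logistic); we only observe its middle stretch,
# where it happens to look nearly linear.  Parameters are arbitrary.
def logistic(t, ceiling=100.0, rate=0.5, midpoint=12.0):
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t_obs = np.arange(8, 16)                         # observed window around the midpoint
y_obs = logistic(t_obs)

slope, intercept = np.polyfit(t_obs, y_obs, 1)   # straight-line fit to the window

t_future = 25.0
print(f"straight-line forecast at t={t_future:.0f}: {slope * t_future + intercept:.1f}")
print(f"true value at t={t_future:.0f}: {logistic(t_future):.1f}")
```

The fitted line tracks the observed window closely, yet its forecast keeps climbing while the true process saturates near its ceiling.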

The fear instinct

“[F]ears are hardwired deep in our brains for obvious evolutionary reasons.” (p. 105)  “The media cannot resist tapping into our fear instinct. It is such an easy way to grab our attention.” (p. 106)  Rosling observes that hundreds of elderly people who fled Fukushima to escape radiation ended up dying “because of the mental and physical stresses of the evacuation itself or of life in evacuation shelters.” (p. 114)  In other words, they fled something frightening (a perceived risk) and ended up in danger (a real risk).  How often does fear, e.g., fear of bad press, enter into your organization’s decision making?

The size instinct 


We overweight things that look big to us.  “It is instinctive to look at a lonely number and misjudge its importance.  It is also instinctive . . . to misjudge the importance of a single instance or an identifiable victim.” (p. 125)  Does the nuclear industry overreact to some single instances?

The generalization instinct

“[T]he generalization instinct makes “us” think of “them” as all the same.” (p. 140)  At the macro level, this is where the bad “isms” exist: racism, sexism, ageism, classism, etc.  But your coworkers may practice generalization on a more subtle, micro level.  How many people do you work with who think the root cause of most incidents is human error?  Or somewhat more generously, human error, inadequate procedures and/or equipment malfunctions—but not the larger socio-technical system?  Do people jump to conclusions based on an inadequate or incorrect categorization of a problem?  Are categories, rather than facts, used as explanations?  Are vivid examples used to over-glamorize alleged progress or over-dramatize poor outcomes?

The destiny instinct

“The destiny instinct is the idea that innate characteristics determine the destinies of people, countries, religions, or cultures.” (p. 158)  Culture includes deep-seated beliefs, where feelings can be disguised as facts.  Does your work culture assume that some people are naturally bad apples?

The single perspective instinct

This is preference for single causes and single solutions.  It is the fundamental weakness of Safety I where the underlying attitude is that problems arise from individuals who need to be better controlled.  Rosling advises us to “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (p. 189)  We agree.

The blame instinct

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened. . . . when things go wrong, it must be because of some bad individual with bad intentions. . . . This undermines our ability to solve the problem, or prevent it from happening again, . . . To understand most of the world’s significant problems we have to look beyond a guilty individual and to the system.” (p. 192)  “Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to.  Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation.  Look for systems, not heroes.” (p. 204)  We totally agree with Rosling’s endorsement of a systems approach.

The urgency instinct

“The call to action makes you think less critically, decide more quickly, and act now.” (p. 209)  In a true emergency, people will fall back on their training (if any) and hope for the best.  However, in most situations, you should seek more information.  Beware of data that is relevant but inaccurate, or accurate but irrelevant.  Be wary of predictions that fail to acknowledge that the future is uncertain.

Our Perspective

The series of decisions an organization makes is a visible artifact of its culture and its decision making process internalizes culture.  Because of this linkage, we have long been interested in how organizations and individuals can make better decisions, where “better” means fact- and reality-based and consistent with the organization’s mission and espoused values.

We have reviewed many works that deal with decision making.  This book adds value because it is based on the author’s research and observations around the world; it is not based on controlled studies in a laboratory or observations in a single organization.  It uses very good graphics to illustrate various data sets, including changes, e.g., progress, over time.

Rosling believed “it has never been easier or more important for business leaders and employees to act on a fact-based worldview.” (p. 228)   His book is engagingly written and easy to read.  It is Rosling’s swan song; he died in 2017.

Bottom line: Rosling advocates for robust decision making, accurate mental models, and a systems approach.  We like it.


*  H. Rosling, O. Rosling and A.R. Rönnlund, Factfulness, 1st ed. ebook (New York: Flatiron, 2018).

Friday, November 9, 2018

Nuclear Safety Culture: Lessons from Turn the Ship Around! by L. David Marquet

Turn the Ship Around!* was written by a U.S. Navy officer who was assigned to command a submarine with a poor performance history.  He adopted a management approach that was radically different from the traditional top-down, leader-follower, “I say, you do” Navy model for controlling people.  The new captain’s primary method was to push decision making down to the lowest practical organizational levels; he supported his crew’s new authorities (and maintained control of the overall situation) with strategies to increase their competence and provide clarity on the organization’s purpose and goals.

Specific management practices were implemented or enhanced to support the overall approach.  For example, decision making guidelines were developed and disseminated.  Attaining goals was stressed over mindlessly following procedures.  Crew members were instructed to “think out loud” before initiating action; this practice communicated intent and increased organizational resilience because it created opportunities for others to identify potential errors before they could occur and propagate.  Pre-job briefs were changed from the supervisor reciting the procedure to asking participants questions about their roles and preparation.

As a result, several organizational characteristics that we have long promoted became more evident, including deferring to expertise (getting the most informed, capable people involved with a decision), increased trust, and a shared mental model of vision, purpose and organizational functioning.

As you can surmise, his approach worked.  (If it hadn’t, Marquet would have had a foreshortened career and there would be no book.)  All significant operational and personnel metrics improved under his command.  His subordinates and other crew members became highly promotable.  Importantly, the boat’s performance continued at a high level after he completed his tour; in other words, he established a system for success that could live on without his personal involvement.

Our Perspective 


This book provides a sharp contrast to nuclear industry folklore that promotes strong, omniscient leadership as the answer to every problem situation.  Marquet did not act out the role of the lone hero; instead, he built a management system that created superior performance while he was in command and after he moved on.  There can be valuable lessons here for nuclear managers, but one has to appreciate the particular requirements for undertaking this type of approach.

The manager’s attitude

You have to be willing to share some (maybe a lot) of your authority with your subordinates, their subordinates and so forth on down the line while still being held to account by your bosses for your unit’s performance.  Not everyone can do this.  It requires faith in the new system and your people and a certain detachment from short-term concerns about your own career.  You also need to have sufficient self-awareness to learn from mistakes as you move forward and recognize when you are failing to walk the talk with your subordinates.

In Marquet’s case, there were two important precursors to his grand experiment.  First, he had seen on previous assignments how demoralizing top-down micromanagement could be vs. how liberating and motivating it was for him (as a subordinate officer) to actually be allowed to make decisions.  Second, he had been training for a year on how to command a sub of a design different from the boat to which he was eventually assigned; he couldn’t go in and micromanage everyone from the get-go, he didn’t have sufficient technical knowledge.

The work environment

Marquet had one tremendous advantage: from a social perspective, a submarine is largely a self-contained world.  He did not have to worry about what people in the department next door were doing; he only had to get his remote boss to go along with his plan.  If you’re a nuclear plant department head and you want to adopt this approach but the rest of the organization runs top-down, it may be rough sledding unless you do lots of prep work to educate your superiors and get them to support you, perhaps for a pilot or trial project.

The book is easy reading, with short chapters, lots of illustrative examples (including some interesting information on how the Navy and nuclear submarines work), sufficient how-to lists, and discussion questions at the end of chapters.  Marquet did not invent his approach or techniques out of thin air.  As an example, some of his ideas and prescriptions, including rejecting the traditional Navy top-down leadership model, setting clear goals, providing principles for guiding decision making, enforcing reflection after making mistakes, giving people tools and advantages but holding them accountable, and culling people who can’t get with the program** are similar to points in Ray Dalio’s Principles, which we reviewed on April 17, 2018.  This is not surprising.  Effective, self-aware leaders should share some common managerial insights.

Bottom line: Read this book to see a real-world example of how authentic employee empowerment can work.


*  L.D. Marquet, Turn the Ship Around! (New York: Penguin, 2012).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  People have different levels of appetite for empowerment or other forms of participatory management.  Not everyone wants to be fully empowered, highly self-motivated or expected to show lots of initiative.  You may end up with employees who never buy into your new program and, in the worst case, you won’t be allowed to get rid of them.

Monday, October 29, 2018

Safety Culture: What are the Contributors to “Bad” Outcomes Versus “Good” Outcomes and Why Don’t Some Interventions Lead to Improved Safety Performance?

Why?
Sidney Dekker recently revisited* some interesting research he led at a large health care authority.  The authority’s track record was not atypical for health care: 1 out of 13 (7%) patients was hurt in the process of receiving care.  The authority investigated the problem cases and identified a familiar cluster of negative factors, including workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations—the list goes on.  The interventions will also be familiar to you—identify who did what wrong, add more rules, try harder and get rid of bad apples—but were not reducing the adverse event rate.

Dekker’s team took a different perspective and looked at the 93% of patients who were not harmed.  What was going on in their cases?  To their surprise, the team found the same factors: workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations, etc.** 

Dekker uses this research to highlight a key difference between the traditional view of safety management, Safety I, and the more contemporary view, Safety II.  At its heart, Safety I believes the source of problems lies with the individual so interventions focus on ways to make the individual’s work behavior more reliable, i.e., less likely to deviate from the idealized form specified by work designers.  Safety I ignores the fact that the same imperfections exist in work with both successful and problematic outcomes.

In contrast, Safety II sees the source of problems in the system, the dynamic combination of technology, environmental factors, organizational aspects, and individual cognition and choices.  Referencing the work of Diane Vaughan, Dekker says “the interior life of organizations is always messy, only partially well-coordinated and full of adaptations, nuances, sacrifices and work that is done in ways that is quite different from any idealized image of it.”

Revisiting the data revealed that the work with good outcomes was different.  This work had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.  As you know, these are important characteristics of a strong safety culture.

Our Perspective

Dekker’s essay is a good introduction to the differences between Safety I and Safety II thinking, most importantly their differing mental models of the way work is actually performed in organizations.  In Safety I, the root cause of imperfect results is the individual and constant efforts are necessary (e.g., training, monitoring, leadership, discipline) to create and maintain the individual’s compliance with work as designed.  In  Safety II, normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors occur.

If one applies Safety I thinking to a “bad” outcome then the most likely result from an effective intervention is that the exact same problem will not happen again.  This thinking sustains a robust cottage industry in root-cause analysis because new problems will always arise and no changes are made to the system itself.

We like Dekker’s (and Vaughan’s) work and have reported on it several times in Safetymatters (click on the Dekker and Vaughan labels to bring up related posts).  We have been emphasizing some of the same points, especially the need for a systems view, since we started Safetymatters almost ten years ago.

Individual Exercise: Again drawing on Vaughan, Dekker says “there is often no discernable difference between the organization that is about to have an accident or adverse event, and the one that won’t, or the one that just had one.”  Look around your organization and review your career experience; is that true?


*  S. Dekker, “Why Do Things Go Right?,” SafetyDifferently website (Sept. 28, 2018).  Retrieved Oct. 25, 2018.

**  This is actually rational.  People operate on feedback and if the shortcuts, workarounds and disregarding the guidelines did not lead to acceptable (or at least tolerable) results most of the time, folks would stop using them.

Tuesday, April 17, 2018

Nuclear Safety Culture: Insights from Principles by Ray Dalio

Book cover
Ray Dalio is the billionaire founder/builder of Bridgewater Associates, an investment management firm.  Principles* catalogs his policies, practices and lessons-learned for understanding reality and making decisions for achieving goals in that reality.  The book appears to cover every possible aspect of managerial and organizational behavior.  Our plan is to focus on two topics near and dear to us—decision making and culture—for ideas that could help strengthen nuclear safety culture (NSC).  We will then briefly summarize some of Dalio’s other thoughts on management.  Key concepts are shown in italics.

Decision Making

We’ll begin with Dalio’s mental model of reality.  Reality is a system of universal cause-effect relationships that repeat and evolve like a perpetual motion machine.  The system dynamic is driven by evolution (“the single greatest force in the universe” (p. 142)) which is the process of adaptation.

Because many situations repeat themselves, principles (policies or rules) advance the goal of making decisions in a systematic, repeatable way.  Any decision situation has two major steps: learning (obtaining and synthesizing data about the current situation) and deciding what to do.  Logic, reason and common sense are the primary decision making mechanisms, supported by applicable existing principles and tools, e.g., expected value calculations or evidence-based decision making tools.  The lessons learned from each decision situation can be incorporated into existing or new principles.  Practicing the principles develops good habits, i.e., automatic, reflexive behavior in the specified situations.  Ultimately, the principles can be converted into algorithms that can be computerized and used to support the human decision makers.
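To make the principles-as-algorithms idea concrete, here is a minimal sketch (ours, not Dalio’s or Bridgewater’s code) of an expected value rule for choosing among options.  The option names, probabilities and payoffs are hypothetical.

# Minimal sketch of an expected-value decision rule.
# The options, probabilities and payoffs below are hypothetical.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    "act now":          [(0.7, 100), (0.3, -150)],  # faster, riskier
    "gather more data": [(0.9, 60), (0.1, -20)],    # slower, safer
}

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("Principle suggests:", best)  # supports, not replaces, the human decision maker

The point is simply that a recurring decision type, once codified this way, can be run the same way every time and refined as lessons accumulate.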

Believability weighting can be applied during the decision making process to obtain data or opinions about solutions.  Believable people can be anyone in the organization but are limited to those “who 1) have repeatedly and successfully accomplished the thing in question, and 2) . . . can logically explain the cause-effect relationships behind their conclusions.” (p. 371)  Believability weighting supplements and challenges responsible decision makers but does not overrule them.  Decision makers can also make use of thoughtful disagreement where they seek out brilliant people who disagree with them to gain a deeper understanding of decision situations.
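Below is a minimal sketch of how believability weighting might be computed, assuming each opinion is reduced to a numeric score and each person’s believability to a single weight.  This is our simplification for illustration, not Bridgewater’s actual tooling.

# Hypothetical believability-weighted poll.  Opinion scores run from
# -1 (strongly against) to +1 (strongly for); weights reflect a person's
# track record and ability to explain their cause-effect reasoning.
def believability_weighted_average(votes):
    """votes: list of (opinion_score, believability_weight) pairs."""
    total_weight = sum(weight for _, weight in votes)
    if total_weight == 0:
        return 0.0
    return sum(score * weight for score, weight in votes) / total_weight

votes = [(+1.0, 3.0),  # experienced practitioner, strongly for
         (-0.5, 1.0),  # newer analyst, mildly against
         (+0.2, 2.0)]  # manager, slightly for

print(believability_weighted_average(votes))  # informs, but does not overrule, the responsible party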

The organization needs a process to get beyond disagreement.  After all discussion, the responsible party exercises his/her decision making authority.  Ultimately, those who disagree have to get on board (“get in sync”) and support the decision or leave the organization.

The two biggest barriers to good decision making are ego and blind spots.  Radical open-mindedness recognizes that the search for what’s true and for the best answer is more important than the need for any specific person, no matter their position in the organization, to be right.

Culture

Organizations and the individuals who populate them should also be viewed as machines.  Both are imperfect but capable of improving. The organization is a machine made up of culture and people that produces outcomes that provide feedback from which learning can occur.  Mistakes are natural but it is unacceptable to not learn from them.  Every problem is an opportunity to improve the machine.  

People are generally imperfect machines.  People are more emotional than logical.   They suffer from ego (subconscious drivers of thoughts) and blind spots (failure to see weaknesses in themselves).  They have different character attributes.  In short, people are all “wired” differently.  A strong culture with clear principles is needed to get and keep everyone in sync with each other and in pursuit of the organization’s goals.

Mutual adjustment takes place when people interact with culture.  Because people are different and the potential to change their wiring is low,** it is imperative to select new employees who will embrace the existing culture.  If they can’t or won’t, or lack ability, they have to go.  Even with its stringent hiring practices, about a third of Bridgewater’s new hires are gone within eighteen months.

Human relations are built on meaningful relationships, radical truth and tough love.  Meaningful relationships means people give more consideration to others than to themselves and exhibit genuine caring for each other.  Radical truth means you are “transparent with your thoughts and open-mindedly accepting the feedback of others.” (p. 268)  Tough love recognizes that criticism is essential for improvement toward excellence; everyone in the organization is free to criticize any other member, no matter their position in the hierarchy.  People have an obligation to speak up if they disagree.

“Great cultures bring problems and disagreements to the surface and solve them well . . .” (p. 299)  The culture should support a five-step management approach: Have clear goals, don’t tolerate problems, diagnose problems when they occur, design plans to correct the problems, and do what’s necessary to implement the plans, even if the decisions are unpopular.  The culture strives for excellence, so it’s intolerant of folks who aren’t excellent, and goal achievement is more important than pleasing others in the organization.

More on Management 


Dalio’s vision for Bridgewater is “an idea meritocracy in which meaningful work and meaningful relationships are the goals and radical truth and radical transparency are the ways of achieving them . . .” (p. 539)  An idea meritocracy is “a system that brings together smart, independent thinkers and has them productively disagree to come up with the best possible thinking and resolve their disagreements in a believability-weighted way . . .” (p. 308)  Radical truth means “not filtering one’s thoughts and one’s questions, especially the critical ones.” (ibid.)  Radical transparency means “giving most everyone the ability to see most everything.” (ibid.)

A person is a machine operating within a machine.  One must be one’s own machine designer and manager.  In managing people and oneself, take advantage of strengths and compensate for weaknesses via guardrails and soliciting help from others.  An example of a guardrail is assigning a team member whose strengths balance another member’s weaknesses.  People must learn from their own bad decisions, so self-reflection after making a mistake is essential.  Managers must ascertain whether mistakes are evidence of a weakness and, if so, whether compensatory action is sufficient or, if the weakness is intolerable, termination is required.  Because values, abilities and skills are the drivers of behavior, management should have a full profile of each employee.

Governance is the system of checks and balances in an organization.  No one is above the system, including the founder-owner.  In other words, senior managers like Dalio can be subject to the same criticism as any other employee.

Leadership in the traditional sense (“I say, you do”) is not so important in an idea meritocracy because the optimal decisions arise from a group process.  Managers are seen as decision makers, system designers and shapers who can visualize a better future and then build it.   Leaders “must be willing to recruit individuals who are willing to do the work that success requires.” (p. 520)

Our Perspective

We recognize international investment management is way different from nuclear power management, so some of Dalio’s principles can only be applied to the nuclear industry in a limited way, if at all.  One obvious example of a lack of fit is the area of risk management.  The investing environment is extremely competitive, with players evolving rapidly and searching for any edge.  Timely bets (investments) must be made under conditions where the risk of failure is many orders of magnitude greater than what is acceptable in the nuclear industry.  Other examples include the relentless, somewhat ruthless, pursuit of goals and a willingness to jettison people that is foreign to the utility world.

But we shouldn’t throw the baby out with the bathwater.  While Dalio’s approach may be too extreme for wholesale application in your environment, it does provide a comparison (note we don’t say “standard”) for your organization’s performance.  Does your decision making process measure up to Dalio’s in terms of robustness, transparency and the pursuit of truth?  Does your culture really strive for excellence (and eliminate those who don’t share that vision) or is it an effort constrained by hierarchical, policy or political realities?

This is a long book but it’s easy to read and key points are repeated often.  Not all of it is novel; many of the principles are based on observations or techniques that have been around for a while and should be familiar to you.  For example, ideas about how human minds work are drawn, in part, from Daniel Kahneman; an integrated hierarchy of goals looks like Management by Objectives; and a culture that doesn’t automatically punish people for making mistakes or tolerable errors sounds like a “just culture,” albeit with some mandatory individual learning attached.

Bottom line: Give this book a quick look.  It can’t hurt and might help you get a clearer picture of how your own organization actually operates.



*  R. Dalio, Principles (New York: Simon & Schuster, 2017).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  A person’s basic values and abilities are relatively fixed, although skills may be improved through training.

Thursday, August 10, 2017

Nuclear Safety Culture: The Threat of Bureaucratization

We recently read Sidney Dekker’s 2014 paper* on the bureaucratization of safety in organizations.  It’s interesting because it describes a very common evolution of organizational practices, including those that affect safety, as an organization or industry becomes more complicated and formal over time.  Such evolution can affect many types of organizations, including nuclear ones.  Dekker’s paper is summarized below, followed by our perspective on it. 

The process of bureaucratization is straightforward; it involves hierarchy (creating additional layers of organizational structure), specialized roles focusing on “safety related” activities, and the application of rules for defining safety requirements and the programs to meet them.  In the safety space, the process has been driven by multiple factors, including legislation and regulation, contracting and the need for a uniform approach to managing large groups of organizations, and increased technological capabilities for collection and analysis of data.

In a nutshell, bureaucracy means greater control over the context and content of work by people who don’t actually have to perform it.  The risk is that as bureaucracy grows, technical expertise and operational experience may be held in less value.

This doesn’t mean bureaucracy is a bad thing.  In many environments, bureaucratization has led to visible benefits, primarily a reduction in harmful incidents.  But it can lead to unintended, negative consequences including:

  • Myopic focus on formal performance measures (often quantitative) and “numbers games” to achieve the metrics and, in some cases, earn financial bonuses,
  • An increasing inability to imagine, much less plan for, truly novel events because of the assumption that everything bad that might happen has already been considered in the PRA or the emergency plan.  (Of course, these analyses/documents are created by siloed specialists who may lack a complete understanding of how the socio-technical system works or what might actually be required in an emergency.  Fukushima anyone?),
  • Constraints on organizational members’ creativity and innovation, and a lack of freedom that can erode problem ownership, and
  • Interest, effort and investment in sustaining, growing and protecting the bureaucracy itself.

Our Perspective

We realize reading about bureaucracy is about as exciting as watching a frog get boiled.  However, Dekker does a good job of explaining how the process of bureaucratization takes root and grows and the benefits that can result.  He also spells out the shortcomings and unintended consequences that can accompany it.

The commercial nuclear world is not immune to this process.  Consider all the actors who have their fingers in the safety pot and realize how few of them are actually responsible for designing, maintaining or operating a plant.  Think about the NRC’s Reactor Oversight Process (ROP) and the licensees’ myopic focus on keeping a green scorecard.  Importantly, the Safety Culture Policy Statement (SCPS), being an “expectation,” resists the bureaucratic imperative to over-specify.  Instead, the SCPS is an adjustable cudgel the NRC uses to tap or bludgeon wayward licensees into compliance.  Foreign interest in regulating nuclear safety culture will almost certainly lead to its increased bureaucratization.

Bureaucratization is clearly evident in the public nuclear sector (looking at you, Department of Energy) where contractors perform the work and government overseers attempt to steer the contractors toward meeting production goals and safety standards.  As Dekker points out, managing, monitoring and controlling operations across an organizational network of contractors and sub-contractors tends to be so difficult that bureaucratized accountability becomes the accepted means to do so.

We have presented Dekker’s work before, primarily his discussions of a “just culture” (reviewed Aug. 3, 2009), which tries to learn from mishaps rather than simply isolating and perhaps punishing the human actor(s), and “drift into failure” (reviewed Dec. 5, 2012), where a socio-technical system functioning normally can experience unacceptable performance caused by systemic interactions.  Stakeholders can mistakenly believe the system is completely safe because no errors have occurred while in reality the system can be slipping toward an incident.  Both of these concepts should be considered in your mental model of how your organization operates.

Bottom line: This is an academic paper in a somewhat scholarly journal, in other words, not a quick and easy read.  But it’s worth a look to get a sense of how the tentacles of formality can wrap themselves around an organization.  In the worst case, they can stifle the capabilities the organization needs to successfully react to unexpected events and environmental changes.


*  S.W.A. Dekker, “The bureaucratization of safety,” Safety Science 70 (2014), pp. 348–357.  We saw this paper on Safety Differently, a website that publishes essays on safety.  Most of the site’s content appears related to industries with major industrial safety challenges, e.g., mining.

Wednesday, March 8, 2017

Nuclear Safety Culture at the Department of Energy—An Update

We haven’t reported on the U.S. Department of Energy’s (DOE) safety culture (SC) in a while.  Although there hasn’t been any big news lately, we can look at some individual facts and then connect the dots to say something about SC.

Let’s start with some high-level good news.  In late 2016 DOE announced it had conducted its 100th SC training class for senior leaders of both federal and contractor entities across the DOE complex.*  The class focuses on teaching leaders the why and how of maintaining a collaborative workplace and Safety Conscious Work Environment (SCWE), and fostering trust in the work environment. 

Now let’s turn to a more localized situation.  In Feb 2014, a storage drum burst at the DOE’s Waste Isolation Pilot Plant (WIPP) in New Mexico, resulting in a small release of radioactive material.  The drum burst because a sorbent added to the waste had been changed without considering the difference in chemical properties.**  This has been an expensive incident.  The plant has been closed for over three years; it was authorized to reopen in Jan 2017 and shipments are scheduled to resume in April 2017.*** 

The drum that burst came from the Los Alamos National Laboratory (LANL).  The WIPP Recovery Plan envisions continuing the pre-incident practice of the waste generators being responsible for correctly packing their waste: “All waste generators will have rigorous characterization, treatment, and packaging processes and procedures in place to ensure compliance with WIPP Waste Acceptance Criteria [WAC].”****  As we said in our May 3, 2016 post: “For this approach to work, WAC compliance by the waste generators . . . must be completely effective and 100% reliable.”  In the same post, we reported the Defense Nuclear Facilities Safety Board (DNFSB) had recognized this weak link in the chain.  However, because DNFSB cannot force changes it could only recommend that DOE “explore defense-in-depth measures that enhance WIPP’s capability to detect and respond to problems caused by unexpected failures in the WAC compliance program.”

As described in the current WAC, WIPP’s “defense-in-depth” appears to be limited to the local DOE office and the WIPP contractor performing Generator Site Technical Reviews, which cover sites’ implementation of WIPP requirements.*****  These reviews are supposed to assure that deficiencies are detected and noncompliant shipments are avoided but it’s not clear if any physical surveillance is involved or if this is strictly a paperwork exercise.

The foregoing is important because it ties to SC.  First, WIPP has had SC issues; in fact, a deficient SC was identified as contributing to shortcomings in the handling of the aftermath of the drum explosion.  (We reviewed this in detail on May 3 and May 5, 2014.)  WIPP SC is supposedly better now: “NWP [the WIPP contractor] has made continuous improvements in their safety culture and has really embraced the recommendations provided in the 2015 review, as well as subsequent reviews and surveys.”^  Second, other SC problems, too numerous to even list here, have arisen throughout the DOE complex over the years.  (Click on the DOE label to see our reports on such problems.)

Finally, we present a recent data point for LANL.  In DOE’s report on criticality safety infractions and program non-compliances for FY 2016, LANL had the most such incidents, by far, of the DOE’s 24 sites and projects.^^  Most of the non-compliances were self-identified.  Now, does this evidence a strong SC that recognizes and reports its problems, or a weak SC that allows the problems to occur in the first place?  You be the judge.

Our Perspective

Through initiatives such as SC training, it appears that at the macro level, DOE is (finally) communicating that minimally complying with basic regulations for how organizations should treat employees is not enough; establishing trust, mainly through showing respect for employees’ efforts to raise safety questions and point out safety problems, is essential.  That’s a good thing.

But we see signs of weakness at the operational level, viz., between WIPP and its constellation of waste generators.  Although we are not fans of “Normal Accident” theory, which says accidents are inevitable in tightly coupled, low-slack environments such as a nuclear power plant, we can appreciate the application of that mental model in the case of WIPP.  Historically, one feature of the DOE complex that has limited problems to specific locations is the weak coupling between facilities.  When every facility with bomb-making waste is shipping it to WIPP, tighter coupling is created in the overall waste management system.  Every waste generator’s SC can have an impact on WIPP’s safety performance.  The system does need more defense-in-depth.  At a minimum, WIPP should station resident inspectors at every waste generator site to verify compliance with the WAC.

Bottom line: DOE is trying harder in the SC space, but its history does not inspire huge confidence going forward. 


*  “DOE Conducts 100th Safety Culture Training Class” (Dec. 29, 2016).

**  Organic kitty litter had been substituted for inorganic kitty litter.  See this Jan. 10, 2017 Forbes article for a good summary of the WIPP incident.

***  “WIPP Road Show Early Stops Planned in Carlsbad & Hobbs,” WIPP website (Feb. 27, 2017).  Retrieved March 7, 2017. 

****  DOE, “Waste Isolation Pilot Plant Recovery Plan,” Rev 0 (Sept. 30, 2014), p. 24.

*****  DOE, “Transuranic Waste Acceptance Criteria for the Waste Isolation Pilot Plant,” Rev 8.0 (July 5, 2016), pp. 20-21.

^  DOE, “Department of Energy Operational Readiness Review for the Waste Isolation Pilot Plant” (Dec. 2016), p. 33.

^^   DOE, “2016 Annual Metrics Report to the Defense Nuclear Facilities Safety Board – Nuclear Criticality Safety Programs” (Jan. 2017), p. 3.

Friday, January 6, 2017

Reflections on Nuclear Safety Culture for the New Year

©iStockphoto.com
The start of a new year is an opportunity to take stock of the current situation in the U.S. nuclear industry and reiterate what we believe with respect to nuclear safety culture (NSC).

For us, the big news at the end of 2016 was Entergy’s announcement that Palisades will be shutting down on Oct. 1, 2018.*  Palisades has been our poster child for a couple of things: (1) Entergy’s unwillingness or inability to keep its nose clean on NSC issues and (2) the NRC’s inscrutable decision making on when the plant’s NSC was either unsatisfactory or apparently “good enough.”

We will have to find someone else to pick on but don’t worry, there’s always some new issue popping up in NSC space.  Perhaps we will go to France and focus on the current AREVA and Électricité de France imbroglio which was cogently summarized in a Power magazine editorial: “At the heart of France’s nuclear crisis are two problems.  One concerns the carbon content of critical steel parts . . . manufactured or supplied by AREVA . . . The second problem concerns forged, falsified, or incomplete quality control reports about the critical components themselves.”**  Anytime the adjectives “forged” or “falsified” appear alongside nuclear records, the NSC police will soon be on the scene.  

Why do NSC issues keep arising in the nuclear industry?  If NSC is so important, why do organizations still fail to fix known problems, or why do they create new problems for themselves?  One possible answer is that such issues are the occasional result of the natural functioning of a low-tolerance, complex socio-technical system.  In other words, performance may drift out of bounds in the normal course of events.  We may not be able to predict where such issues will arise (although the missed warning signals will be obvious in retrospect), but we cannot reasonably expect they can be permanently eliminated from the system.  In this view, an NSC can be acceptably strong but not 100% effective.

This is the implicit mental model that most NSC practitioners and “experts,” if they are intellectually honest, actually utilize, even though they continue to espouse the dogma that more engineering, management, leadership, oversight, training and sanctions can and will create an actual NSC that matches some ideal NSC.  But we’ve known for years what an ideal NSC should look like, i.e., its attributes, and how responsibilities for creating and maintaining such a culture should be spread across a nuclear organization.***  And we’re still playing Whac-A-Mole.

At Safetymatters, we have promoted a systems view of NSC, a view that we believe provides a more nuanced and realistic view of how NSC actually works.  Where does NSC live in our nuclear socio-technical system?  Well, it doesn’t “live” anywhere.  NSC is, to some degree, an emergent property of the system, i.e., it is visible because of the ongoing functioning of other system components.  But that does not mean that NSC is only an effect or consequence.  NSC is both a consequence and a cause of system behavior.  NSC is a cause through the way it affects the processes that create hard artifacts, such as management decisions or the corrective action program (CAP), softer artifacts like the leadership exhibited throughout an organization, and squishy organizational attributes like the quality of hierarchical and interpersonal trust that permeates the organization like an ether or miasma. 

Interrelationships and feedback loops tie NSC to other organizational variables.  For example, if an organization fixes its problems, its NSC will appear stronger and the perception of a strong NSC will influence other organizational dynamics.  This particular feedback loop is generally reinforcing but it’s not some superpower, as can be seen in a couple of problems nuclear organizations may face (a toy sketch of the loop follows the two examples): 

Why is a CAP ineffective?  The NSC establishes the boundaries between the desirable, acceptable, tolerable and unacceptable in terms of problem recognition, analysis and resolution.  But the strongest SC cannot compensate for inadequate resources from a plant owner, a systemic bias in favor of continued production****, a myopic focus on programmatic aspects (following the rules instead of searching for a true answer) or incompetence in plant staff. 

Why are plant records falsified?  An organization’s party line usually pledges that the staff will always be truthful with customers, regulators and each other.  The local culture, including its NSC, should reinforce that view.  But fear is always trying to slip in through the cracks—fear of angering the boss, fear of missing performance targets, fear of appearing weak or incompetent, or fear of endangering a plant’s future in an environment that includes the plant’s perceived enemies.  Fear can overcome even a strong NSC.
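To illustrate why the fixing-problems/NSC feedback loop is reinforcing but not a superpower, here is a toy simulation sketch.  The model structure and every parameter are invented assumptions of ours, not data from any plant, report or study.

# Toy sketch of the reinforcing loop between fixing problems and perceived
# NSC strength, with an external "resources" limit.  All dynamics and
# parameters are invented for illustration.
def simulate(resources, steps=10):
    nsc = 0.5        # perceived NSC strength, on a 0..1 scale
    backlog = 20.0   # open problems awaiting resolution
    for _ in range(steps):
        fix_rate = min(backlog, 5.0 * nsc * resources)  # fixing depends on culture AND resources
        backlog = backlog - fix_rate + 2.0              # new problems keep arriving
        nsc += 0.05 * fix_rate - 0.001 * backlog        # visible fixes reinforce the culture
        nsc = max(0.0, min(1.0, nsc))
    return round(nsc, 2), round(backlog, 1)

print(simulate(resources=1.0))  # adequate resources: the loop reinforces and the backlog shrinks
print(simulate(resources=0.2))  # starved of resources: even a decent culture can't keep up

With adequate resources the backlog shrinks and perceived NSC strengthens; starved of resources, the backlog grows no matter how the culture responds, which is the CAP point made above.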

Our Perspective

NSC is real and complicated but it is not mysterious.  Most importantly, NSC is not some red herring that keeps us from seeing the true causes of underlying organizational performance problems.  Safetymatters will continue to offer you the information and insights you need to be more successful in your efforts to understand NSC and use it as a force for better performance in your organization.

Your organization will not increase its performance in the safety dimension if it continues to apply and reprocess the same thinking that the nuclear industry has been promoting for years.  NSC is not something that can be directly managed or even influenced independent of other organizational variables.  “Leadership” alone will not fix your organization’s problems.  You may protect your career by parroting the industry’s adages but you will not move the ball down the field without exercising some critical and independent thought.

We wish you a safe and prosperous 2017.


*  “Palisades Power Purchase Agreement to End Early,” Entergy press release (Dec. 8, 2016).

**  L. Buchsbaum, “France’s Nuclear Storm: Many Power Plants Down Due to Quality Concerns,” Power (Dec. 1, 2016).  Retrieved Jan. 4, 2017.

***  For example, take a look back at INSAG-4 and NUREG-1756 (which we reviewed on May 26, 2015).

****  We can call that the Nuclear Production Culture (NPC).