Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).
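To make the consensus and intensity attributes a little more concrete, here is one hypothetical way they could be scored from individual survey ratings of a norm.  The rating scale, the formulas, and the function below are our own illustrative assumptions, not the authors' instrument.

```python
import statistics

def norm_profile(ratings, scale_max=5):
    """Summarize one norm from individual ratings on a 1..scale_max scale.

    intensity: how deeply the norm is held (mean rating).
    consensus: how widely it is shared (1 minus a normalized spread,
               so 1.0 means everyone rates the norm identically).
    """
    intensity = statistics.mean(ratings)
    spread = statistics.pstdev(ratings)
    max_spread = (scale_max - 1) / 2  # worst case: ratings split between the extremes
    consensus = 1 - spread / max_spread
    return {"intensity": intensity, "consensus": consensus}

# A norm nearly everyone holds strongly vs. one held strongly by only some people.
safety = norm_profile([5, 5, 4, 5, 5])
production = norm_profile([5, 1, 5, 2, 4])
```

Under this toy scoring, the safety norm shows both high intensity and high consensus, while the production norm shows moderate intensity with low consensus, i.e., a contested norm.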

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate.


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.
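The R-versus-Q distinction in the footnote can be sketched numerically: given a subjects-by-variables response matrix, the R method correlates the columns (variables) across subjects, while the Q method correlates the rows (subjects) across variables.  A minimal NumPy sketch with made-up data:

```python
import numpy as np

# Rows = subjects (people), columns = variables (survey items).
responses = np.array([
    [5, 4, 1, 2],   # subject A
    [5, 5, 1, 1],   # subject B: views similar to A
    [1, 2, 5, 4],   # subject C: roughly the opposite viewpoint
], dtype=float)

# R method: correlations between variables, computed across subjects.
r_corr = np.corrcoef(responses, rowvar=False)   # 4 x 4 matrix

# Q method: correlations between subjects, computed across variables.
q_corr = np.corrcoef(responses, rowvar=True)    # 3 x 3 matrix

# Subjects A and B correlate strongly positively (a shared viewpoint);
# subjects A and C correlate strongly negatively.
```

In Q methodology the subject-by-subject correlation matrix is then factor analyzed, so the “factors” that emerge are clusters of people with similar viewpoints rather than clusters of related variables.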

Monday, April 1, 2019

Culture Insights from The Speed of Trust by Stephen M.R. Covey

In The Speed of Trust,* Stephen M.R. Covey posits that trust is the key competency that allows individuals (especially leaders), groups, organizations, and societies to work at optimum speed and cost.  In his view, “Leadership is getting results in a way that inspires trust.” (p. 40)  We saw the book mentioned in an NRC personnel development memo** and figured it was worth a look. 

Covey presents a model of trust made up of a framework, language to describe the framework’s components, and a set of recommended behaviors.  The framework consists of self trust, relationship trust and stakeholder trust.  Self trust is about building personal credibility; relationship trust is built on one’s behavior with others; and stakeholder trust is built within organizations, in markets (i.e., with customers), and with the larger society.  His model is not overly complicated but it has a lot of parts, as shown in the following figure.


Figure by Safetymatters

4 Cores of credibility 


Covey begins by describing how the individual can learn to trust him or herself.  This is basically an internal process of developing the 4 Cores of credibility: character attributes (integrity and intent) and competence attributes (capabilities and results).  Improvement in these areas increases self-confidence and one’s ability to project a trust-inspiring strength of character.  Integrity includes clarifying values and following them.  Intent includes a transparent, as opposed to hidden, agenda that drives one’s behavior.  Capabilities include the talents, skills, and knowledge, coupled with continuous improvement, that enable excellent performance.  Results, e.g., achieving goals and keeping commitments, are the sine qua non for establishing and maintaining credibility and trust.

13 Behaviors  

The next step is learning how to trust and be trusted by others.  This is a social process, i.e., it is created through individual behavior and interaction with others.  Covey details 13 types of behavior to which the individual must attend.  Some types flow primarily, but not exclusively, from character, others from competence, and still others from a combination of the two.  He notes that “. . . the quickest way to decrease trust is to violate a behavior of character, while the quickest way to increase trust is to demonstrate a behavior of competence.” (p. 133)  Covey provides examples of each desired behavior, its opposite, and its “counterfeit” version, i.e., where people are espousing the desired behavior but actually avoiding doing it.  He describes the problems associated with underdoing and overdoing each behavior (an illustration of the Goldilocks Principle).  Behavioral change is possible if the individual has a compelling sense of purpose.  Each behavior type is guided by a set of principles, different for each behavior, as shown in the following figure.


Figure by Safetymatters

Organizational alignment

The third step is establishing trust throughout an organization.  The primary mechanism for accomplishing this is alignment of the organization’s visible symbols, underlying structures, and systems with the ideals expressed in the 4 Cores and 13 Behaviors, e.g., making and keeping commitments and accounting for results.  He describes the “taxes” associated with a low-trust organization and the “dividends” associated with a high-trust organization.  Beyond that, there is nothing new in this section.

Market and societal trust

We’ll briefly address the final topics.  Market trust is about an entity’s brand or reputation in the outside world.  Building a strong brand involves using the 4 Cores to establish, maintain or strengthen one’s reputation.  Societal trust is built on contribution, the value an entity creates in the world through ethical behavior, win-win business dealings, philanthropy and other forms of corporate social responsibility.     

Our Perspective 


Covey provides a comprehensive model of how trust is integral to relationships at every level of complexity, from the self to global relations.
 
The fundamental importance of trust is not new news.  We have long said organization-wide trust is vital to a strong safety culture.  Trust acts as a lubricant against organizational friction which, like physical friction, slows down activities and makes them more expensive.  In our Safetysim*** management simulator, trust was an input variable that affected the speed and effectiveness of problem resolution and overall cost performance.

Covey’s treatment of culture is incomplete.  While he connects some of his behaviors or principles to organizational culture,**** he never actually defines culture.  It appears he thinks culture is something that “just is” or, perhaps, a consequence or artifact of performing the behaviors he prescribes.  It’s reasonable to assume Covey believes motivated individuals can behave their way to a better culture, saying “. . . behave your way into the person you want to be.” (pp. 87, 130)  His view is consistent with culture change theorists who believe people will eventually develop desired values if they model desired behavior long enough.  His recipe for cultural change boils down to “Just do it.”  We prefer a more explicit definition of culture, something along the spectrum from the straightforward notion of culture as an underlying set of values to the idea of culture as an emergent property of a complex socio-technical system. 

Trust is not the only candidate for the primary leadership or organizational competence.  The same or similar arguments could also be made about respect.  (Covey mentions respect but only as one of his 13 behaviors.)  Two-way respect is also essential for organizational success.  This leads to an interesting question: Could you respect a leader without trusting him/her?  How about some of the famous hard-ass bosses of management lore, like Harold Geneen?  Or General Patton? 

Covey is obviously a true believer in his message and his presentation has a fervor one normally associates with religious zeal.  He also includes many examples of family situations and describes how his prescriptions can be applied to families.  (Helpful if you want to manage your family like a little factory.)  Covey is a devout Mormon and his faith comes through in his writing. 

The book is an easy read.  Like many books written by successful consultants, it is interspersed with endorsements and quotes from business and political notables.  Covey includes a couple of useful self-assessment surveys.  He also offers a valuable observation: “. . . people tend to judge others based on behavior and judge themselves based on intent.” (p. 301)

Bottom line: This book is worth your time if lack of trust is a problem in your organization.


*  Stephen M. R. Covey, The Speed of Trust (New York: Free Press, 2016).  If the author’s name sounds familiar, it may be because his father, Stephen R. Covey, wrote The 7 Habits of Highly Effective People, a popular self-help book.

**  “Fiscal Year (FY) 2018 FEORP Plan Accomplishments and Successful/Promising Practices at the U.S. Nuclear Regulatory Commission (NRC),” Dec. 17, 2018.  ADAMS ML18351A243.  The agency uses The Speed of Trust concepts in manager and employee training. 

***  Safetysim is a management training simulation tool developed by Safetymatters’ Bob Cudlin.

****  For example, “A transparent culture of learning and growing will generally create credibility and trust, . . .” (p. 117)

Friday, March 8, 2019

Decision Making, Values, and Culture Change

In the nuclear industry, most decisions are at least arguably “hard,” i.e., decision makers can agree on the facts and identify areas where there is risk or uncertainty.  A recent New Yorker article* on making an indisputably “soft” decision got us wondering if the methods and philosophy described in the article might provide some insight into qualitative personal decisions in the nuclear space.

Author Joshua Rothman’s interest in decision making was piqued by the impending birth of his first child.  When exactly did he decide that he wanted children (after not wanting them) and then participate with his wife to make it happen?  As he says, “If I made a decision, it wasn’t a very decisive one.”  Thus began his research into decision making methods and philosophy.

Rothman opens with a quick review of several decision making techniques.  He describes Benjamin Franklin’s “prudential algebra,” Charles Darwin’s lists of pros and cons, Leo Tolstoy’s expositions in War and Peace (where it appears the biggest decisions basically make themselves), and modern decision science processes that develop decisions through iterative group activities, scenario planning, and war games.

Eventually the author gets to decision theory, which holds that sound decisions flow from values.  Decision makers ask what they value and then seek to maximize it.  But what if “we’re unsure what we care about, or when we anticipate that what we care about might shift”?  What if we opt to change our values? 

The focus on values leads to philosophy.  Rothman draws heavily on the work of Agnes Callard, a philosopher at the University of Chicago, who believes that life-altering decisions are not made suddenly but through a more gradual process: “Old Person aspires to become New Person.”  Callard emphasizes that aspiration is different from ambition.  Ambitious people know exactly why they’re doing something, e.g., taking a class to get a good grade or modeling different behavior to satisfy regulatory scrutiny.  Aspirants, on the other hand, have a harder time because they have a less clear sense of their current activities’ value and can only hope their future selves can understand and appreciate it.  “To aspire, Callard writes, is to judge one’s present-day self by the standards of a future self who doesn’t yet exist.”

Our Perspective

We can consider the change of an organization’s culture as the integration over time of the changes in all its members’ behaviors and values.  We know that values underlie culture and significant cultural change requires shifting the actual (as opposed to the espoused) values of the organization.  This is not easy.  The organization’s more ambitious members will find it easier to get with the program; they know change is essential and are willing to adapt to keep their jobs or improve their standing.  The merely aspiring will have a harder time.  Because they lack a clear picture of the future organizational culture, they may be troubled by unexplored options, i.e., some different path or future that might be equally good or even better.  They may learn that no matter how deeply they study the experience of others, they still don’t really know what they’re getting into.  They don’t understand what the change experience will be like and how it will affect them.  They may be frustrated to discover that modeling desired new behaviors does not help because they still feel like the same people in the old culture.  Since personal change is not instantaneous, they may even get stuck somewhere between the old culture and the new culture.

Bottom line: Cultural change is harder for some people than others.  This article is an easy read that offers an introduction to the personal dynamics associated with changing one’s outlook or values.

*  J. Rothman, “The Art of Decision-Making,” The New Yorker (Jan. 21, 2019).  Retrieved March 1, 2019.

Monday, December 3, 2018

Nuclear Safety Culture: Lessons from Factfulness by Hans Rosling

This book* is about biases that prevent us from making fact-based decisions.  It is based on the author’s world-wide work as a doctor and public health researcher.  We saw it on Bill Gates’ 2018 summer reading list.

Rosling discusses ten instincts (or reasons) why our individual worldviews (or mental models) are systematically wrong and prevent us from seeing situations as they truly are and making fact-based decisions about them.

Rosling mostly addresses global issues but the same instincts can affect our approach to work-related decision making from the enterprise level down to the individual.  We briefly discuss each instinct and highlight how it may hinder us from making good decisions during everyday work and one-off investigations.

The gap instinct

This is “that irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between.” (p. 26)  This is reinforced by our “strong dramatic instinct toward binary thinking . . .” (p. 42)  The gap instinct can apply to our thinking about safety, e.g., in the Safety I mental model there is acceptable performance and intolerable performance, with no middle ground and no normal transitions back and forth.  Rosling notes that usually there is no clear cleavage between two groups, even if it seems like that from the averages.  We saw this in Dekker's analysis of health provider data (reviewed Oct. 29, 2018) where both favorable and unfavorable patient outcomes exhibited the same negative work process traits.

The negativity instinct

This is “our tendency to notice the bad more than the good.” (p. 51)  We do not perceive  improvements that are “too slow, too fragmented, or too small one-by-one to ever qualify as news.” (p. 54)  “There are three things going on here: the misremembering of the past [erroneously glorifying the “good old days”]; selective reporting by journalists and activists; and the feeling that as long as things are bad it’s heartless to say they are getting better.” (p. 70)  To tell the truth, we don’t see this instinct inside the nuclear world where facilities with long-standing cultural problems (i.e., bad) are constantly reporting progress (i.e., getting better) while their cultural conditions still remain unacceptable.

The straight line instinct

This is the expectation that a line of data will continue straight into the future.  Most of you have technical training or exposure and know that accurate extrapolations can take many shapes, including straight lines, S-bends, asymptotes, humps, or exponential growth.
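As a toy numerical illustration (our own, not from the book), three different growth shapes can pass through exactly the same observed point and yet diverge wildly when extrapolated, which is why projecting the apparent straight line is risky:

```python
import math

# Three ways a rising data series might continue after an observation at t = 2.
def straight(t):  return 1.0 + 0.5 * t                    # linear growth
def s_bend(t):    return 4.0 / (1.0 + math.exp(2.0 - t))  # logistic: flattens toward 4
def doubling(t):  return 2.0 ** (t / 2.0)                 # exponential: doubles every 2 units

# All three curves equal 2.0 at t = 2 (the "observed" data point)...
observed = [straight(2.0), s_bend(2.0), doubling(2.0)]

# ...but extrapolated to t = 12 they give 7.0, ~4.0, and 64.0 respectively.
extrapolated = [straight(12.0), s_bend(12.0), doubling(12.0)]
```

The single observed point cannot distinguish the three hypotheses; only more data, or knowledge of the underlying mechanism, can.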

The fear instinct

“[F]ears are hardwired deep in our brains for obvious evolutionary reasons.” (p. 105)  “The media cannot resist tapping into our fear instinct. It is such an easy way to grab our attention.” (p. 106)  Rosling observes that hundreds of elderly people who fled Fukushima to escape radiation ended up dying “because of the mental and physical stresses of the evacuation itself or of life in evacuation shelters.” (p. 114)  In other words, they fled something frightening (a perceived risk) and ended up in danger (a real risk).  How often does fear, e.g., fear of bad press, enter into your organization’s decision making?

The size instinct 


We overweight things that look big to us.  “It is instinctive to look at a lonely number and misjudge its importance.  It is also instinctive . . . to misjudge the importance of a single instance or an identifiable victim.” (p. 125)  Does the nuclear industry overreact to some single instances?

The generalization instinct

“[T]he generalization instinct makes ‘us’ think of ‘them’ as all the same.” (p. 140)  At the macro level, this is where the bad “isms” exist: racism, sexism, ageism, classism, etc.  But your coworkers may practice generalization on a more subtle, micro level.  How many people do you work with who think the root cause of most incidents is human error?  Or somewhat more generously, human error, inadequate procedures and/or equipment malfunctions—but not the larger socio-technical system?  Do people jump to conclusions based on an inadequate or incorrect categorization of a problem?  Are categories, rather than facts, used as explanations?  Are vivid examples used to over-glamorize alleged progress or over-dramatize poor outcomes?

The destiny instinct

“The destiny instinct is the idea that innate characteristics determine the destinies of people, countries, religions, or cultures.” (p. 158)  Culture includes deep-seated beliefs, where feelings can be disguised as facts.  Does your work culture assume that some people are naturally bad apples?

The single perspective instinct

This is preference for single causes and single solutions.  It is the fundamental weakness of Safety I where the underlying attitude is that problems arise from individuals who need to be better controlled.  Rosling advises us to “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (p. 189)  We agree.

The blame instinct

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened. . . . when things go wrong, it must be because of some bad individual with bad intentions. . . . This undermines our ability to solve the problem, or prevent it from happening again, . . . To understand most of the world’s significant problems we have to look beyond a guilty individual and to the system.” (p. 192)  “Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to.  Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation.  Look for systems, not heroes.” (p. 204)  We totally agree with Rosling’s endorsement of a systems approach.

The urgency instinct

“The call to action makes you think less critically, decide more quickly, and act now.” (p. 209)  In a true emergency, people will fall back on their training (if any) and hope for the best.  However, in most situations, you should seek more information.  Beware of data that is relevant but inaccurate, or accurate but irrelevant.  Be wary of predictions that fail to acknowledge that the future is uncertain.

Our Perspective

The series of decisions an organization makes is a visible artifact of its culture and its decision making process internalizes culture.  Because of this linkage, we have long been interested in how organizations and individuals can make better decisions, where “better” means fact- and reality-based and consistent with the organization’s mission and espoused values.

We have reviewed many works that deal with decision making.  This book adds value because it is based on the author’s research and observations around the world; it is not based on controlled studies in a laboratory or observations in a single organization.  It uses very good graphics to illustrate various data sets, including changes, e.g., progress, over time.

Rosling believed “it has never been easier or more important for business leaders and employees to act on a fact-based worldview.” (p. 228)   His book is engagingly written and easy to read.  It is Rosling’s swan song; he died in 2017.

Bottom line: Rosling advocates for robust decision making, accurate mental models, and a systems approach.  We like it.


*  H. Rosling, O. Rosling and A.R. Rönnlund, Factfulness, 1st ed. ebook (New York: Flatiron, 2018).

Friday, November 9, 2018

Nuclear Safety Culture: Lessons from Turn the Ship Around! by L. David Marquet

Turn the Ship Around!* was written by a U.S. Navy officer who was assigned to command a submarine with a poor performance history.  He adopted a management approach that was radically different from the traditional top-down, leader-follower, “I say, you do” Navy model for controlling people.  The new captain’s primary method was to push decision making down to the lowest practical organizational levels; he supported his crew’s new authorities (and maintained control of the overall situation) with strategies to increase their competence and provide clarity on the organization’s purpose and goals.

Specific management practices were implemented or enhanced to support the overall approach.  For example, decision making guidelines were developed and disseminated.  Attaining goals was stressed over mindlessly following procedures.  Crew members were instructed to “think out loud” before initiating action; this practice communicated intent and increased organizational resilience because it created opportunities for others to identify potential errors before they could occur and propagate.  Pre-job briefs were changed from the supervisor reciting the procedure to asking participants questions about their roles and preparation.

As a result, several organizational characteristics that we have long promoted became more evident, including deferring to expertise (getting the most informed, capable people involved with a decision), increased trust, and a shared mental model of vision, purpose and organizational functioning.

As you can surmise, his approach worked.  (If it hadn’t, Marquet would have had a foreshortened career and there would be no book.)  All significant operational and personnel metrics improved under his command.  His subordinates and other crew members became highly promotable.  Importantly, the boat’s performance continued at a high level after he completed his tour; in other words, he established a system for success that could live on without his personal involvement.

Our Perspective 


This book provides a sharp contrast to nuclear industry folklore that promotes strong, omniscient leadership as the answer to every problem situation.  Marquet did not act out the role of the lone hero; instead, he built a management system that created superior performance while he was in command and after he moved on.  There can be valuable lessons here for nuclear managers, but one has to appreciate the particular requirements for undertaking this type of approach.

The manager’s attitude

You have to be willing to share some (maybe a lot) of your authority with your subordinates, their subordinates and so forth on down the line while still being held to account by your bosses for your unit’s performance.  Not everyone can do this.  It requires faith in the new system and in your people, and a certain detachment from short-term concerns about your own career.  You also need to have sufficient self-awareness to learn from mistakes as you move forward and recognize when you are failing to walk the talk with your subordinates.

In Marquet’s case, there were two important precursors to his grand experiment.  First, he had seen on previous assignments how demoralizing top-down micromanagement could be vs. how liberating and motivating it was for him (as a subordinate officer) to actually be allowed to make decisions.  Second, he had been training for a year on how to command a sub of a design different from the boat to which he was eventually assigned; he couldn’t go in and micromanage everyone from the get-go, he didn’t have sufficient technical knowledge.

The work environment

Marquet had one tremendous advantage: from a social perspective, a submarine is largely a self-contained world.  He did not have to worry about what people in the department next door were doing; he only had to get his remote boss to go along with his plan.  If you’re a nuclear plant department head and you want to adopt this approach but the rest of the organization runs top-down, it may be rough sledding unless you do lots of prep work to educate your superiors and get them to support you, perhaps for a pilot or trial project.

The book is easy reading, with short chapters, lots of illustrative examples (including some interesting information on how the Navy and nuclear submarines work), sufficient how-to lists, and discussion questions at the end of chapters.  Marquet did not invent his approach or techniques out of thin air.  As an example, some of his ideas and prescriptions, including rejecting the traditional Navy top-down leadership model, setting clear goals, providing principles for guiding decision making, enforcing reflection after making mistakes, giving people tools and advantages but holding them accountable, and culling people who can’t get with the program** are similar to points in Ray Dalio’s Principles, which we reviewed on April 17, 2018.  This is not surprising.  Effective, self-aware leaders should share some common managerial insights.

Bottom line: Read this book to see a real-world example of how authentic employee empowerment can work.


*  L.D. Marquet, Turn the Ship Around! (New York: Penguin, 2012).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  People have different levels of appetite for empowerment or other forms of participatory management.  Not everyone wants to be fully empowered, highly self-motivated or expected to show lots of initiative.  You may end up with employees who never buy into your new program and, in the worst case, you won’t be allowed to get rid of them.

Monday, October 29, 2018

Safety Culture: What are the Contributors to “Bad” Outcomes Versus “Good” Outcomes and Why Don’t Some Interventions Lead to Improved Safety Performance?

Sidney Dekker recently revisited* some interesting research he led at a large health care authority.  The authority’s track record was not atypical for health care: 1 out of 13 (7%) patients was hurt in the process of receiving care.  The authority investigated the problem cases and identified a familiar cluster of negative factors, including workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations—the list goes on.  The interventions will also be familiar to you—identify who did what wrong, add more rules, try harder and get rid of bad apples—but they were not reducing the adverse event rate.

Dekker’s team took a different perspective and looked at the 93% of patients who were not harmed.  What was going on in their cases?  To their surprise, the team found the same factors: workarounds, shortcuts, violations, guidelines not followed, errors and miscalculations, etc.** 

Dekker uses this research to highlight a key difference between the traditional view of safety management, Safety I, and the more contemporary view, Safety II.  At its heart, Safety I believes the source of problems lies with the individual so interventions focus on ways to make the individual’s work behavior more reliable, i.e., less likely to deviate from the idealized form specified by work designers.  Safety I ignores the fact that the same imperfections exist in work with both successful and problematic outcomes.

In contrast, Safety II sees the source of problems in the system, the dynamic combination of technology, environmental factors, organizational aspects, and individual cognition and choices.  Referencing the work of Diane Vaughan, Dekker says “the interior life of organizations is always messy, only partially well-coordinated and full of adaptations, nuances, sacrifices and work that is done in ways that is quite different from any idealized image of it.”

Revisiting the data revealed that the work with good outcomes was different.  This work had more positive characteristics, including diversity of professional opinion and the possibility to voice dissent, keeping the discussion on risk alive and not taking past success as a guarantee for safety, deference to proven expertise, widely held authority to say “stop,” and pride of workmanship.  As you know, these are important characteristics of a strong safety culture.

Our Perspective

Dekker’s essay is a good introduction to the differences between Safety I and Safety II thinking, most importantly their differing mental models of the way work is actually performed in organizations.  In Safety I, the root cause of imperfect results is the individual, and constant efforts (e.g., training, monitoring, leadership, discipline) are necessary to create and maintain the individual’s compliance with work as designed.  In Safety II, normal system functioning leads to mostly good and occasionally bad results.  The focus of Safety II interventions should be on activities that increase individual capacity to affect system performance and/or increase system robustness, i.e., error tolerance and an increased chance of recovery when errors occur.

If one applies Safety I thinking to a “bad” outcome then the most likely result from an effective intervention is that the exact same problem will not happen again.  This thinking sustains a robust cottage industry in root-cause analysis because new problems will always arise and no changes are made to the system itself.

We like Dekker’s (and Vaughan’s) work and have reported on it several times in Safetymatters (click on the Dekker and Vaughan labels to bring up related posts).  We have been emphasizing some of the same points, especially the need for a systems view, since we started Safetymatters almost ten years ago.

Individual Exercise: Again drawing on Vaughan, Dekker says “there is often no discernable difference between the organization that is about to have an accident or adverse event, and the one that won’t, or the one that just had one.”  Look around your organization and review your career experience; is that true?


*  S. Dekker, “Why Do Things Go Right?,” SafetyDifferently website (Sept. 28, 2018).  Retrieved Oct. 25, 2018.

**  This is actually rational.  People operate on feedback and if the shortcuts, workarounds and disregarding the guidelines did not lead to acceptable (or at least tolerable) results most of the time, folks would stop using them.

Friday, July 6, 2018

WANO Publicizes Projects That Promote Safety But Short-Changes Nuclear Safety Culture

The World Association of Nuclear Operators (WANO) recently announced* the completion and delivery of 12 post-Fukushima projects intended to enhance safety in the world’s commercial nuclear power plants.  It appears the projects were accomplished by a combination of WANO and member personnel.  An addendum to the press release describes how WANO has revised its own practices to more effectively deliver its services in the 12 project areas to members.  The projects address emergency preparedness, emergency support plan, severe accident management, early event notification, onsite fuel storage, design safety fundamentals, peer review frequency and equivalency, corporate peer reviews, WANO assessment, transparency and visibility, and WANO internal assessment. 

Our Perspective

We usually don’t waste time with WANO because it has never developed or promoted any insight into the systemic interactions of the management and cultural variables that create ongoing nuclear organizational performance.  And the results they are touting are based on their familiar, inadequate worldview, viz., more leadership development and more detailed guidance for functional areas.

That said, we recognize that incremental improvements in the project areas might add some modest value and hopefully do not hurt performance.  (Performance may be “hurt” when personnel punctiliously and mindlessly follow policies, rules and procedures without considering if they are actually appropriate for the situation at hand.)

Most of WANO’s claims for improving its own services are typical chest-thumping but a few items perpetuate long-standing industry shortcomings, especially excessive secrecy.  For example, under design safety fundamentals WANO peer reviews assess whether safety-related design features are appropriately managed but “WANO does not make design-change recommendations or evaluate the design of the plant itself.”  WANO assessments of utility/plant performance are confidential to the subject CEOs.  And WANO’s concept of improving transparency means “effectively sharing information and best practices within the membership.”  Looks like WANO’s prime directive is to shield the dues-paying members from any hard questions or external criticism.

Our biggest gripe is WANO’s treatment, or lack thereof, of nuclear safety culture (NSC).  In the press release, culture is mentioned once: Mid-to-senior level “managers at nuclear power plants play a vital part in delivering excellence and a strong nuclear safety culture, due to their positional influence throughout the organisation.”  That’s true, but culture is much more pervasive, systemic and important than that.

We find it surreal that WANO has been busy organizing worldwide resources to polish the bowling ball** and then claiming they have made the industry safer post-Fukushima.  Linking their putative progress to Fukushima ignores a fundamental truth: while weaknesses in various functional areas were causal factors that made a bad situation worse, the root cause of the Fukushima disaster was the deep-seated, value-driven unwillingness of people in the know to speak truth to power about the tsunami design inadequacies.  It was culture that killed the plant.


*  WANO press release, “WANO calls on industry to build on progress after post-Fukushima improvements” (June 26, 2018).  Retrieved July 5, 2018.

**  “polish a bowling ball” - A phrase we use to describe activities that make an existing construct shinier but have no impact on its fundamental nature or effectiveness.

Wednesday, June 20, 2018

Catching Up with Nuclear Safety Culture’s Bad Boys: Entergy and TVA

We haven’t reported in a while on the activities of the two plant operators who dominate the negative news in the Nuclear Safety Culture (NSC) space, viz., Entergy and TVA.  Spoiler alert: there is nothing novel or unexpected to report, only the latest chapters in their respective ongoing sagas.

Entergy

On March 12, 2018 the NRC issued a Confirmatory Order* (CO) to Entergy for violations at the Grand Gulf plant: (1) an examination proctor provided assistance to trainees and (2) nonlicensed operators did not tour all required watch station areas and entered inaccurate information into the operator logs.  The NRC characterized these as willful violations.  As has become customary, Entergy requested Alternative Dispute Resolution (ADR).  Entergy agreed to communicate fleet-wide the company’s intolerance for willful misconduct, evaluate why prior CO-driven corrective actions failed to prevent the current violations, conduct periodic effectiveness reviews of corrective actions, and conduct periodic “organizational health surveys” to identify NSC concerns that could contribute to willful misconduct.

On March 29, 2018 the NRC reported** on Arkansas Nuclear One’s (ANO’s) progress in implementing actions required by a June 17, 2016 Confirmatory Action Letter (CAL).  (We reported at length on ANO’s problems on June 25, 2015 and June 16, 2016.)  A weak NSC has been a major contributor to ANO’s woes.  The NRC inspection team concluded that all but one of the corrective actions were implemented and effective, and closed those items.  The NRC also concluded that actions taken to address two inspection focus areas and two Yellow findings were satisfactory.

On April 20, 2018 the NRC reported*** on ANO’s actions to address a White inspection finding.  They concluded the actions were satisfactory and noted that ANO’s root cause evaluation had identified nine NSC aspects with weaknesses.  Is that good news because they identified the weaknesses or bad news because they found so many?  You be the judge.


On June 18, 2018 the NRC closed**** ANO's CAL and moved the plant into column 1 of the Reactor Oversight Process Action Matrix.

TVA

The International Atomic Energy Agency (IAEA) conducted an Operational Safety Review Team (OSART) review***** of Sequoyah during August 14-31, 2017.  The team reviewed plant operational safety performance vis-à-vis IAEA safety standards and made appropriate recommendations and suggestions.  Two of the three significant recommendations have an NSC component: (1) “improve the performance of management and staff in challenging inappropriate behaviours” and (2) “improve the effectiveness of event investigation and corrective action implementation . . .” (p. 2)

Focusing on NSC, the team observed: “The procedure for nuclear safety culture self-assessments does not include a sufficiently diverse range of tools necessary to gather all the information required for effective analysis. The previous periodic safety culture self-assessment results were based on surveys but other tools, such as interviews, focus groups and observations, were only used if the survey revealed any gaps.” (p. 60)

On March 14, 2018 the NRC reported^ on Watts Bar’s progress in addressing NRC CO EA-17-022 and Chilling Effect Letter (CEL) EA-16-061, and licensee action to establish and maintain a safety-conscious work environment (SCWE).  (We discussed the CEL on March 25, 2016 and NSC/SCWE problems on Nov. 14, 2016.)  Licensee actions with NSC-related components were noted throughout the report including the discussions on plant communications, training, work processes and independent oversight.  The sections on assessing NSC/SCWE and “Safety Over Production” included inspection team observations (aka opportunities for improvement) which were shared with the licensee. (pp. 10-11, 17, 24-27)  One TVA corrective action was to establish a Fleet Safety Culture Peer Team, which has been done.  The overall good news is the report had no significant NSC-related negative findings.  Focus group participants were generally positive about NSC and SCWE but expressed concern about “falling back into old patterns” and “declaring success too soon.” (p. 27)

Our Perspective

For Entergy, it looks like business as usual, i.e., NSC Whac-A-Mole.  They get caught or self-report an infraction, go to ADR, and promise to do better at the affected site and fleet-wide.  Eventually a new problem arises somewhere else.  The strength of their overall NSC appears to be floating in a performance band below satisfactory but above intolerable.

We are a bit more optimistic with respect to TVA.  It would be good if TVA could replicate at Browns Ferry and Watts Bar some of the values and practices of Sequoyah, which has managed to keep its nose generally clean.  Perhaps their fleet-wide initiative will be a mechanism for making that happen.

We applaud the NRC inspection team for providing specific information to Watts Bar on actions the plant could take to strengthen its NSC.

Bottom line: The Sequoyah OSART report is worth reviewing for its detailed reporting of the team’s observations of unsafe (or at least questionable) employee work behaviors.


*  K.M. Kennedy (NRC) to J.A. Ventosa (Entergy), “Confirmatory Order, NRC Inspection Report 05000416/2017014, and NRC Investigation Reports 4-2016-004 AND 4-2017-021” (Mar. 12, 2018).  ADAMS ML18072A191.

**  N.F. O’Keefe (NRC) to R.L. Anderson (Entergy), “Arkansas Nuclear One – NRC Confirmatory Action Letter (EA-16-124) Follow-up Inspection Report 05000313/2018012 AND 05000368/2018012” (Mar. 29, 2018).  ADAMS ML18092A005.

***  N.F. O’Keefe (NRC) to R.L. Anderson (Entergy), “Arkansas Nuclear One, Unit 2 – NRC Supplemental Inspection Report 05000368/2018040” (Apr. 20, 2018).  ADAMS ML18110A304.


****  K.M. Kennedy (NRC) to R.L. Anderson (Entergy), "Arkansas Nuclear One – NRC Confirmatory Action Letter (EA-16-124) Follow-up Inspection Report 05000313/2018013 AND 05000368/2018013 and Assessment Follow-up Letter" (Jun. 18, 2018).  ADAMS ML18165A206.

*****  IAEA Operational Safety Review Team (OSART), Report of the Mission to the Sequoyah Nuclear Power Plant Aug. 14-31, 2017, IAEA-NSNI/OSART/195/2017.  ADAMS ML18061A036.  The document date in the NRC library is Mar. 2, 2018.

^  A.D. Masters (NRC) to J.W. Shea, “Watts Bar Nuclear Plant – Follow-up for NRC Confirmatory Order EA-17-022 and Chilled Work Environment Letter EA-16-061; NRC Inspection Report 05000390/2017009, 05000391/2017009” (Mar. 14, 2018).  ADAMS ML18073A202.