
Thursday, July 1, 2021

Making Better Decisions: Lessons from Noise by Daniel Kahneman, Oliver Sibony, and Cass R. Sunstein


The authors of Noise: A Flaw in Human Judgment* examine the random variations that occur in judgmental decisions and recommend ways to make more consistent judgments.  Variability is observed when two or more qualified decision makers review the same data or face the same situation and come to different judgments or conclusions.  (Variability can also occur when the same decision maker revisits a previous decision situation and arrives at a different judgment.)  The decision makers may be doctors making diagnoses, engineers designing structures, judges sentencing convicted criminals, or professionals in any other field that requires expert judgment.**  Judgments can vary because of two factors: bias and noise.

Bias is systematic, a consistent source of error in judgments.  It creates an observable average difference between actual judgments and theoretical judgments that would reflect a system’s actual or espoused goals and values.  Bias may be exhibited by an individual or a group, e.g., when the criminal justice system treats members of a certain race or class differently from others.

Noise is random scatter, a separate, independent cause of variability in decisions involving judgment.  It is similar to the residual error in a statistical equation, i.e., noise may have a zero average (because higher judgments are balanced by lower ones) but noise can create large variability in individual judgments.  Such inconsistency damages the credibility of the system.  Noise has three components: level, pattern, and occasion. 

Level refers to the difference in the average judgment made by different individuals, e.g., a magistrate may be tough or lenient. 

Pattern refers to the idiosyncrasies of individual judges, e.g., one magistrate may be severe with drunk drivers but easy on minor traffic offenses.  These idiosyncrasies include the internal values, principles, memories, and rules a judge brings to every case, consciously or not. 

Occasion refers to a random instability, e.g., where a fingerprint examiner looking at the same prints finds a match one day and no match on another day.  Occasion noise can be influenced by many factors including a judge’s mood, fatigue, and recent experience with other cases. 

Based on a review of the available literature and their own research, the authors suggest that noise can be a larger contributor to judgment variability than bias, with stable pattern noise larger than level noise or occasion noise.
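The decomposition above can be made concrete with a small simulation, sketched here in Python with invented numbers (the magnitudes are our own illustrative assumptions, not figures from the book): each judgment is modeled as a true value plus a shared bias, a judge-specific level effect, a judge-by-case pattern effect, and random occasion noise.  The overall mean squared error then splits cleanly into bias squared plus noise squared.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 100          # the "correct" judgment for every case (simplified)
BIAS = 2.0                # shared systematic error across all judges
N_JUDGES, N_CASES = 50, 50

# Judge-specific average severity (level noise) and judge-by-case
# idiosyncrasies (stable pattern noise); occasion noise is drawn per judgment.
levels = [random.gauss(0, 4) for _ in range(N_JUDGES)]
patterns = [[random.gauss(0, 5) for _ in range(N_CASES)] for _ in range(N_JUDGES)]

def judgment(j, c):
    occasion = random.gauss(0, 3)   # day-to-day scatter
    return TRUE_VALUE + BIAS + levels[j] + patterns[j][c] + occasion

errors = [judgment(j, c) - TRUE_VALUE
          for j in range(N_JUDGES) for c in range(N_CASES)]

bias_est = statistics.mean(errors)     # recovers something close to BIAS
noise_est = statistics.pstdev(errors)  # system noise (level + pattern + occasion)
mse = statistics.mean(e * e for e in errors)

# Mean squared error decomposes into bias squared plus noise squared.
print(f"bias ≈ {bias_est:.1f}, noise ≈ {noise_est:.1f}")
print(f"MSE = {mse:.1f} ≈ bias² + noise² = {bias_est**2 + noise_est**2:.1f}")
```

With these made-up settings the noise term dwarfs the bias term, and the stable pattern component (sd 5) is the largest slice, consistent with the authors' point.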

Ways to reduce noise

Noise can be reduced through interventions at the individual or group level. 

For the individual, interventions include training to help people who make judgments realize how different psychological biases can influence decision making.  The long list of psychological biases in Noise builds on Kahneman’s work in Thinking, Fast and Slow which we reviewed on Dec. 18, 2013.  Such biases include overconfidence; denial of ignorance, which means not acknowledging that important relevant data isn’t known; base rate neglect, where outcomes in other similar cases are ignored; availability, which means the first solutions that come to mind are favored, with no further analysis; and anchoring of subsequent values to an initial offer.  Noise reduction techniques include active open-mindedness, which is the search for information that contradicts one’s initial hypothesis, or positing alternative interpretations of the available evidence; and the use of rankings and anchored scales rather than individual ratings based on vague, open-ended criteria.  Shared professional norms can also contribute to more consistent judgments.

At the group level, noise can be reduced through techniques the authors call decision hygiene.  The underlying belief is that obtaining multiple, independent judgments can increase accuracy, i.e., lead to an answer that is closer to the true or best answer.  For example, a complicated decision can be broken down into multiple dimensions, and each dimension assessed individually and independently.  Group members share their judgments for each dimension, then discuss them, and only then combine their findings (and their intuition) into a final decision.  Trained decision observers can be used to watch for signs that familiar biases are affecting someone’s decisions or that group dynamics involving position, power, politics, ambition and the like are contaminating the decision process and negating actual independence.
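The statistical logic behind aggregating independent judgments can be sketched numerically (a Python toy with illustrative numbers of our own, not from the book): averaging n independent, equally noisy judgments of the same case shrinks the random scatter by roughly a factor of the square root of n, while any shared bias would remain untouched.

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 100   # the unknown "best" answer
NOISE_SD = 10      # scatter of a single judge's estimate
N_TRIALS = 2000    # repetitions used to measure the scatter

def avg_of(n):
    """Average n independent noisy judgments of the same case."""
    return statistics.mean(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n))

# Scatter of the aggregated judgment falls roughly as NOISE_SD / sqrt(n).
sds = {n: statistics.pstdev(avg_of(n) for _ in range(N_TRIALS))
       for n in (1, 4, 16)}
for n, sd in sds.items():
    print(f"n={n:2d} judges: scatter ≈ {sd:.1f}")
```

This is why independence matters so much in decision hygiene: judgments contaminated by group dynamics are correlated, and correlated errors do not average away.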

Noise can also be reduced or eliminated by the use of rules, guidelines, or standards. 

Rules are inflexible, thus noiseless.  However, rules (or algorithms) may also have biases coded into them or only apply to their original data set.  They may also drive discretion underground, e.g., where decision makers game the process to obtain the results they prefer.

Guidelines, such as sentencing guidelines for convicted criminals or templates for diagnosing common health problems, are less rigid but still reduce noise.  Guidelines decompose complex decisions into easier sub-judgments on predefined dimensions.  However, judges and doctors push back against mandatory guidelines that reduce their ability to deal with the unique factors of individual cases before them.

Standards are the least rigid noise reduction technique; they delegate power to professionals and are inherently qualitative.  Standards generally require that professionals make decisions that are “reasonable” or “prudent” or “feasible.”  They are related to the shared professional norms previously mentioned.  Judgments based on standards can invite controversy, disagreement, confrontation, and lawsuits.

The authors recognize that in some areas it is infeasible, too costly, or even undesirable to eliminate noise.  One particular fear is that a noise-free system might freeze existing values.  Rules and guidelines need to be flexible to adapt to changing social values or new data.

Our Perspective

We have long promoted the view that decision making (the process) and decisions (the artifacts) are crucial components of a socio-technical system, and have a significant two-way influence relationship with the organization’s culture.  Decision making should be guided by an organization’s policies and priorities, and the process should be robust, i.e., different decision makers should arrive at acceptably similar decisions. 

Many organizations examine (and excoriate) bad decisions and the “bad apples” who made them.  Organizations also need to look at “good” decisions to appreciate how much their professionals disagree when making generally acceptable judgments.  Does the process for making judgments develop the answer best supported by the facts, and then adjust it for preferences (e.g., cost) and values (e.g., safety), or do the fingers of the judges go on the scale at earlier steps?

You may be surprised at the amount of noise in your organization’s professional judgments.  On the other hand, is your organization’s decision making too rigid in some areas?  Decisions made using rules can be quicker and cheaper than prolonged analysis, but may lead to costly errors.  Which approach carries the higher cost of errors?  Operators (or nurses or whoever) may follow the rules punctiliously, but sometimes the train still goes off the tracks. 

Bottom line: This is an important book that provides a powerful mental model for considering the many factors that influence individual professional judgments.


*  D. Kahneman, O. Sibony, and C.R. Sunstein, Noise: A Flaw in Human Judgment (New York: Little, Brown Spark, 2021).

**  “Professional judgment” implies some uncertainty about the answer, and judges may disagree, but there is a limit on how much disagreement is tolerable.


Monday, June 29, 2020

A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth

Charlan Nemeth is a psychology professor at the University of California, Berkeley.  Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions.  This post presents highlights from her 2018 book* and provides our perspective on her views.

Going along to get along

Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong.  Why?  Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)

An organization’s leader(s) may create a culture of agreement but consensus, aka the tyranny of the majority, gives the culture its power over members.  People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion.  The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)

Truth matters less than group cohesion.  People will shape and distort reality to support the consensus—they are complicit in their own brainwashing.  They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along.  They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise.  To get along, most employees don’t speak up when they see problems. (32,42,98,198)

“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible.  The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action.  Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)

Dissent adds value to the decision making process

Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., creates more alternatives and considers facts on all sides of the issue.  Importantly, the decision making process is improved even when the dissenter is wrong because it increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180) 

Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up.  Anonymous dissent can help protect the dissenter from the group. (37,47) 

Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter.  To persuade others, the dissenter must remain consistent in his position.  He can only change because of new or changing information.  Only authentic, persistent dissent will force others to confront the possibility that they may be wrong.  At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)

Alternatives to dissent

Other, less antagonistic, approaches to improving decision making have been promoted.  Nemeth finds them lacking.

Training is the go-to solution in many organizations but is not very effective at addressing biases or getting people to speak up in the face of power and hierarchy.  Dissent is superior to training because it prompts people to reconsider positions and contemplate alternatives. (101,107)

Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth.  However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated.   In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)

Demographic diversity is promoted as a way to get more input into decisions.  But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)

The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it.  Group members believe the approach is fake or acting (even when the advocate really believes it) so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong.  The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-04)

Dissent is not free for the individual or the group

Dissenters are disliked, ridiculed, punished, or worse.  Dissent definitely increases conflict and sometimes lowers morale in the group.  It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed.  The culture should expect that everyone will be treated with respect. (197-98,209)

Our Perspective

We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered.  Nemeth’s work strengthens and extends our belief in the value of different views.

If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers.  Dissent may dissuade the group from linear cause-effect, path of least resistance thinking.  We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles.  In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions.  His people have an obligation to speak up if they disagree.  Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.

On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus.  Let’s face it: nobody likes a rat.

We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making.  Training can give people skills or techniques for better decision making but training does not address the underlying values that steer group decision making dynamics. 

Much academic research of this sort is done using students as test subjects.***  They are readily available, willing to participate, and follow directions.  Some folks think the results don’t apply to older adults in formal organizations.  We disagree.  It’s easier to form groups of strangers from students, who don’t have to worry about power and personal relationships the way people in work situations do; the underlying psychological mechanisms can be exposed clearly and cleanly.

Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read. 


(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)


*  C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).

**  Kennedy learned from the Bay of Pigs fiasco.  He used a much more open and inclusive decision making process during the Cuban Missile Crisis.

***  For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.

Monday, June 15, 2020

IAEA Working Paper on Safety Culture Traits and Attributes

The International Atomic Energy Agency (IAEA) has released a working paper* that attempts to integrate (“harmonize”) the efforts by several different entities** to identify and describe desirable safety culture (SC) traits and attributes.  The authors have also tried to make the language of SC less nuclear power specific, i.e., more general and thus helpful to other fields that deal with ionizing radiation, such as healthcare.  Below we list the 10 traits and highlight the associated attributes that we believe are most vital for a strong SC.  We also offer our suggestions for enhancing the attributes to broaden and strengthen the associated trait’s presence in the organization.

Individual Responsibility 


All individuals associated with an organization know and adhere to its standards and expectations.  Individuals promote safe behaviors in all situations, collaborate with other individuals and groups to ensure safety, and “accept the value of diverse thinking in optimizing safety.”

We applaud the positive mention of “diverse thinking.”  We also believe each individual should have the duty to report unsafe situations or behavior to the appropriate authority and this duty should be specified in the attributes.

Questioning Attitude 


Individuals watch for anomalies, conditions, behaviors or activities that can adversely impact safety.  They stop when they are uncertain and get advice or help.  They try to avoid complacency.  “They understand that the technologies are complex and may fail in unforeseen ways . . .” and speak up when they believe something is incorrect.

Acknowledging that technology may “fail in unforeseen ways” is important.  Probabilistic Risk Assessments and similar analyses do not identify all the possible ways bad things can happen. 

Communication

Individuals communicate openly and candidly throughout the organization.  Communication with external organizations and the public is accurate.  The reasons for decisions are communicated.  The expectation that safety is emphasized over competing goals is regularly reinforced.

Leader Responsibility

Leaders place safety above competing goals, model desired safety behaviors, frequently visit work areas, involve individuals at all levels in identifying and resolving issues, and ensure that resources are available and adequate.

“Leaders ensure rewards and sanctions encourage attitudes and behaviors that promote safety.”  An organization’s reward system is a hot button issue for us.  Previous SC framework documents have never addressed management compensation and this one doesn’t either.  If SC and safety performance are important then people from top executives to individual workers should be rewarded (by which we mean paid money) for doing it well.

Leaders should also address work backlogs.  Backlogs signal to the organization that sub-optimal conditions are tolerated and, if they persist long enough, implicitly acceptable.  Backlogs encourage workarounds and inattention to detail, which will eventually create challenges to the safety management system.  

Decision-Making

“Individuals use a consistent, systematic approach to evaluate relevant factors, including risk, when making decisions.”  Organizations develop the ability to adapt in anticipation of unforeseen situations where no procedure or plan applies.

We believe the decision making process should be robust, i.e., different individuals or groups facing the same issue should come up with the same or an equally effective solution.  The organization’s approach to decision making (goals, priorities, steps, etc.) should be documented to the extent practical.  Robustness and transparency support efficient, effective communication of the reasons for decisions.

Work Environment 


“Trust and respect permeate the organization. . . . Differing opinions are encouraged, discussed, and thoughtfully considered.”

In addition, senior managers need to be trusted to tell the truth, do the right things, and not sacrifice subordinates to evade the managers’ own responsibilities.

Continuous Learning 


The organization uses multiple approaches to learn, including independent assessments, self-assessments, lessons learned from its own experience, and benchmarking against other organizations.

Problem Identification and Resolution

“Issues are thoroughly evaluated to determine underlying causes and whether the issue exists in other areas. . . . The effectiveness of the actions is assessed to ensure issues are adequately addressed. . . . Issues are analysed to identify possible patterns and trends. A broad range of information is evaluated to obtain a holistic view of causes and results.”

This is good but could be stronger.  Leaders should ensure the most knowledgeable individuals, regardless of their role or rank, are involved in addressing an issue. Problem solvers should think about the systemic relationships of issues, e.g., is an issue caused by activity in or feedback from some other sub-system, the result of a built-in time delay, or performance drift that exceeded the system’s capacities?  Will the proposed fix permanently address the issue or is it just a band-aid?

Raising Concerns

The organization encourages personnel to raise safety concerns and does not tolerate harassment, intimidation, retaliation or discrimination for raising safety concerns. 

This is the essence of a Safety Conscious Work Environment and is sine qua non for any high hazard undertaking.

Work Planning 


“Work is planned and conducted such that safety margins are preserved.”

Our Perspective

We have never been shy about criticizing IAEA for some of its feckless efforts to get out in front of the SC parade and pretend to be the drum major.***  However, in this case the agency has been content, so far, to build on the work of others.  It’s difficult for any organization to develop, implement, and maintain a strong, robust SC and the existence of many different SC guidebooks has never been helpful.  This is one step in the right direction.  We’d like to see other high hazard industries, in particular healthcare organizations such as hospitals, take to heart SC lessons learned from the nuclear industry.

Bottom line: This concise paper is worth checking out.


*  IAEA Working Document, “A Harmonized Safety Culture Model” (May 5, 2020).  This document is not an official IAEA publication.

**  Including IAEA, WANO, INPO, and government institutions from the United States, Japan, and Finland.

***  See, for example, our August 1, 2016 post on IAEA’s document describing how to perform safety culture self-assessments.  Click on the IAEA label to see all posts related to IAEA.

Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate. 


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.
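The R-vs-Q distinction in that footnote can be sketched in a few lines of Python (toy data invented for illustration): mechanically, the Q method is ordinary correlation run on the transposed data matrix, so it groups subjects with similar viewpoints rather than variables that move together.

```python
import statistics

# Toy data: rows are subjects, columns are variables (e.g., survey items).
data = [
    [5, 4, 1, 2],   # subject A's ratings on four items
    [4, 5, 2, 1],   # subject B: viewpoint similar to A's
    [1, 2, 5, 4],   # subject C: opposing viewpoint
]

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# R method: correlate variables (columns) across subjects.
cols = list(zip(*data))
r_items = pearson(cols[0], cols[1])

# Q method: correlate subjects (rows) across variables, i.e., run the
# same analysis on the transposed matrix.
q_ab = pearson(data[0], data[1])   # A and B share a viewpoint
q_ac = pearson(data[0], data[2])   # A and C hold opposing viewpoints

print(f"R (item 1 vs item 2):  {r_items:+.2f}")
print(f"Q (subject A vs B):    {q_ab:+.2f}")
print(f"Q (subject A vs C):    {q_ac:+.2f}")
```

A real Q study would then factor-analyze the subject-by-subject correlation matrix to extract the shared viewpoints; the sketch stops at the correlation step.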

Monday, April 1, 2019

Culture Insights from The Speed of Trust by Stephen M.R. Covey

In The Speed of Trust,* Stephen M.R. Covey posits that trust is the key competency that allows individuals (especially leaders), groups, organizations, and societies to work at optimum speed and cost.  In his view, “Leadership is getting results in a way that inspires trust.” (p. 40)  We saw the book mentioned in an NRC personnel development memo** and figured it was worth a look. 

Covey presents a model of trust made up of a framework, language to describe the framework’s components, and a set of recommended behaviors.  The framework consists of self trust, relationship trust and stakeholder trust.  Self trust is about building personal credibility; relationship trust is built on one’s behavior with others; and stakeholder trust is built within organizations, in markets (i.e., with customers), and over the larger society.  His model is not overly complicated but it has a lot of parts, as shown in the following figure.


Figure by Safetymatters

4 Cores of credibility 


Covey begins by describing how the individual can learn to trust him or herself.  This is basically an internal process of developing the 4 Cores of credibility: character attributes (integrity and intent) and competence attributes (capabilities and results).  Improvement in these areas increases self-confidence and one’s ability to project a trust-inspiring strength of character.  Integrity includes clarifying values and following them.  Intent includes a transparent, as opposed to hidden, agenda that drives one’s behavior.  Capabilities include the talents, skills, and knowledge, coupled with continuous improvement, that enable excellent performance.  Results, e.g., achieving goals and keeping commitments, are sine qua non for establishing and maintaining credibility and trust.

13 Behaviors  

The next step is learning how to trust and be trusted by others.  This is a social process, i.e., it is created through individual behavior and interaction with others.  Covey details 13 types of behavior to which the individual must attend.  Some types flow primarily, but not exclusively, from character, others from competence, and still others from a combination of the two.  He notes that “. . . the quickest way to decrease trust is to violate a behavior of character, while the quickest way to increase trust is to demonstrate a behavior of competence.” (p. 133)  Covey provides examples of each desired behavior, its opposite, and its “counterfeit” version, i.e., where people are espousing the desired behavior but actually avoiding doing it.  He describes the problems associated with underdoing and overdoing each behavior (an illustration of the Goldilocks Principle).  Behavioral change is possible if the individual has a compelling sense of purpose.  Each behavior type is guided by a set of principles, different for each behavior, as shown in the following figure.


Figure by Safetymatters

Organizational alignment

The third step is establishing trust throughout an organization.  The primary mechanism for accomplishing this is alignment of the organization’s visible symbols, underlying structures, and systems with the ideals expressed in the 4 Cores and 13 Behaviors, e.g., making and keeping commitments and accounting for results.  He describes the “taxes” associated with a low-trust organization and the “dividends” associated with a high-trust organization.  Beyond that, there is nothing new in this section.

Market and societal trust

We’ll briefly address the final topics.  Market trust is about an entity’s brand or reputation in the outside world.  Building a strong brand involves using the 4 Cores to establish, maintain or strengthen one’s reputation.  Societal trust is built on contribution, the value an entity creates in the world through ethical behavior, win-win business dealings, philanthropy and other forms of corporate social responsibility.     

Our Perspective 


Covey provides a comprehensive model of how trust is integral to relationships at every level of complexity, from the self to global relations.
 
The fundamental importance of trust is hardly news.  We have long said organization-wide trust is vital to a strong safety culture.  Trust is a lubricant that reduces organizational friction which, like physical friction, slows down activities and makes them more expensive.  In our Safetysim*** management simulator, trust was an input variable that affected the speed and effectiveness of problem resolution and overall cost performance. 

Covey’s treatment of culture is incomplete.  While he connects some of his behaviors or principles to organizational culture,**** he never actually defines culture.  It appears he thinks culture is something that “just is” or, perhaps, a consequence or artifact of performing the behaviors he prescribes.  It’s reasonable to assume Covey believes motivated individuals can behave their way to a better culture, saying “. . . behave your way into the person you want to be.” (pp. 87, 130)  His view is consistent with culture change theorists who believe people will eventually develop desired values if they model desired behavior long enough.  His recipe for cultural change boils down to “Just do it.”  We prefer a more explicit definition of culture, something along the spectrum from the straightforward notion of culture as an underlying set of values to the idea of culture as an emergent property of a complex socio-technical system. 

Trust is not the only candidate for the primary leadership or organizational competence.  The same or similar arguments could also be made about respect.  (Covey mentions respect but only as one of his 13 behaviors.)  Two-way respect is also essential for organizational success.  This leads to an interesting question: Could you respect a leader without trusting him/her?  How about some of the famous hard-ass bosses of management lore, like Harold Geneen?  Or General Patton? 

Covey is obviously a true believer in his message and his presentation has a fervor one normally associates with religious zeal.  He also includes many examples of family situations and describes how his prescriptions can be applied to families.  (Helpful if you want to manage your family like a little factory.)  Covey is a devout Mormon and his faith comes through in his writing. 

The book is an easy read.  Like many books written by successful consultants, it is interspersed with endorsements and quotes from business and political notables.  Covey includes a couple of useful self-assessment surveys.  He also offers a valuable observation: “. . . people tend to judge others based on behavior and judge themselves based on intent.” (p. 301)

Bottom line: This book is worth your time if lack of trust is a problem in your organization.


*  Stephen M. R. Covey, The Speed of Trust (New York: Free Press, 2016).  If the author’s name sounds familiar, it may be because his father, Stephen R. Covey, wrote The 7 Habits of Highly Effective People, a popular self-help book.

**  “Fiscal Year (FY) 2018 FEORP Plan Accomplishments and Successful/Promising Practices at the U.S. Nuclear Regulatory Commission (NRC),” Dec. 17, 2018.  ADAMS ML18351A243.  The agency uses The Speed of Trust concepts in manager and employee training. 

***  Safetysim is a management training simulation tool developed by Safetymatters’ Bob Cudlin.

****  For example, “A transparent culture of learning and growing will generally create credibility and trust, . . .” (p. 117)

Monday, December 3, 2018

Nuclear Safety Culture: Lessons from Factfulness by Hans Rosling

This book* is about biases that prevent us from making fact-based decisions.  It is based on the author’s world-wide work as a doctor and public health researcher.  We saw it on Bill Gates’ 2018 summer reading list.

Rosling discusses ten instincts (or reasons) why our individual worldviews (or mental models) are systematically wrong, preventing us from seeing situations as they truly are and from making fact-based decisions about them.

Rosling mostly addresses global issues but the same instincts can affect our approach to work-related decision making from the enterprise level down to the individual.  We briefly discuss each instinct and highlight how it may hinder us from making good decisions during everyday work and one-off investigations.

The gap instinct

This is “that irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between.” (p. 26)  This is reinforced by our “strong dramatic instinct toward binary thinking . . .” (p. 42)  The gap instinct can apply to our thinking about safety, e.g., in the Safety I mental model there is acceptable performance and intolerable performance, with no middle ground and no normal transitions back and forth.  Rosling notes that usually there is no clear cleavage between two groups, even if it seems like that from the averages.  We saw this in Dekker's analysis of health provider data (reviewed Oct. 29, 2018) where both favorable and unfavorable patient outcomes exhibited the same negative work process traits.

The negativity instinct

This is “our tendency to notice the bad more than the good.” (p. 51)  We do not perceive improvements that are “too slow, too fragmented, or too small one-by-one to ever qualify as news.” (p. 54)  “There are three things going on here: the misremembering of the past [erroneously glorifying the “good old days”]; selective reporting by journalists and activists; and the feeling that as long as things are bad it’s heartless to say they are getting better.” (p. 70)  To tell the truth, we don’t see this instinct inside the nuclear world, where facilities with long-standing cultural problems (i.e., bad) are constantly reporting progress (i.e., getting better) while their cultural conditions remain unacceptable.

The straight line instinct

This is the expectation that a line of data will continue straight into the future.  Most of you have technical training or exposure and know that accurate extrapolations can take many shapes, including straight lines, s-bends, asymptotes, humps, or exponential growth. 
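A toy calculation shows how badly straight-line extrapolation can miss when the underlying process is actually an s-curve.  The logistic parameters and time scale below are invented for illustration:

```python
import math

# A hypothetical "growth" process that actually follows an s-curve
# (logistic), observed only over its early, nearly-linear phase.
def logistic(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

observed = [(t, logistic(t)) for t in range(0, 6)]  # early data only

# Naive straight-line fit (ordinary least squares, done by hand).
n = len(observed)
sx = sum(t for t, _ in observed)
sy = sum(y for _, y in observed)
sxx = sum(t * t for t, _ in observed)
sxy = sum(t * y for t, y in observed)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Extrapolate both models far beyond the data.
t_future = 30
line_forecast = intercept + slope * t_future
true_value = logistic(t_future)  # the s-curve saturates near its ceiling

print(line_forecast, true_value)  # the two forecasts diverge sharply
```

Both models describe the first few points about equally well; it is only the extrapolation that exposes the wrong mental model, which is Rosling's point.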

The fear instinct

“[F]ears are hardwired deep in our brains for obvious evolutionary reasons.” (p. 105)  “The media cannot resist tapping into our fear instinct. It is such an easy way to grab our attention.” (p. 106)  Rosling observes that hundreds of elderly people who fled Fukushima to escape radiation ended up dying “because of the mental and physical stresses of the evacuation itself or of life in evacuation shelters.” (p. 114)  In other words, they fled something frightening (a perceived risk) and ended up in danger (a real risk).  How often does fear, e.g., fear of bad press, enter into your organization’s decision making?

The size instinct 


We overweight things that look big to us.  “It is instinctive to look at a lonely number and misjudge its importance.  It is also instinctive . . . to misjudge the importance of a single instance or an identifiable victim.” (p. 125)  Does the nuclear industry overreact to some single instances?

The generalization instinct

“[T]he generalization instinct makes “us” think of “them” as all the same.” (p. 140)  At the macro level, this is where the bad “isms” exist: racism, sexism, ageism, classism, etc.  But your coworkers may practice generalization on a more subtle, micro level.  How many people do you work with who think the root cause of most incidents is human error?  Or somewhat more generously, human error, inadequate procedures and/or equipment malfunctions, but not the larger socio-technical system?  Do people jump to conclusions based on an inadequate or incorrect categorization of a problem?  Are categories, rather than facts, used as explanations?  Are vivid examples used to over-glamorize alleged progress or over-dramatize poor outcomes?

The destiny instinct

“The destiny instinct is the idea that innate characteristics determine the destinies of people, countries, religions, or cultures.” (p. 158)  Culture includes deep-seated beliefs, where feelings can be disguised as facts.  Does your work culture assume that some people are naturally bad apples?

The single perspective instinct

This is preference for single causes and single solutions.  It is the fundamental weakness of Safety I where the underlying attitude is that problems arise from individuals who need to be better controlled.  Rosling advises us to “Beware of simple ideas and simple solutions. . . . Welcome complexity.” (p. 189)  We agree.

The blame instinct

“The blame instinct is the instinct to find a clear, simple reason for why something bad has happened. . . . when things go wrong, it must be because of some bad individual with bad intentions. . . . This undermines our ability to solve the problem, or prevent it from happening again, . . . To understand most of the world’s significant problems we have to look beyond a guilty individual and to the system.” (p. 192)  “Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to.  Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation.  Look for systems, not heroes.” (p. 204)  We totally agree with Rosling’s endorsement of a systems approach.

The urgency instinct

“The call to action makes you think less critically, decide more quickly, and act now.” (p. 209)  In a true emergency, people will fall back on their training (if any) and hope for the best.  However, in most situations, you should seek more information.  Beware of data that is relevant but inaccurate, or accurate but irrelevant.  Be wary of predictions that fail to acknowledge that the future is uncertain.

Our Perspective

The series of decisions an organization makes is a visible artifact of its culture, and its decision-making process internalizes that culture.  Because of this linkage, we have long been interested in how organizations and individuals can make better decisions, where “better” means fact- and reality-based and consistent with the organization’s mission and espoused values.

We have reviewed many works that deal with decision making.  This book adds value because it is based on the author’s research and observations around the world; it is not based on controlled studies in a laboratory or observations in a single organization.  It uses very good graphics to illustrate various data sets, including changes, e.g., progress, over time.

Rosling believed “it has never been easier or more important for business leaders and employees to act on a fact-based worldview.” (p. 228)   His book is engagingly written and easy to read.  It is Rosling’s swan song; he died in 2017.

Bottom line: Rosling advocates for robust decision making, accurate mental models, and a systems approach.  We like it.


*  H. Rosling, O. Rosling and A.R. Rönnlund, Factfulness, 1st ed. ebook (New York: Flatiron, 2018).

Friday, November 9, 2018

Nuclear Safety Culture: Lessons from Turn the Ship Around! by L. David Marquet

Turn the Ship Around!* was written by a U.S. Navy officer who was assigned to command a submarine with a poor performance history.  He adopted a management approach that was radically different from the traditional top-down, leader-follower, “I say, you do” Navy model for controlling people.  The new captain’s primary method was to push decision making down to the lowest practical organizational levels; he supported his crew’s new authorities (and maintained control of the overall situation) with strategies to increase their competence and provide clarity on the organization’s purpose and goals.

Specific management practices were implemented or enhanced to support the overall approach.  For example, decision making guidelines were developed and disseminated.  Attaining goals was stressed over mindlessly following procedures.  Crew members were instructed to “think out loud” before initiating action; this practice communicated intent and increased organizational resilience because it created opportunities for others to identify potential errors before they could occur and propagate.  Pre-job briefs were changed from the supervisor reciting the procedure to asking participants questions about their roles and preparation.

As a result, several organizational characteristics that we have long promoted became more evident, including deferring to expertise (getting the most informed, capable people involved with a decision), increased trust, and a shared mental model of vision, purpose and organizational functioning.

As you can surmise, his approach worked.  (If it hadn’t, Marquet would have had a foreshortened career and there would be no book.)  All significant operational and personnel metrics improved under his command.  His subordinates and other crew members became highly promotable.  Importantly, the boat’s performance continued at a high level after he completed his tour; in other words, he established a system for success that could live on without his personal involvement.

Our Perspective 


This book provides a sharp contrast to nuclear industry folklore that promotes strong, omniscient leadership as the answer to every problem situation.  Marquet did not act out the role of the lone hero; instead, he built a management system that created superior performance while he was in command and after he moved on.  There can be valuable lessons here for nuclear managers but one has to appreciate the particular requirements for undertaking this type of approach.

The manager’s attitude

You have to be willing to share some (maybe a lot) of your authority with your subordinates, their subordinates and so forth on down the line while still being held to account by your bosses for your unit’s performance.  Not everyone can do this.  It requires faith in the new system and your people and a certain detachment from short-term concerns about your own career.  You also need to have sufficient self-awareness to learn from mistakes as you move forward and recognize when you are failing to walk the talk with your subordinates.

In Marquet’s case, there were two important precursors to his grand experiment.  First, he had seen on previous assignments how demoralizing top-down micromanagement could be vs. how liberating and motivating it was for him (as a subordinate officer) to actually be allowed to make decisions.  Second, he had been training for a year on how to command a sub of a design different from the boat to which he was eventually assigned; he couldn’t go in and micromanage everyone from the get-go because he didn’t have sufficient technical knowledge.

The work environment

Marquet had one tremendous advantage: from a social perspective, a submarine is largely a self-contained world.  He did not have to worry about what people in the department next door were doing; he only had to get his remote boss to go along with his plan.  If you’re a nuclear plant department head and you want to adopt this approach but the rest of the organization runs top-down, it may be rough sledding unless you do lots of prep work to educate your superiors and get them to support you, perhaps for a pilot or trial project.

The book is easy reading, with short chapters, lots of illustrative examples (including some interesting information on how the Navy and nuclear submarines work), sufficient how-to lists, and discussion questions at the end of chapters.  Marquet did not invent his approach or techniques out of thin air.  As an example, some of his ideas and prescriptions, including rejecting the traditional Navy top-down leadership model, setting clear goals, providing principles for guiding decision making, enforcing reflection after making mistakes, giving people tools and advantages but holding them accountable, and culling people who can’t get with the program** are similar to points in Ray Dalio’s Principles, which we reviewed on April 17, 2018.  This is not surprising.  Effective, self-aware leaders should share some common managerial insights.

Bottom line: Read this book to see a real-world example of how authentic employee empowerment can work.


*  L.D. Marquet, Turn the Ship Around! (New York: Penguin, 2012).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  People have different levels of appetite for empowerment or other forms of participatory management.  Not everyone wants to be fully empowered, highly self-motivated or expected to show lots of initiative.  You may end up with employees who never buy into your new program and, in the worst case, you won’t be allowed to get rid of them.

Tuesday, April 17, 2018

Nuclear Safety Culture: Insights from Principles by Ray Dalio

Ray Dalio is the billionaire founder/builder of Bridgewater Associates, an investment management firm.  Principles* catalogs his policies, practices and lessons-learned for understanding reality and making decisions for achieving goals in that reality.  The book appears to cover every possible aspect of managerial and organizational behavior.  Our plan is to focus on two topics near and dear to us—decision making and culture—for ideas that could help strengthen nuclear safety culture (NSC).  We will then briefly summarize some of Dalio’s other thoughts on management.  Key concepts are shown in italics.

Decision Making

We’ll begin with Dalio’s mental model of reality.  Reality is a system of universal cause-effect relationships that repeat and evolve like a perpetual motion machine.  The system dynamic is driven by evolution (“the single greatest force in the universe” (p. 142)) which is the process of adaptation.

Because many situations repeat themselves, principles (policies or rules) advance the goal of making decisions in a systematic, repeatable way.  Any decision situation has two major steps: learning (obtaining and synthesizing data about the current situation) and deciding what to do.  Logic, reason and common sense are the primary decision making mechanisms, supported by applicable existing principles and tools, e.g., expected value calculations or evidence-based decision making tools.  The lessons learned from each decision situation can be incorporated into existing or new principles.  Practicing the principles develops good habits, i.e., automatic, reflexive behavior in the specified situations.  Ultimately, the principles can be converted into algorithms that can be computerized and used to support the human decision makers.

Believability weighting can be applied during the decision making process to obtain data or opinions about solutions.  Believable people can be anyone in the organization but are limited to those “who 1) have repeatedly and successfully accomplished the thing in question, and 2) . . . can logically explain the cause-effect relationships behind their conclusions.” (p. 371)  Believability weighting supplements and challenges responsible decision makers but does not overrule them.  Decision makers can also make use of thoughtful disagreement where they seek out brilliant people who disagree with them to gain a deeper understanding of decision situations.
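As a rough sketch of the arithmetic (the names, estimates, and believability scores below are our own invention, not Dalio's), believability weighting replaces a simple average of opinions with a weighted one:

```python
# Hypothetical estimates of a project's success probability from four
# people, each with a believability score meant to reflect track record
# and ability to explain their reasoning (all values invented).
opinions = [
    {"person": "veteran PM",   "estimate": 0.40, "believability": 0.9},
    {"person": "new analyst",  "estimate": 0.80, "believability": 0.2},
    {"person": "line manager", "estimate": 0.55, "believability": 0.6},
    {"person": "consultant",   "estimate": 0.70, "believability": 0.4},
]

def believability_weighted(opinions):
    """Aggregate estimates, weighting each by its holder's believability."""
    total_weight = sum(o["believability"] for o in opinions)
    return sum(o["estimate"] * o["believability"] for o in opinions) / total_weight

simple_average = sum(o["estimate"] for o in opinions) / len(opinions)
weighted = believability_weighted(opinions)

# The weighted result leans toward the most believable (veteran) view.
print(round(simple_average, 3), round(weighted, 3))
```

Note the sketch only aggregates the numbers; in Dalio's telling, the weighting supplements the responsible decision maker's judgment rather than mechanically replacing it.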

The organization needs a process to get beyond disagreement.  After all discussion, the responsible party exercises his/her decision making authority.  Ultimately, those who disagree have to get on board (“get in sync”) and support the decision or leave the organization.

The two biggest barriers to good decision making are ego and blind spots.  Radical open-mindedness recognizes the search for what’s true and the best answer is more important than the need for any specific person, no matter their position in the organization, to be right.

Culture

Organizations and the individuals who populate them should also be viewed as machines.  Both are imperfect but capable of improving. The organization is a machine made up of culture and people that produces outcomes that provide feedback from which learning can occur.  Mistakes are natural but it is unacceptable to not learn from them.  Every problem is an opportunity to improve the machine.  

People are generally imperfect machines.  People are more emotional than logical.   They suffer from ego (subconscious drivers of thoughts) and blind spots (failure to see weaknesses in themselves).  They have different character attributes.  In short, people are all “wired” differently.  A strong culture with clear principles is needed to get and keep everyone in sync with each other and in pursuit of the organization’s goals.

Mutual adjustment takes place when people interact with culture.  Because people are different and the potential to change their wiring is low,** it is imperative to select new employees who will embrace the existing culture.  If they can’t or won’t, or lack ability, they have to go.  Even with its stringent hiring practices, about a third of Bridgewater’s new hires are gone by the end of eighteen months.

Human relations are built on meaningful relationships, radical truth and tough love.  Meaningful relationships means people give more consideration to others than themselves and exhibit genuine caring for each other.  Radical truth means you are “transparent with your thoughts and open-mindedly accepting the feedback of others.” (p. 268)  Tough love recognizes that criticism is essential for improvement towards excellence; everyone in the organization is free to criticize any other member, no matter their position in the hierarchy.  People have an obligation to speak up if they disagree. 

“Great cultures bring problems and disagreements to the surface and solve them well . . .” (p. 299)  The culture should support a five-step management approach: Have clear goals, don’t tolerate problems, diagnose problems when they occur, design plans to correct the problems, and do what’s necessary to implement the plans, even if the decisions are unpopular.  The culture strives for excellence so it’s intolerant of folks who aren’t excellent and goal achievement is more important than pleasing others in the organization.

More on Management 


Dalio’s vision for Bridgewater is “an idea meritocracy in which meaningful work and meaningful relationships are the goals and radical truth and radical transparency are the ways of achieving them . . .” (p. 539)  An idea meritocracy is “a system that brings together smart, independent thinkers and has them productively disagree to come up with the best possible thinking and resolve their disagreements in a believability-weighted way . . .” (p. 308)  Radical truth means “not filtering one’s thoughts and one’s questions, especially the critical ones.” (ibid.)  Radical transparency means “giving most everyone the ability to see most everything.” (ibid.)

A person is a machine operating within a machine.  One must be one’s own machine designer and manager.  In managing people and oneself, take advantage of strengths and compensate for weaknesses via guardrails and by soliciting help from others.  An example of a guardrail is assigning a team member whose strengths balance another member’s weaknesses.  People must learn from their own bad decisions, so self-reflection after making a mistake is essential.  Managers must ascertain if mistakes are evidence of a weakness and whether compensatory action is required or, if the weakness is intolerable, termination.  Because values, abilities, and skills are the drivers of behavior, management should have a full profile for each employee.

Governance is the system of checks and balances in an organization.  No one is above the system, including the founder-owner.  In other words, senior managers like Dalio can be subject to the same criticism as any other employee.

Leadership in the traditional sense (“I say, you do”) is not so important in an idea meritocracy because the optimal decisions arise from a group process.  Managers are seen as decision makers, system designers and shapers who can visualize a better future and then build it.   Leaders “must be willing to recruit individuals who are willing to do the work that success requires.” (p. 520)

Our Perspective

We recognize international investment management is way different from nuclear power management, so some of Dalio’s principles can only be applied to the nuclear industry in a limited way, if at all.  One obvious example of a lack of fit is the area of risk management.  The investing environment is extremely competitive with players evolving rapidly and searching for any edge.  Timely bets (investments) must be made under conditions where the risk of failure is many orders of magnitude greater than what is acceptable in the nuclear industry.  Other examples include the relentless, somewhat ruthless, pursuit of goals and a willingness to jettison people that is foreign to the utility world.

But we shouldn’t throw the baby out with the bathwater.  While Dalio’s approach may be too extreme for wholesale application in your environment, it does provide a comparison (note we don’t say “standard”) for your organization’s performance.  Does your decision making process measure up to Dalio’s in terms of robustness, transparency and the pursuit of truth?  Does your culture really strive for excellence (and eliminate those who don’t share that vision) or is it an effort constrained by hierarchical, policy or political realities?

This is a long book but it’s easy to read and key points are repeated often.  Not all of it is novel; many of the principles are based on observations or techniques that have been around for a while and should be familiar to you.  For example, ideas about how human minds work are drawn, in part, from Daniel Kahneman; an integrated hierarchy of goals looks like Management by Objectives; and a culture that doesn’t automatically punish people for making mistakes or tolerable errors sounds like a “just culture” albeit with some mandatory individual learning attached.

Bottom line: Give this book a quick look.  It can’t hurt and might help you get a clearer picture of how your own organization actually operates.



*  R. Dalio, Principles (New York: Simon & Schuster, 2017).  This book was recommended to us by a Safetymatters reader.  Please contact us if you have any material you would like us to review.

**  A person’s basic values and abilities are relatively fixed, although skills may be improved through training.

Monday, October 16, 2017

Nuclear Safety Culture: A Suggestion for Integrating “Just Culture” Concepts

All of you have heard of “Just Culture” (JC).  At heart, it is an attitude toward investigating and explaining errors that occur in organizations in terms of “why” an error occurred, including systemic reasons, rather than focusing on identifying someone to blame.  How might JC be applied in practice?  A paper* by Shem Malmquist describes how JC concepts could be used in the early phases of an investigation to mitigate cognitive bias on the part of the investigators.

The author asserts that “cognitive bias has a high probability of occurring, and becoming integrated into the investigators subconscious during the early stages of an accident investigation.” 

He recommends that, from the get-go, investigators categorize all pertinent actions that preceded the error as an error (unintentional act), at-risk behavior (intentional but for a good reason) or reckless (conscious disregard of a substantial risk or intentional rule violation). (p. 5)  For errors or at-risk actions, the investigator should analyze the system, e.g., policies, procedures, training or equipment, for deficiencies; for reckless behavior, the investigator should determine what system components, if any, broke down and allowed the behavior to occur. (p. 12)  Individuals should still be held responsible for deliberate actions that resulted in negative consequences.
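The categorize-then-route logic can be sketched in a few lines (the three categories follow the paper; the routing strings are our paraphrase, not Malmquist's wording):

```python
from enum import Enum

class Behavior(Enum):
    ERROR = "unintentional act"
    AT_RISK = "intentional act, believed justified at the time"
    RECKLESS = "conscious disregard of a substantial risk"

def investigation_focus(behavior: Behavior) -> str:
    """Route the inquiry based on the Just Culture categorization."""
    if behavior in (Behavior.ERROR, Behavior.AT_RISK):
        # Errors and at-risk acts point the investigation at the system.
        return "analyze the system: policies, procedures, training, equipment"
    # Reckless acts still get a system check, but the individual is
    # held responsible for the deliberate action.
    return ("determine which system components broke down and allowed it; "
            "hold the individual responsible")

print(investigation_focus(Behavior.AT_RISK))
```

The value of the step is that the routing decision is made explicitly and early, before an investigator's initial hunches harden into a blame narrative.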

Adding this step to a traditional event chain model will enrich the investigation and help keep investigators from going down the rabbit hole of following chains suggested by their own initial biases.

Because JC is added to traditional investigation techniques, Malmquist believes it might be more readily accepted than other approaches for conducting more systemic investigations, e.g., Leveson’s System Theoretic Accident Model and Processes (STAMP).  Such approaches are complex, require lots of data and implementing them can be daunting for even experienced investigators.  In our opinion, these models usually necessitate hiring model experts who may be the only ones who can interpret the ultimate findings—sort of like an ancient priest reading the entrails of a sacrificial animal.  Snide comment aside, we admire Leveson’s work and reviewed it in our Nov. 11, 2013 post.

Our Perspective

This paper is not some great new insight into accident investigation but it does describe an incremental step that could make traditional investigation methods more expansive in outlook and robust in their findings.

The paper also provides a simple introduction to the works of authors who cover JC or decision-making biases.  The former category includes Reason and Dekker and the latter one Kahneman, all of whom we have reviewed here at Safetymatters.  For Reason, see our Nov. 3, 2014 post; for Dekker, see our Aug. 3, 2009 and Dec. 5, 2012 posts; for Kahneman, see our Nov. 4, 2011 and Dec. 18, 2013 posts.

Bottom line: The parts describing and justifying the author’s proposed approach are worth reading.  You are already familiar with much of the contextual material he includes.  


*  S. Malmquist, “Just Culture Accident Model – JCAM” (June 2017).

Tuesday, September 26, 2017

“New” IAEA Nuclear Safety Culture Self-Assessment Methodology

IAEA report cover
The International Atomic Energy Agency (IAEA) touted its safety culture (SC) self-assessment methodology at the Regulatory Cooperation Forum held during the recent IAEA 61st General Conference.  Their press release* refers to the methodology as “new” but it’s not exactly fresh from the factory.  We assume the IAEA presentation was based on a publication titled “Performing Safety Culture Self-assessments,”** which was published in June 2016 and which we reviewed on Aug. 1, 2016.  We encourage you to read our full review; it is too lengthy to reasonably summarize in this post.  Suffice it to say the publication includes some worthwhile SC information and descriptions of relevant SC assessment practices, but it also exhibits some execrable shortcomings.


*  IAEA, “New IAEA Self-Assessment Methodology and Enhancing SMR Licensing Discussed at Regulatory Cooperation Forum” (Sept. 22, 2017).

**  IAEA, “Performing Safety Culture Self-assessments,” Safety Reports Series no. 83 (Vienna: IAEA, 2016).

Wednesday, May 10, 2017

A Nordic Compendium on Nuclear Safety Culture

A new research paper* covers the challenges of establishing and improving nuclear safety culture (NSC) in a dynamic, i.e., project, environment.  The authors are Finnish and Swedish and it appears the problems of the Olkiluoto 3 plant inform their research interests.  Their summary and review of current NSC literature is of interest to us. 

They begin with an overall description of how organizational (and cultural) changes can occur in terms of direction, rate and scale.

Direction

Top-down (or planned) change relies on the familiar unfreeze-change-refreeze models of Kurt Lewin and Ed Schein.  Bottom-up (or emergent) change emphasizes self-organization and organizational learning.  Truly free form, unguided change leads to NSC being an emergent property of the organization.  As we know, the top-down approach is seldom, if ever, 100% effective because of frictional losses, unintended consequences or the impact of competing, emergent cultural currents.  In a nod to a systems perspective, the authors note organizational structures and behavior influence (and are influenced by) culture.

Rate

“Organizational change can also be distinguished by the rate of its occurrence, i.e., whether the change occurs abruptly or smoothly [italics added].” (p. 8)  We observe that most nuclear plants try to build on past success, hence they promote “continuous improvement” programs that don’t rattle the organization.  In contrast, a plant with major NSC problems sometimes receives shock treatment, often in the form of a new senior manager who is expected to clean things up.  New management systems and organizational structures can also cause abrupt change.

Scale

The authors identify four levels of change.  Most operating plants exhibit the least disruptive changes, called fine tuning and incremental adjustment.  Modular transformation attempts to change culture at the department level; corporate transformation is self-explanatory. 

The authors sound a cautionary note: “the more radical types of changes might not be easily initiated – or might not even be feasible, considering that safety culture is by nature a slowly and progressively changing phenomenon. The obvious condition where a safety-critical organization requires radical changes to its safety culture is when it is unacceptably unhealthy.” (p. 9)

Culture Change Strategies

The authors list seven specific strategies for improving NSC:

  • Change organizational structures,
  • Modify the behavior of a target group through, e.g., incentives and positive reinforcement,
  • Improve interaction and communication to build a shared culture,
  • Ensure all organizational members are committed to safety and jointly participate in its improvement,
  • Provide training,
  • Promote the concept and importance of NSC, and
  • Recruit and select employees who will support a strong NSC.
This section includes a literature review for examples of the specific strategies.

Project Organizations

The nature of project organizations is discussed in detail including their time pressures, wide use of teams, complex tasks and a context of a temporary organization in a relatively permanent environment.  The authors observe that “in temporary organisations, the threat of prioritizing “production” over safety may occur more naturally than in permanent organizations.” (pp. 16-17)  Projects are not limited to building new plants; as we have seen, large projects (Crystal River containment penetration, SONGS steam generator replacement) can kill operating plants.

The balance of the paper covers the authors’ empirical work.

Our Perspective 


This is a useful paper because it provides a good summary of the host of approaches and methods that have been (and are being) applied in the NSC space.  That said, the authors offer no new insights into NSC practice.

Although the paper’s focus is on projects, basically new plant construction, people responsible for fixing NSC at problem plants, e.g., Watts Bar, should peruse this report for applicable lessons that might help achieve the step-function NSC improvements such plants need.


*  K.Viitanen, N. Gotcheva and C. Rollenhagen, “Safety Culture Assurance and Improvement Methods in Complex Projects – Intermediate Report from the NKS-R SC AIM” (Feb. 2017).  Thanks to Aili Hunt of the LinkedIn Nuclear Safety Culture group for publicizing this paper.

Thursday, November 3, 2016

Nuclear Safety Culture in the Latest U.S. Report for the Convention on Nuclear Safety

NUREG-1650 cover
The Nuclear Regulatory Commission (NRC) recently published NUREG-1650, rev. 6, the seventh national report for the Convention on Nuclear Safety.*  The report is prepared for the triennial meeting of the Convention and describes the policies, laws, practices and other activities utilized by the U.S. to meet its international obligations and ensure the safety of its commercial nuclear power plants.  Nuclear Safety Culture (NSC) is one of the topics discussed in the report.  This post highlights NSC changes (new items and updates) from the sixth report (NUREG-1650, rev. 5), which we reviewed on March 26, 2014.  The numbers shown below are section numbers in the current report.

8.1.5  International Responsibilities and Activities 


The NRC’s International Regulatory Development Partnership (IRDP) program supports the safe introduction of nuclear power in “new entrant” countries.  IRDP training addresses many topics including safety culture. (p. 99)

8.1.6.2  Human Resources 


This section was updated to include a reference to the 2015 NRC Safety Culture and Climate Survey.

10.1  Background [for article 10, “Priority to Safety”] 


The report notes “All U.S. nuclear power plants have committed to conducting a safety culture self-assessment every 2 years and have committed to conducting monitoring panels as described in Nuclear Energy Institute (NEI) 09-07, “Fostering a Healthy Nuclear Safety Culture,” dated March 2014.” (p. 120)  We reviewed NEI 09-07 on Jan. 6, 2011.

10.4  Safety Culture

The bulk of the report’s NSC material is in this section, which has been significantly rewritten since the previous report.  Some of the changes reorganize existing material but there are also new items, discussed below, and additional background information.  Overall, section 10.4 is more complete and lucid than its predecessor.

10.4.1  Safety Culture Policy Statement

This section contains material that formerly appeared under 10.4 and has been expanded to include two new safety culture traits, “questioning attitude” and “decisionmaking.”  The NRC worked with licensees and other stakeholders to develop a common language for discussing and assessing NSC; this effort resulted in NUREG-2165, “Safety Culture Common Language.”  We reviewed NUREG-2165 on April 6, 2014.

10.4.2  NRC Monitoring of Licensee Safety Culture 


This section has been edited to improve clarity and completeness, and provide more specific references to applicable procedures.  For example, IP 95003 now includes detailed guidance for NRC inspectors who conduct an independent assessment of licensee NSC.**

New language specifies interventions the NRC may take with respect to licensee NSC: “These activities range from requesting the licensee perform a safety culture self-assessment to a meeting between senior NRC managers and a licensee’s Board of Directors to discuss licensee performance issues and actions to address persistent and continuing safety culture cross-cutting issues.” (p. 128)

10.4.3 The NRC Safety Culture

This section covers the NRC’s efforts to maintain and enhance its own SC.  The section has been rewritten and strengthened throughout.  It discusses the need for continuous improvement and says “Complacency lends itself to a degradation in safety culture when new information and historical lessons are not processed and used to enhance the NRC and its regulatory products.” (p. 130)  That’s true; SC that is not actively maintained will invariably decay.

12.3.5  Human Factors Information System 


This system handles human performance information extracted from NRC inspection and licensee event reports.  The report notes “the database is being updated to include data with a safety culture perspective.” (p. 146)

Institute of Nuclear Power Operations (INPO)

INPO also provides content for the report, basically a description of INPO’s activities to ensure plant safety.  Their discussion includes a section on SC, which is not materially different from their contribution to the previous version of the report.

Our Perspective

Like the sixth national report, this seventh report appears to cover every aspect of the NRC’s operations but does not present any new information.  In other words, it’s a good reference document.

The NSC changes are incremental but move toward increased bureaucratization and intrusive oversight of NSC.  The NRC is certainly showing the hilt of the sword of regulation if not the blade.  We still believe that if it reads like a set of requirements, results in enforceable interventions, and quacks like the NRC, it’s de facto regulation.


*  NRC NUREG-1650 Rev. 6, “The United States of America Seventh National Report for the Convention on Nuclear Safety” (Oct. 2016).  ADAMS ML16293A104.  The Convention on Nuclear Safety is a legally binding commitment to maintain a level of safety that meets international benchmarks.

**  This detailed guidance is also mentioned in 12.3.6 Support to Event Investigations and For-Cause Inspections and Training (p. 148).