
Sunday, October 5, 2014

Update on INPO Safety Culture Study

On October 22, 2010 we reported on an INPO study that correlated safety culture (SC) survey data with safety performance measures.  A more complete version of the analysis was published in an academic journal* this year and this post expands on our previous comments.

Summary of the Paper

The new paper begins with a brief description of SC and related research.  Earlier research suggests that some modest relationship exists between SC and safety performance but the studies were limited in scope.  Longitudinal (time-based) studies have yielded mixed results.  Overall, this leaves plenty of room for new research efforts.

According to the authors, “The current study provides a unique contribution to the safety culture literature by examining the relationship between safety culture and a diverse set of performance measures [NRC industry trends, ROP data and allegations, and INPO plant data] that focus on the overall operational safety of a nuclear power plant.” (p. 39)  They hypothesized small to medium correlations between current SC survey data and eleven then-current (2010) and future (2011) safety performance measures.**

The 110-item survey instrument was distributed across the U.S. nuclear industry and 2876 useable responses were received from employees and contractors representing almost all U.S. plants.  Principal components analysis (PCA) was applied to the survey data and resulted in nine useful factors.***  Survey items that did not have a high factor loading (on a single factor) or presented analysis problems were eliminated, resulting in 60 useful survey items.  Additional statistical analysis showed that the survey responses from each individual site were similar and the various sites had different responses on the nine factors.
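For readers unfamiliar with the technique, the mechanics of a principal components analysis can be sketched in a few lines of Python.  This is a toy illustration with fabricated random data, not the paper's analysis; the respondent and item counts here are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for survey data: 300 respondents x 10 Likert items.
# (Purely illustrative; the actual survey had 110 items and 2876 responses.)
responses = rng.integers(1, 6, size=(300, 10)).astype(float)

# Standardize each item, then eigendecompose the correlation matrix,
# which is the core computation of a principal components analysis.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]              # largest components first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings show how strongly each item maps onto each component; items
# lacking a high loading on a single retained component would be
# dropped, as the paper describes.
loadings = eigvecs * np.sqrt(eigvals)
explained = eigvals / eigvals.sum()
print("variance explained by first three components:", explained[:3].round(2))
```

With real survey data the first several components would absorb much more variance than the random noise used here, and the retained components would be interpreted as the SC factors.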

Statistically significant correlations were observed between both overall SC and individual SC factors and the safety performance measures.****  A follow-on regression analysis suggested “that the factors collectively accounted for 23–52% of the variance in concurrent safety performance.” (p. 45)

“The significant correlations between overall safety culture and measures of safety performance ranged from -.26 to -.45, suggesting a medium effect and that safety culture accounts for 7–21% of the variance in most of the measures of safety performance examined in this study.” (p. 45)
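The arithmetic connecting those two numbers is just the coefficient of determination: squaring a correlation gives the share of variance accounted for.  A quick check of the quoted range (assuming simple Pearson correlations):

```python
# Square the correlation coefficient r to get r^2, the fraction of
# variance in one variable "accounted for" by the other.
for r in (-0.26, -0.45):  # endpoints of the quoted range
    r_squared = r ** 2
    print(f"r = {r:+.2f}  ->  r^2 = {r_squared:.4f}  (~{r_squared:.0%} of variance)")
```

This reproduces roughly the 7–21% figure cited in the paper.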

Here is an example of a specific finding: “The most consistent relationship across both the correlation and regression analyses seemed to be between the safety culture factor questioning attitude, and the outcome variable NRC allegations. . . .Questioning attitude was also a significant predictor of concurrent counts of inspection findings associated with ROP cross-cutting aspects, the cross-cutting area of human performance, and total number of SCCIs. Fostering a questioning attitude may be a particularly important component of the overall safety culture of an organization.” (p. 45)

And another: “It is particularly interesting that the only measure of safety performance that was not significantly correlated with safety culture was industrial safety accident rate.” (p. 46)

The authors caution that “The single administration of the survey, combined with the correlational analyses, does not permit conclusions to be drawn regarding a causal relationship between safety culture and safety performance.  In particular, the findings presented here are exploratory, mainly because the correlational analyses cannot be used to verify causality and the data used represent snapshots of safety culture and safety performance.” (p. 46)

The relationships between SC and current performance were stronger than between SC and future performance.  This should give pause to those who would rush to use SC data as a leading indicator. 

Our Perspective 


This is a dense paper and important details may be missing from this summary.  If you are interested in this topic then you should definitely read the original and our October 22, 2010 post.

That recognizable factors emerged from the PCA should not be a surprise.  In fact, the opposite would have been the real surprise.  After all, the survey was constructed to include previously identified SC traits.  The nine factors mapped well against previously identified SC traits and INPO principles. 

However, there was no explanation, in either the original presentation or this paper, of why the 11 safety performance measures were chosen out of a large universe.  After all, the NRC and INPO collect innumerable types of performance data.  Was there some cherry picking here?  I have no idea but it creates an opportunity for a statistical aside, presented in a footnote below.*****

The authors attempt to explain some correlations by inventing a logic that connects the SC factor to the performance measure.  But this is just speculation because, as the authors note, correlation is not causality.  You should look at the correlation tables and see if they make sense to you, or if some different processes are at work. 

One aspect of this paper bothers me a little.  In the October 22, 2010 NRC public meeting, the INPO presenter said the analysis was INPO’s while an NRC presenter said NRC staff had reviewed and accepted the INPO analysis, which had been verified by an outside NRC contractor.  For this paper, those two presenters are joined by another NRC staffer as co-authors.  That is a notable change.  It passes the smell test but does evidence a close working relationship between an independent public agency and a secretive private entity.


*  S.L. Morrow, G.K. Koves and V.E. Barnes, “Exploring the relationship between safety culture and safety performance in U.S. nuclear power operations,” Safety Science 69 (2014), pp. 37–47.  ADAMS ML14224A131.

**  The eleven performance measures included seven NRC measures (unplanned scrams, NRC allegations, ROP cross-cutting aspects, human performance cross-cutting inspection findings, problem identification and resolution cross-cutting inspection findings, substantive cross-cutting issues in the human performance or problem identification and resolution area, and ROP action matrix oversight, i.e., which column a plant is in) and four INPO measures (chemistry performance, human performance error rate, forced loss rate and industrial safety accident rate).

***  The nine SC factors were management commitment to safety, willingness to raise safety concerns, decision making, supervisor responsibility for safety, questioning attitude, safety communication, personal responsibility for safety, prioritizing safety and training quality.

****  Specifically, 13 (out of 22) overall SC correlations with the current and future performance measures were significant as were 84 (out of 198) individual SC factor correlations.

*****  It would be nice to know if any background statistical testing was performed to pick the performance measures.  This is important because if one calculates enough correlations, or any other statistic, one will eventually get some false positives (Type I errors).  One way to counteract this problem is to establish a more restrictive threshold for significance, e.g., 0.01 vs 0.05 or 0.005 vs. 0.01. This note is simply my cautionary view.  I am not suggesting there are any methodological problem areas in the subject paper.
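To illustrate the false-positive concern, here is a small Monte Carlo sketch (my own illustration, not anything from the paper): under a true null hypothesis p-values are uniformly distributed, so running 198 tests at the 0.05 level should yield roughly ten “significant” results by chance alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Under a true null hypothesis, p-values are uniform on [0, 1].
# Simulate 198 independent tests (the number of individual-factor
# correlations in the paper) with NO real effects present.
n_tests, alpha = 198, 0.05
p_values = rng.uniform(size=n_tests)

false_positives = int((p_values < alpha).sum())
print(f"expected ~{n_tests * alpha:.0f} false positives by chance; got {false_positives}")

# One counter-measure: a Bonferroni-style tightening of the threshold.
bonferroni_alpha = alpha / n_tests
survivors = int((p_values < bonferroni_alpha).sum())
print(f"Bonferroni per-test threshold {bonferroni_alpha:.5f} -> {survivors} remain")
```

The Bonferroni correction shown is only one (quite conservative) option; the general point is that a blanket 0.05 threshold is generous when dozens of correlations are computed.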

Wednesday, December 18, 2013

Thinking, Fast and Slow by Daniel Kahneman

Kahneman is a Nobel Prize winner in economics.  His focus is on personal decision making, especially the biases and heuristics used by the unconscious mind as it forms intuitive opinions.  Biases lead to regular (systematic) errors in decision making.  Kahneman and Amos Tversky developed prospect theory, a model of choice, that helps explain why real people make decisions that are different from those of the rational man of economics.

Kahneman is a psychologist so his work focuses on the individual; many of his observations are not immediately linkable to safety culture (a group characteristic).  But even in a nominal group setting, individuals are often very important.  Think about the lawyers, inspectors, consultants and corporate types who show up after a plant incident.  What kind of biases do they bring to the table when they are evaluating your organization's performance leading up to the incident?

The book* has five parts, described below.  Kahneman reports on his own research and then adds the work of many other scholars.  Many of the experiments appear quite simple but provide insights into unconscious and conscious decision making.  There is a lot of content so this is a high level summary, punctuated by explicative or simply humorous quotes.

Part 1 describes two methods we use to make decisions: System 1 and System 2.  System 1 is impulsive, intuitive, fast and often unconscious; System 2 is more analytic, cautious, slow and controlled. (p. 48)  We often defer to System 1 because of its ease of use; we simply don't have the time, energy or desire to pore over every decision facing us.  Lack of desire is another term for lazy.

System 1 often operates below consciousness, utilizing associative memory to link a current stimulus to ideas or concepts stored in memory. (p. 51)  System 1's impressions become beliefs when accepted by System 2 and a mental model of the world takes shape.  System 1 forms impressions of familiarity and rapid, precise intuitions then passes them on to System 2 to accept/reject. (pp. 58-62)

System 2 activities take effort and require attention, which is a finite resource.  If we exceed the attention budget or become distracted then System 2 will fail to obtain correct answers.  System 2 is also responsible for self-control of thoughts and behaviors, another drain on mental resources. (pp. 41-42)

Biases include a readiness to infer causality, even where none exists; a willingness to believe and confirm in the absence of solid evidence; succumbing to the halo effect where we project a coherent whole based on an initial impression; and problems caused by WYSIATI** including basing conclusions on limited evidence, overconfidence, framing effects where decisions differ depending on how information and questions are presented and base-rate neglect where we ignore widely-known data about a decision situation. (pp. 76-88)

Heuristics include substituting easier questions for the more difficult ones that have been asked, letting current mood affect answers on general happiness and allowing emotions to trump facts. (pp. 97-103) 

Part 2 explores decision heuristics in greater detail, with research and examples of how we think associatively, metaphorically and causally.  A major topic throughout this section is the errors people tend to make when handling questions that have a statistical dimension.  Such errors occur because statistics requires us to think of many things at once, which System 1 is not designed to do, and a lazy or busy System 2, which could handle this analysis, is prone to accept System 1's proposed answer.  Other errors occur because:

We make incorrect inferences from small samples and are prone to ascribe causality to chance events.  “We are far too willing to reject the belief that much of what we see in life is random.” (p. 117)  We are prone to attach “a causal interpretation to the inevitable fluctuations of a random process.” (p. 176)  “There is more luck in the outcomes of small samples.” (p. 194)

We fall for the anchoring effect, where we see a particular value for an unknown quantity (e.g., the asking price for a used car) before we develop our own value.  Even random anchors, which provide no relevant information, can influence decision making.

People search for relevant information when asked questions.  Information availability and ease of retrieval is a System 1 heuristic but only System 2 can judge the quality and relevance of retrieved content.  People are more strongly affected by ease of retrieval and go with their intuition when they are, for example, mentally busy or in a good mood. (p. 135)  However, “intuitive predictions tend to be overconfident and overly extreme.” (p. 192)

Unless we know the subject matter well, and have some statistical training, we have difficulty dealing with situations that require statistical reasoning.  One research finding “illustrates a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight—nothing in between.” (p. 143)  “There is one thing you can do when you have doubts about the quality of the evidence: let your judgments of probability stay close to the base rate.” (p. 153)  “. . . whenever the correlation between two scores is imperfect, there will be regression to the mean. . . . [a process that] has an explanation but does not have a cause.” (pp. 181-82)
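The regression-to-the-mean point is easy to demonstrate with a simulation (a generic sketch of the phenomenon, not an example from the book): model two test scores as stable skill plus independent luck, and the top performers on the first test will, on average, score lower on the second.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

skill = rng.normal(size=n)            # stable ability
test1 = skill + rng.normal(size=n)    # score = skill + luck
test2 = skill + rng.normal(size=n)    # same skill, fresh luck

top = test1 > np.quantile(test1, 0.9)  # top decile on the first test
print(f"top decile, test 1 mean: {test1[top].mean():.2f}")
print(f"same people, test 2 mean: {test2[top].mean():.2f}")  # lower
```

No causal story is needed: the top group was partly selected for good luck on test 1, and luck does not repeat.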

Finally, and the PC folks may not appreciate this, but “neglecting valid stereotypes inevitably results in suboptimal judgments.” (p. 169)

Part 3 focuses on specific shortcomings of our thought processes: overconfidence, fed by the illusory certainty of hindsight, in what we think we know, and underappreciation of the role of chance in events.

“Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct.  Confidence is a feeling.” (p. 212)  Hindsight bias “leads observers to assess the quality of a decision not by whether the process was sound but by whether its outcome was good or bad. . . . a clear outcome bias.” (p. 203)  “. . . the optimistic bias may well be the most significant of the cognitive biases.” (p. 255)  “The optimistic style involves taking credit for success but little blame for failure.” (p. 263)

“The sense-making machinery of System 1 makes us see the world as more tidy, predictable, and coherent than it really is.” (p. 204)  “. . . reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable results.” (p. 220)  “An unbiased appreciation of uncertainty is a cornerstone of rationality—but it is not what people and organizations want. . . . Acting on pretended knowledge is often the preferred solution.” (p. 263)

And the best quote in the book: “Professional controversies bring out the worst in academics.” (p. 234)

Part 4 contrasts the rational people of economics with the more complex people of psychology, in other words, the Econs vs. the Humans.  Kahneman shows how prospect theory opened a door between the two disciplines and contributed to the start of the field of behavioral economics.

Economists adopted expected utility theory to prescribe how decisions should be made and describe how Econs make choices.  In contrast, prospect theory has three cognitive features: evaluation of choices is relative to a reference point, outcomes above that point are gains, below that point are losses; diminishing sensitivity to changes; and loss aversion, where losses loom larger than gains. (p. 282)  In practice, loss aversion leads to risk-averse choices when both gains and losses are possible, and diminishing sensitivity leads to risk taking when sure losses are compared to a possible larger loss.  “Decision makers tend to prefer the sure thing over the gamble (they are risk averse) when the outcomes are good.  They tend to reject the sure thing and accept the gamble (they are risk seeking) when both outcomes are negative.” (p. 368)

“The fundamental ideas of prospect theory are that reference points exist, and that losses loom larger than corresponding gains.” (p. 297)  “A reference point is sometimes the status quo, but it can also be a goal in the future; not achieving the goal is a loss, exceeding the goal is a gain.” (p. 303)  “Loss aversion is a powerful conservative force.” (p. 305)
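The three features above can be captured in the standard Kahneman-Tversky value function.  A minimal sketch, using the commonly cited parameter estimates (exponent 0.88, loss-aversion coefficient 2.25), which come from the prospect theory literature rather than from this book:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: x is a gain or loss relative
    to the reference point (0).  The exponent < 1 gives diminishing
    sensitivity; lam > 1 gives loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Loss aversion: a $100 loss looms larger than a $100 gain.
gain = prospect_value(100)
loss = prospect_value(-100)
print(f"subjective value of +$100: {gain:.1f}, of -$100: {loss:.1f}")
```

The concave-for-gains, convex-for-losses shape is what produces the risk-averse/risk-seeking asymmetry quoted below.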

When people do consider very rare events, e.g., a nuclear accident, they will almost certainly overweight the probability in their decision making.  “ . . . people are almost completely insensitive to variations of risk among small probabilities.” (p. 316)  “. . . low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in more abstract terms of . . . “probability” (how likely).” (p. 329)  Framing of questions evokes emotions, e.g., “losses evokes stronger negative feelings than costs.” (p. 364)  But “[r]eframing is effortful and System 2 is normally lazy.” (p. 367)  As an exercise, think about how anti-nuclear activists and NEI would frame the same question about the probability and consequences of a major nuclear accident. 

There are some things an organization can do to improve its decision making.  It can use local centers of over optimism (Sales dept.) and loss aversion (Finance dept.) to offset each other.  In addition, an organization's decision making practices can require the use of an outside view (i.e., a look at the probabilities of similar events in the larger world) and a formal risk policy to mitigate known decision biases. (p. 340)

Part 5 covers two different selves that exist in every human, the experiencing self and the remembering self.  The former lives through an experience and the latter creates a memory of it (for possible later recovery) using specific heuristics.  Our tendency to remember events as a sample or summary of actual experience is a factor that biases current and future decisions.  We end up favoring (fearing) a short period of intense joy (pain) over a long period of moderate happiness (pain). (p. 409) 

Our memory has evolved to represent past events in terms of peak pain/pleasure during the events and our feelings when the event is over.  Event duration does not impact our ultimate memory of an event.  For example, we choose future vacations based on our final evaluations of past vacations even if many of our experiences during the past vacations were poor. (p. 389)

In a possibly more significant area, the life satisfaction score you assign to yourself is based on a small sample of highly available ideas or memories. (p. 400)  Ponder that the next time you take or review responses from a safety culture survey.

Our Perspective

This is an important book.  Although not explicitly stated, the great explanatory themes of cause (mechanical), choice (intentional) and chance (statistical) run through it.  It is filled with nuggets that apply to the individual (psychological) and also the aggregate if the group shares similar beliefs.  Many System 1 characteristics, if unchecked and shared by a group, have cultural implications.*** 

We have discussed Kahneman's work before on this blog, e.g., his view that an organization is a factory for producing decisions and his suggestion to use a “premortem” as a partial antidote for overconfidence.  (A premortem is an exercise the group undertakes before committing to an important decision: Imagine being a year into the future, the decision's outcome is a disaster.  What happened?)  For more on these points, see our Nov. 4, 2011 post.

We have also discussed some of the topics he raises, e.g., the hindsight bias.  Hindsight is 20/20 and it supposedly shows what decision makers could (and should) have known and done instead of their actual decisions that led to an unfavorable outcome, incident, accident or worse.  We now know that when the past was the present, things may not have been so clear-cut.

Kahneman's observation that the ability to control attention predicts on-the-job performance (p. 37) is certainly consistent with our reports on the characteristics of high reliability organizations (HROs). 

“The premise of this book is that it is easier to recognize other people's mistakes than our own.” (p. 28)  Having observers at important, stressful decision making meetings is useful; they are less cognitively involved than the main actors and more likely to see any problems in the answers being proposed.

Critics' major knock on Kahneman's research is that it doesn't reflect real world conditions.  His model is “overly concerned with failures and driven by artificial experiments rather than by the study of real people doing things that matter.” (p. 235)  He takes this on by collaborating with a critic in an investigation of intuitive decision making, specifically seeking to answer: “When can you trust a self-confident professional who claims to have an intuition?” (p. 239)  The answer is when the expert acquired skill in a predictable environment, and had sufficient practice with immediate, high-quality feedback.  For example, anesthesiologists are in a good position to develop predictive expertise; on the other hand, psychotherapists are not, primarily because a lot of time and external events can pass between their prognosis for a patient and ultimate results.  However, “System 1 takes over in emergencies . . .” (p. 35)  Because people tend to do what they've been trained to do in emergencies, training leading to (correct) responses is vital.

Another problem is that most of Kahneman's research uses university students, both undergraduate and graduate, as subjects.  It's fair to say professionals have more training and life experience, and have probably made some hasty decisions they later regretted and (maybe) learned from.  On the other hand, we often see people who make sub-optimal, or just plain bad decisions even though they should know better.

There are lessons here for managers and other would-be culture shapers.  System 1's search for answers is mostly constrained to information consistent with existing beliefs (p. 103), which is an entry point for culture.  We have seen how group members can have their internal biases influenced by the dominant culture.  But to the extent System 1 dominates employees' decision making, decision quality may suffer.

Not all appeals can be made to the rational man of System 2.  A customary, if tacit, assumption of managers is that they and their employees are rational and always operating consciously, so new experiences will lead to new values and beliefs, new decisions and improved safety culture.  But it may not be that straightforward.  System 1 may intervene, and managers should be alert to evidence of System 1 type thinking and adjust their interventions accordingly.  Kahneman suggests encouraging “a culture in which people look out for one another as they approach minefields.” (p. 418) 

We should note Systems 1 and 2 are constructs and “do not really exist in the brain or anywhere else.” (p. 415)  System 1 is not Dr. Morbius' Id monster.****  System 1 can be trained to behave differently, but it is always ready to provide convenient answers for a lazy System 2.

The book is long, with small print, but the chapters are short so it's easy to invest 15-20 min. at a time.  One has to be on constant alert for useful nuggets that can pop up anywhere—which I guess promotes reader mindfulness.  It is better than Blink, which simply overwhelmed this reader with a cloudburst of data showing the informational value of thin slices and unintentionally over-promoted the value of intuition. (see pp. 235-36)  And it is much deeper than The Power of Habit, which we reviewed last February.

(Common sense is nothing more than a deposit of prejudices laid down by the mind before you reach eighteen.  Attributed to Albert Einstein)

*  D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

**  WYSIATI – What You See Is All There Is.  Information that is not retrieved from memory, or otherwise ignored, may as well not exist. (pp. 85-88)  WYSIATI means we base decisions on the limited information that we are able or willing to retrieve before a decision is due.  

***  A few of these characteristics are mentioned in this report, e.g., impressions morphing into beliefs, a bias to believe and confirm, and WYSIATI errors.  Others include links of cognitive ease to illusions of truth and reduced vigilance (complacency), and narrow framing where decision problems are isolated from one another. (p. 105)

****  Dr. Edward Morbius is a character in the 1956 sci-fi movie Forbidden Planet.

Wednesday, January 12, 2011

ESP and Safety Culture

A recent New York Times article* on an extrasensory perception (ESP) study and the statistical methods used therein caught our attention.  The article’s focus is on the controversy surrounding statistical significance testing.  “A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent.”  We have all seen such analyses.

However, critics of classical significance testing say a finding based on such a test “could overstate the significance of the finding by a factor of 10 or more,” a sort of super false positive.  The critics claim a better approach is to apply the methods of Bayesian analysis, which incorporates known probabilities, if available, from outside the study.  Check out the comments on the article, especially the reader recommended ones, for more information on statistical methods and issues.  (You can ignore the ESP-related comments unless you have some special interest in the topic). 
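A back-of-the-envelope Bayesian calculation shows how a lone “significant” finding can overstate the evidence.  This sketch is my own illustration with assumed inputs (a 10% base rate of true effects and 80% test power), not numbers from the article:

```python
# How convincing is a single "significant" result?  Treat it as a
# Bayesian updating problem.  These inputs are assumptions chosen
# for illustration, not figures from the article.
alpha = 0.05    # conventional false-positive rate
power = 0.80    # probability a real effect is detected
prior = 0.10    # assumed base rate: 1 in 10 tested effects is real

posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(effect is real | significant) = {posterior:.2f}")
# With these inputs, roughly a third of "significant" findings are false,
# far worse than the nominal 5% error rate suggests.
```

The posterior depends heavily on the assumed prior, which is exactly the Bayesians' point: the 5% threshold alone tells you little about how likely the finding is to be real.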

What has this got to do with safety culture?

Recall that last October we reported on an INPO study that, among other things, calculated correlations between safety culture survey factors and various safety-related performance measures.  We expressed reservations about the overall approach and results even though a few correlations supported points we have been making in our blog.

The controversy over the ESP study and its associated statistical methods reminds us that analysts in many fields are under pressure to find something “significant.”  This pressure comes from bosses, funding agencies, editors and tenure committees.  Studies that find no effects, or ones not aligned with higher-level organizational objectives, are less likely to be publicized and their authors rewarded.  In addition, I fear some (many?) social science researchers don’t fully understand the statistical methods they are using, i.e., their built-in biases and limitations.  So, once again, caveat emptor.   

By the way, we are not saying or implying the INPO study was biased in any way; we have no information on it other than what was presented at the NRC meeting referenced in our original blog post. 

*  B. Carey, “You Might Already Know This ...,” New York Times (Jan 11, 2011).

Friday, October 22, 2010

NRC Safety Culture Workshop

The information from the Sept 28, 2010 NRC safety culture meeting is available on the NRC website.  This was a meeting to review the draft safety culture policy statement, definition and traits.

As you probably know, the NRC definition now focuses on organizational “traits.”   According to the NRC, “A trait . . . is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production vs. safety, schedule vs. safety, and cost of the effort vs. safety.”*  We applaud this recognition of goal conflicts as potential threats to effective safety management and a strong safety culture.

Several stakeholders made presentations at the meeting but the most interesting one was by INPO’s Dr. Ken Koves.**  He reported on a study that addressed two questions:
  • “How well do the factors from a safety culture survey align with the safety culture traits that were identified during the Feb 2010 workshop?
  • Do the factors relate to other measures of safety performance?” (p. 4)
The rest of this post summarizes and critiques the INPO study.

Methodology

For starters, INPO constructed and administered a safety culture survey.  The survey itself is interesting because it covered 63 sites and had 2876 respondents, not just a single facility or company.  They then performed a principal component analysis to reduce the survey data to nine factors.  Next, they mapped the nine survey factors against the safety culture traits from the NRC's Feb 2010 workshop, INPO principles, and Reactor Oversight Program components and found them generally consistent.  We have no issue with that conclusion. 

Finally, they ran correlations between the nine survey factors and INPO/NRC safety-related performance measures.  I assume the correlations included in his presentation are statistically significant.  Dr. Koves concludes that “Survey factors are related to other measures of organizational effectiveness and equipment performance . . . .” (p. 19)

The NRC reviewed the INPO study and found the “methods, data analyses and interpretations [were] appropriate.” ***

The Good News

Kudos to INPO for performing this study.  This analysis is the first (only?) large-scale attempt of which I am aware to relate safety culture survey data to anything else.  While we want to avoid over-inferring from the analysis, primarily because we have neither the raw data nor the complete analysis, we can find support in the correlation tables for things we’ve been saying for the last year on this blog.

For example, the factor with the highest average correlation to the performance measures is Management Decision Making, i.e., what management actually does in terms of allocating resources, setting priorities and walking the talk.  Prioritizing Safety, i.e., telling everyone how important it is and promulgating safety policies, is 7th (out of 9) on the list.  This reinforces what we have been saying all along: Management actions speak louder than words.

Second, the performance measures with the highest average correlation to the safety culture survey factors are the Human Error Rate and Unplanned Auto Scrams.  I take this to indicate that surveys at plants with obvious performance problems are more likely to recognize those problems.  We have been saying the value of safety culture surveys is limited, but can be more useful when perception (survey responses) agrees with reality (actual conditions).  Highly visible problems may drive perception and reality toward congruence.  For more information on perception vs. reality, see Bob Cudlin’s recent posts here and here.

Notwithstanding the foregoing, our concerns with this study far outweigh our comfort at seeing some putative findings that support our theses.

Issues and Questions

The industry has invested a lot in safety culture surveys, and the industry, NRC and INPO all have a definite interest (for different reasons) in promoting the validity and usefulness of safety culture survey data.  However, the published correlations are moderate, at best.  Should the public feel more secure over a positive safety culture survey because there's a "significant" correlation between survey results and some performance measures, some of which are judgment calls themselves?  Is this an effort to create a perception of management, measurement and control in a situation where the public has few other avenues for obtaining information about how well these organizations are actually protecting the public?

More important, what are the linkages (causal, logical or other) between safety culture survey results and safety-related performance data (evaluations and objective performance metrics) such as those listed in the INPO presentation?  Most folks know that correlation is not causation, i.e., just because two variables move together with some consistency doesn’t mean that one causes the other.  But what evidence exists that there is any relationship between the survey factors and the metrics?  Our skepticism might be assuaged if the analysts took some of the correlations, say, decision making and unplanned reactor scrams, and drilled into the scrams data for at least anecdotal evidence of how non-conservative decision making contributed to x number of scrams.  We would be surprised to learn that anyone has followed the string on any scram events all the way back to safety culture.

Wrapping Up

The INPO analysis is a worthy first effort to tie safety culture survey results to other measures of safety-related performance but the analysis is far too incomplete to earn our endorsement.  We look forward to seeing any follow-on research that addresses our concerns.


*  “Presentation for Safety Culture Public Meeting - Traits Comparison Charts,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102670381, p. 4.

**  G.K. Koves, “Safety Culture Traits Validation in Power Reactors,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010).

***  V. Barnes, “NRC Independent Evaluation of INPO’s Safety Culture Traits Validation Study,” NRC Public Meeting, Las Vegas, NV (Sept 28, 2010) ADAMS Accession Number ML102660125, p. 8.