Wednesday, August 17, 2011

Additional Thoughts on Significance Culture

Our previous post introduced the work of Constance Perin, Visiting Scholar in Anthropology at MIT, including her thesis of “significance culture” in nuclear installations.  Here we expand on the intersection of her thesis with some of our work.

Perin places primary emphasis on the availability and integration of information to systematize and enhance the determination of risk significance.  In her view, this becomes the true organizing principle of nuclear operational safety, supplanting the often hazy construct of safety culture.  We agree with this emphasis on more rigorous and informed assessments of risk as an organizing principle and focus for the entire organization.

Perin observes: “Significance culture arises out of a knowledge-using and knowledge-creating paradigm. Its effectiveness depends less on “management emphasis” and “personnel attitudes” than on having an operational philosophy represented in goals, policies, priorities, and actions organized around effectively characterizing questionable conditions before they can escalate risk.” (Significance Culture, p. 3)*

We found a similar thought from Kenneth Brawn in a recent LinkedIn post under the Nuclear Safety Group.  He states, “Decision making, and hence leadership, is based on accurate data collection that is orchestrated, focused, real time and presented in a structured fashion for a defined audience….Managers make decisions based on stakeholder needs – the problem is that risk is not adequately considered because not enough time is taken (given) to gather and orchestrate the necessary data to provide structured information for the real time circumstances.” **

While seeing the potential unifying force of significance culture, we are mindful that such determinations are often made under a cloak of precision that is neither warranted nor routinely achievable.  Such analyses are complex, uncertain, and subject to considerable judgment by the involved analysts and decision makers.  In other words, they are inherently fuzzy.  This limitation can only be partly remedied through better availability of information.  Nuclear safety does not generally include “bright lines” of acceptable or unacceptable risks, or finely drawn increments of risk.  Sure, PRA analyses and other “risk informed” approaches convey an illusion of quantitative precision, and they often provide useful insight for devising courses of action that do not pose “undue risk” to public safety.  But one does not have to read many Licensee Event Reports (LERs) to see that risk determinations are ultimately shades of gray.  For one example, see the background information on our decision scoring example involving a pipe leak in a 30” moderate energy piping elbow and its interim repair.  The technical justification for the interim fix included terms such as “postulated”, “best estimate” and “based on the assumption”.  A full reading of the LER makes clear that the risk determination involved considerable qualitative judgment by the licensee in making its case and by the NRC in approving the interim measure.  That said, the NRC’s justification also rested in large part on a finding of “hardship or unusual difficulty” if a code repair were required immediately.
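
To make the “shades of gray” point concrete, here is a minimal sketch, assuming a toy risk-increase model whose three inputs are each known only to within a range.  Every number is invented for illustration; none is drawn from a real PRA or from the LER above.  The single “best estimate” lands below a hypothetical significance threshold, while much of the plausible range of the very same calculation lies above it.

```python
import random

# Hypothetical illustration: a risk-increase estimate built from three
# uncertain inputs, each known only to within a range.  Every number here
# is invented for illustration; none comes from a real PRA or LER.

random.seed(42)  # reproducible illustration

def delta_risk(pipe_failure_prob, mitigation_failure_prob, exposure_factor):
    """Toy risk-increase model: the product of three uncertain inputs."""
    return pipe_failure_prob * mitigation_failure_prob * exposure_factor

# The analyst's single "best estimate" point values (hypothetical).
point_estimate = delta_risk(1e-4, 1e-2, 0.5)

# Monte Carlo over the plausible range of each input (also hypothetical).
samples = sorted(
    delta_risk(
        random.uniform(5e-5, 5e-4),  # postulated pipe failure probability
        random.uniform(5e-3, 5e-2),  # mitigation unavailability
        random.uniform(0.2, 1.0),    # fraction of time the condition matters
    )
    for _ in range(100_000)
)

THRESHOLD = 1e-6  # hypothetical significance threshold
lo = samples[int(0.05 * len(samples))]
hi = samples[int(0.95 * len(samples))]

print(f"best estimate:           {point_estimate:.2e}")
print(f"5th-95th percentile:     {lo:.2e} to {hi:.2e}")
print(f"samples above threshold: "
      f"{sum(s > THRESHOLD for s in samples) / len(samples):.0%}")
```

The point is not the arithmetic; it is that the headline number and the spread behind it can tell different stories, and which one gets reported is itself a judgment call.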

Where is this leading us?  Are poor safety decisions the result of a lack of quality information?  Perhaps.  However, another scenario, at least equally likely, is that the appropriate risk information may not be pursued vigorously, or the information may be interpreted in the light most favorable to the organization’s other priorities.  We believe the intrinsic uncertainties in significance determination open the door to the influence of other factors - namely those ever-present considerations of cost, schedule, plant availability, and even more personal interests, such as incentive programs and career advancement.  Where significance is fuzzy, it invites rationalization in the determination of risk and marginalization of the intrinsic uncertainties.  Thus a desired decision outcome could encourage tailoring of the risk determination to achieve the appropriate fit, as sketched below.  It may mean that Perin’s focus on “effectively characterizing questionable conditions” must also account for the presence and potential influence of non-safety factors as part of the knowledge paradigm.
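
Continuing the toy model above (numbers still invented), a minimal sketch of the tailoring concern: two analysts who each choose inputs from within the same defensible ranges can land on opposite sides of the same hypothetical threshold.

```python
# Hypothetical sketch of the "tailoring" concern: two assumption sets,
# both inside the defensible input ranges used above, land on opposite
# sides of the same significance threshold.  All values are invented.

THRESHOLD = 1e-6  # hypothetical significance threshold

def delta_risk(pipe_failure_prob, mitigation_failure_prob, exposure_factor):
    """Toy risk-increase model: the product of three uncertain inputs."""
    return pipe_failure_prob * mitigation_failure_prob * exposure_factor

assumption_sets = {
    "conservative analyst": (4e-4, 4e-2, 0.9),  # high end of each range
    "favorable analyst":    (8e-5, 8e-3, 0.3),  # low end of each range
}

for label, inputs in assumption_sets.items():
    estimate = delta_risk(*inputs)
    verdict = "significant" if estimate > THRESHOLD else "not significant"
    print(f"{label}: {estimate:.2e} -> {verdict}")
```

Neither analyst has falsified anything; each choice is defensible in isolation.  That is precisely what makes the influence of non-safety factors so hard to see after the fact.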

This brings us back to Perin’s ideas for how to pull the string and dig deeper into this subject.  She finds, “Condition reports and event reviews document not only material issues. Uniquely, they also document systemic interactions among people, priorities, and equipment — feedback not otherwise available.” (Significance Culture, p. 5)  This emphasis makes a lot of sense, and in her book, Shouldering Risks: The Culture of Control in the Nuclear Power Industry, she takes up the challenge of delving into the depths of a series of actual condition reports.  Stay tuned for our review of the book in a subsequent post.


*  C. Perin, “Significance Culture in Nuclear Installations,” a paper presented at the 2005 Annual Meeting of the American Nuclear Society (June 6, 2005).

**  You may be asked to join the LinkedIn Nuclear Safety group to view Mr. Brawn's comment and the discussion of which it is part.

Friday, August 12, 2011

An Anthropologist’s View

Academics in many disciplines study safety culture.  This post introduces the work of MIT anthropologist Constance Perin and discusses a paper* she presented at the 2005 ANS annual meeting.

We picked a couple of the paper’s key recommendations to share with you.  First, Perin’s main point is to advocate the development of a “significance culture” in nuclear power plant organizations.  The idea is to organize knowledge and data in a manner that allows an organization to determine significance with respect to safety issues.  The objective is to increase an organization’s capabilities to recognize and evaluate questionable conditions before they can escalate risk.  We generally agree with this aim.  The real nub of safety culture effectiveness is how it shapes the way an organization responds to new or changing situations.

Perin understands that significance evaluation already occurs in both formal processes (e.g., NRC evaluations and PRAs) and in the more informal world of operational decisions, where trade-offs, negotiations, and satisficing behavior may be more dynamic and less likely to be completely rational.  She recommends that significance evaluation be ascribed a higher importance, i.e., be more formally and widely ingrained in the overall plant culture, and used as an organizing principle for defining knowledge-creating processes. 

Second, because of the importance of a plant's Corrective Action Program (CAP), Perin proposes making NRC assessment of the CAP the “eighth cornerstone” of the Reactor Oversight Process (ROP).  She criticizes the NRC’s categorization of cross-cutting issues because they are not subjected to specific criteria and performance indicators.  We have a somewhat different view.  Perin’s analysis does not acknowledge that the industry places great emphasis on each of the cross-cutting issues in terms of performance indicators and monitoring, including self-assessment.**  The same is true for the other cornerstones, where plants use many more indicators to track and trend performance than the few included in the ROP.  In our opinion, a real problem with the ROP is that its few indicators do not provide a reliable or forward-looking picture of nuclear safety.

The fault line in the CAP itself may be better characterized as a lack of measurement and assessment of how well the CAP functions to sustain a strong safety culture.  Importantly, such an approach would evaluate whether decisions on conditions adverse to quality properly assessed significance and appropriately balanced the influence of any competing priorities; a rough sketch of what such scoring might look like follows below.  Perin also recognizes that competing priorities exist, especially in the operational world, but making the CAP a cornerstone might actually lead to increased false confidence in the CAP if its relationship with safety culture were left unexamined.
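
As a rough sketch of what we mean, the fragment below scores hypothetical CAP decisions on two invented scales: rigor of the significance assessment and visibility of competing-priority influence.  The condition report IDs, scales, weights, and review threshold are all our own illustrative assumptions, not an industry or NRC method.

```python
from dataclasses import dataclass

# Hypothetical sketch of CAP decision scoring: rate each corrective-action
# decision on how well significance was assessed and how strongly competing
# priorities appear to have influenced the outcome.  The fields, scales,
# weights, and threshold are invented for illustration.

@dataclass
class CapDecision:
    condition_id: str
    significance_rigor: int   # 1 (ad hoc) .. 5 (well supported), invented scale
    priority_influence: int   # 1 (none) .. 5 (dominated decision), invented scale

def decision_score(d: CapDecision) -> float:
    """Toy composite: reward rigor, penalize undue cost/schedule influence."""
    return d.significance_rigor - 0.5 * d.priority_influence

decisions = [
    CapDecision("CR-2011-0142", significance_rigor=4, priority_influence=1),
    CapDecision("CR-2011-0217", significance_rigor=2, priority_influence=4),
]

for d in decisions:
    flag = "  <- review" if decision_score(d) < 2.0 else ""
    print(f"{d.condition_id}: score {decision_score(d):.1f}{flag}")
```

Trending such scores over time would say more about whether the CAP is sustaining safety culture than a simple count of condition reports opened and closed.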

Prof. Perin has also written a book, Shouldering Risks: The Culture of Control in the Nuclear Power Industry,*** which is an ethnographic analysis of nuclear organizations and specific events they experienced.  We will be reviewing this book in a future post.  We hope that her detailed drill down on those events will yield some interesting insights, e.g., how different parts of an organization looked at the same situation but had differing evaluations of its risk implications.

We have to admit that Prof. Perin wasn’t on our radar screen; she alerted us to her work.  Based on our limited review to date, we think we share similar perspectives on the challenges involved in attaining and maintaining a robust safety culture.


*  C. Perin, “Significance Culture in Nuclear Installations,” a paper presented at the 2005 Annual Meeting of the American Nuclear Society (June 6, 2005).

** The issue may be one of timing.  Prof. Perin based her CAP recommendation, in part, on a 2001 study that suggested licensees’ self-regulation might be inadequate.  We have the benefit of a more contemporary view.  

*** C. Perin, Shouldering Risks: The Culture of Control in the Nuclear Power Industry (Princeton, NJ: Princeton University Press, 2005).