Our previous post introduced the work of Constance Perin, Visiting Scholar in Anthropology at MIT, including her thesis of “significance culture” in nuclear installations. Here we expand on the intersection of her thesis with some of our work.
Perin places primary emphasis on the availability and integration of information to systematize and enhance the determination of risk significance. This becomes the true organizing principle of nuclear operational safety and supplants the often hazy construct of safety culture. We agree with the emphasis on more rigorous and informed assessments of risk as an organizing principle and focus for the entire organization.
Perin observes: “Significance culture arises out of a knowledge-using and knowledge-creating paradigm. Its effectiveness depends less on “management emphasis” and “personnel attitudes” than on having an operational philosophy represented in goals, policies, priorities, and actions organized around effectively characterizing questionable conditions before they can escalate risk.” (Significance Culture, p. 3)*
We found a similar thought from Kenneth Brawn in a recent LinkedIn post under the Nuclear Safety group. He states, “Decision making, and hence leadership, is based on accurate data collection that is orchestrated, focused, real time and presented in a structured fashion for a defined audience….Managers make decisions based on stakeholder needs – the problem is that risk is not adequately considered because not enough time is taken (given) to gather and orchestrate the necessary data to provide structured information for the real time circumstances.” **
While we see the potential unifying force of significance culture, we are also mindful that such determinations are often made under a cloak of precision that is neither warranted nor routinely achievable. Such analyses are complex, uncertain, and subject to considerable judgment by the analysts and decision makers involved. In other words, they are inherently fuzzy. This limitation can only be partly remedied through better availability of information. Nuclear safety does not generally include “bright lines” of acceptable or unacceptable risk, or finely drawn increments of risk. True, PRA analyses and other “risk-informed” approaches provide the illusion of quantitative precision, and they often yield useful insight for devising courses of action that do not pose “undue risk” to public safety. But one does not have to read many Licensee Event Reports (LERs) to see that risk determinations are ultimately shades of gray. For one example,
see the background information on our decision scoring example involving a leak in a 30-inch moderate-energy piping elbow and its interim repair. The technical justification for the interim fix included terms such as “postulated,” “best estimate,” and “based on the assumption.” A full reading of the LER makes clear that the risk determination involved considerable qualitative judgment by the licensee in making its case and by the NRC in approving the interim measure. That said, the NRC’s justification also rested in large part on a finding of “hardship or unusual difficulty” if a code repair were to be required immediately.
Where is this leading us? Are poor safety decisions the result of a lack of quality information? Perhaps. However, a scenario at least equally likely is that the appropriate risk information is not pursued vigorously, or that the information is interpreted in the light most favorable to the organization’s other priorities. We believe the intrinsic uncertainties in significance determination open the door to the influence of other factors, namely the ever-present considerations of cost, schedule, and plant availability, and even more personal interests such as incentive programs and career advancement. Where significance is fuzzy, it invites rationalization of the risk determination and marginalization of the intrinsic uncertainties. Thus a desired decision outcome can encourage tailoring the risk determination to achieve the appropriate fit. It may mean that Perin’s focus on “effectively characterizing questionable conditions” must also account for the presence and potential influence of non-safety factors as part of the knowledge paradigm.
This brings us back to Perin’s ideas for how to pull the string and dig deeper into this subject. She finds, “Condition reports and event reviews document not only material issues. Uniquely, they also document systemic interactions among people, priorities, and equipment — feedback not otherwise available.” (Significance Culture, p. 5) This emphasis makes a lot of sense and in her book,
Shouldering Risks: The Culture of Control in the Nuclear Power Industry, she takes up the challenge of delving into the depths of a series of actual condition reports. Stay tuned for our review of the book in a subsequent post.
** You may be asked to join the LinkedIn Nuclear Safety group to view Mr. Brawn's comment and the discussion of which it is part.