Monday, September 26, 2011

Beyond Training - Reinforcing Culture

One of our recurring themes has been how to strengthen safety culture, either to sustain an acceptable level of culture or to address weaknesses and improve it.  We have been skeptical of the most common initiative - retraining personnel on safety culture principles and values.  Simply put, we don’t believe you can PowerPoint or poster your way to culture improvement.

By comparison we were more favorably inclined to some of the approaches put forth in a recent New York Times interview of Andrew Thompson, a Silicon Valley entrepreneur.  As Thompson observes,

“...it’s the culture of what you talk about, what you celebrate, what you reward, what you make visible.  For example, in this company, which is very heavily driven by intellectual property, if you file a patent or have your name on a patent, we give you a little foam brain.”*

Foam “brains”.  How clever.  He goes on to describe other ideas such as employees being able to recognize each other for demonstrating desired values by awarding small gold coins (a nice touch here as the coins have monetary value that can be realized or retained as a visible trophy), and volunteer teams that work on aspects of culture.  The common denominator of much of this: management doesn’t do it, employees do.

*  A. Bryant, “Speak Frankly, but Don’t Go ‘Over the Net’,” New York Times (September 17, 2011).

Monday, September 12, 2011

Understanding the Risks in Managing Risks

Our recent blog posts have discussed the work of anthropologist Constance Perin.  This post looks at her book, Shouldering Risks: The Culture of Control in the Nuclear Power Industry.*  The book presents four lengthy case studies of incidents at three nuclear power plants and Perin’s analysis which aims to explain the cultural attributes that facilitated the incidents’ occurrence or their unfavorable evolution.

Because they fit nicely with our interest in decision-making, this post will focus on the two case studies that concerned hardware issues.**  The first case involved a leaking, unisolable valve in the reactor coolant system (RCS) that needed repacking, a routine job.  The mechanics put the valve on its backseat, opened it, observed the packing moving up (indicating that the water pressure was too high or the backseat step hadn't worked), and closed it up.  After management meetings to review the situation, the mechanics tried again, packing came out, and the leak became more serious.  The valve stem and disc had separated, a fact that was belatedly recognized.  The leak was eventually sufficiently controlled so the plant could wait until the next outage to repair/replace the valve.  

The second case involved a switchyard transformer that exhibited a hot spot during a thermography examination.  Managers initially thought they had a circulating current issue, a common problem.  After additional investigations, including people climbing ladders alongside the transformer, a cover bolt was removed and the employee saw a glow inside the transformer, the result of a major short.  Transformers can explode, and have exploded, from such thermal stresses, but the plant was able to shut down safely to repair/replace the transformer.

In both cases, there was at least one individual who knew (or strongly suspected) that something more serious was wrong from the get-go but was unable to get the rest of the organization to accept a more serious, i.e., costly, diagnosis.

Why were the plant organizations so willing, even eager, to assume the more conventional explanations for the problems they were seeing?  Perin provides a multidimensional framework that helps answer that question.

The first dimension is the tradeoff quandary, the ubiquitous tension between production and cost, including costs associated with safety.  Plant organizations are expected to be making electricity, at a budgeted cost, and that subtle (or not-so-subtle) pressure colors the discussion of any problem.  There is usually a preference for a problem explanation and corrective action that allows the plant to continue running.

Three control logics constitute a second dimension.  The calculated logics are the theory of how a plant is (or should be) designed, built, and operated.  The real-time logics consist of the knowledge of how things actually work in practice.  Policy logics come from above, and represent generalized guidelines or rules for behavior, including decision-making.  An “answer” that comes from calculated or policy logic will be preferred over one that comes from real-time logic, partly because the former have been developed by higher-status groups and partly because such answers are more defensible to corporate bosses and regulators.

Finally, traditional notions of group and individual status and a key status property, credibility, populate a third dimension: design engineers over operators over system engineers over maintenance over others; managers over individual contributors; old-timers over newcomers.  Perin creates a construct of the various "orders"*** in a plant organization, specialists such as operators or system engineers.  Each order has its own worldview, values and logics – optimum conditions for nurturing organizational silos.  Information and work flows are mediated among different orders via plant-wide programs (themselves products of calculated and policy logics).
 
Application to Cases

The aforementioned considerations can be applied to the two cases.  Because the valve was part of the RCS, it should have been subject to more detailed planning, including additional risk analysis and contingency preparation.  This was pointed out by a new-to-his-job work planner, who was basically ignored because of his newcomer status.  And before the work started, the system engineer (SE) observed that this type of valve (which had a problem history at this plant and elsewhere) was prone to disc/stem separation, and that this particular valve appeared to have the problem based on his visual inspection (it had one thread less visible than other similar valves).  But the SE did not make his observations forcefully and/or officially (by initiating a condition report), so his accurate observation was not factored into the early decision-making.  Ultimately, neither man's concerns swayed the overall discussion, in which schedule was the highest priority.  A radiographic examination that would have revealed the disc/stem separation was not performed early on because radiography was an Engineering responsibility and the valve repair was a Maintenance project.

The transformer is on the non-nuclear side of the plant, which makes the attitudes toward it less focused and critical than for safety-related equipment.  The hot spot was discovered by a tech who was working with a couple of thermography consultants.  Thermography was a relatively new technology at this plant and not well-understood by plant managers (or trusted because early applications had given false alarms).  The tech said that the patterns he observed were not typical for circulating currents but neither he nor the consultants (the three people on-site who understood thermography) were in the meetings where the problem was discussed.  The circulating current theory was popular because (a) the plant had experienced such problems in the past and (b) addressing it could be done without shutting down the plant.  Production pressure, the nature of past problems, and the lower status of roles and equipment that are not safety related all acted to suppress the emergent new knowledge of what the problem actually was.  

Lessons Learned

Perin’s analytic constructs are complicated and not light reading.  However, the interviews in the case studies are easy to read and very revealing.  It will come as no surprise to people with consulting backgrounds that the interviewees were capable of significant introspection.  In the harsh light of hindsight, lots of folks can see what should (and could) have happened.  

The big question is what did those organizations learn?  Will they make the same mistakes again?  Probably not.  But will they misinterpret future weak or ambiguous signals of a different nascent problem?  That’s still likely.  “Conventional wisdom” codified in various logics and orders and guided by a production imperative remains a strong force working against the open discussion of alternative explanations for new experiences, especially when problem information is incomplete or fuzzy.  As Bob Cudlin noted in his August 17, 2011 post: [When dealing with risk-imbued issues] “the intrinsic uncertainties in significance determination opens the door to the influence of other factors - namely those ever present considerations of cost, schedule, plant availability, and even more personal interests, such as incentive programs and career advancement.”

   
*  C. Perin, Shouldering Risks: The Culture of Control in the Nuclear Power Industry, (Princeton, NJ: Princeton University Press, 2005).

**  The case studies and Perin’s analysis have been greatly summarized for this blog post.

***  The “orders” include outsiders such as NRC, INPO or corporate overseers.  Although this may not be totally accurate, I picture orders as akin to medieval guilds.

Wednesday, August 17, 2011

Additional Thoughts on Significance Culture

Our previous post introduced the work of Constance Perin,  Visiting Scholar in Anthropology at MIT, including her thesis of “significance culture” in nuclear installations.  Here we expand on the intersection of her thesis with some of our work. 

Perin places primary emphasis on the availability and integration of information to systematize and enhance the determination of risk significance.  This becomes the true organizing principle of nuclear operational safety and supplants the often hazy construct of safety culture.  We agree with the emphasis on more rigorous and informed assessments of risk as an organizing principle and focus for the entire organization. 

Perin observes: “Significance culture arises out of a knowledge-using and knowledge-creating paradigm. Its effectiveness depends less on “management emphasis” and “personnel attitudes” than on having an operational philosophy represented in goals, policies, priorities, and actions organized around effectively characterizing questionable conditions before they can escalate risk.” (Significance Culture, p. 3)*

We found a similar thought from Kenneth Brawn on a recent LinkedIn post under the Nuclear Safety Group.  He states, “Decision making, and hence leadership, is based on accurate data collection that is orchestrated, focused, real time and presented in a structured fashion for a defined audience….Managers make decisions based on stakeholder needs – the problem is that risk is not adequately considered because not enough time is taken (given) to gather and orchestrate the necessary data to provide structured information for the real time circumstances.” ** 

While seeing the potential unifying force of significance culture, we are mindful also that such determinations often are made under a cloak of precision that is not warranted or routinely achievable.  Such analyses are complex, uncertain, and subject to considerable judgment by the involved analysts and decision makers.  In other words, they are inherently fuzzy.  This limitation can only be partly remedied through better availability of information.  Nuclear safety does not generally include “bright lines” of acceptable or unacceptable risks, or finely drawn increments of risk.  Sure, PRA analyses and other “risk informed” approaches provide the illusion of quantitative precision, and often provide useful insight for devising courses of action that do not pose “undue risk” to public safety.  But one does not have to read too many Licensee Event Reports (LERs) to see that risk determinations are ultimately shades of gray.  For one example, see the background information on our decision scoring example involving a pipe leak in a 30” moderate energy piping elbow and interim repair.  The technical justification for the interim fix included terms such as “postulated”, “best estimate” and “based on the assumption”.  A full reading of the LER makes clear that the risk determination involved considerable qualitative judgment by the licensee in making its case and by the NRC in approving the interim measure.  That said, the NRC’s justification also rested in large part on a finding of “hardship or unusual difficulty” if a code repair were required immediately.

Where is this leading us?  Are poor safety decisions the result of a lack of quality information?  Perhaps.  However, another scenario that is at least equally likely is that the appropriate risk information may not be pursued vigorously, or the information may be interpreted in the light most favorable to the organization’s other priorities.  We believe that the intrinsic uncertainties in significance determination open the door to the influence of other factors - namely, those ever-present considerations of cost, schedule, plant availability, and even more personal interests, such as incentive programs and career advancement.  Where significance is fuzzy, it invites rationalization in the determination of risk and marginalization of the intrinsic uncertainties.  Thus a desired decision outcome could encourage tailoring of the risk determination to achieve the appropriate fit.  It may mean that Perin’s focus on “effectively characterizing questionable conditions” must also account for the presence and potential influence of other non-safety factors as part of the knowledge paradigm.

This brings us back to Perin’s ideas for how to pull the string and dig deeper into this subject.  She finds, “Condition reports and event reviews document not only material issues. Uniquely, they also document systemic interactions among people, priorities, and equipment — feedback not otherwise available.” (Significance Culture, p.5)  This emphasis makes a lot of sense and in her book, Shouldering Risks: The Culture of Control in the Nuclear Power Industry, she takes up the challenge of delving into the depths of a series of actual condition reports.  Stay tuned for our review of the book in a subsequent post.


*  C. Perin, “Significance Culture in Nuclear Installations,” a paper presented at the 2005 Annual Meeting of the American Nuclear Society (June 6, 2005).

**  You may be asked to join the LinkedIn Nuclear Safety group to view Mr. Brawn's comment and the discussion of which it is part.

Friday, August 12, 2011

An Anthropologist’s View

Academics in many disciplines study safety culture.  This post introduces to this blog the work of an MIT anthropologist, Constance Perin, and discusses a paper* she presented at the 2005 ANS annual meeting.

We picked a couple of the paper’s key recommendations to share with you.  First, Perin’s main point is to advocate the development of a “significance culture” in nuclear power plant organizations.  The idea is to organize knowledge and data in a manner that allows an organization to determine significance with respect to safety issues.  The objective is to increase an organization’s capabilities to recognize and evaluate questionable conditions before they can escalate risk.  We generally agree with this aim.  The real nub of safety culture effectiveness is how it shapes the way an organization responds to new or changing situations.

Perin understands that significance evaluation already occurs in both formal processes (e.g., NRC evaluations and PRAs) and in the more informal world of operational decisions, where trade-offs, negotiations, and satisficing behavior may be more dynamic and less likely to be completely rational.  She recommends that significance evaluation be ascribed a higher importance, i.e., be more formally and widely ingrained in the overall plant culture, and used as an organizing principle for defining knowledge-creating processes. 

Second, because of the importance of a plant's Corrective Action Program (CAP), Perin proposes making NRC assessment of the CAP the “eighth cornerstone” of the Reactor Oversight Process (ROP).  She criticizes the NRC’s categorization of cross-cutting issues for not being subject to specific criteria and performance indicators.  We have a somewhat different view.  Perin’s analysis does not acknowledge that the industry places great emphasis on each of the cross-cutting issues in terms of performance indicators and monitoring, including self-assessment.**  The same is true of the other cornerstones, where plants use many more indicators to track and trend performance than the few included in the ROP.  In our opinion, a real problem with the ROP is that its few indicators do not provide a reliable or forward-looking picture of nuclear safety.

The fault line in the CAP itself may be better characterized as a lack of measurement and assessment of how well the CAP functions to sustain a strong safety culture.  Importantly, such an approach would evaluate whether decisions on conditions adverse to quality not only properly assessed significance but also balanced the influence of any competing priorities.  Perin also recognizes that competing priorities exist, especially in the operational world, but making the CAP a cornerstone might actually lead to increased false confidence in the CAP if its relationship with safety culture were left unexamined.

Prof. Perin has also written a book, Shouldering Risks: The Culture of Control in the Nuclear Power Industry,*** which is an ethnographic analysis of nuclear organizations and specific events they experienced.  We will be reviewing this book in a future post.  We hope that her detailed drill down on those events will yield some interesting insights, e.g., how different parts of an organization looked at the same situation but had differing evaluations of its risk implications.

We have to admit we didn’t detect Prof. Perin on our radar screen; she alerted us to the presence of her work.  Based on our limited review to date, we think we share similar perspectives on the challenges involved in attaining and maintaining a robust safety culture.


*  C. Perin, “Significance Culture in Nuclear Installations,” a paper presented at the 2005 Annual Meeting of the American Nuclear Society (June 6, 2005).

** The issue may be one of timing.  Prof. Perin based her CAP recommendation, in part, on a 2001 study that suggested licensees’ self-regulation might be inadequate.  We have the benefit of a more contemporary view.  

*** C. Perin, Shouldering Risks: The Culture of Control in the Nuclear Power Industry, (Princeton, NJ: Princeton University Press, 2005).

Friday, July 15, 2011

Decision Scoring No. 2

This post introduces the second decision scoring example.  Click here, or the box above this post, to access the detailed decision summary and scoring feature.  

This example involves a proposed non-code repair to a leak in the elbow of service water system piping.  By opting for a non-code, temporary repair, a near term plant shutdown will be avoided but the permanent repair will be deferred for as long as 20 months.  In grading this decision for safety impact and decision strength, it may be helpful to think about what alternatives were available to this licensee.  We could think of several:

-    not perform a temporary repair, as current leakage was within tech spec limits, but implement an augmented inspection and monitoring program to promptly identify any further degradation.

-    perform the temporary repair as described but commit to perform the permanent repair within a shorter time period, say 6 months.

-    immediately shut down and perform the code repair.

Each of these alternatives would likely affect the potential safety impact of this leak condition and influence the perception of the decision strength.  For example a decision to shut down immediately and perform the code repair would likely be viewed as quite conservative, certainly more conservative than the other options.  Such a decision might provide the strongest reinforcement of safety culture.  The point is that none of these decisions is necessarily right or wrong, or good or bad.  They do however reflect more or less conservatism, and ultimately say something about safety culture.
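The conservatism ordering among these alternatives can be sketched in a few lines of code.  The numeric ratings below are purely illustrative assumptions for demonstration, not output of our scoring framework; the point is only that the alternatives order themselves along a conservatism scale rather than splitting into “right” and “wrong”.

```python
# Hypothetical conservatism ratings (1 = least conservative, 10 = most).
# The numbers are illustrative assumptions, not scored results.
alternatives = {
    "monitor only (no temporary repair)": 3,
    "temporary repair, permanent fix within 6 months": 6,
    "immediate shutdown and code repair": 9,
}

# Rank the alternatives from most to least conservative.
ranked = sorted(alternatives.items(), key=lambda kv: kv[1], reverse=True)
for name, rating in ranked:
    print(f"{rating:2d}  {name}")
```

Tracked over many decisions, such an ordering could show whether an organization habitually selects from the more or less conservative end of its available options.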

Wednesday, July 13, 2011

Decision No. 1 Scoring Results


We wanted to present the results to date for the first of the decision scoring examples.  (The decision scoring framework is discussed here.)  This decision involved the replacement of a bearing in the air handling unit for a safety related pump room.  After declaring the air unit inoperable, the bearing was replaced within the LCO time window.

We asked readers to assess the decision in two dimensions: potential safety impact and the strength of the decision, using anchored scales to quantify the scores.  The chart to the left shows the scoring results with the size of the data symbols related to the number of responses.  Our interpretation of the results is as follows:

First, most of the scores did coalesce in the mid ranges of each scoring dimension.  Based on the anchored scales, this meant most people thought the safety impact associated with the air handling unit problem was fairly minimal and did not extend out in time.  This is consistent with the fact that the air handler bearing was replaced within the LCO time window.  The people who scored safety significance in this mid range also scored the decision strength as one that reasonably balanced safety and other operational priorities.  This seems consistent with the fact that the licensee had also ordered a new shaft for the air handler and would install it at the next outage - the new shaft being necessary for addressing the cause of the bearing problem.  Notwithstanding that most scores were in the mid range, we find it interesting that there is still a spread of 4-7 in the scoring of decision strength, and a somewhat smaller spread of 4-6 in safety impact.  This is an attribute of decision scores that might be tracked closely to identify situations where the spreads change over time - perhaps signaling either disagreement regarding the merits of the decisions or a need for better communication of the bases for decisions.
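The spread-tracking idea can be illustrated with a short sketch.  The reader scores below are invented for demonstration, and the “wide spread” threshold is our assumption, not part of the published scoring method.

```python
from statistics import mean

# Hypothetical reader scores on the two anchored 1-10 scales for one
# decision.  These values are invented for illustration only.
scores = {
    "safety_impact":     [4, 5, 5, 6, 4, 5, 6],
    "decision_strength": [4, 5, 6, 7, 5, 4, 6],
}

SPREAD_THRESHOLD = 3  # assumed flag level: a spread this wide merits a look

for dimension, values in scores.items():
    spread = max(values) - min(values)
    flag = "WIDE SPREAD - investigate" if spread >= SPREAD_THRESHOLD else "ok"
    print(f"{dimension}: mean={mean(values):.1f}, spread={spread} ({flag})")
```

A widening spread over successive decisions could then serve as the early signal discussed above: either genuine disagreement about the decisions or poor communication of their bases.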

Second, while not a definitive trend, it is apparent that in the mid-range scores people tended to see decision strength in terms of safety impact.  In other words, in situations where the safety impact was viewed as greater (e.g., 6 or so), the perceived strength of the decision was viewed as somewhat less than when the safety impact was viewed as somewhat lower (e.g., 4 or so).  This trend was emphasized by the scores that rated decision strength at 9 based on safety impact of 2.  There is intrinsic logic to this and also may highlight to managers that an organization’s perception of safety priorities will be directly influenced by their understanding of the safety significance of the issues involved.  One can also see the potential for decision scores “explaining” safety culture survey results which often indicate a relatively high percentage of respondents “somewhat agreeing” that e.g., safety is a high priority, a smaller percentage “mostly agreeing” and a smaller percentage yet, “strongly agreeing”. 

Third, there were some scores that appeared to us to be “outside the ballpark”.  The scores that rated safety impact at 10 did not seem consistent with our reading of the air handling unit issue, including the note indicating that the licensee had assessed the safety significance as minimal.

Stay tuned for the next decision scoring example and please provide your input.

Friday, June 24, 2011

Rigged Decisions?

The Wall Street Journal reported on June 23, 2011* on an internal investigation conducted by Transocean, owner of the Deepwater Horizon drill rig, that placed much of the blame for the disaster on a series of decisions made by BP.  Is this news?  No, the blame game has been in full swing almost since the time of the rig explosion.  But we did note that Transocean’s conclusion was based on a razor sharp focus on:

“...a succession of interrelated well design, construction, and temporary abandonment decisions that compromised the integrity of the well and compounded the risk of its failure…”**  (p. 10)


Note, their report did not place the focus on the “attitudes, beliefs or values” of BP personnel or rig workers, and really did not let their conclusions drift into the fuzzy answer space of “safety culture”.  In fact the only mention of safety culture in their 200+ page report is in reference to a U.S. Coast Guard (USCG) inspection of the drill rig in 2009 which found:

“outstanding safety culture, performance during drills and condition of the rig.” (p. 201)

There is no mention of how the USCG reached such a conclusion and the report does not rely on it to support its conclusions.  It would not be the first time that a favorable safety culture assessment at a high risk enterprise preceded a major disaster.***

We also found the following thread in the findings that reinforce the importance of recognizing and understanding the impact of underlying constraints on decisions:

“The decisions, many made by the operator, BP, in the two weeks leading up to the incident, were driven by BP’s knowledge that the geological window for safe drilling was becoming increasingly narrow.” (p.10)

The fact is, decisions get squeezed all the time, resulting in decisions that may reduce margins but arguably are still “acceptable”.  But such decisions do not necessarily lead to unsafe, much less disastrous, results.  Most of the time the system is not challenged, nothing bad happens, and you could even say the marginal decisions are reinforced.  Are these tradeoffs to accommodate conflicting priorities the result of a weakened safety culture?  Perhaps.  But we suspect that the individuals making the decisions would say they believed safety was their priority, and culture may have appeared normal to outsiders as well (e.g., the USCG).  The paradox occurs because decisions can trend in a weaker direction before other, more distinct evidence of degrading culture becomes apparent.  In this case, a very big explosion.

*  B. Casselman and A. Gonzalez, "Transocean Puts Blame on BP for Gulf Oil Spill," wsj.com (June 23, 2011).

** "Macondo Well Incident: Transocean Investigation Report," Vol I, Transocean, Ltd. (June 2011).

*** For example, see our August 2, 2010 post.

Tuesday, June 21, 2011

Decisions….Decisions

Safety Culture Performance Measures

Developing forward-looking performance measures for safety culture remains a key challenge today and is the logical next step following the promulgation of the NRC’s policy statement on safety culture.  The need remains high: the NRC continues to identify safety culture issues only after weaknesses have developed and ultimately manifested themselves in traditional (lagging) performance indicators.

Current practice has continued to rely on safety culture surveys which focus almost entirely on attitudes and perceptions about safety.  But other cultural values are also present in nuclear operations - such as meeting production goals - and it is the rationalization of competing values on a daily basis that is at the heart of safety culture.  In essence decision makers are pulled in several directions by these competing priorities and must reach answers that accord safety its appropriate priority.

Our focus is on safety management decisions made every day at nuclear plants; e.g., operability determinations, exceeding LCO limits, LER determinations, JCOs, as well as many determinations associated with problem reporting and corrective action.  We are developing methods to “score” decisions based on how well they balance competing priorities and to relate those scores to inferences about safety culture.  As part of that process we are asking our readers to participate in the scoring of decisions that we will post each week - and then we will share the results and interpretation.  The scoring method will be a more limited version of our developmental effort but should illustrate some of the benefits of a decision-centric view of safety culture.
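A decision-centric record of this kind might be structured as in the following sketch.  The fields and the balance heuristic are our illustrative assumptions, not the actual developmental method; they show only the general shape of scoring a decision on both dimensions and asking whether strength kept pace with impact.

```python
from dataclasses import dataclass

@dataclass
class DecisionScore:
    """One scored safety management decision (illustrative structure)."""
    description: str
    safety_impact: int      # anchored 1-10 scale
    decision_strength: int  # anchored 1-10 scale

    def balanced(self) -> bool:
        # Assumed heuristic: a higher-impact decision should exhibit at
        # least comparable strength; the margin of 2 is an assumption.
        return self.decision_strength >= self.safety_impact - 2

# Invented examples for demonstration.
decisions = [
    DecisionScore("operability call on degraded air handler", 5, 5),
    DecisionScore("deferral of code repair on leaking elbow", 7, 4),
]
for d in decisions:
    print(d.description, "->", "balanced" if d.balanced() else "review")
```

Aggregating such records over time, rather than judging any single decision, is what would support inferences about the underlying safety culture.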

Look in the right column for the links to Score Decisions.  They will take you to the decision summaries and score cards.  We look forward to your participation and welcome any questions or comments.