Showing posts with label Nuclear. Show all posts

Tuesday, June 20, 2017

Learning About Nuclear Safety Culture from the Web, Maybe

Image: The Internet (Source: Wikipedia)
We’ve come across some Internet content (one website, one article) that purports to inform the reader about nuclear safety culture (NSC).  This post reviews the content and provides our perspective on its value.

NSC Website

It appears the title of this site is “Nuclear Safety Culture”* and the primary target is journalists who want an introduction to NSC concepts, history and issues.  It is a product of a group of European entities.  It is a professional-looking site that covers four major topics; we’ll summarize them in some detail to show their wide scope and shallow depth. 

Nuclear Safety Culture covers five sub-topics:

History traces the shift in attitudes toward, and protection from, ionizing radiation as its possible consequences became better known, but the story ends in the 1950s.

Key actions describes the roles of internal and external stakeholders during routine operations and emergency situations.  The focus is on power production, although medicine, industrial uses and weapons are also mentioned.

Definition of NSC starts with INSAG (esp. INSAG-4), then adds INPO’s directive to emphasize safety over competing goals, and a familiar list of attributes from the Nuclear Safety Journal.  As usual, there is nothing in the attributes about executive compensation or the importance of a systems view.

IAEA safety principles are self-explanatory.

Key scientific concepts covers the units of radiation dose, intake and exposure.  Some values are shown for typical activities, but only one legal limit, for US airport X-rays, is included.**  There is no information in this sub-topic on how much radiation a person can tolerate or the regulatory limits for occupational exposure.

From Events to Accidents has two sub-topics:

From events to accidents describes the seven-level International Nuclear Event Scale (from minor anomaly to major accident), but the scale itself is not shown.  This is a major omission.

Defence in depth discusses this important concept but provides only one example: the layers of physical protection between a fuel rod in a reactor and the environment outside the containment.

Controversies has two sub-topics:

Strengths and weaknesses discusses some of the nuclear industry’s issues and characteristics: industry transparency as a double-edged sword, where increased information on events may be used to criticize a plant owner; general radiation protection standards for the industry; uncertainties surrounding the health effects of low radiation doses; the usual nuclear waste issues; technology evolution through generations of reactors; stress tests for European reactors; supply chain realities, where a problem anywhere is used against the entire industry; the political climate, focusing on Germany and France; and energy economics that have diminished nuclear’s competitiveness.  Overall, this is a hodgepodge of topics and a B- discussion.

The human factor briefly discusses the “blame culture” and the need for a systemic view, then summarizes the Korean and French document-falsification events.

Stories summarizes three events: the Brazilian theft of a radioactive source, Chernobyl and Fukushima.  They are all reported in an overly dramatic style although the basic facts are probably correct.

The authors describe what they call the “safety culture breach” for each event.  The problem is they commingle overarching cultural issues (e.g., TEPCO’s overconfident management) with far more specific failures (e.g., violations of safety and security rules) and consequences of weak NSC (e.g., plant design inadequacies).  It makes one wonder whether the author(s) of this section have a clear notion of what NSC is.

It isn’t apparent how helpful this site will be for newbie journalists; it is certainly not a complete “toolkit.”  Some topics are presented in an over-simplified manner and others are missing key figures.  In terms of examples, the site emphasizes major accidents (the ultimate trailing indicators) and ignores the small events, normalization of deviance, organizational drift and other dynamics that make up the bulk of daily life in an organization.  Overall, the toolkit looks a bit like a rush job or unedited committee work; e.g., the section on the major accidents is satisfactory but others are incomplete.  Importantly (or perhaps thankfully), the authors offer no original observations or insights with respect to NSC.  It’s worrisome that what the site creators call NSC is often just the safety practices that evolved as the hazards of radiation became better known. 

NSC Article

There is an article on NSC in the online version of Power magazine.  We are not publishing a link to the article because it isn’t very good; it looks more like a high schooler’s Internet-sourced term paper than a thoughtful reference or essay on NSC.

However, like the stopped clock that shows the correct time twice per day, such an article can contain a worthwhile nugget.  After summarizing a research paper that correlated plants’ performance indicators with assessments of their NSC attributes (a paper we reviewed on Oct. 5, 2014), the author says “There are no established thresholds for determining whether a safety culture is ‘healthy’ or ‘unhealthy.’”  That’s correct.  After NSC assessors consolidate their interviews, focus groups, observations, surveys and document reviews, they always identify some improvement opportunities but the usual overall grade is “pass.”***  There’s no point score, meter or gauge.  Perhaps there should be.

Our Perspective

Don’t waste your time with pap.  Go to primary sources; an excellent starting point is the survey of NSC literature performed by a U.S. National Laboratory (which we reviewed on Feb. 10, 2013).  Click on our References label to get other possibilities and follow folks who actually know something about NSC, like Safetymatters.


*  Nuclear Safety Culture was developed as part of the NUSHARE project under the aegis of the European Nuclear Education Network.  Retrieved June 19, 2017.

**  The airport X-ray limit happens to be the same as the amount of radiation emitted by an ordinary banana.

***  A violation of the Safety Conscious Work Environment (SCWE) regulations is quite different.  There it’s zero tolerance and if there’s a credible complaint about actual retaliation for raising a safety issue, the licensee is in deep doo-doo until they convince the regulator they have made the necessary adjustments in the work environment.

Wednesday, July 30, 2014

National Research Council: Safety Culture Lessons Learned from Fukushima

The National Research Council has released a report* on lessons learned from the Fukushima nuclear accident that may be applicable to U.S. nuclear plants.  The report begins with a recitation of various Fukushima aspects including site history, BWR technology, and plant failure causes and consequences.  Lessons learned were identified in the areas of Plant Operations and Safety Regulations, Offsite Emergency Management, and Nuclear Safety Culture (SC).  This review focuses on the SC aspects of the report.

Spoiler alert: the report reflects the work of a 24-person committee, with the draft reviewed by two dozen other individuals.**  We suggest you adjust your expectations accordingly.

The SC chapter of the report provides some background on SC and echoes the by-now familiar cultural issues at both Tokyo Electric Power Company (TEPCO) and Japan’s Nuclear Energy Agency.  Moving to the U.S., the committee summarizes the current situation in a finding: “The U.S. nuclear industry, acting through the Institute of Nuclear Power Operations, has voluntarily established nuclear safety culture programs and mechanisms for evaluating their implementation at nuclear plants. The U.S. Nuclear Regulatory Commission has published a policy statement on nuclear safety culture, but that statement does not contain implementation steps or specific requirements for industry adoption.” (p. 7-8)  This is accurate as far as it goes.

After additional discussion of the U.S. nuclear milieu, the chapter concludes with two recommendations, reproduced below along with associated commentary.

An Effective, Independent Regulator

“RECOMMENDATION 7.2A: The U.S. Nuclear Regulatory Commission and the U.S. nuclear power industry must maintain and continuously monitor a strong nuclear safety culture in all of their safety-related activities. Additionally, the leadership of the U.S. Nuclear Regulatory Commission must maintain the independence of the regulator. The agency must ensure that outside influences do not compromise its nuclear safety culture and/or hinder its discussions with and disclosures to the public about safety-related matters.” (pp. S-9, 7-17)

In the lead-up to this recommendation, there was some lack of unanimity on whether the NRC is sufficiently independent and whether some degree of regulatory capture has occurred.  The debate covered industry involvement in rule-making, Davis-Besse and other examples.

We saw one quote worth repeating here: “The president and Senate of the United States also play important roles in helping to maintain the USNRC’s regulatory independence by nominating and appointing highly qualified agency leaders (i.e., commissioners) and working to ensure that the agency is free from undue influences.” (pp. 7-14/15)  We’ll leave it to the reader to determine if the executive and legislative branches met that standard with the previous NRC chairman and the two current commissioner nominees, both lawyers—one an NRC lifer and the other a former staffer on the Hill.

Snarky comment notwithstanding, the first recommendation is a motherhood statement and borderline tautology (who can envision the effective negation of any of the three imperative statements?).  More importantly, it appears only remotely related to the concept of SC; even at its simplest, SC consists of values and artifacts, and there’s not much of either in the recommendation.

Increased Industry Transparency

“RECOMMENDATION 7.2B: The U.S. nuclear industry and the U.S. Nuclear Regulatory Commission should examine opportunities to increase the transparency of and communication about their efforts to assess and improve their nuclear safety cultures.” (pp. S-9, 7-17)

The discussion includes a big kiss for INPO.  “INPO has taken the lead for promoting a strong nuclear safety culture in the U.S. nuclear industry through training and evaluation programs.” (p. 7-10)  The praise for INPO continues in an attachment to the SC chapter but it eventually gets to the elephant in the room: “The results of INPO’s inspection program are shared among INPO membership, but such information is not made available to the public. . . . Releases of summaries of these inspections by management to the public would help increase transparency.” (p. 7-21)

The committee recognizes that implementing the recommendation “would require that the industry and regulators disclose additional information to the public about their efforts to assess safety culture effectiveness, remediate deficiencies, and implement improvements.” (p. 7-17)

At least transparency is a cultural attribute.  We have long opined that the nuclear industry’s penchant for secrecy is a major contributor to the industry being its own worst enemy in the court of public opinion. 

Our Perspective

This report looks like what it is: a crowd-sourced effort by a focus group of academics using the National Academy of Sciences’ established bureaucratic processes.  The report is 367 pages long, with over 350 references and a bunch of footnotes.  The committee’s mental model of SC focuses on organizational processes that influence SC. (p. 7-1)  I think it’s fair to infer that their notion of improvement is to revise the rules that govern the processes, then maximize compliant behavior.  Because of the committee’s limited mental model, restricted mission*** and the real or perceived need to document every factoid, the report ultimately provides no new insights into how U.S. nuclear plants might actually realize stronger SC.


*  National Research Council Committee on Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants, “Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants” Prepublication Copy.  Downloaded July 26, 2014.  The National Research Council is part of the National Academy of Sciences (NAS).  Thanks to Bill Mullins for bringing this report to our attention.

**  The technical advisor to the committee was Najmedin Meshkati from the University of Southern California.  If that name rings a bell with Safetymatters readers, it may be because he and his student, Airi Ryu, published an op-ed last March contrasting the culture of Tohoku Electric with the culture of TEPCO.  We posted our review of the op-ed here.

***  The committee was tasked to consider causes of the Fukushima accident, conclusions from previous NAS studies and lessons that can be learned to improve nuclear plant safety in certain specified areas.  The committee was directed to not make any policy recommendations that involved non-technical value judgments. (p. S-10)

Friday, July 9, 2010

Nuclear Management Compensation (Part 2)


In Part 1 we provided the results of our survey of nuclear executive compensation levels, structure and performance metrics.  In this post we try to ascertain what impact, if any, these factors could have on nuclear safety culture and safety management.

To us there were several fairly striking observations from the data.  One, the amounts of compensation are very significant for the top nuclear executives.  Two, the compensation is heavily dependent on each year’s performance.  And three, business performance measured by earnings per share (EPS) is the key to compensation; safety performance is a minor contributor.  A corollary to the third point might be that in no case we could identify was safety performance a condition precedent or qualification for earning the business-based incentives.

Some specifics.  Generally the compensation programs state that they are designed, in part, to reward safety and other stewardship goals.  Sometimes the term “operational excellence” is used and can include “safety,” but it is difficult to know what this encompasses because SEC filings include various levels of specificity.  In other cases the term “culture” is used, but in general the context is not nuclear safety culture.*

Some incentive plans state they are designed to avoid “promoting excessive risk” or unnecessary risks.  These plans do not discuss any specific penalties or reductions in incentive linked to risk, so it is difficult to judge how they are intended to accomplish that goal.  It may be that using company stock for significant parts of the incentive payout is intended to ensure that executives have an interest in protecting their ownership stake.

By far the most common safety-related metric in the incentive programs is industrial safety accident rate and/or fatalities.  This is understandable: across the enterprise of a power generation company, industrial safety is a common goal and can be measured on a common basis.  Generally, fatal accidents result in a safety penalty in the incentive program, and the penalties could reduce the total incentive by about 5-10%.  The largest safety impact on incentive that I came across was a 10% factor for OSHA safety and 10% for nuclear safety culture.

One caveat is that the incentives and executives described in SEC filings necessarily represent the very top level of the enterprise.  The specifics of incentive programs for lower tier executives, including those within the nuclear operating organization, are not available.  However, it seems a safe assumption that the design of the incentive program is carried down the organization to at least the VP level, including significant performance incentives, but with metrics weighted more to business unit performance. 

What is the significance of these compensation programs to nuclear safety management, if any?  First, with regard to total compensation, the high levels recently achieved could be viewed as a positive in assuring that the highest quality management is attracted and retained in nuclear operations.  The responsibilities of nuclear executives are unusually large, complex and unremitting.  While the totals are eye-popping, viewed in the context of the compensation for other executives in the enterprise, there is only parity. 

With regard to the structure of compensation - the amounts that are based on performance and the metrics used to define goals - there may be cause for greater concern.  As shown in Part 1, with 60-70% of total compensation at risk, executives can see their compensation, and that of the entire management team, impacted by as much as several million dollars in a year.  And almost all of that incentive is based on business goals such as EPS, cash flow, budgets, and plant performance.

As we have commented many times in this blog, we view achievement of nuclear safety culture as a process of successfully balancing safety goals with other competing business priorities.  That context is unavoidable.  Simplistic statements that “safety is our highest priority” do not reflect the reality of day-to-day decision making, nor do they insulate decision makers.  The balance is complicated and difficult since decisions are rarely black and white.  And as we have seen time and again, bad outcomes can evolve from a series of decisions and judgments that tilt the wrong way, apparent only in retrospect.  When executive incentive programs weight business goals 90% and some aspect of safety 10%, what is the message?  How does it not amplify the pressure on the executives and possibly skew the balancing act?  How does it “prevent promoting excessive risk,” as some compensation plans state? 
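The skew implied by that 90/10 weighting is easy to make concrete with a toy calculation.  To be clear, every number, name and scoring scheme below is our own illustrative assumption, not drawn from any actual compensation plan:

```python
# Toy comparison (hypothetical numbers): an executive with $1M of incentive
# at risk, weighted 90% business goals / 10% safety, under two scenarios.
AT_RISK = 1_000_000
BUSINESS_WEIGHT, SAFETY_WEIGHT = 0.90, 0.10

def payout(business_score, safety_score):
    """Incentive earned; a score of 1.0 means target met, 0.0 a complete miss."""
    return AT_RISK * (BUSINESS_WEIGHT * business_score + SAFETY_WEIGHT * safety_score)

# Strong earnings, total miss on safety...
print(payout(1.0, 0.0))  # 900000.0
# ...versus weak earnings, perfect safety performance.
print(payout(0.0, 1.0))  # 100000.0
```

Under this structure, an executive who hits every business target and completely misses on safety still takes home nine times the incentive of one who does the reverse.  That is the message.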

In Part 3 we will discuss whether executive compensation should be an element of nuclear safety culture and what might be done to ensure that it reinforces the commitment to safety.

*  In one case “People, Culture, Reputation” as a goal is expanded to mean: employee performance appraisals, executive-level employee diversity, ethics and compliance training, corporate values, and regulatory compliance.  It counts for 10% of the total incentive.

Tuesday, July 6, 2010

Nuclear Management Compensation (Part 1)

We have posted previously on the general topic of competing priorities (cost, schedule, political, personal career interest, etc.) being a potential issue for nuclear managers to balance in making safety decisions.  We have also recently posted regarding the possible influence that cost and schedule pressures may have had on BP personnel in the events leading up to the spill and in several prior BP accidents.  Lastly we highlighted some research results that suggest that compensation incentives may have perverse impacts on desired results.  In this post we turn directly to the issue of compensation for executives with corporate responsibilities for nuclear plants and their operations.  In subsequent posts we will discuss the potential implications of compensation on nuclear safety management.

We researched compensation data available in the SEC filings of public corporations owning nuclear facilities.  As a practical matter this is the only publicly available information on this subject.  In their annual proxy statements, corporations disclose data for the five most highly compensated executives - the CEO and four other “NEOs” (Named Executive Officers).  In some cases the executive who is the Chief Nuclear Officer (CNO) is an NEO.  In other cases a higher-level executive, typically the head of the generation business unit or another operating officer, is the NEO with direct, attributable responsibility for the nuclear facilities.  To obtain an overview of compensation practices we looked at CNOs when available and, if not, the responsible NEOs.  We compiled data for compensation levels, the structure of compensation, and the performance metrics being applied by the corporation. 

Total Compensation

For NEOs with direct nuclear responsibilities, total average annual compensation is now $2.5-3 million (the range for our sample was approximately $1.25 to $5 million).  We counted salary, annual bonus, non-equity incentive awards and equity incentive awards.  We did not include changes in retirement benefit accruals (which can be quite large as well) or other miscellaneous fringe benefits. 

In virtually all cases the level of compensation has shown a significant increase in the last 3-4 years.  This may reflect increased competition for nuclear executive talent rippling through the industry and/or the increased business value associated with nuclear generation assets.  The figure below illustrates how compensation has changed over recent years in two nuclear organizations, one operating a single unit and the other a small fleet.



Compensation Structure

We also examined the structure of executive compensation with an emphasis on the amounts that are fixed on an annual basis (typically salary and benefits) and the amounts that are “at risk,” meaning they are performance-based.  We found that compensation is heavily incentive-based: the portion of total compensation earned based on performance now averages greater than 70% (the range for our sample was 60% - 80%).  As with total compensation, the percent of compensation at risk has increased significantly in the last 3-4 years.  The next figure shows the trend lines for the same two corporations as in the first figure.


These results are not an accident: the SEC filings for these corporations set out compensation policies that are heavily weighted toward performance.  For the most part incentives are based on current-year performance, though for some specific incentives, such as stock options, rolling three-year averages may be used.  The principal objective appears to be to align executive pay with shareholder interests.

Performance Metrics

Typically the compensation structure uses a combination of corporate level performance and business unit performance to determine incentive payouts.  At the highest level of the organization, the CEO, metrics may be exclusively corporate level and almost always focused on earnings per share (EPS).  Lower tier executives generally have a significant portion (60-70%) of their incentive determined by corporate performance (almost always EPS) with the remainder determined by business unit or personal goals.  The metrics for nuclear business units are typically capacity factor and budget based, occasionally using an index of 8-10 metrics.  This performance will generally be associated with about 20% of the total incentive.  The remaining 10% of incentive is usually associated with some type of safety performance.  OSHA industrial safety measures are most common though we identified one corporation that used a “nuclear safety culture” metric.
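The metric structure just described amounts to a simple weighted sum.  This sketch uses the approximate weights from the filings we reviewed; the dollar figure and names are our own hypothetical choices, not taken from any specific company:

```python
# Approximate incentive decomposition for a lower-tier executive, per the
# weights described above (~70% corporate EPS, ~20% business unit, ~10% safety).
# The target incentive amount is a hypothetical round number.
TARGET_INCENTIVE = 1_500_000
WEIGHTS = {"corporate_eps": 0.70, "business_unit": 0.20, "safety": 0.10}

def decompose(target, weights):
    """Dollar value riding on each metric category at target performance."""
    return {metric: target * w for metric, w in weights.items()}

for metric, dollars in decompose(TARGET_INCENTIVE, WEIGHTS).items():
    print(f"{metric}: ${dollars:,.0f}")
# corporate_eps: $1,050,000
# business_unit: $300,000
# safety: $150,000
```

Laid out this way, the safety category is a rounding error on the EPS category, which previews the concern we take up in Part 2.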

In Part 2 we will try to ascertain what impact these factors could have on nuclear safety culture and safety management.

Friday, June 11, 2010

Safety Culture Issue at Callaway. Or Not.

We just read a KBIA* report on the handling of an employee’s safety concern at the Callaway nuclear plant that piqued our interest. This particular case was first reported in 2009 but has not had widespread media attention so we are passing it on to you.

The history seems straightforward: an employee raised a safety concern after an operational incident, was rebuffed by management, drafted a discrimination complaint for the U.S. Dept. of Labor, received (and accepted) a $550K settlement offer from the plant owner, and went to work elsewhere. The owner claimed the settlement fell under the NRC’s Alternative Dispute Resolution (ADR) process, and the NRC agreed.

We have no special knowledge of, nor business interest in, this case. It may be a tempest in a tea pot but we think it raises some interesting questions from a safety culture perspective.

First, here is another instance where an employee feels he must go outside the organization to get attention for a safety concern. The issue didn’t seem to be that significant, at most an oversight by the operators or a deficient procedure. Why couldn’t the plant’s safety culture process his concern, determine an appropriate resolution, and move on?

Second, why was the company so ready to pony up $550K? That is a lot of dough and seems a bit strange. Even the employee noted that it was a generous offer. It makes one wonder what else was going on in the background. To encourage licensees to participate in ADR, the NRC closes investigations into alleged discrimination against employees when an ADR settlement is reached. Can safety essentially be for sale under ADR if an owner can settle with an employee?

Third, what happened to the original safety concern? According to another source,** the NRC found the operators’ actions to be “not prudent” but did not penalize any operators. Did the plant ever take any steps to address the issue to avoid repetition?


*  P. Sweet and R. Townsend, KBIA Investigative Report: Looking Into Callaway Nuclear Power Plant’s “Safety Culture” (May 24, 2010).  KBIA is an NPR member station owned by the University of Missouri School of Journalism.

**  FOCUS/Midwest website, "Did Ameren pay a whistleblower to shut up and go away?" (Jan. 4, 2009).

Tuesday, June 8, 2010

Toothpaste and Oil Slicks

At the end of last week came the surprise announcement from the former Dominion engineer, David Collins, that he was withdrawing his allegations regarding his former employer’s safety management and the NRC’s ability to provide effective oversight of safety culture.* The reasons for the withdrawal are still unclear though Collins cited lack of support by local politicians and environmental groups.

What is to be made of this? As we stated in a post at the time of the original allegations, we don’t have any specific insight into the bases for the allegations. We did indicate that how Dominion and the NRC would go about addressing the allegations might present some challenges.

What can be said about the allegations with more certainty is that they will not go away. Like the proverbial toothpaste, allegations can’t be put back into the tube and they will need to be addressed on their merits. We assume that Collins acted in good faith in raising the allegations. In addition, a strong safety culture at Dominion and the NRC should almost welcome the opportunity to evaluate and respond to such matters. A linchpin of any robust safety culture is the encouragement for stakeholders to raise safety concerns and for the organization to respond to them in an open and effective manner. If the allegations turn out to not have merit, it has still been an opportunity for the process to work.

In a somewhat similar vein, the fallout (I am mixing my metaphors) from the oil released into the gulf from the BP spill will remain and have to be dealt with long after the source is capped or shut off. It will serve as an ongoing reminder of the consequences of decisions where safety and business objectives try to occupy a very limited success space.

In recent days there have been extensive pieces* in the Wall Street Journal and New York Times delineating in considerable detail the events and decision making leading up to the blowout. These accounts are worthy of reading and digesting by anyone involved in high-risk industries.

Two things made a particular impression. One, it is clear that the environment leading up to the blowout included fairly significant schedule and cost pressures. What is not clear at this time is to what extent those business pressures contributed to the outcome. There are numerous cited instances where best practices were not followed and concerns or recommendations for prudent actions were brushed aside. One wishes the reporters had pursued this issue in more depth to find out “Why?” Two, the eventual catastrophic outcome was the result of a series of many seemingly less significant decisions and developments. In other words, it was a cumulative process that apparently never flashed an unmistakable warning alarm. In this respect it reminds us of the need for safety management to maintain a highly developed “systems” understanding with the ability to connect the dots of risk.

* Links below



Thursday, June 3, 2010

25 Standard Deviation Moves

A Reuters Breakingviews commentary in today’s New York Times makes some interesting arguments about the consequences of the BP oil spill on the energy industry. The commentary draws parallels between BP and the financial implosion that led to Lehman Brothers bankruptcy. ". . . flawed risk management, systemic hazard, and regulatory incompetence" are cited as the common causes, and business models that did not take account of the possibility for "25 standard deviation moves". These factors will inevitably lead to government intervention and industry consolidation as the estimated $27 billion in claims (a current estimate for the BP spill) is ". . . a liability no investor will be comfortable taking, . . ."
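To see why “25 standard deviation moves” became shorthand for model failure, consider the tail probability a normal distribution actually assigns to such a move. This sketch is our own illustration and assumes normality, which is precisely the assumption the commentary is criticizing:

```python
import math

# Two-sided probability of a move at least k standard deviations from the
# mean under a normal (Gaussian) model -- the model whose failure the
# quoted commentary is describing.
def tail_probability(k):
    return math.erfc(k / math.sqrt(2))

print(tail_probability(3))   # ~0.0027: a 3-sigma move happens routinely
print(tail_probability(25))  # ~6e-138: effectively "never" under the model
```

If a 2 or 3 sigma event is a bad but plausible year, a genuine 25 sigma event should not occur in the lifetime of the universe. Observing one means the distribution’s tails were far fatter than the model assumed, which is the Lehman parallel in a nutshell.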

While much of this commentary makes sense, we think it is missing a big part of the picture by not focusing on the essential need for much more rigorous safety management. By all reports, the safety performance of BP is a significant outlier in the oil industry; maybe not 25 sigma but 2 or 3 sigma at least. We have posted previously about BP and its safety deficiencies and its apparent inability to learn from past mistakes. There has also been ample analysis of the events leading up to the spill to suggest that a greater commitment to safety could, and likely would, have avoided the blowout. Safety commitment and safety culture provide context, direction and constraints for risk calculations. The potential consequences of a deep sea accident will remain very large, but the probability of the event can and should be brought much lower. Simply configuring energy companies with vastly deep pockets seems unlikely to be a sufficient remedy. For one, money damages are at best an imperfect response to such a disaster. More important, a repeat of this type of event would likely result in a ban on deep sea drilling regardless of the financial resources of the driller.

In the nuclear industry the potentially large consequences of an incident have, so far, been assumed by the government. In this respect there is something of a parallel to the financial crisis, where the government stepped in to bail out the "too large to fail" entities. Aside from the obvious lessons of the BP spill, nuclear industry participants have to ensure that their safety commitment is both reality and public perception, or there may be some collateral damage as policy makers consider how the liabilities of high-risk industries, including nuclear, are being apportioned.

Tuesday, June 1, 2010

Underestimating Risk and Cost

Good article in today's New York Times Magazine Preview about economic decision making in general and the oil industry in particular. In summary, when an event is difficult to imagine (e.g., the current BP disaster), people tend to underestimate the probability of it occurring; when it's easier to imagine (e.g., a domestic terrorist attack after 9/11), people tend to overestimate the probability. Now add government caps on liability and decision-making can get really skewed, with unreasonable estimates of both event-related probabilities and costs.

The relevance of this decision-making model to the nuclear industry is obvious but we want to focus on something the article didn't mention: the role of safety culture. Nuclear safety culture guides planning for and reacting to unexpected, negative events. On the planning side, culture can encourage making dispassionate, fact-based decisions regarding unfavorable event probabilities and potential consequences. However, if such an event occurs, then affected personnel will respond consistent with their training and cultural expectations.

Wednesday, May 26, 2010

Oil and Nuclear – Again

We saw an essay on Clean Skies that suggests the oil industry could learn from the nuclear industry, in particular, that the oil industry should adopt international standards of practice similar to the nuclear industry’s response to Chernobyl. We totally agree. The one point we would add is that the single most important thing the oil industry could learn from nuclear is the significance of establishing and maintaining an adequate safety culture. Safety culture is the sine qua non of safe, profitable operations in any high-hazard business.

And safety culture has to be real. Beleaguered executives proclaiming that safety is number one are simply not convincing when equipment and methods haven’t been tested (or don’t work), and profit has obviously been driving decision-making.

We want to remind the nuclear industry that they are one significant incident away from an even worse political and public relations disaster. As we have said before, the oil industry is not as tightly interwoven in the public mind as nuclear; to date, BP is being vilified in Washington and in the press, but the damage to other companies has been incidental, a temporary stop in issuing drilling permits. The nuclear industry would not get off so relatively easy if an incident of similar magnitude were to occur in their bailiwick.


Friday, May 7, 2010

Why Nuclear is Better than Oil

Today's New York Times has an article whose title says it all, "Regulator Deferred to Oil Industry on Offshore Rig Safety." It turns out the Minerals Management Service, an Interior Department agency, is charged both with regulating the oil industry and with collecting royalties from it. The article goes on to say that the trend in other parts of the world is to separate these functions.

We wring our hands on this blog about creeping complacency, normalization of deviance, and failure to sufficiently acknowledge conflicting goals in nuclear plant operations. But we don't have to bang the drum about getting a dedicated safety regulator; the NRC was created in 1974 specifically to separate the government's promotional and regulatory roles in the nuclear industry.

Saturday, May 1, 2010

Why is Nuclear Different?

We saw a very interesting observation in a recent World Nuclear News item describing updates to World Association of Nuclear Operators’ structure. The WANO managing director said “Any CEO must ensure their own facilities are safe but also ensure every other facility is safe. [emphasis added] It's part of their commitment to investors to do everything they can to ensure absolute safety and the one CEO that doesn't believe in this concept will risk the investment of every other.” As WNN succinctly put it, “These company heads are hostages of one another when it comes to nuclear safety.”

I think it's true that nuclear operators are joined at the wallet, but why? In most industries, a problem at one competitor creates opportunities for others. Why is the nuclear industry so tightly coupled and at constant risk of contagion? Is it the mystery and associated fears, suspicion and, in some cases, local visibility that attend nuclear?


Coal mining and oil exploration exist in sharp contrast to nuclear. "Everyone knows" coal mining is dirty and dangerous but bad things only happen, with no wide-ranging effects, to unfortunate folks in remote locations. Oil exploration is somewhat more visible: people will be upset for a while over the recent blow-out in the Gulf of Mexico, offshore drilling will be put on a temporary hold, but things will eventually settle down. In the meantime, critics will use BP as a punching bag (again) but there will be no negative spillover to, say, Chevron.

Tuesday, April 6, 2010

Safety Culture as Competitive Capability

A recent McKinsey survey describes companies' desire to use training to build competitive capabilities; however, most training actually goes toward lower-priority areas more aligned with the organizations' culture. For example, a company that should be focusing its training on developing project management instead focuses on pricing because price leadership is viewed as an important component of company culture.

This caused us to wonder: How many nuclear managers believe their plant's safety culture is a competitive capability and where is safety culture on the training priority list? We believe that safety culture is actually a competitive asset of nuclear organizations in that safety performance is inextricably linked to overall performance. But how many resources are allocated to safety culture training? How is training effectiveness measured? We fear the traditional tools for such training may not be very effective in actually moving the culture dial, and thus may not yield measurable competitive benefit.


Hope exists. One unsurprising survey conclusion is "When senior leaders set the agenda for building capabilities, those agendas are more often aligned with the capability most important to performance." (p. 7) The challenge is to get senior nuclear managers to recognize and act on the importance of safety culture training.

Friday, April 2, 2010

NRC Briefing on Safety Culture - March 30, 2010

It would be difficult to come up with an attention-grabbing headline for the March 30 Commission briefing on safety culture. Not much happened. There were a lot of high fives for the perceived success of the staff’s February workshop and its main product, a strawman definition of nuclear safety culture. The only provocative remarks came from a couple of outside-the-mainstream “stakeholders”, the union rep for the NRC employees (and this was really limited to perceptions of internal NRC safety culture) and longtime nuclear gadfly, Billie Garde (commended by Commissioner Svinicki for her consistency of position on safety culture spanning the last 20 years). Otherwise the discussions were heavily process oriented with very light questioning by the two currently seated Commissioners.

The main thrust of the briefing was on the definition of safety culture that was produced in the workshop. That strawman differs from the one proposed by the NRC staff and, for that matter, from those used by other nuclear organizations such as INPO and INSAG. The workshop process sounded much more open and collegial than recent legislative processes on Capitol Hill.

Perhaps the one quote of the session that yields some insight as to where the Commission may be headed was from Chairman Jaczko; his comments can be viewed in the video below. Later in the briefing the staff demurred on endorsing the workshop product (versus the original staff proposal) pending additional input from internal and external sources.


Wednesday, March 31, 2010

Can Blogging Be Good for Safety Culture?

I came across a very interesting idea in some of my recent web browsing - an idea that I like for several reasons. First, it centers on an approach of using a blog, or blogging, to enhance safety culture in large, high-risk organizations. Hard for someone writing a safety culture blog not to find the idea intriguing. Second, the idea emanated from an engineer at NASA, Dale Huls, at the Johnson Space Center in Houston. NASA has been directly and significantly challenged in safety issues multiple times, including the Challenger and Columbia shuttle accidents. Third, the idea was presented in a paper written five years ago when blogging was still a shadow of what it has become - now of course it occupies its own world, the blogosphere.

The thesis of Dale’s paper is “...to explore an innovative approach to culture change at NASA that goes beyond reorganizations, management training, and a renewed emphasis on safety.” (p.1) Whatever you may conclude about blogging as an approach, I do think it is time to look beyond the standard recipe of “fixes” that Dale enumerates and which the nuclear industry also follows almost as black letter law.

One of the benefits that Dale sees is that “Blogs could be a key component to overcoming NASA’s ‘silent safety culture.’ As a communications tool, blogs are used to establish trust...(p.1)....and to create and promote a workplace climate in which dissent can be constructively addressed and resolved.” (p.2) It seems to me that almost any mechanism that promotes safety dialogue could be beneficial. Blogs encourage participation and communication. Even if many visitors to the blog only read posts and do not post themselves, they are part of the discussion. To the extent that managers and even senior executives participate, it can provide a direct, unfiltered path to connect with people in the organization. All of this promotes trust, understanding, and openness. While these are things that management can preach, bringing about the reality can be more difficult.

“Note that blogs are not expected to replace formal lines of communication, but rather enhance those communication lines with an informal process that encourages participation without peer pressure or fear of retribution.” (p.2) In any event, much of a useful safety dialogue is broader than raising a particular safety issue or concern.

So you might be wondering, did NASA implement blogging to facilitate its safety culture change? Dale wrote me in an email, “While NASA did not formally take up a specific use of blogging for safety matters, it seems that NASA is beginning to embrace the blogging culture. Several prominent NASA members utilize blogging to address NASA culture, e.g., Wayne Hale, manager of the Space Shuttle Program.”

What should the nuclear industry take away from this? It might start with a question or two. Are there informal communication media such as blogs active at individual nuclear plants and how are they viewed by employees? Are they supported in any way by management, or the organization, or the industry? Are there any nuclear industry blogs that fulfill a comparable role? There is the Nuclear Safety Culture Group on LinkedIn that has seen sporadic discussion and commenting on a few issues. It currently has 257 members. This would be a good topic for some input from those who know of other forums.

Monday, March 29, 2010

Well Done by NRC Staffer

To support the discussion items on this blog we spend time ferreting out interesting pieces of information that bear on the issue of nuclear safety culture and promote further thought within the nuclear community. This week brought us to the NRC website and its Key Topics area.

As probably most of you are aware, the NRC hosted a workshop in February of this year for further discussions of safety culture definitions. In general we believe that the amount of time and attention being given to definitional issues currently seems to be at the point of diminishing returns. When we examine safety culture performance issues that arise around the industry, it is not apparent that confusion over the definition of safety culture is a serious causal issue, i.e., that someone was thinking of the INPO definition of safety culture instead of the INSAG one or the Schein perspective. Perhaps it is a necessary step in the process, but to us what is interesting, and of paramount importance, is what causes disconnects between safety beliefs and actions taken, and what can be done about them.


Thus, it was heartening and refreshing to see a presentation that addressed the key issue of culture and actions head-on. Most definitions of safety culture are heavy on descriptions of commitment, values, beliefs and attributes and light on the actual behaviors and decisions people make everyday. However, the definition that caught our attention was:


“The values, attitudes, motivations and knowledge that affect the extent to which safety is emphasized over competing goals in decisions and behavior.”

(Dr. Valerie Barnes, USNRC, “What is Safety Culture”, Powerpoint presentation, NRC workshop on safety culture, February 2010, p. 13)

This definition acknowledges the existence of competing goals and the need to address the bottom line manifestation of culture: decisions and actual behavior. We would prefer “actions” to “behavior” as it appears that behavior is often used or meant in the context of process or state of mind. Actions, as with decisions, signify to us the conscious and intentional acts of individuals. The definition also focuses on results in another way - “the extent to which safety is emphasized . . . in decisions. . . .” [emphasis added] What counts is not just the act of emphasizing, i.e. stressing or highlighting, safety but the extent to which safety impacts decisions made, or actions taken.


For similar reasons we think Dr. Barnes' definition is superior to the definition that was the outcome of the workshop:


“Nuclear safety culture is the core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize safety over competing goals to ensure protection of people and the environment.”


(Workshop Summary, March 12, 2010, ADAMS ACCESSION NUMBER ML100700065, p.2)


As we previously argued in a 2008 white paper:


“. . . it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced. . . .


"First, safety management requires balancing safety and other legitimate business goals, in an environment where there are few bright lines defining what is adequately safe, and where there are significant incentives and penalties associated with both types of goals. As a practical matter, ‘Safety culture is fragile.....a balance of people, problems and pressures.’


"Second, safety culture in practice is “situational”, and is continually being re-interpreted based on people’s actual behaviors and decisions in the safety management process. Safety culture beliefs can be reinforced or challenged through the perception of each action (or inaction), yielding an impact on culture that can be immediate or incubate gradually over time.”


(Robert Cudlin, "Practicing Nuclear Safety Management," March 2008, p. 3)


We hope the Barnes definition gets further attention and helps inform this aspect of safety culture policy.

Friday, March 26, 2010

Because They Don’t Understand...?

This post's title is part of a quote from the book Switch: How to Change Things When Change is Hard that we introduced in our March 7, 2010 post. The full quote is:

“It can sometimes be challenging.....to distinguish why people don’t support your change. Is it because they don’t understand or because they’re not enthused?....The answer isn’t always obvious, even to experts.” [p. 107]

So it appears that when people don’t comply with prescribed standards or regimens, the problem may not be knowledge or understanding; it may be something tied to emotion. Bringing about change in something as deeply embedded as culture is not simply a matter of clicking on the new, desired program. The authors provide a number of interesting examples of situations resistant to change and how they have been overcome through using emotion to galvanize action. There are teenagers with cancer who play video games that help them visualize beating the cancer. There is the accounting manager who changes his priorities after visiting his not-for-profit vendor organizations and experiencing for himself their limited resources and the dire consequences of late reimbursements.


The most common situation for generating emotion sufficient to support change is a crisis, often an organizational crisis that is existential. But crisis is associated with “negative emotions” that may yield specific but not necessarily long lasting actions. Positive emotions on the other hand can lead to being more open to new thoughts and values and a mindset that wants to adopt what is essentially a new identity. One of the more effective ways to generate the needed positive emotion is through experiencing (e.g., using a video game, or immersion in the environment of a stakeholder) the conditions associated with the needed changes.


In nuclear safety management, how often, after events deemed indicative of safety culture weakness, are personnel provided with additional training on expectations and elements of safety culture? Does this appear to be a knowledge-based approach? If so, is the problem that staff don’t understand what is expected? Or is positive emotion the missing ingredient - the addition of which might help personnel want to identify with and inhabit the cultural values?

Wednesday, March 24, 2010

Vermont Yankee (part 3)

There was an interesting article in the March 22, 2010 Hartford Courant regarding Paul Blanch, the former Northeast Utilities engineer who was in the middle of safety issues at Millstone in the 1990s. Specifically he was in the news due to his recent testimony against the extension of the operating license for Vermont Yankee. But what caught my eye was some of his broader observations regarding safety and the nuclear industry. Regarding the industry, Blanch states, "Safety is not their No. 1 concern. Making money is their No. 1 concern." He goes on to say he has no faith in the NRC, or utilities’ commitment to safety.

Bringing attention to these comments is important not because one may agree or disagree with them. They are significant because they represent a perception of the industry, and the NRC for that matter, that can and does get attention. One problem is that everyone says safety is their highest priority but then certain events suggest otherwise - as an example, let’s look at another company and industry recently in the news:


From the BP website:


Safe and reliable operations are BP’s number one priority....


This is from a company that was recently fined over $3 million by OSHA for safety violations at its Ohio refinery (see our March 12, 2010 post) and had previously been fined almost $90 million for the explosion at its Texas refinery.


Supporting this commitment is the following description of safety management at BP:

“...members of the executive team undertook site visits, in which safety was a focus, to reinforce the importance of their commitment to safe and reliable operations. The executives also regularly included safety and operations issues in video broadcasts and communications to employees, townhall meetings and messages to senior leaders.“

It is hardly unreasonable that someone could have a perception that BP’s highest priority was not safety. Unfortunately almost those identical words can also be found in the statements and pronouncements of many nuclear utilities. (By the way the narrow emphasis by BP management on “reinforcement” might be considered in the context of our post dated March 22, 2010 on Safety Culture Dynamics.)


As Dr. Reason has noted so simply, no organization is just in the business of being safe. What might be much more beneficial is some better acknowledgment of the tension between safety and production (and cost and schedule) and how nuclear organizations are able to address it. This awareness is a more credible posture for public perception, for regulators and for the organization itself. It would also highlight the insight that many have in the nuclear industry - that safety and reliable production are actually tightly coupled - that over the long term they must coexist. The irony may be that I recall 20 years ago Entergy was the leader in publicizing (and achieving) their goals to be upper quartile in safety, production and cost.

Monday, March 22, 2010

Safety Culture Dynamics (part 1)

Over the last several years there have been a number of nuclear organizations that have encountered safety culture and climate issues at their plants. Often new leadership is brought to the plant in hopes of stimulating the needed changes in culture. Almost always there is increased training and reiteration of safety values and a safety culture survey to gain a sense of the organizational temperature. It is a little difficult to gauge precisely how effective these measures are - surveys are snapshots in time and direct indicators of safety culture are lacking. In some cases, safety culture appears to respond in the short term to these changes but then loses momentum and backslides further out in time.

How does one explain these types of evolutions in culture? Conventional wisdom has been that culture is leadership driven and when safety culture is deficient, new management can “turn around” the situation. We have argued that the dynamics of safety culture are more complex and are subject to a confluence of factors that compete for the priorities and decisions of the organization. We use simulation models of safety culture to suggest how these various factors can interact and respond to various initiatives. We made an attempt at a simple illustration of the situation at a plant that responds as described above. CLICK ON THIS LINK to see the simulated safety culture dynamic response.

The simulation shows changes in some key variables over time. In this case the time period is 5 years. For approximately the first year the simulation illustrates the status quo prior to the change in leadership. Safety culture was in gradual decline despite nominal attention to actions to reinforce a safety mindset in the organization.

At approximately the one year mark, leadership is changed and actions are taken to significantly increase the safety priority of the organization. This is reflected in a spike in reinforcement that typically includes training, communications and strong management emphasis on the elements of safety culture. Note that following a lag, safety culture starts to improve in response to these changes. As time progresses, the reinforcement curve peaks and starts to decay due to something we refer to as “saturation”. Essentially the new leadership’s message is starting to have less and less impact even though it is being constantly reiterated. For a time safety culture continues to improve but then turns around due to the decreasing effectiveness of reinforcement. Eventually safety culture regresses to a level where many of the same problems start to recur.

Is this a diagnosis of what is happening at any particular site? No, it is merely suggestive of some of the dynamics that are at work in safety culture. In this particular simulation, other actions that may be needed to build a strong, enduring safety culture were not implemented in order to isolate the failure of one-dimensional actions to provide long term solutions. One of the indicators of this narrow approach can be seen in the line on the simulation representing the trust level within the organization. It hardly changes or responds to the other dynamics. Why? In our view trust tends to be driven by the overall, big picture of forces at work and the extent to which they consistently demonstrate safety priority. Reinforcement (in our model) reflects primarily a training and messaging action by management. Other more potent forces include whether management “walks the talk”, whether resources are allocated consistent with safety priorities, whether short term needs are allowed to dominate longer term priorities, whether problems are identified and corrected in a manner to prevent recurrence, etc. In this particular simulation example, these other signals are not entirely consistent with the reinforcement messages, with a net result that trust hardly changes.
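For readers who want to experiment without the full nuclearsafetysim.com model, the decline-reinforcement-saturation story can be sketched in a few lines of Python. To be clear, this is not the actual simulation: the state variables and every coefficient below are invented solely to reproduce the qualitative shape described above - gradual decline under the status quo, improvement after the leadership change, then a peak and regression as saturation erodes the effectiveness of reinforcement.

```python
# Toy stock-and-flow sketch (NOT the nuclearsafetysim.com model).
# All coefficients are hypothetical, chosen only to reproduce the
# qualitative dynamics discussed in the post.

def simulate(months=60, leadership_change=12):
    culture = 0.6          # safety culture level, arbitrary 0..1 scale
    effectiveness = 1.0    # how much each unit of reinforcement still moves culture
    history = []
    for t in range(months):
        # New leadership pushes reinforcement (training, messaging) much harder.
        reinforcement = 0.2 if t < leadership_change else 0.8
        # Saturation: constant repetition of the same message loses impact.
        effectiveness = max(0.1, effectiveness - 0.015 * reinforcement)
        decay = 0.02 * culture                       # culture erodes without upkeep
        gain = 0.03 * reinforcement * effectiveness  # diminishing returns over time
        culture = min(1.0, max(0.0, culture + gain - decay))
        history.append(culture)
    return history

h = simulate()
assert h[11] < h[0]    # status quo: gradual decline before the leadership change
assert h[24] > h[12]   # culture improves after new leadership arrives
assert max(h) > h[-1]  # ...then peaks and regresses as reinforcement saturates
```

The design choice worth noting is that reinforcement alone cannot hold culture up: once its effectiveness saturates, the culture "stock" drifts back toward the level its other inputs can sustain, which is the backsliding pattern described above.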

More information regarding safety culture simulation is available at the nuclearsafetysim.com website. Under the Models tab, Model 3 provides a short tutorial on the concept of saturation and its effect on safety culture reinforcement.

Friday, March 19, 2010

Dr. Bill Corcoran

From time to time we will mention other safety culture professionals whose work you may find interesting. William R. Corcoran, Ph.D., P.E. has long been active in the safety field; we even shared a common employer many years ago. He publishes "The Firebird Forum," a newsletter focusing on root cause analysis. For more information on Bill and his newsletter, please visit his profile here.

Thursday, March 18, 2010

Honest Errors vs Unacceptable Errors

This podcast is excerpted from an interview of Dr. James Reason in association with VoiceMap (www.voicemap.net), a provider of live guidance applications to improve human performance. In this second segment, Dr. Reason discusses how to build a better safety culture based on a “just” culture. He also cites the need to distinguish between honest errors (he estimates 90% of errors fall in this category) and unacceptable errors.

With regard to the importance of a “just culture” you may want to refer back to our post of August 3, 2009 where we highlight a book of that title by Sidney Dekker. In that post we emphasize the need to balance accountability and learning in the investigation of the causes of errors. Both advocates of a just culture, Reason and Dekker, are from European countries and their work may not be as well known in the U.S. nuclear industry but it appears to contain valuable lessons for us.