
Thursday, September 4, 2014

DNFSB Hearings on Safety Culture, Round Two

DNFSB Headquarters
On August 27, 2014 the Defense Nuclear Facilities Safety Board (DNFSB) convened the second of three hearings “to address safety culture at Department of Energy defense nuclear facilities and the Board’s Recommendation 2011–1, Safety Culture at the Waste Treatment and Immobilization Plant.”*  The first hearing was held on May 28, 2014 and heard from industry and federal government safety culture (SC) experts; we reviewed that hearing on June 9, 2014.  The second hearing received SC expert testimony from the U.S. Navy, the U.S. Chemical Safety and Hazard Investigation Board and academia.  The following discussion reviews the presentations in the order they were made to the board. 


Adm. Norton's (Naval Safety Center) presentation** on the Navy’s SC programs was certainly comprehensive with 32 slides for a half-hour talk (plus 22 backup slides).  It appears the major safety focus has been on aviation but the Center’s programs also address the afloat communities (surface, submarine and diving) and Marines.  The programs make heavy use of surveys and unit visits in addition to developing and presenting training and workshops.  Not surprisingly, the Navy stresses the importance of leadership, especially personal involvement and commitment, in creating a strong SC.  They recognize that implementing a strong SC faces a direct challenge from other organizational values such as the warfighter mentality*** and softer challenges in areas such as IT (where there are issues with multiple systems and data problems).

Program strengths include the focus on leadership (leadership drives climate, climate drives cultural change) and the importance of determining why mishaps occurred.  The positive influence of a strong SC on decision making is implied.

Program weaknesses can be inferred from what was not mentioned.  For example, there was no discussion of the importance of fixing problems or identifying hard-to-see technical problems.  More significantly, there was no mention of High Reliability Organization (HRO) attributes, a real head-scratcher given that some of the seminal work on HROs was conducted on aircraft carriers. 

Adm. Eccles' (Navy ret.) presentation**** basically reviews the Navy’s SUBSAFE program and its focus on compliance with program requirements from design through operations.  Eccles notes that ignorance, arrogance and complacency are challenges to maintaining an effective program.


Mr. Griffon's (Chemical Safety Board Member) presentation***** illustrates the CSB’s straightforward approach to investigating incidents, as reflected in the following quotes:

“Intent of CSB investigations are to get to root cause(s) and make recommendations toward prevention.” (p. 3)

While searching for root causes the CSB asks: “Why conditions or decisions leading to accident were seen as normal, rational, or acceptable prior to the accident.” (p. 4)


CSB review of incident-related artifacts includes two of our hot button issues, Process Safety Management action item closure (akin to a CAP) and the repair backlog. (p. 5)  Griffon reviews major incidents, e.g., Texas City and Deepwater Horizon.  For Deepwater, he notes how certain decisions were (deliberately) incompletely informed, i.e., did not utilize readily available relevant information, and thus are indicative of an inadequate SC. (p. 16)  Toward the end Griffon observes that “Safety culture study/change must consider inequalities of power and authority.” (p. 19)  That seems obvious but it doesn’t often get said so clearly.

We like the CSB’s approach.  There is no new information here but it’s a quick read of what basic SC should and shouldn’t be.


Prof. Meshkati's (Univ. of S. Cal.) presentation^ compares the cultures at TEPCO’s Fukushima Daiichi plant and Tohoku Electric’s Onagawa plant.  It is mainly a rehash of the op-ed Meshkati co-authored back in March 2014 (and we reviewed on March 19, 2014.)  The presentation adds something we pointed out as an omission in that op-ed, viz., that TEPCO’s Fukushima Daini plant eventually managed to shut down safely after the earthquake and tsunami.  Meshkati notes approvingly that Daini personnel exhibited impromptu, but prudent, decision-making and improvisation, e.g., by flexibly applying emergency operation procedures. (p. 37)

Prof. Sutcliffe (Johns Hopkins Univ.) co-authored an important book on High Reliability Organizations (which we reviewed on May 3, 2013) and this academically-oriented presentation^^ draws on her earlier work.  It begins with a familiar description of culture and how its evolution can be influenced.  Importantly, it shows rewards (including money) as a key input affecting the link between leaders’ philosophy and employees’ behavior. (p. 6)

Sutcliffe discusses how failure to redirect action (in a situation where a change is needed) can result from failure of foresight or sensemaking, or being overcome by dysfunctional momentum.  She includes a lengthy example featuring wildland firefighters that illustrates the linkages between cues, voiced concerns, search for disparate perspectives, situational reevaluation and redirected actions.  It’s worth a few minutes of your time to flip through these slides.

Our Perspective

For starters, the Naval Safety Center's activities may be too bureaucratic, with too many initiatives and programs, and focused mainly on compliance with procedures, rules, designs, etc.  It’s not clear what SC lessons can be learned from the Navy experience beyond the vital role of leadership in creating a cultural vision and attempting to influence behavior toward that vision.

The other presenters added nothing that was not already available to you, either through Safetymatters or from observing SC tidbits in the information soup that flows by everyone these days.

Subsequent to the first hearing we reported that Safety Conscious Work Environment (SCWE) issues exist at multiple DOE sites (see our July 8, 2014 post).  This should increase the sense of urgency associated with strengthening SC throughout DOE.  However, our bottom line remains the same as after the first hearing: “The DNFSB is still trying to figure out the correct balance between prescription and flexibility in its effort to bring DOE to heel on the SC issue.  SC is a vital part of the puzzle of how to increase DOE line management effectiveness in ensuring adequate safety performance at DOE facilities.” 


*  DNFSB Aug. 27, 2014 Public Hearing on Safety Culture and Board Recommendation 2011-1.  There is a video of the hearing available.

**  K.J. Norton (U.S. Navy), “The Naval Safety Center and Naval Safety Culture,” presentation to DNFSB (Aug. 27, 2014).

***  “‘Anything, anywhere, anytime…at any cost’—desirable warfighter mentality perceived to conflict with safety.” (p. 11)

****  T. J. Eccles (U.S. Navy ret.), “A Culture of Safety: Submarine Safety in the U. S. Navy,” presentation to DNFSB (Aug. 27, 2014).

*****  M.A. Griffon (Chem. Safety Bd.), “CSB Investigations and Safety Culture,” presentation to DNFSB (Aug. 27, 2014).

^  Najm Meshkati, “Leadership and Safety Culture: Personal Reflections on Lessons Learned,” presentation to DNFSB (Aug. 27, 2014).  Prof. Meshkati was also the technical advisor to the National Research Council’s safety culture lessons learned from Fukushima report which we reviewed on July 30, 2014.

^^  K.M. Sutcliffe, “Leadership and Safety Culture,” presentation to DNFSB (Aug. 27, 2014).

Tuesday, November 20, 2012

BP/Deepwater Horizon: Upping the Stakes

Anyone who thought safety culture and safety decision making were institutional artifacts, or mostly a matter of regulatory enforcement, might want to take a close look at what is happening on the BP/Deepwater Horizon front these days. Three BP employees have been criminally indicted - and two of those indictments bear directly on safety in operational decisions. The indictments of the well-site leaders, the most senior BP personnel on the platform, accuse them of causing the deaths of 11 crewmen aboard the Deepwater Horizon rig in April 2010 through gross negligence, primarily by misinterpreting a crucial pressure test that should have alerted them that the well was in trouble.*

The crux of the matter relates to the interpretation of a pressure test to determine whether the well had been properly sealed prior to being temporarily abandoned. Apparently BP’s own investigation found that the men had misinterpreted the test results.

The indictment states, “The Well Site Leaders were responsible for...ensuring that well drilling operations were performed safely in light of the intrinsic danger and complexity of deepwater drilling.” (Indictment p.3)

The following specific actions are cited as constituting gross negligence: “...failed to phone engineers onshore to advise them ...that the well was not secure; failed to adequately account for the abnormal readings during the testing; accepted a nonsensical explanation for the abnormal readings, again without calling engineers onshore to consult…” (Indictment p.7)

The willingness of federal prosecutors to advance these charges should (and perhaps is intended to) send a chill down every manager’s spine in high risk industries. While gross negligence is a relatively high standard, and may or may not be provable in the BP case, the actions cited in the indictment may not sound all that extraordinary - failure to consult with onshore engineers, failure to account for “abnormal” readings, accepting a “nonsensical” explanation. Whether this amounts to “reckless” or willful disregard for a known risk is a matter for the legal system. As an article in the Wall Street Journal notes, “There were no federal rules about how to conduct such a test at the time. That has since changed; federal regulators finalized new drilling rules last week that spell out test procedures.”**

The indictment asserts that the men violated the “standard of care” applicable to the deepwater oil exploration industry. One might ponder what federal prosecutors think the “standard of care” is for the nuclear power generation industry.
 

Clearly the well site leaders made a serious misjudgment - one that turned out to have catastrophic consequences. But then consider the statement by the Assistant Attorney General, that the accident was caused by “BP’s culture of privileging profit over prudence.” (WSJ article)   Are there really a few simple, direct causes of this accident or is this an example of a highly complex system failure? Where does culpability for culture lie?  Stay tuned.


* U.S. District Court Eastern District of Louisiana, “Superseding Indictment for Involuntary Manslaughter, Seaman's Manslaughter and Clean Water Act: United States of America v. Robert Kaluza and Donald Vidrine,” Criminal No. 12-265.


** T. Fowler and R. Gold, “Engineers Deny Charges in BP Spill,” Wall Street Journal online (Nov. 18, 2012).



Wednesday, May 2, 2012

Conduct of the Science Enterprise and Effective Nuclear Safety Culture – A Reflection (Part 1)

(Ed. note: We have asked Bill Mullins to develop occasional posts for Safetymatters.  His posts will focus on, but not be limited to, the Hanford Waste Treatment Plant aka the Vit Plant.)

In a recent post the question was posed: “Can reality in the nuclear operating environment be similar (to the challenges of production pressures on scientists), or is nuclear somehow unique and different?”
 
In a prior post a Chief Nuclear Officer is quoted: “. . . the one issue is our corrective action program culture, our -- and it’s a culture that evolved over time. We looked at it more of a work driver, more of a -- you know, it’s a way to manage the system rather than . . . finding and correcting our performance deficiency.”

Another recent post describes the inherently multi-factor and non-linear character of what we’ve come to refer to as “Nuclear Safety Culture.”  Bob Cudlin observed: “We think there are a number of potential causes that are important to ensuring strong safety culture but are not receiving the explicit attention they deserve.  Whatever the true causes we believe that there will be multiple causes acting in a systematic manner - i.e., causes that interact and feedback in complex combinations to either reinforce or erode the safety culture state.”

I’d like to suggest a framework in which these questions and observations can be brought into useful relationship for thinking about the future of the US National Nuclear Energy Enterprise (NNEE).

This week I read yet another report on the Black Swan at Fukushima – this one representing views of US nuclear industry heavyweights. It is just one of perhaps a dozen reviews, complete or on-going, that are adding to the stew pot of observations, findings, and recommendations about lessons to be learned from those “wreck the plant” events. I was wondering how all this “stuff” comes together in a manner that gives confidence that the net reliability of the US NNEE is increased rather than encumbered.

Were all these various “nuclear safety” reports scientific papers of the type referred to in the recent news story, then we would understand how they are “received” into the shared body of knowledge. Contributions would be examined, validations pursued, implications assessed, and yes, rewards or sanctions for work quality distributed. This system for the conduct of scientific research is very mature and has seemingly responded well to the extraordinary growth in volume and variety of research during the past half-century.

In the case of the Fukushima reports (and I’d suggest as validated by the corresponding pile of Deepwater Horizon reviews) there is no process akin to the publishing standards commonly employed in science or other academic research. In form, industrial catastrophes are typically investigated with some variation of causal analysis; also typically a distinguished panel of “experts” is assembled to conduct the review.

The credentials of those selected experts are relied upon to lend gravity to report results; this is generally in lieu of any peer or independent stakeholder review. An exception to this occurs when legislative hearings are convened to receive testimony from panel members and/or the responsible officials implicated in the events – but these second tier reviews are more often political theater than exercises in “seeking to understand.”

Since the TMI accident this trial by Blue Ribbon Panel methodology has proliferated; often firms such as BP commission such reviews (e.g. the Baker Panel on Texas City) to be done for official stakeholders that are below the level of regulatory or legislative responsibility. In the case of Deepwater Horizon and Fukushima it has been virtually open season for interested parties with any sort of credentialed authority (i.e. academic, professional society, watchdog group, etc.) to offer up a formal assessment of these major events.

And today of course we have the 24 hour news cycle with its voracious maw and indiscriminate headline writers; and let’s not forget the opinionated individuals like me – blogging furiously away with no authentic credentials but personal experience! How, I ask myself, does “sense-making” occur across the NNEE in this flurry of bits and bytes – unencumbered by the benefit of a reasoning tradition such as the world of scientific research? Not very well would be my conclusion.

There would appear to be an unexamined assumption that some mechanisms do exist to vet all the material generated in these investigation reports, but that seems to be susceptible to the kind of “forest lost for the trees” misperception cited in the Chief Nuclear Officer’s quote regarding corrective action systems becoming “the way we think about managing work.”
 
I can understand how, for a line manager at a single nuclear plant site operating in the main course of its life cycle, a scarce resource pot would lead to every improvement opportunity you’d like to address appearing as a “corrective action.” I would go a step further and say that given the domination of 10 CFR 50 Appendix B on the hierarchical norms for “quality” and “safety,” managing to a single “list” makes sense – if only to ensure that each potential action is evaluated for its nuclear licensing implications.

At the site level, the CNO has a substantial and carefully groomed basis for establishing the relative significance of each material condition in the plant; in most instances administrative matters are brightly color-coded “nuclear” or “other.” As we move up the risk-reckoning ladder through corporate decision-making and then branching into a covey of regulatory bodies, stockholder perspectives, and public perceptions, the purity of issue descriptions degrades – benchmarks become fuzzy.

The overlap of stakeholder jurisdictions presents multiple perspectives (via diverse lexicons) for what “safety,” “risk,” and “culture” weights are to be assigned to any particular issue. Often the issue as first identified is a muddle of actual facts and supposition which may or may not be pruned upon further study. The potential for dilemmas, predicaments, and double-binding stakeholder expectations goes up dramatically.
 
I would suggest that responses to the recent spate of high-profile nuclear facility events, beginning with the Davis-Besse Reactor Pressure Vessel Head near-miss, have provoked a serious cleavage in our collective ability to reason prudently about the policy, industrial strategy, and regulatory levels of risk. The consequences of this cleavage are to increase the degree of chaotic programmatic action and to obscure the longer term significance of these large-scale, unanticipated/unwelcome events, i.e., Black Swan vulnerabilities.

In the case of the NNEE I hypothesize that we are victims of our own history – and the presumption of exceptional success in performance improvement that followed the TMI event. With the promulgation of the Reactor Oversight Process in 1999, NRC and the industry appeared to believe that a mature understanding of oversight and self-governance practice existed and that going forward clarity would only increase regarding what factors were important to sustained high reliability across the entire NNEE.
 
That presumption has proven a premature one, but it does not appear from the Fukushima responses that many in leadership positions recognize this fact. Today, the US NNEE finds itself trapped in a “limits to growth system.” That risk-reckoning system institutionalizes a series of related conclusions about the overall significance of nuclear energy health hazards and their relationship to other forms of risk common to all large industrial sectors.

The NNEE elements of thought leadership appear to act (on the evidence of the many Fukushima reports) as if the rationale of 10 CFR 50 Appendix B regarding “conditions adverse to quality” and the preeminence of “nuclear safety corrective actions” is beyond question. It’s time to do an obsolescence check on what I’ve come to call the Nuclear Fear Cycle.

Quoting Bob Cudlin again: “Whatever the true causes we believe that there will be multiple causes acting in a systematic manner - i.e., causes that interact and feedback in complex combinations to either reinforce or erode the safety culture state.” You are invited to ponder the following system.

 (Mr. Mullins is a Principal at Better Choices Consulting.)

Friday, February 24, 2012

More BP

We have posted numerous times on the travails of BP following the Deepwater Horizon disaster and the contribution of safety culture to these performance results.  BP is back in the news since the trial date for a variety of suits and countersuits is coming up shortly.  We thought we would take the opportunity for a quick update.

The good news is the absence of any more significant events at BP facilities.  In its presentation to investors on 4Q11 and 2012 Strategy, BP highlighted its 10 point moving forward plan, including at the top of the list, “relentless focus on safety and managing risk”.* 

It is impossible for us to assess how substantive and effective this focus has been or will be, but we’ve now heard from BP’s Board member Frank Bowman.  Bowman is head of the Board’s Safety, Ethics and Environment Assurance Committee.  He served on the panel that investigated BP’s US refineries after the Texas City explosion in 2005 and then became a member of BP’s US advisory council; and in November 2010, he joined the main board as a non-executive director.  Basically Bowman’s mission is to help transfer his U.S. nuclear navy safety philosophy to BP’s energy business.

Bowman reports that he has been impressed by the way the safety and operational risk and upstream organizations have taken decisions to suspend operations when necessary. “We’ve recently walked away from several jobs where our standards were not being met by our partners or a contractor. That sends a message heard around the world, and we should continue to do that.”**

Looking for more specifics in the 4Q11 investor presentation, we came across the following “safety performance record”. (BP 4Q11, p. 12)


The charts plot “loss of containment” issues (these are basically releases of hydrocarbons) and personnel injury frequency.  The presentation notes that “Aside from the exceptional activities of the Deepwater Horizon response, steady progress has been made over the last decade.”  Perhaps, but we are skeptical that these data are useful for measuring progress in the area of safety culture and management.  For one, both charts show positive trends over a time period in which BP had two major disasters - the Texas City oil refinery fire in 2005 and Deepwater Horizon in 2010.  At a minimum these charts confirm that the tracked parameters do nothing to proactively predict safety health.  As Mr. Bowman notes, “Culture is set by the collective behaviour of an organisation’s leaders… The collective behaviour of BP’s leaders must consistently endorse safety as central to our very being.” (BP Magazine, p. 10)

On the subject of management behavior, the investigations and analyses of Deepwater Horizon consistently noted the contribution of business pressures and competing priorities that led to poor decisions.  In our September 30, 2010 blog post we included a quote from the then-new BP CEO:

“Mr. Dudley said he also plans a review of how BP creates incentives for business performance, to find out how it can encourage staff to improve safety and risk management.”

The 4Q11 presentation and Mr. Bowman’s interview are noticeably silent on this subject.  The best we could come up with was the following rather cryptic statement in the 4Q11: “We’ve also evolved our approach to performance management and reward, requiring employees to set personal priorities for safety and risk management, focus more on the long term and working as one team.” (BP 4Q11, p. 15)  We’re not sure how “personal priorities” relate to the compensation incentives which were the real focus of the concerns expressed in the accident investigations.

Looking a bit further we uncovered the following in a statement by the chairwoman of BP’s Board Remuneration Committee: “For 2011 the overall policy for executive directors [compensation] will remain largely unchanged…”***  If you guessed that incentives would be based only on meeting business results, you would be right.

In closing we leave with one other comment from Mr. Bowman, one that we think has great salience in the instant situation of BP and for other high risk industries including nuclear generation: “In any business dealing with an unforgiving environment, complacency is your worst enemy. You have to be very careful about what conclusion to draw from the absence of an accident.” (BP Magazine, p. 9) [emphasis added]


*  BP 4Q11 & 2012 Strategy presentation, p. 8.

**  BP Magazine, Issue 4 2011, p. 9.

***  Letter from the chairman of the remuneration committee (Mar. 2, 2011).

Friday, June 24, 2011

Rigged Decisions?

The Wall Street Journal reported on June 23, 2011* on an internal investigation conducted by Transocean, owner of the Deepwater Horizon drill rig, that placed much of the blame for the disaster on a series of decisions made by BP.  Is this news?  No, the blame game has been in full swing almost since the time of the rig explosion.  But we did note that Transocean’s conclusion was based on a razor sharp focus on:

“...a succession of interrelated well design, construction, and temporary abandonment decisions that compromised the integrity of the well and compounded the risk of its failure…”**  (p. 10)


Note, their report did not place the focus on the “attitudes, beliefs or values” of BP personnel or rig workers, and really did not let their conclusions drift into the fuzzy answer space of “safety culture”.  In fact the only mention of safety culture in their 200+ page report is in reference to a U.S. Coast Guard (USCG) inspection of the drill rig in 2009 which found:

“outstanding safety culture, performance during drills and condition of the rig.” (p. 201)

There is no mention of how the USCG reached such a conclusion and the report does not rely on it to support its conclusions.  It would not be the first time that a favorable safety culture assessment at a high risk enterprise preceded a major disaster.***

We also found the following thread in the findings that reinforces the importance of recognizing and understanding the impact of underlying constraints on decisions:

“The decisions, many made by the operator, BP, in the two weeks leading up to the incident, were driven by BP’s knowledge that the geological window for safe drilling was becoming increasingly narrow.” (p.10)

The fact is, decisions get squeezed all the time, resulting in decisions that may reduce margins but arguably are still “acceptable”.  But such decisions do not necessarily lead to unsafe, much less disastrous, results.  Most of the time the system is not challenged, nothing bad happens, and you could even say the marginal decisions are reinforced.  Are these tradeoffs to accommodate conflicting priorities the result of a weakened safety culture?  Perhaps.  But we suspect that the individuals making the decisions would say they believed safety was their priority and culture may have appeared normal to outsiders as well (e.g., the USCG).  The paradox occurs because decisions can trend in a weaker direction before other, more distinct evidence of a degrading culture becomes apparent.  In this case, a very big explosion.

*  B. Casselman and A. Gonzalez, "Transocean Puts Blame on BP for Gulf Oil Spill," wsj.com (June 23, 2011).

** "Macondo Well Incident: Transocean Investigation Report," Vol I, Transocean, Ltd. (June 2011).

*** For example, see our August 2, 2010 post.

Thursday, April 7, 2011

Incredible

“...notwithstanding the tragic loss of life in the Gulf of Mexico, we [Transocean] achieved an exemplary statistical safety record as measured by our total recordable incident rate (‘‘TRIR’’) and total potential severity rate (‘‘TPSR’’).  As measured by these standards, we recorded the best year in safety performance in our Company’s history, which is a reflection on our commitment to achieving an incident free environment, all the time, everywhere.”*

Good grief.  Did Transocean really say this?  Eleven people including nine Transocean employees died in the Deepwater Horizon oil rig explosion.  The quote is from Transocean’s 2010 Annual Report and Proxy recently filed with the SEC.  It provides another illuminating example where the structure and award of management incentives speak much greater volumes than corporate safety rubrics.  (For our report on compensation structures within nuclear power companies and the extent to which such compensation included incentives other than safety, look here and here.)  Or as the saying goes, “Follow the money”.

To fully comprehend how Transocean’s incentive program purports to encourage safety performance we are providing the following additional quotes from its Annual Report.

“Safety Performance.  Our business involves numerous operating hazards and we remain committed to protecting our employees, our property and the environment. Our ultimate goal is expressed in our Safety Vision of ‘‘an incident-free workplace—all the time, everywhere…..

"The [Compensation] Committee measures our safety performance through a combination of our total recordable incident rate (‘‘TRIR’’) and total potential severity rate (‘‘TPSR’’).

•    "TRIR is an industry standard measure of safety performance that is used to measure the frequency of a company’s recordable incidents and comprised 50% of the overall safety metric. TRIR is measured in number of recordable incidents per 200,000 employee hours worked.

•    "TPSR is a proprietary safety measure that we use to monitor the total potential severity of incidents and comprised 50% of the overall safety metric. Each incident is reviewed and assigned a number based on the impact that such incident could have had on our employees and contractors, and the total is then combined to determine the TPSR.

"The occurrence of a fatality may override the safety performance measure.

"….Based on the foregoing safety performance measures, the actual TRIR was 0.74 and the TPSR was 35.4 for 2010. These outcomes together resulted in a calculated payout percentage of 115% for the safety performance measure for 2010. However, due to the fatalities that occurred in 2010, the Committee exercised its discretionary authority to modify the TRIR payout component to zero, which resulted in a modified payout percentage of 67.4% for the safety performance measure." (p. 45)
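The proxy discloses the 50/50 weighting, the fatality override, and the resulting 115% and 67.4% payout figures, but not the underlying component payout curves.  As a minimal sketch of the arithmetic, the component payouts below (95.2% for TRIR, 134.8% for TPSR) are back-solved from the disclosed numbers and are our inference, not published figures; the function names are ours as well.

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Total Recordable Incident Rate: recordable incidents per
    200,000 employee-hours worked (roughly 100 full-time work-years)."""
    return recordable_incidents / hours_worked * 200_000

def blended_safety_payout(trir_payout_pct: float, tpsr_payout_pct: float,
                          fatality_override: bool = False) -> float:
    """Equal-weight (50/50) blend of the two component payouts.
    A fatality may zero out the TRIR component at the Committee's
    discretion, as happened for 2010."""
    if fatality_override:
        trir_payout_pct = 0.0
    return round(0.5 * trir_payout_pct + 0.5 * tpsr_payout_pct, 1)

# Component payouts of 95.2% (TRIR) and 134.8% (TPSR) are back-solved
# from the disclosed 115% blended payout and 67.4% post-override payout.
print(blended_safety_payout(95.2, 134.8))                          # 115.0
print(blended_safety_payout(95.2, 134.8, fatality_override=True))  # 67.4
```

Note that the override halves the award rather than eliminating it: because the TPSR component is untouched, eleven fatalities still left a 67.4% safety payout.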
The treatment of bonuses for Transocean execs was picked up in various media outlets and met with, shall we say, skepticism.  Transocean responded to the blowback with the following:

“We acknowledge that some of the wording in our 2010 proxy statement may have been insensitive in light of the incident that claimed the lives of eleven exceptional men last year and we deeply regret any pain that it may have caused...” **

Note that the apology is directed at the “wording” of the proxy, not to the actual award of bonus compensation for safety performance.  We are tempted here to make some reference to “density” but it is self-evident.

Perhaps realizing that something more would be appropriate, Transocean announced yesterday that members of the senior management team would be donating their bonuses to the Deepwater Horizon Memorial Fund.*** 

Oops, actually they will be donating just the “safety portion” of their bonuses to the fund.  All other bonus amounts and incentive awards are not affected and the Transocean incentive structure for safety performance remains unchanged for 2011.



***  Announcement by Transocean Ltd. Senior Management Team, Zug, Switzerland (Apr 5, 2011 MARKETWIRE via COMTEX).

Wednesday, February 16, 2011

BP Exec Quit Over Safety Before Deepwater Disaster

Today’s Wall Street Journal has an interesting news item about a BP Vice President who quit prior to the Deepwater Horizon disaster because he felt BP "was not adequately committed to improving its safety protocols in offshore drilling to the level of its industry peers." The full article is available here.

Tuesday, January 25, 2011

A Nuclear Model for Oil and Gas

The President’s Commission has issued its report on the Deepwater Horizon disaster.* The report reviews the history of the tragedy and makes recommendations based on lessons learned.  This post focuses on the report’s use of the nuclear industry, in particular the role played by INPO, as a model for an oil and gas industry safety institute and auditor.

The report provides an in-depth review of INPO’s role and methods and we will not repeat that review in this space.  We want to highlight the differences between the oil and gas and nuclear industries, some recognized in the report, that would challenge a new safety auditor. 

First, “The oil and gas industry is more fragmented and diversified in nature. . . .” (p. 240)  The industry includes vertically integrated giants, specialty niche firms and everything in-between.  Some are global in nature while others are regional firms.  In our view, oil and gas industry participants cooperate with each other in some arenas and compete in others.  (In contrast, most [all?] U.S. nuclear plants do not compete directly with other plants.)  Obtaining agreement to create a relatively powerful industry auditing entity will not be a simple matter.

Second, “concerns about potential disclosure to business competitors of proprietary information might make it harder to establish an INPO-like entity in the oil and gas industry.” (p. 240)  Oil and gas firms regard technology as an important source of competitive advantage.  “[A]n INPO-like approach might run into problems if companies perceived the potential for inspections of offshore facilities to reveal ‘technical and proprietary and confidential information that companies may be reluctant to share with one another.’” (p. 241)  Not only will it be difficult to get a firm to share proprietary technology if doing so could cost it competitive advantage, but this reluctance will also make it harder for the auditing organization to promote industry-wide use of the most effective, safest technologies.

Third, and this could be a potentially large problem, INPO operates in almost total secrecy.  “[INPO] assessment results are never revealed to anyone other than the utility CEOs and site managers, but INPO formally meets with the NRC four times a year to discuss trends and information of ‘mutual interest.’ And if INPO has discovered serious problems associated with specific plants, it notifies the NRC.” (p. 236)  INPO claims, probably realistically, that maintaining member confidentiality is key to obtaining full and willing cooperation in evaluations. 

However, this secrecy contributes zero to public understanding of and support for nuclear plant operations and owners.  At this point in its evolution, the oil and gas industry needs more transparency in its auditing and oversight functions, not less.  After all, and forgive the bluntness here, very few people have died at U.S. commercial nuclear power plants (and those were in non-nuclear incidents) while the oil and gas industry has suffered numerous fatalities.  We think a government auditor, whose evaluations of facilities and managements would be made public, is the better answer for the oil and gas industry at this time.


*  National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, “Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling,” Report to the President (Jan 2011).

Monday, January 10, 2011

Pick Any Two

Last week the principal findings of the BP Oil Spill Presidential Commission were released.  Not surprisingly, the Commission cited root causes that were “systemic,” decisions made without adequate consideration of risks, and failures of regulatory oversight.  It also cited the lack of a safety culture at the companies involved in the Deepwater Horizon.  We came across an interesting entry in a blog tied to a January 5, 2011 New York Times article by John Broder, “Blunders Abounded Before Gulf Oil Spill, Panel Says,” and thought it was worth passing on. 

Comment No. 7 of 66 submitted by:
Jim S.
Cleveland
January 5th, 2011
7:23 pm

“A fundamental law of engineering (or maybe of the world in general) is ‘Cheaper, Faster, Better: Pick Any Two.’

Clearly those involved, whether deliberately or by culture, chose Cheaper and Faster.”

Thursday, November 18, 2010

Another Brick in the Wall for BP et al

Yesterday the National Academy of Engineering released its report* on the Deepwater Horizon blowout.  The report includes a critical appraisal of many decisions made while the well was being prepared for temporary abandonment, decisions that in the aggregate decreased safety margins and increased risks.  This Washington Post article** provides a good summary of the report.

The report was written by engineers and scientists and has a certain “Just the facts, ma’am” tone.  It does not specifically address safety culture.  But we have to ask: What can one infer about a culture where the business practices don’t include “any standard practice . . . to guide the tradeoffs between cost and schedule and the safety implications of the many decisions (that is, a risk management approach).”  (p. 15)

We have had plenty to say about BP and the Deepwater Horizon accident.  Click on the BP label below to see all of our related blog entries.


*  Committee for the Analysis of Causes of the Deepwater Horizon Explosion, Fire, and Oil Spill to Identify Measures to Prevent Similar Accidents in the Future; National Academy of Engineering; National Research Council, “Interim Report on Causes of the Deepwater Horizon Oil Rig Blowout and Ways to Prevent Such Events” (2010).

**  D. Cappiello, “Experts: BP ignored warning signs on doomed well,” The Washington Post (Nov 17, 2010).  Given our blog’s focus on the nuclear industry, it’s worth noting that, in an interview, the committee chairman said, “the behavior leading up to the oil spill would be considered unacceptable in companies that work with nuclear power or aviation.”

Thursday, September 30, 2010

BP's New Safety Division

It looks like oil company BP believes that creating a new, “global” safety division is part of the answer to its ongoing safety performance issues, including most recently the explosion of the Deepwater Horizon oil rig in the Gulf of Mexico.  An article in the September 29, 2010 New York Times* quotes BP’s new CEO as stating “safety and risk management [are] our most urgent priority” but does not provide many details of how the initiative will accomplish its goal.  At the risk of jumping to conclusions, it is hard for us to see how a separate safety organization is the answer, although BP asserts it will be “powerful”. 

Of more interest was a lesser headline in the article with the following quote from BP’s new CEO:

“Mr. Dudley said he also plans a review of how BP creates incentives for business performance, to find out how it can encourage staff to improve safety and risk management.”

We see this as a factor much closer to the mark for changing behaviors and priorities.  It parallels recent findings by FPL in its nuclear program (see our July 29, 2010 post) and warning flags that we had raised in our July 6 and July 9, 2010 posts regarding trends in U.S. nuclear industry compensation.  Let’s see which speaks the loudest to the organization: CEO pronouncements about safety priority or the large financial incentives that executives can realize by achieving performance goals.  If they are not aligned, the new “division of safety” will simply mean business as usual.

*  The original article is available via the iCyte below.  An updated version is available on the NY Times website.

Monday, August 2, 2010

Mission Impossible

We are back to the topic of safety culture surveys with a new post regarding an important piece of research by Dr. Stian Antonsen of the Norwegian University of Science and Technology.  He presents an empirical analysis of the following question:

    “. . . whether it is possible to ‘predict’ if an organization is prone to having major accidents on the basis of safety culture assessments.”*

We have previously posted a number of times on the use and efficacy of safety culture assessments.  As we observed in an August 17, 2009 post, “Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues and even as an ongoing prophylactic tool.  But, are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?”

Safety culture surveys have become ubiquitous across the U.S. nuclear industry.  This reliance on surveys may be justified, Antonsen observes, to the extent they provide a “snapshot” of “attitudes, values and perceptions about organizational practices…”  But Antonsen cautions that the ability of surveys to predict organizational accidents has not been established empirically and cites some researchers who suspect surveys “‘invite respondents to espouse rationalisations, aspirations, cognitions or attitudes at best’ and that ‘we simply don’t know how to interpret the scales and factors resulting from this research’”.  Furthermore, surveys present questions where the favorable or desired answers may be obvious.  “The risk is, therefore, that the respondents’ answers reflect the way they feel they should feel, think and act regarding safety, rather than the way they actually do feel, think and act…”  As we have stated in a white paper** on nuclear safety management, “it is hard to avoid the trap that beliefs may be definitive but decisions and actions often are much more nuanced.”

To investigate the utility of safety culture surveys, Antonsen compared the results of a safety survey conducted among the employees of an offshore oil platform (Snorre Alpha) prior to a major operational incident with the results of detailed investigations and analyses following the incident.  The survey questionnaire included twenty questions similar to those found in nuclear plant surveys.  Answers were structured on a six-point Likert scale, also similar to nuclear plant surveys.  The overall result of the survey was that employees had a highly positive view of safety culture on the rig.
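Antonsen’s caution about survey “snapshots” can be illustrated with a toy calculation (the numbers below are hypothetical, not from the Snorre Alpha survey): an overall Likert mean can look comfortably positive even when a few safety-critical items score well below the scale midpoint.

```python
from statistics import mean

# Hypothetical mean response per item (1 = strongly disagree,
# 6 = strongly agree) for a 20-item questionnaire.
item_means = [5.2] * 17 + [2.1, 2.4, 2.8]  # three weak items buried among strong ones

overall = mean(item_means)                  # aggregate score reported to management
weak = [m for m in item_means if m < 3.0]   # items below the 3.5 scale midpoint

print(f"Overall mean: {overall:.2f} / 6")
print(f"Items below midpoint: {len(weak)} of {len(item_means)}")
```

The averaging step is exactly where information is lost: a handful of troubling items (say, those probing production pressure) disappear into an overall score that reads as a “highly positive” culture.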

The after-incident analysis was performed by the Norwegian Petroleum Safety Authority, and a causal analysis was subsequently performed by Statoil (the rig owner) and a team of researchers.  The findings from the original survey and the later incident investigations were “dramatically different” regarding the Snorre Alpha safety culture.  One telling difference: the post hoc analyses identified meeting production targets as a dominant value in the rig culture.  The bottom-line finding was that the survey failed to identify significant organizational problems that later emerged in the incident investigations.

Antonsen evaluates possible reasons for the disconnect between surveys and performance outcomes.  He also comments on the useful role surveys can play, for example, inter-organizational comparisons and inferring cultural traits.  In the end, the research sounds a cautionary note on the link between survey-based measures and the “real” conditions that determine safety outcomes.

Post Script: Antonsen’s “Mission Impossible” paper was published in December 2009.  We now have seen another oil rig accident with the recent explosion and oil spill from BP’s Deepwater Horizon rig.  As we noted in our July 22, 2010 post, a safety culture survey had been performed of that rig’s staff several weeks prior to the explosion with overall positive results.  The investigations of this latest event could well provide additional empirical support for the “Mission Impossible” study results. 

* The study is “Safety Culture Assessment: A Mission Impossible?”  The link connects to the abstract; the paper is available for purchase at the same site.

**  Robert L. Cudlin, "Practicing Nuclear Safety Management" (March 2008), p. 3.

Thursday, July 22, 2010

Transocean Safety Culture and Surveys

An article in today’s New York Times, “Workers on Doomed Rig Voiced Concern About Safety”  reports on the safety culture on the Deepwater Horizon drilling rig that exploded in the Gulf of Mexico.  The article reveals that a safety culture survey had been performed of the staff on the rig in the weeks prior to the explosion.  The survey was commissioned by Transocean and performed by Lloyd’s Register Group, a maritime and risk-management organization that conducted focus groups and one-on-one interviews with at least 40 Transocean workers.

There are two noteworthy findings from the safety culture survey.  While the headline is that workers voiced safety concerns, the survey results indicate:
“Almost everyone felt they could raise safety concerns and these issues would be acted upon if this was within the immediate control of the rig,” said the report, which also found that more than 97 percent of workers felt encouraged to raise ideas for safety improvements and more than 90 percent felt encouraged to participate in safety-improvement initiatives. . . . But investigators also said, “It must be stated at this point, however, that the workforce felt that this level of influence was restricted to issues that could be resolved directly on the rig, and that they had little influence at Divisional or Corporate levels.”
This highlights several of the shortcomings of safety culture surveys.  First, the vast majority of respondents indicated they were comfortable raising safety concerns, yet subsequent events and decisions led to a major safety breakdown.  So, is there a response level that indicates how the organization is actually doing business, or do respondents simply tell the survey takers “what they want to hear”?  And is comfort in raising a safety concern the appropriate standard when the larger corporate environment may not be responsive to such concerns, or may bury them with resource and schedule mandates?  Second, this survey focused on the workers on the rig.  Apparently there was a reasonably good culture in that location, but it did not extend to the larger organization.  Consistent with that perception are preliminary reports that corporate was pushing production over safety, which may have influenced risk taking on the rig.  This is reminiscent of the space shuttle Challenger, where political pressure seeped down into the decision making process, subtly changing the perception of risk at the operational levels of NASA.  How useful are surveys if they do not capture the dynamics higher in the organization or the insidious ability of exogenous factors to change risk perceptions?
The other aspect of the Transocean surveys came not from the survey results but from Transocean’s rationalization of its safety performance.  The company “noted that the Deepwater Horizon had seven consecutive years without a single lost-time incident or major environmental event.”  This highlights two fallacies: one, that the absence of a major accident demonstrates that safety performance is meeting its goals; two, that industrial accident rates correlate to safety culture and prudent safety management.  They don’t.  Also, recall our recent posts regarding nuclear compensation, where we noted that the most common metric for determining safety performance incentives in the nuclear industry is industrial accident rate. 

The NY Times article may be found at http://www.nytimes.com/2010/07/23/us/23hearing.html.