“You don’t ever want a crisis to go to waste; it’s an opportunity to do important things that you would otherwise avoid.” So said Rahm Emanuel, memorably, several years ago. Perhaps taking a page from the Emanuel book, the Union of Concerned Scientists took the opportunity last Thursday to release a report chronicling a series of problems it had investigated at U.S. nuclear plants.* Apparently the events in Japan pumped plenty of fresh oxygen into the UCS war room in time for them to trot out their latest list of concerns regarding nuclear plant safety.
[UCS senior scientist Edwin] “Lyman was speaking in a conference call with reporters on the release of a report examining critical problems — known as “near misses” — at various nuclear facilities in the United States last year, and the N.R.C.’s handling of critical problems.”
David Lochbaum is the author of the report and the director of the nuclear safety program for the organization. According to the article:
[The report] “also suggested that federal regulators needed to do more to investigate why problems existed in the first place — including examining the overall safety culture of companies that operate nuclear power plants — rather than simply order them to be fixed.”
It could be that the UCS is aiming at the heart of the recent discussions surrounding the NRC’s new policy statement on safety culture. It is clear that the NRC has little appetite to regulate the safety culture of its licensees; instead it urges licensees to maintain a strong safety culture and takes action only if “results” are not acceptable. UCS would like specific issues, such as the “near misses” in its report, to be broadly interpreted to establish a more fundamental, cultural flaw in the enterprise itself.
Perhaps the larger question raised by the events in Japan is the dominance of natural phenomena in challenging man-made structures, and whether safety culture provides any insulation. While the earthquake itself seemed fairly well contained at the nuclear plants, the tsunami easily overpowered the sea wall at the facility and disabled crucial plant systems across the site. Does this sound familiar? Does it remind one of a Category 5 hurricane sweeping aside the levees in New Orleans? Or the overwhelming forces of an oil well blowout brushing aside the isolation capability of a blowout preventer?
John McPhee’s 1990 book The Control of Nature chronicles a number of instances of man’s struggle against nature - in his view, one that is inevitably bound to fail. Often the very acts undertaken to “control nature” contribute to future failures of that control. McPhee cites the leveeing of the Mississippi, leading to faster channel flows, more silting, more leveeing, and ultimately the kind of macro disaster occurring in Katrina. Or the “debris bins” built in the canyons above Los Angeles communities. The bins fill over successive storms, eventually leading to failures of the bins themselves and catastrophic mud and debris floods in the downstream valleys.
It is probably inevitable that in the aftermath of Japan there will be calls to raise the design criteria of nuclear plants to cover larger earthquakes and other natural phenomena. The expectation will be that this will provide the absolute protection desired by the public or by groups such as the UCS. Until, of course, the next storm or earthquake arrives that is incrementally larger, in a worse location, or in combination with some other event, and exceeds the more stringent assumptions.
Safety culture cannot deliver on an expectation that safety is absolute or without limits. It can and should emphasize the priority of, and unflagging attention to, safety that maximizes the capacity of a facility and its staff to withstand unforeseen challenges. We know that the events in Japan prove the former. It will be equally important to determine whether they also demonstrated the latter.
* T. Zeller Jr., "Citing Near Misses, Report Faults Both Nuclear Regulators and Operators," New York Times, Green: A Blog About Energy and the Environment (Mar 17, 2011, 1:50 PM).
Monday, March 21, 2011
Friday, March 11, 2011
Safety Culture Performance Indicators
In our recent post on safety culture management in the DOE complex, we concentrated on documents created by the DOE team. But there was also some good material in the references assembled by the team. For example, we saw some interesting thoughts on performance indicators in a paper by Andrew Hopkins, a sociology professor at The Australian National University.* Although the paper was prepared for an oil and gas industry conference, the focus on overall process safety has parallels with nuclear power production.
Contrary to the view of many safety culture pundits, including ourselves, Professor Hopkins is not particularly interested in separating lagging from leading indicators; he says that trying to separate them may not be a useful exercise. Instead, he is interested in a company’s efforts to develop a set of useful indicators that in total measure or reflect the state of the organization’s risk control system. In his words, “. . . the important thing is to identify measures of how well the process safety controls are functioning. Whether we call them lead or lag indicators is a secondary matter. Companies I have studied that are actively seeking to identify indicators of process safety do not make use of the lead/lag distinction in any systematic way. They use indicators of failure in use, when these are available, as well as indicators arising out their own safety management activities, where appropriate, without thought as to whether they be lead or lag. . . . Improving performance in relation to these indicators must enhance process safety. [emphasis added]” (p. 11)
Are his observations useful for people trying to evaluate the overall health of a nuclear organization’s safety culture? Possibly. Organizations use a multitude of safety culture assessment techniques, including (but not limited to) interviews, observations, surveys, assessments of the corrective action program (CAP) and other administrative processes, and management metrics such as maintenance performance, all believed to be correlated with safety culture. Maybe it would be OK to dial back our concern with identifying which of them are leading (if any) and which are lagging. More importantly, perhaps we should be asking how confident we are that an improvement in any one of them implies that the overall safety culture is in better shape.
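Hopkins’ point suggests a simple thought experiment. Below is a toy sketch, entirely our own construction with invented indicator names and equal default weights, of how assorted assessment results might be rolled into a single composite reading without ever asking whether any input is leading or lagging:

```python
# Hypothetical composite safety culture index. Indicator names,
# scales (0-100), and weights are illustrative inventions, not
# anything proposed by Hopkins or used by any actual organization.
def composite_index(readings, weights=None):
    """Weighted average of indicator readings on a common 0-100 scale."""
    if weights is None:
        weights = {name: 1.0 for name in readings}  # equal weights by default
    total_weight = sum(weights[name] for name in readings)
    return sum(readings[name] * weights[name] for name in readings) / total_weight

readings = {
    "survey_score": 78.0,        # periodic staff survey result
    "cap_backlog_health": 65.0,  # corrective action program metric
    "observation_score": 82.0,   # field observation results
}
print(round(composite_index(readings), 1))  # a single 0-100 reading: 75.0
```

The interesting question, per Hopkins, is not how each input is labeled but whether an improvement in the composite genuinely reflects better risk control.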
* A. Hopkins, "Thinking About Process Safety Indicators," Working Paper 53, National Research Centre for OHS Regulation, Australian National University (May 2007). We have referred to Professor Hopkins’ work before (here and here).
Posted by Lewis Conner
Labels: DOE, References, Safety Culture, SC Performance Indicators, SC Survey
Monday, March 7, 2011
Culture Wars
We wanted to bring to our readers’ attention an article from the McKinsey Quarterly (March 2011) that highlights the ability of management simulators to serve as powerful business tools. The context is the use of such “war games” to help management teams accomplish their business goals, but we would allow that their utility extends to other challenges, such as managing safety culture.
“Well-designed war games, though not a panacea, can be powerful learning experiences that allow managers to make better decisions.”
“...the company designed a game to answer the more strategic question: how can we win market share given the budget pressures on the Department of Defense and the moves of competitors? The game tested levers such as pricing, contracting, operational improvements, and partnerships. The outcome wasn’t a tactical playbook—a list of things to execute and monitor—but rather strategic guidance on the industry’s direction, the most promising types of moves, the company’s competitive strengths and weaknesses, and where to focus further analysis.” (p. 3)

We have often used the term “levers” to call attention to the need for managers to understand when and how to take actions to bring about a desired safety culture result. Levers connote control and, as with any control system, control must be based on an understanding of the system’s dynamics. Importantly, the above quote distinguishes the outcome of the simulated experience: not a “playbook” but “guidance” (we would add, a deeper understanding and developed skills) that can be applied in the real world.
Interestingly, the article mentions the use of games to facilitate or achieve organizational alignment around a strategic decision. This treads very close to our contention that a safety culture simulator offers a powerful environment within which managers can interact, including developing common mental models and an understanding of culture dynamics. As noted in the article, “This shared experience...has continued to stimulate discussions across the company…” (p. 4) What could be more valuable for reinforcing safety culture than informed and broad-based discussion within the organization? As Horn says, “It’s often beneficial, however, to repeat a game for the sake of organizational alignment ... usually, the wider group of employees who will implement the decision. Most people learn better by doing, and when they have shared experiences, they are more likely to embrace change.”
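To illustrate the “levers and dynamics” idea, here is a minimal sketch, our own invention rather than anything from the McKinsey article or an actual simulator product, of the kind of lagged response a well-designed game can make tangible: a single cultural state variable that follows a management lever only gradually.

```python
# Toy first-order lag model. The state variable, lever target, and
# lag fraction are all invented for illustration.
def simulate(lever_settings, state=50.0, lag=0.2):
    """Each period the state closes a fraction 'lag' of the gap
    between the current lever target and the current state."""
    history = [state]
    for target in lever_settings:
        state += lag * (target - state)
        history.append(state)
    return history

# Holding the lever at 80 moves the state toward 80, but only
# gradually; after 10 periods it is still well short of the target.
trajectory = simulate([80.0] * 10)
```

The design point is the delay itself: a manager who expects the state to jump to the lever setting will over-correct, which is precisely the kind of dynamic a game lets participants experience safely.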
Posted by Bob Cudlin
Labels: Management, Simulation
Thursday, March 3, 2011
Safety Culture in the DOE Complex
This post reviews a Department of Energy (DOE) effort to provide safety culture assessment and improvement tools for its own operations and those of its contractors.
Introduction
The DOE is responsible for a vast array of organizations that work on DOE’s programs. These organizations range from very small to huge in size and include private contractors, government facilities, specialty shops, niche manufacturers, labs and factories. Many are engaged in high-hazard activities (including nuclear) so DOE is interested in promoting an effective safety culture across the complex.
To that end, a task team* was established in 2007 “to identify a consensus set of safety culture principles, along with implementation practices that could be used by DOE . . . and their contractors. . . . The goal of this effort was to achieve an improved safety culture through ISMS [Integrated Safety Management System] continuous improvement, building on operating experience from similar industries, such as the domestic and international commercial nuclear and chemical industries.” (Final Report**, p. 2)
It appears the team performed most of its research during 2008, conducted a pilot program in 2009 and published its final report in 2010. Research included reviewing the space shuttle and Texas City disasters, the Davis-Besse incident, works by gurus such as James Reason, and guidance and practices published by NASA, NRC, IAEA, INPO and OSHA.
Major Results
The team developed a definition of safety culture and described a process whereby using organizations could assess their safety culture and, if necessary, take steps to improve it.
The team’s definition of safety culture:
“An organization’s values and behaviors modeled by its leaders and internalized by its members, which serve to make safe performance of work the overriding priority to protect the workers, public, and the environment.” (Final Report, p. 5)
After presenting this definition, the report goes on to say “The Team believes that voluntary, proactive pursuit of excellence is preferable to regulatory approaches to address safety culture because it is difficult to regulate values and behaviors. DOE is not currently considering regulation or requirements relative to safety culture.” (Final Report, pp. 5-6)
The team identified three focus areas that were judged to have the most impact on improving safety and production performance within the DOE complex: Leadership, Employee/Worker Engagement, and Organizational Learning. For each of these three focus areas, the team identified related attributes.
The overall process calls for a using organization to review the focus areas and attributes, assess its current safety culture, select and apply appropriate improvement tools, and reinforce the results.
The list of tools to assess safety culture includes direct observations, causal factors analysis (CFA), surveys, interviews, review of key processes, performance indicators, Voluntary Protection Program (VPP) assessments, stream analysis and Human Performance Improvement (HPI) assessments.*** The Final Report also mentioned performance metrics and workshops. (Final Report, p. 9)
Tools to improve safety culture include senior management commitment, clear expectations, ISMS training, managers spending time in the field, coaching and mentoring, Behavior Based Safety (BBS), VPP, Six Sigma, the problem identification process, and HPI.**** The Final Report also mentioned High Reliability Organization (HRO), Safety Conscious Work Environment (SCWE) and Differing Professional Opinion (DPO). (Final Report, p. 9) Whew.
The results of a one-year pilot program at multiple contractors were evaluated and the lessons learned were incorporated in the final report.
Our Assessment
Given the diversity of the DOE complex, it’s obvious that no “one size fits all” approach is likely to be effective. But it’s not clear that what the team has provided will be all that effective either. The team’s product is really a collection of concepts and tools culled from the work of outsiders, combined with DOE’s existing management programs, and repackaged as a combination of overall process and laundry lists. Users are left to determine for themselves exactly which subset of tools might be useful in their individual situations.
It’s not that the report is bad. For example, the general discussion of safety culture improvement emphasizes the importance of creating a learning organization focused on continuous improvement. In addition, a major point they got right was recognizing that safety can contribute to better mission performance. “The strong correlation between good safety performance with good mission performance (or productivity or reliability) has been observed in many different contexts, including industrial, chemical, and nuclear operations.” (Final Report, p. 20)
On the other hand, the team has adopted the works of others but does not appear to recognize how, in a systems sense, safety culture is interwoven into the fabric of an organization. For example, feedback loops from the multitude of possible interventions to overall safety culture are not even mentioned. And this is not a trivial issue. An intervention can provide an initial boost to safety culture but then safety culture may start to decay because of saturation effects, especially if the organization is hit with one intervention after another.
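A toy model makes the saturation concern concrete. The following sketch is entirely hypothetical, with invented parameters, and is not drawn from the DOE report: each intervention provides a boost that shrinks as the culture level approaches a ceiling, while the level continuously decays back toward a baseline.

```python
# Hypothetical boost-and-decay dynamic for safety culture. All
# parameter values (baseline, decay rate, boost size, ceiling) are
# invented for illustration only.
def step(culture, intervene, baseline=50.0, decay=0.05,
         boost=10.0, ceiling=100.0):
    culture -= decay * (culture - baseline)          # drift back toward baseline
    if intervene:
        culture += boost * (1 - culture / ceiling)   # diminishing boost near ceiling
    return culture

culture = 60.0
trace = []
for t in range(20):
    culture = step(culture, intervene=(t % 5 == 0))  # an intervention every 5 periods
    trace.append(culture)
```

Even in this crude form, the feedback structure shows why sequencing matters: each successive intervention buys less, and between interventions the gains erode, which is exactly the systems behavior the report never addresses.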
In addition, some of the major, omnipresent threats to safety culture do not get the emphasis they deserve. Goal conflict, normalization of deviance and institutional complacency are included in a list of issues from the Columbia, Davis-Besse and Texas City events (Final Report, pp. 13-15) but the authors do not give them the overarching importance they merit. Goal conflict, often expressed as safety vs. mission, should obviously be avoided but its insidiousness is not adequately recognized; the other two factors are treated in a similar manner.
Two final picky points: First, the report says it’s difficult to regulate behavior. That’s true but companies and government do it all the time. DOE could definitely promulgate a behavior-based safety culture regulatory requirement if it chose to do so. Second, the final report (p. 9) mentions leading (vs lagging) indicators as part of assessment but the guidelines do not provide any examples. If someone has some useful leading indicators, we’d definitely like to know about them.
Bottom line, the DOE effort draws from many sources and probably represents consensus building among stakeholders on an epic scale. However, the team provides no new insights into safety culture and, in fact, may not be taking advantage of the state of the art in our understanding of how safety culture interacts with other organizational attributes.
* Energy Facility Contractors Group (EFCOG)/DOE Integrated Safety Management System (ISMS) Safety Culture Task Team.
** J. McDonald, P. Worthington, N. Barker, G. Podonsky, “EFCOG/DOE ISMS Safety Culture Task Team Final Report” (Jun 4, 2010).
*** EFCOG/DOE ISMS Safety Culture Task Team, “Assessing Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).
**** EFCOG/DOE ISMS Safety Culture Task Team, “Activities to Improve Safety Culture in DOE Facilities,” EFCOG meeting handout (Jan 23, 2009).
Posted by Lewis Conner
Labels: Complacency, DOE, HRO, IAEA, Management, References, Safety Culture, SC Performance Indicators, SC Survey, SCWE
Wednesday, February 16, 2011
BP Exec Quit Over Safety Before Deepwater Disaster
Today’s Wall Street Journal has an interesting news item about a BP Vice President who quit prior to the Deepwater Horizon disaster because he felt BP "was not adequately committed to improving its safety protocols in offshore drilling to the level of its industry peers." The full article is available here.
Posted by Bob Cudlin
Saturday, February 12, 2011
“what people do, not why they do it…”
Our perseverance through over three hours of the web video of the Commission meeting on the proposed safety culture policy statement was finally rewarded in the very last minute of discussion. Commissioner Apostolakis reiterated some of his concerns with the direction of the policy statement, observing that the NRC is a performance-based agency and:
“...we really care about what people do and maybe not why they do it….”
Commissioner Apostolakis was amplifying his discomfort with the inclusion of values along with behaviors in the policy as values are inherently fuzzy, not measurable, and may or may not be a prerequisite to the right behaviors. Perhaps most of all, he believed omitting the reference to core values would not detract from the definition of safety culture.
Earlier in the meeting Commissioner Apostolakis had tried to draw out the staff on whether the definition of safety culture needed values in addition to behaviors [at time 2:34:58], and whether it would be a fatal flaw to omit “core values.” The staff response was illuminating. The justification offered for retaining values was “stakeholder consensus” and the extensive outreach efforts that supported inclusion. (But why was it so important to stakeholders?) The staff went on to clarify: “culture does not lend itself to be inspectable,” but “having values with behaviors is what culture is all about.” Frankly, we’re not sure what that means, but we do know that safety culture behaviors are inspectable because they are observable and measurable.
Much of the staff’s justification for including values in the policy statement seemed to reside in the fact that all the stakeholders had agreed. That consensus received positive endorsement from Chairman Jaczko when he observed: “...Commissioner Magwood I think made a profound point that there was value in this process here that may be tremendously more important than the actual policy statement was the fact that people got together and started talking about this and realized that across this wide variety of stakeholders, there was pretty good agreement about the kinds of things that we were talking about.”
Chairman Jaczko also weighed in on the values-behaviors contrast, coming down firmly on the inclusion of values and offering the following justification:
“...not all entities with a good safety culture will have necessarily the right values…”
Respectfully, we believe at a minimum this will further confuse the NRC’s policy on safety culture and in all likelihood places emphasis in exactly the wrong place. Is the Chairman agreeing that all that matters is what people do? Or is he suggesting that the NRC would find fault with a licensee that was acting consistent with safety but did not manifest the “right” values? And how would the NRC reach such a finding? More fundamentally, isn’t Commissioner Apostolakis correct in his blunt statement - that we [NRC] don’t care why they [licensees] do it?
“...we really care about what people do and maybe not why they do it….”
Commissioner Apostolakis was amplifying his discomfort with the inclusion of values along with behaviors in the policy as values are inherently fuzzy, not measurable, and may or may not be a prerequisite to the right behaviors. Perhaps most of all, he believed omitting the reference to core values would not detract from the definition of safety culture.
Earlier in the meeting Commissioner Apostolakis had tried to draw out the staff on whether the definition of safety culture needed values in addition to behaviors [at time 2:34:58], and would it be a fatal flaw to omit “core values”. The staff response was illuminating. The justification offered for retaining values was “stakeholder consensus”, and extensive outreach efforts that supported inclusion. (But why was it so important to stakeholders?) The staff went on to clarify: “culture does not lend itself to be inspectable”, but “having values with behaviors is what culture is all about”. Frankly we’re not sure what that means, but we do know that safety culture behaviors are inspectable because they are observable and measurable.
Posted by Bob Cudlin
Labels: NRC, Safety Culture, SC Policy Statement
Monday, February 7, 2011
More Hope
Our prior post highlighted a comment early in the January 24, 2011 Commission meeting to review the proposed policy statement on nuclear safety culture.
In the context of her advocacy for regulations in addition to a policy statement, attorney Billie Garde stated she “hoped” that proceeding with just a policy statement was the right decision. We thought her warning of the fallout from a possible future nuclear event would get some attention. It did, at least with Commissioner Svinicki, who sought some clarification of Garde’s concern. Just prior to this clip, Svinicki had observed that, in her mind, a policy statement can’t supplant an appropriate regulatory framework in terms of compelling certain behaviors. No matter what you think about the appropriateness of a policy statement versus other regulatory actions, Garde is certainly correct that the question will be asked in the future: Did the NRC do enough?
Posted by Bob Cudlin
Labels: NRC, Safety Culture
Friday, February 4, 2011
“I Hope For All Our Sakes This is Right”
On January 24, 2011 the NRC Commissioners met to review the proposed policy statement on nuclear safety culture developed by the NRC staff. This most recent effort was chartered by the Commission more than three years ago and represents the next step in the process of publishing the proposed statement for public comment.
“25 years is long enough to build a policy statement…” for nuclear safety culture. This observation was made by Billie Garde* in her opening remarks to the Commissioners, her timeline referring to the Chernobyl and space shuttle Challenger accidents of 1986. She also emphasized that the focus now needed to shift to implementation of the policy statement. She maintained her position that a policy statement alone would not be sufficient and that regulation would be necessary to assure consistent and reliable implementation.
In that regard she laid claim to one of the more disconcerting observations made at the meeting, the gist of which can be summed up as, “I hope for all our sakes this is right…”
Here’s the video clip with the exchange between Garde and Commissioner Apostolakis.
We will be following up with additional posts with highlights from the Commission session.
Posted by Bob Cudlin
Labels: Chernobyl, NRC, Safety Culture