Showing posts with label Regulatory Capture. Show all posts

Sunday, December 20, 2015

Fukushima and Volkswagen: Systemic Similarities and Observations for the U.S. Nuclear Industry

Fukushima
VW Logo (Source: Wikipedia)
Recent New York Times articles* have described the activities, culture and context of Volkswagen, currently mired in scandal.  The series inspired a Yogi Berra moment: “It’s deja vu all over again.”  Let’s look at some of the circumstances that affected Fukushima and Volkswagen and see if they give us any additional insights into the risk profile of the U.S. commercial nuclear industry.

An Accommodating Regulator

The Japanese nuclear regulator did not provide effective oversight of Tokyo Electric Power Co.  One aspect of this was TEPCO’s relative power over the regulator because of TEPCO’s political influence at the national level.  This was a case of complete regulatory capture.

The German auto regulator doesn’t provide effective oversight either.  “[T]he regulatory agency for motor vehicles in Germany is deliberately starved for resources by political leaders eager to protect the country’s powerful automakers, . . .” (NYT 12-9-15)  This looks more like regulatory impotence than capture but the outcome is the same.

In the U.S., critics have accused the NRC of being captured by industry.  We disagree but have noted that the regulator and licensees working together over long periods of time, even across the table, can lead to familiarity, common language and indiscernible mutual adjustments. 

Deference to Senior Managers

Traditionally in Japan, people in senior positions are treated as if they have the right answers, no matter what the facts facing a lower-ranking employee might suggest.  Members of society go along to get along.  As we said in an Aug. 7, 2014 post, “harmony was so valued that no one complained that Fukushima site protection was clearly inadequate and essential emergency equipment was exposed to grave hazards.” 

The Volkswagen culture was different but had the same effect.  The CEO managed through fear.  At VW, “subordinates were fearful of contradicting their superiors and were afraid to admit failure.”  A former CEO “was known for publicly dressing down subordinates . . .”  (NYT 12-13-15)

In the U.S., INPO’s single-minded focus on the unrivaled importance of leadership can, if practiced by the wrong kind of people, lead to the suppression of dissent, of facts that contradict the party line and of the questioning attitude that is vital to maintaining safe facilities.

Companies Not Responsible to All Legitimate Stakeholders

In the Fukushima plant design, TEPCO gave short shrift to local communities, their citizens, governments and first responders, ultimately exposing them to profound hazards.  TEPCO’s behavior also impacted the international nuclear power community, where any significant incident at one operator is a problem for them all.

Volkswagen’s isolation from public responsibilities is facilitated by its structure.  Only 12% of the company is held by independent shareholders.  As at other large German companies, labor unions hold half the seats on VW’s board.  Two more seats are held by the regional government (a minority owner), which in practice cannot vote against labor.  So the union effectively controls the board. (NYT 12-13-15)

We have long complained about the obsessive secrecy practiced by the U.S. nuclear industry, particularly in its relations with its self-regulator, INPO.  It is not a recipe for building trust and confidence with the public, an affected and legitimate stakeholder.

Our Perspective

The TEPCO safety culture (SC) was unacceptably weak.  And its management culture simply ignored inconvenient facts.

Volkswagen’s culture has valued technical competence and ambition, and apparently has lower regard for regulations (esp. foreign, i.e., U.S. ones) and other rules of the game.

We are not saying the gross problems of either company infect the U.S. nuclear industry.  But the potential is there.  The industry has experienced events that suggest the presence of human, technical and systemic shortcomings.  For a general illustration of inadequate management effectiveness, look at Entergy’s series of SC problems.  For a specific case, remember Davis-Besse, where favoring production over safety took the plant to the brink of a significant failure.  Caveat nuclear.


*  See, for example: J. Ewing and G. Bowley, “The Engineering of Volkswagen’s Aggressive Ambition,” New York Times (Dec. 13, 2015).  J. Ewing, “Volkswagen Terms One Emissions Problem Smaller Than Expected,” New York Times (Dec. 9, 2015).

Monday, September 21, 2015

Notes on Regulatory Capture

NRC Public Meeting
A couple of recent local news items discuss a too-cozy relationship between regulators and the supposedly regulated, to the detriment of ratepayers and ordinary citizens.  Neither is nuclear-related but they may give us some ideas on how regulatory capture might (or does) manifest in the nuclear industry.

PG&E and the CPUC

First up is an article* about Pacific Gas and Electric Co. (PG&E) and the California Public Utilities Commission (CPUC).  PG&E is responsible for the deadly 2010 gas main explosion in San Bruno, CA.  It was later revealed that PG&E was involved in private, i.e., secret, lobbying to get the CPUC judge it wanted to handle the case.  As an investigation later concluded, such ex parte discussions give the utilities an advantage over other participants in the regulatory process.

The article concentrates on remedial legislation working its way through the system.  One bill would close the loophole that allows secret meetings between the CPUC and a regulated entity under certain conditions.  Another would create an independent inspector general for the agency.

Berkeley Zoning Adjustment Board

This editorial** focuses on a city zoning board that is stuffed with members whose background is in the development industry.  It quotes at length local resident James McFadden who has some excellent observations about the nature of regulatory capture in this situation.

“Although many people are quick to assume that capture means corruption, they really are different things.

“Capture is more of an aligning of economic world views, not necessarily to any monetary advantage, often just to make one's job easier or more pleasant in dealing with people on a day to day basis . . . .

“Captured individuals . . . don't see their behavior as incorrect.  They have forgotten that their role is to provide oversight and protection to the public . . . Their public meetings evolve into patronizing facades of democracy.

“. . . For the most part, capture is about creating a pleasant working environment with those in industry who they deal with on a daily basis.  It is a slow and insidious process that strikes at the heart of human psychology which allows us to work in groups. . . . When we-the-public show up and complain, we become the opponent to be ignored.

“. . . The [public] meeting becomes a dance of false empowerment where getting through the meeting on time is more important than focusing on important issues or input from the public.”

Our Perspective

Do you see any of the above behavior in the nuclear industry?  Here’s a clue to get you started: the mental model for all federally regulated or controlled activities, viz., the infamous “iron triangle” of special interests, Congress and federal bureaucrats.  In the nuclear space, utility lobbyists and industry organizations encourage/pressure Congress for favorable treatment in exchange for support at election time.  Congress leans on the NRC when job losses are threatened because of a lengthy plant shutdown or costly “over regulation.”  The NRC listens to or cooperates with industry “experts” when it is considering new policies, regulations or interpretations.  We believe the iron triangle is alive and well in the nuclear industry but is nowhere near as scurrilous as, say, the welfare system.

(Now the anti-nukes also lobby Congress and certain members of Congress are relentless critics of the NRC.  Do the scales balance?  And where does the clash of lobbying titans leave Joe Citizen?)

Expanding on one side of the triangle, nuclear utilities make efforts to build organizational, professional and personal relationships with the NRC because it’s in their direct economic interest to do so.  In the other direction, don’t NRC personnel try to get along with utility people they see on a regular basis?  Who wants to alienate everybody all the time?  The NRC tries to avoid being too cozy with the utilities but they can’t completely avoid it.  They are in the same business and speak the same language.  However, it’s far from scandalous, like the relationship between the former Minerals Management Service and the offshore drilling industry.  And there is no overactive revolving door between the NRC and industry.

What about outsiders who try to influence policy?  At the top, gadflies who address agency-wide issues or work with HQ personnel may eventually get a seat at the table.  But in the field, Jane Citizen making a statement at a meeting concerning the local plant probably doesn’t have as much leverage.  Consider how difficult it is for the average whistleblower to have an impact.

The Wikipedia entry on regulatory capture cites Princeton professor Frank von Hippel, Barack Obama, Joe Biden, Greenpeace, the Union of Concerned Scientists and the Associated Press to support the position that the NRC has been “captured.”  Has the NRC been too accommodating to the industry?  You be the judge.

There is an old saying: “Familiarity breeds contempt.”  That’s true in some cases.  In other situations, familiarity breeds—greater familiarity.


*  J. Van Derbeken, “CPUC reform bills on governor’s desk,” San Francisco Chronicle (Sept. 15, 2015).  Questionable conduct flowed both ways.  It also came to light that the then-President of the CPUC appeared to offer his support for PG&E’s (and other utilities’) positions on regulatory cases in return for their contributions to his favorite political causes.  That’s called influence peddling.

**  B. O'Malley, “Berkeley's Zoning Board Slouches Toward Birthing Its Monster,” Berkeley Daily Planet (Sept. 13, 2015).  The Daily Planet is an online progressive (lefty) newspaper in Berkeley, CA.

Wednesday, July 30, 2014

National Research Council: Safety Culture Lessons Learned from Fukushima

The National Research Council has released a report* on lessons learned from the Fukushima nuclear accident that may be applicable to U.S. nuclear plants.  The report begins with a recitation of various Fukushima aspects including site history, BWR technology, and plant failure causes and consequences.  Lessons learned were identified in the areas of Plant Operations and Safety Regulations, Offsite Emergency Management, and Nuclear Safety Culture (SC).  This review focuses on the SC aspects of the report.

Spoiler alert: the report reflects the work of a 24-person committee, with the draft reviewed by two dozen other individuals.**  We suggest you adjust your expectations accordingly.

The SC chapter of the report provides some background on SC and echoes the by-now familiar cultural issues at both Tokyo Electric Power Company (TEPCO) and Japan’s Nuclear and Industrial Safety Agency.  Moving to the U.S., the committee summarizes the current situation in a finding: “The U.S. nuclear industry, acting through the Institute of Nuclear Power Operations, has voluntarily established nuclear safety culture programs and mechanisms for evaluating their implementation at nuclear plants. The U.S. Nuclear Regulatory Commission has published a policy statement on nuclear safety culture, but that statement does not contain implementation steps or specific requirements for industry adoption.” (p. 7-8)  This is accurate as far as it goes.

After additional discussion of the U.S. nuclear milieu, the chapter concludes with two recommendations, reproduced below along with associated commentary.

An Effective, Independent Regulator

“RECOMMENDATION 7.2A: The U.S. Nuclear Regulatory Commission and the U.S. nuclear power industry must maintain and continuously monitor a strong nuclear safety culture in all of their safety-related activities. Additionally, the leadership of the U.S. Nuclear Regulatory Commission must maintain the independence of the regulator. The agency must ensure that outside influences do not compromise its nuclear safety culture and/or hinder its discussions with and disclosures to the public about safety-related matters.” (pp. S-9, 7-17)

In the lead-up to this recommendation, there was some lack of unanimity on whether the NRC is sufficiently independent and whether some degree of regulatory capture has occurred.  The debate covered industry involvement in rule-making, Davis-Besse and other examples.

We saw one quote worth repeating here: “The president and Senate of the United States also play important roles in helping to maintain the USNRC’s regulatory independence by nominating and appointing highly qualified agency leaders (i.e., commissioners) and working to ensure that the agency is free from undue influences.” (pp. 7-14/15)  We’ll leave it to the reader to determine if the executive and legislative branches met that standard with the previous NRC chairman and the two current commissioner nominees, both lawyers—one an NRC lifer and the other a former staffer on the Hill.

Snarky comment notwithstanding, the first recommendation is a motherhood statement and borderline tautology (who can envision the effective negation of any of the three imperative statements?).  More importantly, it appears only remotely related to the concept of SC; even at its simplest, SC consists of values and artifacts, and there’s not much of either in the recommendation.

Increased Industry Transparency

“RECOMMENDATION 7.2B: The U.S. nuclear industry and the U.S. Nuclear Regulatory Commission should examine opportunities to increase the transparency of and communication about their efforts to assess and improve their nuclear safety cultures.” (pp. S-9, 7-17)

The discussion includes a big kiss for INPO.  “INPO has taken the lead for promoting a strong nuclear safety culture in the U.S. nuclear industry through training and evaluation programs.” (p. 7-10)  The praise for INPO continues in an attachment to the SC chapter but it eventually gets to the elephant in the room: “The results of INPO’s inspection program are shared among INPO membership, but such information is not made available to the public. . . . Releases of summaries of these inspections by management to the public would help increase transparency.” (p. 7-21)

The committee recognizes that implementing the recommendation “would require that the industry and regulators disclose additional information to the public about their efforts to assess safety culture effectiveness, remediate deficiencies, and implement improvements.” (p. 7-17)

At least transparency is a cultural attribute.  We have long opined that the nuclear industry’s penchant for secrecy is a major contributor to the industry being its own worst enemy in the court of public opinion. 

Our Perspective

This report looks like what it is: a crowd-sourced effort by a focus group of academics using the National Academy of Sciences’ established bureaucratic processes.  The report is 367 pages long, with over 350 references and a bunch of footnotes.  The committee’s mental model of SC focuses on organizational processes that influence SC. (p. 7-1)  I think it's fair to infer that their notion of improvement is to revise the rules that govern the processes, then maximize compliant behavior.  Because of the committee’s limited mental model, restricted mission*** and the real or perceived need to document every factoid, the report ultimately provides no new insights into how U.S. nuclear plants might actually realize stronger SC.


*  National Research Council Committee on Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants, “Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants,” Prepublication Copy.  Downloaded July 26, 2014.  The National Research Council is part of the National Academy of Sciences (NAS).  Thanks to Bill Mullins for bringing this report to our attention.

**  The technical advisor to the committee was Najmedin Meshkati from the University of Southern California.  If that name rings a bell with Safetymatters readers, it may be because he and his student, Airi Ryu, published an op-ed last March contrasting the culture of Tohoku Electric with the culture of TEPCO.  We posted our review of the op-ed here.

***  The committee was tasked to consider causes of the Fukushima accident, conclusions from previous NAS studies and lessons that can be learned to improve nuclear plant safety in certain specified areas.  The committee was directed to not make any policy recommendations that involved non-technical value judgments. (p. S-10)

Wednesday, March 19, 2014

Safety Culture at Tohoku Electric vs. Tokyo Electric Power Co. (TEPCO)

Fukushima No. 1 (Daiichi)
An op-ed* in the Japan Times asserts that the root cause of the Fukushima No. 1 (Daiichi) plant’s failures following the March 11, 2011 earthquake and tsunami was TEPCO’s weak corporate safety culture (SC).  This post summarizes the op-ed then provides some background information and our perspective.

Op-Ed Summary 

According to the authors, Tohoku Electric had a stronger SC than TEPCO.  Tohoku had a senior manager who strongly advocated safety, company personnel participated in seminars and panel discussions about earthquake and tsunami disaster prevention, and the company had strict disaster response protocols in which all workers were trained.  Although their Onagawa plant was closer to the March 11, 2011 quake epicenter and experienced a higher tsunami, it managed to shut down safely.

SC-related initiatives like Tohoku’s were not part of TEPCO’s culture.  Fukushima No. 1’s problems date back to its original siting and early construction.  TEPCO removed 25 meters from the 35-meter natural seawall of the plant site and built its reactor buildings at a lower elevation of 10 meters (compared to 14.7 m for Onagawa).  Over the plant’s life, as research showed that tsunami levels had been underestimated, TEPCO “resorted to delaying tactics, such as presenting alternative scientific studies and lobbying”** rather than implementing countermeasures.

Background and Our Perspective

The op-ed is a condensed version of the authors’ longer paper***, which was adapted from a research paper for an engineering class, presumably written by Ms. Ryu.  The op-ed is basically a student paper based on public materials.  You should read the longer paper, review the references and judge for yourself if the authors have offered conclusions that go beyond the data they present.

I suggest you pay particular attention to the figure that supposedly compares Tohoku and TEPCO using INPO’s ten healthy nuclear SC traits.  Not surprisingly, TEPCO doesn’t fare very well, but note the ratings were based on “the author’s personal interpretations and assumptions.” (p. 26)

Also note that the authors do not mention Fukushima No. 2 (Daini), a four-unit TEPCO plant about 15 km south of Fukushima No. 1.  Fukushima No. 2 also experienced damage and significant challenges after being hit by a 9m tsunami but managed to reach shutdown by March 18, 2011.  What could be inferred from that experience?  Same corporate culture but better luck?

Bottom line, by now it’s generally agreed that TEPCO SC was unacceptably weak so the authors plow no new ground in that area.  However, their description of Tohoku Electric’s behavior is illuminating and useful.


*  A. Ryu and N. Meshkati, “Culture of safety can make or break nuclear power plants,” Japan Times (Mar. 14, 2014).  Retrieved Mar. 19, 2014.

**  Quoted in the op-ed but taken from “The official report of the Fukushima Nuclear Accident Independent Investigation Commission [NAIIC] Executive Summary” (The National Diet of Japan, 2012), p. 28.  The NAIIC report has a longer Fukushima root cause explanation than the op-ed, viz., “the root causes were the organizational and regulatory systems that supported faulty rationales for decisions and actions, . . .” (p. 16) and “The underlying issue is the social structure that results in “regulatory capture,” and the organizational, institutional, and legal framework that allows individuals to justify their own actions, hide them when inconvenient, and leave no records in order to avoid responsibility.” (p. 21)  IMHO, if this were boiled down, there wouldn’t be much SC left in the bottom of the pot.

***  A. Ryu and N. Meshkati, “Why You Haven’t Heard About Onagawa Nuclear Power Station after the Earthquake and Tsunami of March 11, 2011” (Rev. Feb. 26, 2014).

Tuesday, June 25, 2013

Regulatory Creep

The NRC's assessment of safety culture (SC) is an example of regulatory creep.  It began with the requirement that licensees determine whether specific safety-related performance problems or cross-cutting issues were caused, in whole or in part, by SC deficiencies.  Then the 2011 SC Policy Statement attempted to put a benign face on NRC intrusiveness because a policy statement is not a regulation.  However, licensees are “expected” to comply with the policy statement's goals and guidance; the NRC “expectations” become de facto regulations.

We have griped about this many times.*  But why does regulatory creep occur?  Is it inevitable?  We'll start with some background then look at some causes.

In the U.S., Congress passes and the President approves major legislative acts.  These are top-level policy statements characterized by lofty goals and guiding principles.  Establishing the detailed rules (which have the force of law) for implementing these policies falls to government bureaucrats in regulatory agencies.  There are upwards of 50 such agencies in the federal government, some part of executive branch departments (headed by a Cabinet level officer), others functioning independently, i.e., reporting to Congress with the President appointing, subject to Congressional approval, their governing boards (commissioners).  The NRC is one of the independent federal regulatory agencies.

Regulatory rules are proposed and approved following a specified, public process.  But once they are in place, multiple forces can lead to the promulgation of new rules or an expanded interpretation or application of existing rules (creep).  The forces for change can arise inside or outside the agency.  Internal forces include the perceived need to address new real or imagined issues, a fear of losing control as the regulated entities adapt and evolve, or a generalized drive to expand regulatory authority.  Even bureaucrats can have a need for more power or a larger budget.

External sources include interest groups (and their lobbyists), members of Congress who serve on oversight committees, highly motivated members of the public or the agency's own commissioners.  We classify commissioners as external because they are not really part of an agency; they are political appointees of the President, who has a policy agenda.  In addition, a commissioner may owe a debt or allegiance to a Congressional sponsor who promoted the commissioner's appointment.

Given all the internal and external forces, it appears that new rules and regulatory creep are inevitable absent the complete capture of the agency by its nominally regulated entities.  Creep means a shifting boundary of what is required, what is allowed, what is tolerated and what will be punished—without a formal rule-making.  The impact of creep on the regulated entities is clear: increased uncertainty and cost.  They may not care for increased regulatory intrusiveness but they know the penalty may be high if they fail to comply.  When regulated entities perceive creep, they must make a business decision: comply or fight.  They often choose to comply simply because if they fight and lose, they risk even more punitive formal regulation and higher costs.  If they fight and win, they risk alienating career bureaucrats who will then wait for an opportunity to exact retribution.  A classic lose-lose situation.
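The comply-or-fight calculus above can be sketched in a few lines of Python.  The payoff numbers below are entirely hypothetical, chosen only to illustrate the logic:

```python
# Hypothetical payoffs only -- illustrating why regulated entities
# usually comply when they perceive regulatory creep.
def expected_cost(p_win, cost_comply, cost_fight_win, cost_fight_lose):
    """Compare the cost of quiet compliance with the expected cost of
    fighting, where even a 'win' carries a retribution cost and a loss
    brings punitive formal regulation."""
    fight = p_win * cost_fight_win + (1 - p_win) * cost_fight_lose
    return {"comply": cost_comply, "fight": fight}

# Even with even odds of winning, fighting rarely pencils out once
# both downside branches are priced in.
costs = expected_cost(p_win=0.5, cost_comply=10,
                      cost_fight_win=8, cost_fight_lose=30)
print(costs)  # fight = 0.5*8 + 0.5*30 = 19 > 10, so comply
```

Under these assumed numbers, compliance dominates unless the odds of winning are high and both fight outcomes are cheap, which matches the lose-lose framing above.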

Our perspective

Years ago I took a poli-sci seminar where the professor said public policy forces could be boiled down to: Who's mad?  How mad?  And who's glad?  How glad?  I sometimes refer to that simple mental model when I watch the ongoing Kabuki between the regulator, its regulated entities and many, many political actors.  Regulatory creep is one of the outcomes of such dynamics.


*  For related posts, click the "Regulation of Safety Culture" label.

Regulatory creep is not confined to the NRC.  The motivation for this post was an item forwarded by a reader on reported Consumer Product Safety Commission (CPSC) activity.  Commenting on a recent settlement, a CPSC Commissioner “expressed concern that . . . the CPSC had insisted on a comprehensive compliance program absent evidence of widespread noncompliance and that “the compliance program language in [the] settlement is another step toward just such a de facto rule.””  C.G. Thompson, “Mandated Compliance Programs as the New Normal?” American Conference Institute blog.  Retrieved June 6, 2013.

Wednesday, December 5, 2012

Drift Into Failure by Sidney Dekker

Sidney Dekker's Drift Into Failure* is a noteworthy effort to provide new insights into how accidents and other bad outcomes occur in large organizations. He begins by describing two competing world views, the essentially mechanical view of the world spawned by Newton and Descartes (among others), and a view based on complexity in socio-technical organizations and a systems approach. He shows how each world view biases the search for the “truth” behind how accidents and incidents occur.

Newtonian-Cartesian (N-C) Vision

Isaac Newton and René Descartes were leading thinkers during the dawn of the Age of Reason. Newton used the language of mathematics to describe the world while Descartes relied on the inner process of reason. Both believed there was a single reality that could be investigated, understood and explained through careful analysis and thought—complete knowledge was possible if investigators looked long and hard enough. The assumptions and rules that started with them, and were extended by others over time, have been passed on and most of us accept them, uncritically, as common sense, the most effective way to look at the world.

The N-C world is ruled by invariant cause-and-effect; it is, in fact, a machine. If something bad happens, then there was a unique cause or set of causes. Investigators search for these broken components, which could be physical or human. It is assumed that a clear line exists between the broken part(s) and the overall behavior of the system. The explicit assumption of determinism leads to an implicit assumption of time reversibility—because system performance can be predicted from time A if we know the starting conditions and the functional relationships of all components, then we can start from a later time B (the bad outcome) and work back to the true causes. (p. 84) Root cause analysis and criminal investigations are steeped in this world view.

In this view, decision makers are expected to be rational people who “make decisions by systematically and consciously weighing all possible outcomes along all relevant criteria.” (p. 3) Bad outcomes are caused by incompetent or, worse, corrupt decision makers. Fixes include more communications, training, procedures, supervision, exhortations to try harder and criminal charges.

Dekker credits Newton et al. for giving man the wherewithal to probe Nature's secrets and build amazing machines. However, the Newtonian-Cartesian vision is not the only way to view the world, especially the world of complex, socio-technical systems. For that a new model, with different concepts and operating principles, is required.

The Complex System

Characteristics

The sheer number of parts does not make a system complex, only complicated. A truly complex system is open (it interacts with its environment), has components that act locally and don't know the full effects of their actions, is constantly making decisions to maintain performance and adapt to changing circumstances, and has non-linear interactions (small events can cause large results) because of multipliers and feedback loops. Complexity is a result of the ever-changing relationships between components. (pp. 138-144)

Adding to the myriad information confronting a manager or observer, system performance is often optimized at the edge of chaos, where competitors are perpetually vying for relative advantage at an affordable cost.** The system is constantly balancing its efforts between exploration (which will definitely incur costs but may lead to new advantages) and exploitation (which reaps benefits of current advantages but will likely dissipate over time). (pp. 164-165)

The most important feature of a complex system is that it adapts to its environment over time in order to survive. And its environment is characterized by resource scarcity and competition. There is continuous pressure to maintain production and increase efficiency (and their visible artifacts: output, costs, profits, market share, etc.) and less visible outputs, e.g., safety, will receive less attention. After all, “Though safety is a (stated) priority, operational systems do not exist to be safe. They exist to provide a service or product . . . .” (p. 99) And the cumulative effect of multiple adaptive decisions can be an erosion of safety margins and a changed response of the entire system. Such responses may be beneficial or harmful—a drift into failure.

Drift by a complex system exhibits several characteristics. First, as mentioned above, it is driven by environmental factors. Second, drift occurs in small steps so changes can be hardly noticed, and even applauded if they result in local performance improvement; “. . . successful outcomes keep giving the impression that risk is under control” (p. 106) as a series of small decisions whittle away at safety margins. Third, these complex systems contain unruly technology (think deepwater drilling) where uncertainties exist about how the technology may be ultimately deployed and how it may fail. Fourth, there is significant interaction with a key environmental player, the regulator, and regulatory capture can occur, resulting in toothless oversight.
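The second characteristic, drift in small and individually applauded steps, can be caricatured in a short simulation. The model and its numbers are our own toy sketch, not anything from Dekker's book:

```python
import random

def simulate_drift(initial_margin=10.0, step=0.5, hazard_level=2.0,
                   periods=40, seed=7):
    """Toy model of drift: each period a small, locally rational decision
    shaves a bit of safety margin.  No single cut looks dangerous, and
    the absence of bad outcomes makes each one look vindicated."""
    random.seed(seed)
    margin = initial_margin
    trace = []
    for t in range(periods):
        margin -= step * random.uniform(0.5, 1.5)  # one small "win"
        trace.append(margin)
        if margin < hazard_level:
            return t + 1, trace  # periods until the margin is gone
    return None, trace

periods_to_failure, trace = simulate_drift()
print("margin exhausted after period", periods_to_failure)
```

Each decrement is tiny and locally rewarded; only the cumulative trace reveals the eroding margin, which is exactly the point about small steps being "hardly noticed, and even applauded."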

“Drifting into failure is not so much about breakdowns or malfunctioning of components, as it is about an organization not adapting effectively to cope with the complexity of its own structure and environment.” (p. 121) Drift and occasionally accidents occur because of ordinary system functioning, normal people going about their regular activities making ordinary decisions “against a background of uncertain technology and imperfect information.” Accidents, like safety, can be viewed as an emergent system property, i.e., they are the result of system relationships but cannot be predicted by examining any particular system component.

Managers' roles

Managers should not try to transform complex organizations into merely complicated ones, even if it's possible. Complexity is necessary for long-term survival as it maximizes organizational adaptability. The question is how to manage in a complex system. One key is increasing the diversity of personnel in the organization. More diversity means less group think and more creativity and greater capacity for adaptation. In practice, this means validation of minority opinions and encouragement of dissent, reflecting on the small decisions as they are made, stopping to ponder why some technical feature or process is not working exactly as expected and creating slack to reduce the chances of small events snowballing into large failures. With proper guidance, organizations can drift their way to success.

Accountability

Amoral and criminal behavior certainly exist in large organizations but bad outcomes can also result from normal system functioning. That's why the search for culprits (bad actors or broken parts) may not always be appropriate or adequate. This is a point Dekker has explored before, in Just Culture (briefly reviewed here) where he suggests using accountability as a means to understand the system-based contributors to failure and resolve those contributors in a manner that will avoid recurrence.

Application to Nuclear Safety Culture

A commercial nuclear power plant or fleet is probably not a complete complex system. It interacts with environmental factors but in limited ways; it's certainly not directly exposed to the Wild West competition of say, the cell phone industry. Group think and normalization of deviance*** is a constant threat. The technology is reasonably well-understood but changes, e.g., uprates based on more software-intensive instrumentation and control, may be invisibly sanding away safety margin. Both the industry and the regulator would deny regulatory capture has occurred but an outside observer may think the relationship is a little too cozy. Overall, the fit is sufficiently good that students of safety culture should pay close attention to Dekker's observations.

In contrast, the Hanford Waste Treatment Plant (Vit Plant) is almost certainly a complex system and this book should be required reading for all managers in that program.

Conclusion

Drift Into Failure is not a quick read. Dekker spends a lot of time developing his theory, then circling back to further explain it or emphasize individual pieces. He reviews incidents (airplane crashes, a medical error resulting in patient death, software problems, public water supply contamination) and descriptions of organization evolution (NASA, international drug smuggling, “conflict minerals” in Africa, drilling for oil, terrorist tactics, Enron) to illustrate how his approach results in broader and arguably more meaningful insights than the reports of official investigations. Standing on the shoulders of others, especially Diane Vaughan, Dekker gives us a rich model for what might be called the “banality of normalization of deviance.” 


* S. Dekker, Drift Into Failure: From Hunting Broken Components to Understanding Complex Systems (Burlington VT: Ashgate 2011).

** See our Sept. 4, 2012 post on Cynefin for another description of how the decisions an organization faces can suddenly slip from the Simple space to the Chaotic space.

*** We have posted many times about normalization of deviance, the corrosive organizational process by which yesterday's “unacceptable” becomes today's “good enough.”