
Tuesday, June 9, 2015

Training....Yet Again

U.S. Navy SEALS in Training
We have beaten the drum on the value of improved and innovative training techniques for improving safety management performance for some time, really since the inception of this blog, where our paper “Practicing Nuclear Safety Management”* was one of the seminal perspectives we wanted to bring to our readers.  We continue to encounter knowledgeable sources that advocate practice-based approaches and so continue to bring them to our readers’ attention.  The latest is a Harvard Business Review article, “How the Navy SEALS Train for Leadership Excellence,”** that calls attention to, and distinguishes, “training” as an essential dimension of organizational learning.  The author, Michael Schrage,*** a research fellow at MIT, reached out to Brandon Webb, a former SEAL who transformed SEAL training.  Schrage contends that training, as opposed to just education or knowledge, is necessary to promote deep understanding of a business, market or process.  Training in this sense refers to actually performing and practicing necessary skills.  It is the key to achieving high levels of performance in complex environments.

One of Webb’s themes that really struck a chord was: “successful training must be dynamic, open and innovative…. ‘It’s every teacher’s job to be rigorous about constantly being open to new ideas and innovation’, Webb asserts.”  It is very hard to think of much of the nuclear industry’s training on safety culture and related issues as meeting these criteria.  Even the auto industry, in the wake of the ignition switch-related accidents, has recently stepped up to require decision simulations to verify the effectiveness of corrective actions (see our May 22, 2014 post).

In particular, the reluctance of the nuclear industry and its regulator to address the presence and impact of goal conflicts on safety continues to perplex us and, we hope, many others in the industry.  It was on the mind of Carlo Rusconi more than a year ago when he observed: “Some of these conflicts originate high in the organization and are not really amenable to training per se” (see our Jan. 9, 2014 post).  However, a certain type of training could be very effective in neutralizing such conflicts: practicing making safety decisions against realistic, fact-based scenarios.  As we have advocated on many occasions, this process would actualize safety culture principles in the context of real operational situations.  For the reasons cited by Rusconi, it builds teamwork and develops shared viewpoints.  If, as we have also advocated, both operational managers and senior managers participated in such training, senior management would be on the record for its assessment of the scenarios, including how it weighed, incorporated and assessed conflicting goals in its decisions.  This could have the salutary effect of empowering lower-level managers to make tough calls where assuring safety has real impacts on other organizational priorities.  Perhaps senior management would prefer to simply preach goals and principles, and leave the tough balancing that is necessary to implement the goals to its management chain.  If decisions become shaded in the “wrong” direction but there are no bad outcomes, senior management looks good.  But if there is a bad outcome, lower-level managers can be blamed, more “training” prescribed, and senior management can reiterate its “safety is the first priority” mantra.


*  In the paper we quote from a Wall Street Journal article (Oct. 22, 2005, p. 10) on Dietrich Dörner’s book The Logic of Failure: “Most experts made things worse.  Those managers who did well gathered information before acting, thought in terms of complex-systems interactions instead of simple linear cause and effect, reviewed their progress, looked for unanticipated consequences, and corrected course often.  Those who did badly relied on a fixed theoretical approach, did not correct course and blamed others when things went wrong.”  For a comprehensive review of the practice of nuclear safety, see our paper “Practicing Nuclear Safety Management,” March 2008.

**  M. Schrage, "How the Navy SEALS Train for Leadership Excellence," Harvard Business Review (May 28, 2015).

***  Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, is the author of the book Serious Play among others.  Serious Play refers to experiments with models, prototypes, and simulations.

Thursday, January 9, 2014

Safety Culture Training Labs

Not a SC Training Lab
This post highlights a paper* Carlo Rusconi presented at the American Nuclear Society meeting last November.  He proposes the use of “training labs” to develop improved safety culture (SC) through the use of team-building exercises, e.g., role play, and table-top simulations.  Team building increases (a) participants' awareness of group dynamics, e.g., feedback loops, and how a group develops shared beliefs and (b) sensitivity to the viewpoints of others, viewpoints that may differ greatly based on individual experience and expectations.  The simulations pose evolving scenarios that participants must analyze and develop a team approach for addressing.  A key rationale for this type of training is “team interactions, if properly developed and trained, have the capacity to counter-balance individual errors.” (p. 2155)

Rusconi's recognition of goal conflict in organizations, the weakness of traditional methods (e.g., PRA) for anticipating human reactions to emergent issues, the need to recognize different perspectives on the same problem and the value of simulation in training are all familiar themes here at Safetymatters.

Our Perspective

Rusconi's work also reminds us how seldom new approaches for addressing SC concepts, issues, training and management appear in the nuclear industry.  Per Rusconi, “One of the most common causes of incidents and accidents in the industrial sector is the presence of hidden or clear conflicts in the organization. These conflicts can be horizontal, in departments or in working teams, or vertical, between managers and workers.” (p. 2156)  However, we see scant evidence of the willingness of the nuclear industry to acknowledge and address the influence of goal conflicts.

Rusconi focuses on training to help recognize and overcome conflicts.  This is good but one needs to be careful to clearly identify how training would do this and its limitations. For example, if promotion is impacted by raising safety issues or advocating conservative responses, is training going to be an effective remedy?  The truth is there are some conflicts which are implicit (but very real) and hard to mitigate. Such conflicts can arise from corporate goals, resource allocation policies and performance-based executive compensation schemes.  Some of these conflicts originate high in the organization and are not really amenable to training per se.

Both Rusconi's approach and our NuclearSafetySim tool attempt to stimulate discussion of conflicts and develop rules for resolving them.  Creating a measurable framework tied to the actual decisions made by the organization is critical to dealing with conflicts.  Part of this is creating measures for how well decisions embody SC, as done in NuclearSafetySim.

Perhaps this means the only real answer for high risk industries is to have agreement on standards for safety decisions.  This doesn't mean some highly regimented PRA-type approach.  It is more of a peer type process incorporating scales for safety significance, decision quality, etc.  This should be the focus of the site safety review committees and third-party review teams.  And the process should look at samples of all decisions not just those that result in a problem and wind up in the corrective action program (CAP).
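To make the sampling idea concrete, here is a minimal sketch (our construction, purely illustrative) of drawing decisions for peer review from the full decision log rather than only from the CAP:

```python
# Hypothetical sketch: sample decisions for peer review from the full log,
# not only those that produced a problem and entered the CAP.
import random

def sample_for_review(decision_log: list, k: int = 10, seed: int = 1) -> list:
    """Draw k decisions at random from all logged decisions."""
    rng = random.Random(seed)  # fixed seed so a review batch is reproducible
    return rng.sample(decision_log, min(k, len(decision_log)))
```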

Nuclear managers would probably be very reluctant to embrace this much transparency.  A benign view is they are simply too comfortable believing that the "right" people will do the "right" thing.  A less charitable view is their lack of interest in recognizing goal conflicts and other systemic issues is a way to effectively deny such issues exist.

Instead of interest in bigger-picture “Why?” questions we see continued introspective efforts to refine existing methods, e.g., cause analysis.  At its best, cause analysis and any resultant interventions can prevent the same problem from recurring.  At its worst, cause analysis looks for a bad component to redesign or a “bad apple” to blame, train, oversee and/or discipline.

We hate to start the new year wearing our cranky pants but Dr. Rusconi, ourselves and a cadre of other SC analysts are all advocating some of the same things.  Where is any industry support, dialogue, or interaction?  Are these ideas not robust?  Are there better alternatives?  It is difficult to understand the lack of engagement on big-picture questions by the industry and the regulator.


*  C. Rusconi, “Training labs: a way for improving Safety Culture,” Transactions of the American Nuclear Society, Vol. 109, Washington, D.C., Nov. 10–14, 2013, pp. 2155-57.  This paper continues Dr. Rusconi's earlier work, which we posted about on June 26, 2013.

Monday, October 14, 2013

High Reliability Management by Roe and Schulman

This book* presents a multi-year case study of the California Independent System Operator (CAISO), the government entity created to operate California's electricity grid when the state deregulated its electricity market.  CAISO's travails read like The Perils of Pauline but our primary interest lies in the authors' observations of the different grid management strategies CAISO used under various operating conditions; it is a comprehensive description of contingency management in the real world.  In this post we summarize the authors' management model, discuss the application to nuclear management and opine on the implications for nuclear safety culture.

The High Reliability Management (HRM) Model

The authors call the model they developed High Reliability Management and present it in a 2x2 matrix where the axes are System Volatility and Network Options Variety. (Ch. 3)  System Volatility refers to the magnitude and rate of change of  CAISO's environmental variables including generator and transmission availability, reserves, electricity prices, contracts, the extent to which providers are playing fair or gaming the system, weather, temperature and electricity demand (regional and overall).  Network Options Variety refers to the range of resources and strategies available for meeting demand (basically in real time) given the current inputs. 

System Volatility and Network Options Variety can each be High or Low so there are four possible modes and a distinctive operating management approach for each.  All modes must address CAISO's two missions of matching electricity supply and demand, and protecting the grid.  Operators must manage the system inside an acceptable or tolerable performance bandwidth (invariant output performance is a practical impossibility) in all modes.  Operating conditions are challenging: supply and demand are inherently unstable (p. 34), inadequate supply means some load cannot be served and too much generation can damage the grid. (pp. 27, 142)

High Volatility and High Options mean both generation (supply) and demand are changing quickly and the operators have multiple strategies available for maintaining balance.  Some strategies can be substituted for others.  It is a dynamic but manageable environment.

High Volatility and Low Options mean both generation and demand are changing quickly but the operators have few strategies available for maintaining balance.  They run from pillar to post; it is highly stressful.  Sometimes they have to create ad hoc (undocumented and perhaps untried) approaches using trial and error.  Demand can be satisfied but regulatory limits may be exceeded and the system runs closer to the edge of technical capabilities and operator skills.  It is the most unstable performance mode and untenable because the operators are losing control and one perturbation can amplify into another. (p. 37)

Low Volatility and Low Options mean generation and demand are not changing quickly.  The critical feature here is demand has been reduced by load shedding.  The operators have exhausted all other strategies for maintaining balance.  It is a command-and-control approach, effected by declaring a  Stage 3 grid situation and run using formal rules and procedures.  It is the least desirable domain because one primary mission, to meet all demand, is not being accomplished. 

Low Volatility and High Options is an HRM's preferred mode.  Actual demand follows the forecast, generators are producing as expected, reserves are on hand, and there is no congestion on transmission lines or backup routes are available.  Procedures based on analyzed conditions exist and are used.  There are few, if any, surprises.  Learning can occur but it is incremental, the result of new methods or analysis.  Performance is important and system behavior operates within a narrow bandwidth.  Loss of attention (complacency) is a risk.  Is this starting to sound familiar?  This is the domain of High Reliability Organization (HRO) theory and practice.  Nuclear power operations is an example of an HRO. (pp. 60-62)          
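To make the four modes easy to compare, here is a small lookup sketch of our own construction (the authors present the model as a 2x2 matrix, not as code):

```python
# The HRM 2x2 matrix keyed by (system_volatility, options_variety).
# Mode summaries paraphrase the book; the encoding is our illustration.
HRM_MODES = {
    ("high", "high"): "Dynamic but manageable: multiple substitutable strategies",
    ("high", "low"):  "Most unstable: ad hoc trial and error near the edge of control",
    ("low",  "low"):  "Command-and-control (Stage 3 load shedding): demand mission unmet",
    ("low",  "high"): "Preferred HRO-like mode: analyzed procedures, complacency risk",
}

def hrm_mode(system_volatility: str, options_variety: str) -> str:
    """Return the characteristic operating approach for the given conditions."""
    return HRM_MODES[(system_volatility.lower(), options_variety.lower())]

print(hrm_mode("low", "high"))  # nuclear power's home quadrant
```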

Lessons for Nuclear Operations 


Nuclear plants work hard to stay in the Low Volatility/High Options mode.  If they stray into the Low Options column, they run the risks of facing unanalyzed situations and regulatory non-compliance. (p. 62)  In their effort to optimize performance in the desired mode, plants examine their performance risks at ever finer granularity through new methods and analyses.  Because of the organizations' narrow focus, few resources are directed at identifying, contemplating and planning for very low probability events (the tails of distributions) that might force a plant into a different mode or have enormous potential negative consequences.**  Design changes (especially new technologies) that increase output or efficiency may mask subtle warning signs of problems; organizations must be mindful of performance drift and nascent problems.

In an HRO, trial and error is not an acceptable method for trying out new options.  No one wants cowboy operators in the control room.  But examining new options using off-line methods, in particular simulation, is highly desirable. (pp. 111, 233)  In addition, building reactive capacity in the organization can be a substitute for foresight to accommodate the unexpected and unanalyzed. (pp. 116-17)  

The focus on the external changes that buffeted CAISO leads to a shortcoming when looking for lessons for nuclear.  The book emphasizes CAISO's adaptability to new environmental demands, requirements and constraints but does not adequately recognize the natural evolution of the system.  In nuclear, it's natural evolution that may quietly lead to performance drift and normalization of deviance.  In a similar vein, CAISO has to worry about complacency in just one mode; for nuclear it's effectively the only mode and complacency is an omnipresent threat. (p. 126)

The risk of cognitive overload occurs more often for CAISO operators but it has visible precursors; for nuclear operators the risk is that overload might occur suddenly and with little or no warning.***  Anticipation and resilience are more obvious needs at CAISO but also necessary in nuclear operations. (pp. 5, 124)

Implications for Safety Culture

Both HRMs and HROs need cultures that value continuous training, open communications, team players able to adjust authority relationships when facing emergent issues, personal responsibility for safety (i.e., safety does not inhere in technology), ongoing learning to do things better and reduce inherent hazards, rewards for achieving safety and penalties for compromising it, and an overall discipline dedicated to failure-free performance. (pp. 198, App. 2)  Both organizational types need a focus on operations as the central activity.  Nuclear is good at this, certainly better than CAISO where entities outside of operations promulgated system changes and the operators were stuck with making them work.

The willingness to report errors should be encouraged but we have seen that is a thin spot in the SC at some plants.  Errors can be a gateway into learning how to create more reliable performance, and error tolerance vs. intolerance is a critical cultural issue. (pp. 111-12, 220)

The simultaneous need to operate within a prescribed envelope while considering how that envelope might be breached has implications for SC.  We have argued before that a nuclear organization is well-served by having a diversity of opinions and some people who don't subscribe to group think and instead keep asking “What's the worst case scenario and how would we manage it to an acceptable conclusion?”

Conclusion

This review gives short shrift to the authors' broad and deep description and analysis of CAISO.****  The reason is that the major takeaway for CAISO, viz., the need to recognize mode shifts and switch management strategies accordingly as the manifestation of “normal” operations, is not really applicable to day-to-day nuclear operations.

The book describes a rare breed, the socio-technical-political start-up, and has too much scope for the average nuclear practitioner to plow through searching for newfound nuggets that can be applied to nuclear management.  But it's a good read and full of insightful observations, e.g., the description of CAISO's early days (ca. 2001-2004), when system changes driven by engineers, politicians and regulators, coupled with changing challenges from market participants, prevented the organization from settling in and effectively created a negative learning curve, with operators reporting less confidence in their ability to manage the grid and accomplish the mission in 2004 than in 2001. (Ch. 5)

(High Reliability Management was recommended by a Safetymatters reader.  If you have a suggestion for material you would like to see promoted and reviewed, please contact us.)

*  E. Roe and P. Schulman, High Reliability Management (Stanford Univ. Press, Stanford, CA: 2008).  This book reports the authors' study of CAISO from 2001 through 2006.

**  By their nature as baseload generating units, usually with long-term sales contracts, nuclear plants are unlikely to face a highly volatile business environment.  Their political and social environment is similar: The NRC buffers them from direct interference by politicians although activists prodding state and regional authorities, e.g., water quality boards, can cause distractions and disruptions.

The importance of considering low-probability, major consequence events is argued by Taleb (see here) and Dédale (see here).

***  Over the course of the authors' investigation, technical and management changes at CAISO intended to make operations more reliable often had the unintended effect of moving the edge of the prescribed performance envelope closer to the operators' cognitive and skill capacity limits. 

The Cynefin model describes how organizational decision making can suddenly slip from the Simple domain to the Chaotic domain via the Complacent zone.  For more on Cynefin, see here and here.

****  For instance, ch. 4 presents a good discussion of the inadequate or incomplete applicability of Normal Accident Theory (Perrow, see here) or High Reliability Organization theory (Weick, see here) to the behavior the authors observed at CAISO.  As an example, tight coupling (a threat according to NAT) can be used as a strength when operators need to stitch together an ad hoc solution to meet demand. (p. 135)

Ch. 11 presents a detailed regression analysis linking volatility in selected inputs to volatility in output, measured by the periods when electricity made available (compared to demand) fell outside regulatory limits.  This analysis illustrated how well CAISO's operators were able to manage in different modes and how close they were coming to the edge of their ability to control the system, in other words, performance as precursor to the need to go to Stage 3 command-and-control load shedding.

Friday, September 27, 2013

Four Years of Safetymatters

Aztec Calendar
Over the four plus years we have been publishing this blog, regular readers will have noticed some recurring themes in our posts.  The purpose of this post is to summarize our perspective on these key themes.  We have attempted to build a body of work that is useful and insightful for you.

Systems View

We have consistently considered safety culture (SC) in the nuclear industry to be one component of a complicated socio-technical system.  A systems view provides a powerful mental model for analyzing and understanding organizational behavior. 

Our design and explicative efforts began with system dynamics as described by authors such as Peter Senge, focusing on characteristics such as feedback loops and time delays that can affect system behavior and lead to unexpected, non-linear changes in system performance.  Later, we expanded our discussion to incorporate the ways systems adapt and evolve over time in response to internal and external pressures.  Because they evolve, socio-technical organizations are learning organizations but continuous improvement is not guaranteed; in fact, evolution in response to pressure can lead to poorer performance.

The systems view, system dynamics and their application through computer simulation techniques are incorporated in the NuclearSafetySim management training tool.

Decision Making

A critical, defining activity of any organization is decision making.  Decision making determines what will (or will not) be done, by whom, and with what priority and resources.  Decision making is  directed and constrained by factors including laws, regulations, policies, goals, procedures and resource availability.  In addition, decision making is imbued with and reflective of the organization's values, mental models and aspirations, i.e., its culture, including safety culture.

Decision making is intimately related to an organization's financial compensation and incentive program.  We've commented on these programs in nuclear and non-nuclear organizations and identified the performance goals for which executives received the largest rewards; often, these were not safety goals.

Decision making is part of the behavior exhibited by senior managers.  We expect leaders to model desired behavior and are disappointed when they don't.  We have provided examples of good and bad decisions and leader behavior. 

Safety Culture Assessment


We have cited NRC Commissioner Apostolakis' observation that “we really care about what people do and maybe not why they do it . . .”  We sympathize with that view.  If organizations are making correct decisions and getting acceptable performance, the “why” is not immediately important.  However, in the longer run, trying to identify the why is essential, both to preserve organizational effectiveness and to provide a management (and mental) model that can be transported elsewhere in a fleet or industry.

What is not useful, and possibly even a disservice, is a feckless organizational SC “analysis” that focuses on a laundry list of attributes or limits remedial actions to retraining, closer oversight and selective punishment.  Such approaches ignore systemic factors and cannot provide long-term successful solutions.

We have always been skeptical of the value of SC surveys.  Over time, we saw that others shared our view.  Currently, broad-scope, in-depth interviews and focus groups are recognized as preferred ways to attempt to gauge an organization's SC and we generally support such approaches.

On a related topic, we were skeptical of the NRC's SC initiatives, which culminated in the SC Policy Statement.  As we have seen, this “policy” has led to back door de facto regulation of SC.

References and Examples

We've identified a library of references related to SC.  We review the work of leading organizational thinkers, social scientists and management writers, attempt to accurately summarize their work and add value by relating it to our views on SC.  We've reported on the contributions of Dekker, Dörner, Hollnagel, Kahneman, Perin, Perrow, Reason, Schein, Taleb, Vaughan, Weick and others.

We've also posted on the travails of organizations that dug themselves into holes that brought their SC into question.  Some of these were relatively small potatoes, e.g., Vermont Yankee and EdF, but others were actual disasters, e.g., Massey Energy and BP.  We've also covered DOE, especially the Hanford Waste Treatment and Immobilization Plant (aka the Vit plant).

Conclusion

We believe the nuclear industry is generally well-managed by well-intentioned personnel but can be affected by the natural organizational ailments of complacency, normalization of deviance, drift, hubris, incompetence and occasional criminality.  Our perspective has evolved as we have learned more about organizations in general and SC in particular.  Channeling John Maynard Keynes, we adapt our models when we become aware of new facts or better ways of looking at the data.  We hope you continue to follow Safetymatters.

Tuesday, September 17, 2013

Even Macy’s Does It

We have long been proponents of looking for innovative ways to improve safety management training for nuclear professionals.  We've taken on the task of developing a prototype management simulator, NuclearSafetySim, and made it available to our readers to experience for themselves (see our July 30, 2013 post).  In the past we have also noted other industries and organizations that have embraced simulation as an effective management training tool.

An August article in the Wall Street Journal* cites several examples of new approaches to manager training.  Most notable in our view is Macy’s use of simulations to have managers gain decision making experience.  As the article states:

“The simulation programs aim to teach managers how their daily decisions can affect the business as a whole.”

We won’t revisit all the arguments that we’ve made for taking a systems view of safety management, focusing on decisions as the essence of safety culture and using simulation to allow personnel to actualize safety values and priorities.  All of these could only enrich, challenge and stimulate training activities. 

A Clockwork Magenta

 
On the other hand, what is the value of training approaches that reiterate INPO slide shows, regulatory policy statements and good practices in seemingly endless iterations?  It brings to mind the character Alex, the incorrigible sociopath in A Clockwork Orange with an unusual passion for classical music.**  He is the subject of “reclamation treatment”: head clamped in a brace and eyes pinned wide open, he is forced to watch repetitive screenings of anti-social behavior set to the music of Beethoven's Ninth.  We are led to believe this results in a “cure” but does it, and at what cost?

Nuclear managers may not be treated exactly like Alex but there are some similarities.  After plant problems occur and are diagnosed, managers are also declared “cured” after each forced feeding of traits, values, and the need for increased procedure adherence and oversight.  Results still not satisfactory?  Repeat.



*  R. Feintzeig, "Building Middle-Manager Morale," Wall Street Journal (Aug. 7, 2013).  Retrieved Sept. 24, 2013.

**  M. Amis, "The Shock of the New:‘A Clockwork Orange’ at 50,"  New York Times Sunday Book Review (Aug. 31, 2013).  Retrieved Sept. 24, 2013.

Tuesday, July 30, 2013

Introducing NuclearSafetySim

We have referred to NuclearSafetySim and the use of simulation tools on a regular basis in this blog.  NuclearSafetySim is our initiative to develop a new approach to safety management training for nuclear professionals.  It utilizes a simulator to provide a realistic nuclear operations environment within which players are challenged by emergent issues - where they must make decisions balancing safety implications and other priorities - over a five-year period.  Each player earns an overall score and is provided with analyses and data on his/her decision making and performance against goals.  It is clearly a different approach to safety culture training, one that attempts to operationalize the values and traits espoused by various industry bodies.  In that regard it is exactly what nuclear professionals must do on a day-to-day basis.

At this time we are making NuclearSafetySim available to our readers through a web-based demo version.  To get started you need to access the NuclearSafetySim website.  Click on the Introduction tab at the top of the Home page.  Here you will find a link to a narrated slide show that provides important background on the approach used in the simulation.  It runs about 15 minutes.  Then click on the Simulation tab.  Here you will find another video which is a demo of NuclearSafetySim.  While this runs about 45 minutes (apologies) it does provide a comprehensive tutorial on the sim and how to interact with it.  We urge you to view it.  Finally...at the bottom of the Simulation page is a link to the NuclearSafetySim tool.  Clicking on the link brings you directly to the Home screen and you’re ready to play.

As you will see on the website and in the sim itself, there are reminders and links to facilitate providing feedback on NuclearSafetySim and/or requesting additional information.  This is important to us and we hope our readers will take the time to provide thoughtful input, including constructive criticism.  We welcome all comments. 

Wednesday, June 26, 2013

Dynamic Interactive Training

The words dynamic and interactive always catch our attention as they are intrinsic to our world view of nuclear safety culture learning.  Carlo Rusconi’s presentation* at the recent IAEA International Experts’ Meeting on Human and Organizational Factors in Nuclear Safety in the Light of the Accident at the Fukushima Daiichi Nuclear Power Plant in Vienna in May 2013 is the source of our interest.

While much of the training described in the presentation appeared to be oriented to the worker level and the identification of workplace type hazards and risks, it clearly has implications for supervisory and management levels as well.

In the first part of the training students are asked to identify and characterize safety risks associated with workplace images.  For each risk they assign an index based on perceived likelihood and severity.  We like the parallel to our proposed approach for scoring decisions according to safety significance and uncertainty.**
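A likelihood-severity index reduces to a couple of lines of arithmetic; a minimal sketch, where the 1-5 scales and the multiplicative form are our assumptions rather than details from the paper:

```python
# Minimal risk-index sketch; Rusconi's course defines its own index.
def risk_index(likelihood: int, severity: int) -> int:
    """Combine perceived likelihood and severity (each 1..5) into one index."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

# Two students can score the same workplace hazard quite differently,
# which is exactly what the group discussion is meant to surface.
print(risk_index(likelihood=2, severity=5))  # 10
print(risk_index(likelihood=4, severity=3))  # 12
```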

“...the second part of the course is focused on developing skills to look in depth at events that highlight the need to have a deeper and wider vision of safety, grasping the explicit and implicit connections among technological, social, human and organizational features. In a nutshell: a systemic vision.” (slide 13, emphasis added)  As part of the training students are exposed to the concepts of complexity, feedback and internal dynamics of a socio-technical system.  As the author notes, “The assessment of culture within an organization requires in-depth knowledge of its internal dynamics”.  (slide 15)

This part of the training is described as a “simulation” as it provides the opportunity for students to simulate the performance of an investigation into the causes of an actual event.  Students are organized into three groups of five persons to gain the benefit of collective analysis within each group followed by sharing of results across groups.  We see this as particularly valuable as it helps build common mental models and facilitates integration across individuals.  Last, the training session takes the student’s results and compares them to the outcomes from a panel of experts.  Again we see a distinct parallel to our concept of having senior management within the nuclear organization pre-analyze safety issues to establish reference values for safety significance, uncertainty and preferred decisions.  This provides the basis to compare trainee outcomes for the same issues and ultimately to foster alignment within the organization.

Thank you Dr. Rusconi.



*  C. Rusconi, “Interactive training: A methodology for improving Safety Culture,” IAEA International Experts’ Meeting on Human and Organizational Factors in Nuclear Safety in the Light of the Accident at the Fukushima Daiichi Nuclear Power Plant, Vienna May 21-24, 2013.

**  See our blog posts dated April 9 and June 6, 2013.  We also remind readers of Taleb’s dictate to decision makers to focus on consequences versus probability in our post dated June 18, 2013.

Thursday, June 6, 2013

Implementing Safety Culture Policy Part 2

This post continues our discussion of the implementation of safety culture policy in day-to-day nuclear management decision making, started in our post dated April 9, 2013.   In that post we introduced several parameters for quantitatively scoring decisions: decision quality, safety significance and significance uncertainty.  At this time we want to update the decision quality label, using instead “decision balance”.

To illustrate the application of the scoring method we used a set of twenty decisions based on issues taken from actual U.S. nuclear operating experience, typically those that were reported in LERs.  As a baseline, we scored each issue for safety significance and uncertainty.  Each issue identified 3 to 4 decision options for addressing the problem - and each option was annotated with the potential impacts of the decision on budgets, generation (e.g. potential outage time) and the corrective action program.   We scored each decision option for its decision balance (how well the decision option balances safety priority) and then identified the preferred decision option for each issue.  This constitutes what we refer to as the “preferred decision set”.  A pdf file of one example issue with decision choices and scoring inputs is available here

Our assumption is that the preferred decision set would be established/approved by senior management based on their interpretation of the issues and their expectations for how organizational decisions should reflect safety culture.  The set of issues would then be used in a training environment for appropriate personnel.  For purposes of this example, we incorporated the preferred decision set into our NuclearSafetySim* simulator to illustrate the possible training experience.  The sim provides an overall operational context tracking performance for cost, plant generation and CAP program and incorporating performance goals and policies.
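To give a feel for the exercise data, here is a hypothetical sketch of an issue record and the comparison against the preferred decision set; every field name and value is our invention, not NuclearSafetySim's internal format:

```python
# Hypothetical structure for one training issue and its decision options.
from dataclasses import dataclass

@dataclass
class DecisionOption:
    description: str
    balance: float        # how well the option balances safety priority
    budget_impact: float  # incremental cost, $k
    outage_days: float    # lost generation
    cap_items: int        # added corrective action program load

@dataclass
class Issue:
    summary: str
    safety_significance: float  # scored on a quantified scale
    uncertainty: float
    options: list[DecisionOption]
    preferred: int              # index of senior management's preferred option

def matches_preferred(issue: Issue, chosen: int) -> bool:
    """Chart 1 comparison: did the trainee pick the preferred option?"""
    return chosen == issue.preferred
```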

Chart 1
In the sim application a trainee would be tasked with assessing an issue every three months over a 60-month operational period.  The trainee would do this while attempting to manage performance results to achieve specified goals.  For each issue the trainee would review the issue facts, assign values for significance and uncertainty, and select a decision option.  Chart 1 compares the actual decisions (those by the trainee) to those in the preferred set for our prototype session.  Note that approximately 40% of the time the actual decision matched the preferred decision (orange data points).  For the remainder of the issues the trainee’s selected decisions differed.  Determining and understanding why the differences occurred is one way to gain insight into how culture manifests in management actions.

As we indicated in the April 9 post, each decision is evaluated for its safety significance and uncertainty in accordance with quantified scales.  These serve as key inputs to determining the appropriate balance to be achieved in the decision.  In prior work in this area, reported in our posts dated July 15, 2011 and October 14, 2011, we solicited readers to score two issues for safety significance.  The reported scores ranged from 2 to 10 (most scores between 4 and 6) for one issue and from 5 to 10 (most scores between 6 and 8) for the other.  This reflects the reality that perceptions of safety significance are subject to individual differences.  In the current exercise, similar variations in scoring were expected and led to differences between the trainee’s scores and the preferred decision set.  The variation may be due to the inherently subjective nature of assessing these attributes and to other factors such as experience, expertise, biases, and interpretations of the issue.  So this could be one source of difference in the trainee decision selections versus the preferred set, as the decision process attempts to match action to significance.

Another source could be in the decision options themselves.   The decision choice by a trainee could have focused on what the trainee felt was the “best” (i.e., most efficacious) decision versus an explicit consideration of safety priority commensurate with safety significance.  Additionally decision choices may have been influenced by their potential impacts, particularly under conditions where performance was not on track to meet goals. 


Chart 2
Taking this analysis a bit further, we looked at how decision balance varied over the course of the simulation.  As discussed in our April 9 post we use decision balance to create a quantitative measure of how well the goal of safety culture is being incorporated in a specific decision - the extent to which the decision accords the priority for safety commensurate with its safety significance.  In the instant exercise, each decision option for each issue has been assigned a balance value as part of the preferred scoresheet.**  Chart 2 shows a timeline of decision balances - one for the preferred decision set and the other for the actual decisions made by the trainee.  A smoothing function has been applied to the discrete values of balance to provide a continuous track. 

The plots illustrate how decision balance may vary over time, with specific decisions reflecting greater or lesser emphasis on safety.  During the first half of the sim the decision balances are in fairly close agreement, reflecting in part that in 5 of 8 cases the actual decisions matched the preferred decisions.  However in the second half of the sim significant differences emerge, primarily in the direction of weaker balances associated with the trainee decisions.  Again, understanding why these differences emerge could provide insight into how safety culture is actually being practiced within the organization. Chart 3 adds in some additional context.

Chart 3
The yellow line is a plot of “goal pressure,” which is simply a sum of the differences between actual performance in the sim and the goals for cost, generation and the CAP program.  Higher values of pressure are associated with performance lagging the goals.  Inspection of the plot indicates that goal pressure was mostly modest in the first half of the sim before an initial spike up and further increases with time.  The blue line, the decision balance of the trainee, does not show any response to the initial spike, but later in the sim the high goal pressure could be seen as a possible contributor to decisions trending toward lower balances.  A final note is that over the course of the entire sim, the average values of preferred and actual balance are fairly close for this player, perhaps suggesting reasonable overall alignment in safety priorities notwithstanding decision-to-decision variations.
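As described, goal pressure is a simple aggregation; a sketch in which treating each metric as a higher-is-better performance index and clipping at zero are our assumptions:

```python
# Goal pressure: sum of shortfalls of actual performance against goals for
# cost, generation and the CAP program (each expressed here as an index
# where higher is better; the normalization is our choice).
def goal_pressure(actual: dict, goals: dict) -> float:
    return sum(
        max(0.0, (goals[k] - actual[k]) / goals[k])
        for k in ("cost", "generation", "cap")
    )

print(goal_pressure(
    actual={"cost": 95.0, "generation": 88.0, "cap": 70.0},
    goals={"cost": 100.0, "generation": 100.0, "cap": 100.0},
))  # 0.47: generation and the CAP lag their goals more than cost does
```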

A variety of training benefits can flow from the decision simulation.  Comparisons of actual to preferred decisions provide a baseline indication of how well expected safety balances are being achieved in realistic decisions.  Consideration of contributing factors such as goal pressure may illustrate challenges for decision makers.  Comparisons of results among and across groups of trainees could provide further insights.  In all cases the results would provide material for discussion, team building and alignment on safety culture.

In our post dated November 4, 2011 we quoted the work of Kahneman, that organizations are “factories for producing decisions”.  In nuclear safety, the decision factory is the mechanism to actualize safety culture into specific priorities and actions.  A critical element of achieving strong safety culture is to be able to identify differences between espoused values for safety (i.e., the traits typically associated with safety culture) and de facto values as revealed in actual decisions. We believe this can be achieved by capturing decision data explicitly, including the judgments on significance and uncertainty, and the operational context of the decisions.

The next step is synthesizing the decision and situational parameters to develop a useful systems-based measure of safety culture: a quantity that could be tracked in a simulation environment to illustrate safety culture response and provide feedback, and/or during nuclear operations to provide a real-time pulse of the organization’s culture.



* For more information on using system dynamics to model safety culture, please visit our companion website, nuclearsafetysim.com.

** It is possible for some decision options to have the same value of balance even though they incorporate different responses to the issue and different operational impacts. 

Wednesday, May 8, 2013

Safety Management and Competitiveness

Jean-Marie Rousseau
We recently came across a paper that should be of significant interest to nuclear safety decision makers.  “Safety Management in a Competitiveness Context” was presented in March 2008 by Jean-Marie Rousseau of the Institut de Radioprotection et de Surete Nucleaire (IRSN).  As the title suggests the paper examines the effects of competitive pressures on a variety of nuclear safety management issues including decision making and the priority accorded safety.  Not surprisingly:

“The trend to ignore or to deny this phenomenon is frequently observed in modern companies.” (p. 7)

The results presented in the paper came about from a safety assessment performed by IRSN to examine safety management of EDF [Electricite de France] reactors including:

“How real is the ‘priority given to safety’ in the daily arbitrations made at all nuclear power plants, particularly with respect to the other operating requirements such as costs, production, and radiation protection or environmental constraints?” (p. 2)

The pertinence is clear as “priority given to safety” is the linchpin of safety culture policy and expected behaviors.  In addition the assessment focused on decision-making processes at both the strategic and operational levels.  As we have argued, decisions can provide significant insights into how safety culture is operationalized by nuclear plant management. 

Rousseau views nuclear operations as a “highly complex socio-technical system” and his paper provides a brief review of historical data where accidents or near misses displayed indications of the impact of competing priorities on safety.  The author notes that competitiveness is necessary, just as safety is, and as such it represents another risk that must be managed at the organizational and managerial levels.  This characterization is intriguing and merits further reflection, particularly by regulators in their pursuit of “risk informed regulation”.  Nominally, regulators apply a conceptualization of risk that is hardware and natural phenomena centric.  But safety culture and competitive pressures could also be justified as risks to assuring safety - in fact much more dynamic risks - and thus be part of the framework of risk informed regulation.*  Often, as is the case with this paper, there is a tendency to assert that achievement of safety is coincident with overall performance excellence - which in a broad sense it is - but notwithstanding, there are many instances of considerable tension - and potential risk.

Perhaps most intriguing in the assessment is the evaluation of EDF’s a posteriori analyses of its decision making processes as another dimension of experience feedback.**   We quote the paper at length:

“The study has pointed out that the OSD***, as a feedback experience tool, provides a priori a strong pedagogic framework for the licensee. It offers a context to organize debates about safety and to share safety representations between actors, illustrated by a real problematic situation. It has to be noticed that it is the only tool dedicated to “monitor” the safety/competitiveness relationship.

"But the fundamental position of this tool (“not to make judgment about the decision-maker”) is too restrictive and often becomes “not to analyze the decision”, in terms of results and effects on the given situation.

"As the existence of such a tool is judged positively, it is necessary to improve it towards two main directions:
- To understand the factors favouring the quality of a decision-making process. To this end, it is necessary to take into account the decision context elements such as time pressure, fatigue of actors, availability of supports, difficulties in identifying safety requirements, etc.
- To understand why a “qualitative decision-making process” does not always produce a “right decision”. To this end, it is necessary to analyze the decision itself with the results it produces and the effects it has on the situation.” (p. 8)

We feel this is a very important aspect that currently receives insufficient attention.  Decisions can provide a laboratory of safety management performance and safety culture actualization.  But how often are decisions adequately documented, preserved, critiqued and shared within the organization?  Decisions that yield a bad (reportable) result may receive scrutiny internally and by regulators but our studies indicate there is rarely sufficient forensic analysis - cause analyses are almost always one-dimensional and hardware- and process-oriented.  Decisions with benign outcomes - whether the result of “good” decision making or not - are rarely preserved or assessed.  The potential benefits of detailed consideration of decisions have been demonstrated in many of the independent assessments of accidents (Challenger, Columbia, the BP Texas City refinery, etc.) and in research by Perin and others.

We would go a step further than the proposed enhancements to the OSD.  As Rousseau notes, there are downsides to routine post-hoc scrutiny of actual decisions - for one, it will likely identify management errors even in the absence of a bad decision outcome.  This would be one more pressure on managers already challenged by a highly complex decision environment.  An alternative is to provide managers the opportunity to “practice” making decisions in an environment that supports learning and dialogue on achieving the proper balances in decisions - in other words, in a safety management simulator.  The industry requires licensed operators to practice operations decisions on a simulator for similar reasons - why not nuclear managers charged with making safety decisions?



*  As the IAEA has noted, “A danger of concentrating too much on a quantitative risk value that has been generated by a PSA [probabilistic safety analysis] is that...a well-designed plant can be operated in a less safe manner due to poor safety management by the operator.”  IAEA-TECDOC-1436, Risk Informed Regulation of Nuclear Facilities: Overview of the Current Status, February 2005.

**  EDF implemented safety-availability-radiation protection-environment observatories (SAREOs) to increase awareness of the arbitration between safety and other performance factors.  SAREOs analyze in each station the quality of the decision-making process and propose actions to improve it and to guarantee compliance with rules in any circumstances. [“Nuclear Safety: our overriding priority,” EDF Group's file responding to FTSE4Good nuclear criteria]


***  Per Rousseau, “The OSD (Observatory for Safety/Availability) is one of the “safety management levers” implemented by EDF in 1997. Its objective is to perform retrospective analyses of high-stake decisions, in order to improve decision-making processes.” (p. 7)

Thursday, December 20, 2012

The Logic of Failure by Dietrich Dörner

This book was mentioned in a nuclear safety discussion forum so we figured this is a good time to revisit Dörner's 1989 tome.* Below we provide a summary of the book followed by our assessment of how it fits into our interest in decision making and the use of simulations in training.

Dörner's work focuses on why people fail to make good decisions when faced with problems and challenges. In particular, he is interested in the psychological needs and coping mechanisms people exhibit. His primary research method is observing test subjects interact with simulation models of physical sub-worlds, e.g., a malfunctioning refrigeration unit, an African tribe of subsistence farmers and herdsmen, or a small English manufacturing city. He applies his lessons learned to real situations, e.g, the Chernobyl nuclear plant accident.

He proposes a multi-step process for improving decision making in complicated situations then describes each step in detail and the problems people can create for themselves while executing the step. These problems generally consist of tactics people adopt to preserve their sense of competence and control at the expense of successfully achieving overall objectives. Although the steps are discussed in series, he recognizes that, at any point, one may have to loop back through a previous step.

Goal setting

Goals should be concrete and specific to guide future steps. The relationships between and among goals should be specified, including dependencies, conflicts and relative importance. When people don't do this, they can become distracted by obvious or unimportant (although potentially achievable) goals, or peripheral issues they know how to address rather than important issues that should be resolved. Facing performance failure, they may attempt to turn failure into success with doublespeak or blame unseen forces.

Formulate models and gather information

Good decision-making requires an adequate mental model of the system being studied—the variables that comprise the system and the functional relationships among them, which may include positive and negative feedback loops. The model's level of detail should be sufficient to understand the interrelationships among the variables the decision maker wants to influence. Unsuccessful test subjects were inclined to use a “reductive hypothesis,” which unreasonably reduces the model to a single key variable, or overgeneralization.

Information gathered is almost always incomplete and the decision maker has to decide when he has enough to proceed. The more successful test subjects asked more questions and made fewer decisions (than the less successful subjects) in the early time periods of the sim.

Predict and extrapolate

Once a model is formulated, the decision maker must attempt to determine how the values of variables will change over time in response to his decisions or internal system dynamics. One problem is predicting that outputs will change in a linear fashion, even as the evidence grows for a non-linear, e.g., exponential function. An exponential variable may suddenly grow dramatically then equally suddenly reverse course when the limits on growth (resources) are reached. Internal time delays mean that the effects of a decision are not visible until some time in the future. Faced with poor results, unsuccessful test subjects implement or exhibit “massive countermeasures, ad hoc hypotheses that ignore the actual data, underestimations of growth processes, panic reactions, and ineffectual frenetic activity.” (p. 152) Successful subjects made an effort to understand the system's dynamics, kept notes (history) on system performance and tried to anticipate what would happen in the future.
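Dörner's linear-extrapolation trap is easy to demonstrate with a toy calculation (all numbers arbitrary, ours):

```python
# A process growing 30% per period, forecast by naive linear extrapolation
# from the last two observations: the forecast falls further behind each step.
history = [100 * 1.3 ** t for t in range(5)]      # observed so far
slope = history[-1] - history[-2]                  # naive linear rate
for t in range(1, 6):
    linear = history[-1] + slope * t               # linear forecast
    actual = 100 * 1.3 ** (4 + t)                  # true exponential path
    print(f"t+{t}: forecast {linear:7.0f}   actual {actual:7.0f}")
```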

Plan and execute actions, check results and adjust strategy

“The essence of planning is to think through the consequences of certain actions and see whether those actions will bring us closer to our desired goal.” (p. 153) Easier said than done in an environment of too many alternative courses of action and too little time. In rapidly evolving situations, it may be best to create rough plans and delegate as many implementing decisions as possible to subordinates. A major risk is thinking that planning has been so complete that the unexpected cannot occur. A related risk is the reflexive use of historically successful strategies. “As at Chernobyl, certain actions carried out frequently in the past, yielding only the positive consequences of time and effort saved and incurring no negative consequences, acquire the status of an (automatically applied) ritual and can contribute to catastrophe.” (p. 172)

In the sims, unsuccessful test subjects often exhibited “ballistic” behavior—they implemented decisions but paid no attention to, i.e., did not learn from, the results. Successful subjects watched for the effects of their decisions, made adjustments and learned from their mistakes.

Dörner identified several characteristics of people who tended to end up in a failure situation. They failed to formulate their goals, didn't recognize goal conflict or set priorities, and didn't correct their errors. (p. 185) Their ignorance of interrelationships among system variables and the longer-term repercussions of current decisions set the stage for ultimate failure.

Assessment

Dörner's insights and models have informed our thinking about human decision-making behavior in demanding, complicated situations. His use and promotion of simulation models as learning tools was one starting point for Bob Cudlin's work in developing a nuclear management training simulation program. Like Dörner, we see simulation as a powerful tool to “observe and record the background of planning, decision making, and evaluation processes that are usually hidden.” (pp. 9-10)

However, this book does not cover the entire scope of our interests. Dörner is a psychologist interested in individuals; group behavior is beyond his range. He alludes to normalization of deviance but his references appear limited to the flouting of safety rules rather than a more pervasive process of slippage. More importantly, he does not address behavior that arises from the system itself, in particular adaptive behavior as an open system reacts to and interacts with its environment.

From our view, Dörner's suggestions may help the individual decision maker avoid common pitfalls and achieve locally optimum answers. On the downside, following Dörner's prescription might lead the decision maker to an unjustified confidence in his overall system management abilities. In a truly complex system, no one knows how the entire assemblage works. It's sobering to note that even in Dörner's closed,** relatively simple models many test subjects still had a hard time developing a reasonable mental model, and some failed completely.

This book is easy to read and Dörner's insights into the psychological traps that limit human decision making effectiveness remain useful.


* D. Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, trans. R. and R. Kimber (Reading, MA: Perseus Books, 1998). Originally published in German in 1989.

** One simulation model had an external input.

Friday, July 27, 2012

Modeling Safety Culture (Part 4): Simulation Results 2


As we introduced in our prior post on this subject (Results 1), we are presenting some safety culture simulation results based on a highly simplified model.  In that post we illustrated how management might react to business pressure caused by a reduction in authorized budget dollars.  The actions of management result in shifting of resources from safety to business and lead to changes in the state of safety culture.

In this post we continue with the same model and some other interesting scenarios.  In each of the following charts three outputs are plotted: safety culture in red, management action level in blue and business pressure in dark green.  The situation is an organization with a somewhat lower initial safety culture and confronted with a somewhat smaller budget reduction than the example in Results 1. 

Figure 1
Figure 1 shows an overly reactive management. The blue line shows management’s actions in response to the changes in business pressure (green) associated with the budget change.  Note that management’s actions are reactive, shifting priorities immediately and directly in response. This behavior leads to a cyclic outcome where management actions temporarily alleviate business pressure, but when actions are relaxed, pressure rises again, followed by another cycle of management response.  This could be a situation where management is not addressing the source of the problem, shifting priorities back and forth between business and safety.  Also of interest is that the magnitude of the cycle is actually increasing with time, indicating that the system is essentially unstable and unsustainable.  Safety culture (red) declines throughout the time frame.

Figure 2
Figure 2 shows the identical conditions but where management implements a more restrained approach, delaying its response to changes in business pressure.  The overall system response is still cyclic, but now the magnitude of the cycles is decreasing, converging on a stable outcome.

Figure 3
Figure 3 is for the same conditions, but the management response is restrained still further.  Management takes more time to assess the situation and respond to business pressure.  This approach starts to filter out the cyclic response seen in the first two examples and eventually results in a smaller business gap.

Perhaps the most important takeaway from these three simulations is that the total changes in safety culture are not significantly different.  A certain price is paid for shifting priorities away from safety in every case; however, the last management strategy is much better at reducing business pressure and keeping it low.
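For readers who want to experiment with the mechanism, here is a minimal sketch of the delayed-response structure behind these scenarios.  It is not the model that produced the figures; the variable names, gains and time constants below are hypothetical placeholders, chosen only to show how lengthening the management response time damps the cycling.

```python
# Minimal sketch (NOT the actual simulation model): two coupled lags.
# Business pressure follows the unrelieved budget gap, and management
# action follows pressure with a response time tau_mgmt.
# All constants are hypothetical.

def run(tau_mgmt, steps=400, dt=0.1):
    pressure, action, culture = 0.0, 0.0, 1.0
    budget_gap = 1.0                  # imposed at time zero
    history = []
    for _ in range(steps):
        # pressure builds toward the portion of the gap not relieved by action
        pressure += ((budget_gap - action) - pressure) * dt / 0.5
        # small tau_mgmt = the reactive manager of Figure 1;
        # larger values = the restrained managers of Figures 2 and 3
        action += (pressure - action) * dt / tau_mgmt
        # culture is a stock: it slowly accumulates the cost of the shifts
        culture -= 0.02 * max(action, 0.0) * dt
        history.append((pressure, action, culture))
    return history

reactive = run(tau_mgmt=0.2)    # overshoots and cycles (damped here; with
                                # more gain or delay the cycles can grow)
restrained = run(tau_mgmt=3.0)  # filters the cycling and settles smoothly
```

Note that in both runs culture declines by a comparable amount; what the longer response time buys is a smoother, more sustainable path to lower pressure, which is the takeaway above.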

Figure 4
The last example in this set is shown in Figure 4.  This is a situation where business pressure gradually ramps up due to a series of small step reductions in budget levels.  Within the simulation we have also set a limit on the extent of management actions.  Initially management takes no action to shift priorities because business pressure remains within the range that safety culture can resist; consequently safety culture remains stable.  After the third “bump” in business pressure, the threshold resistance of safety culture is exceeded and management starts to modestly shift priorities.  Even though business pressure continues to ramp up, the management response is capped and does not “chase” closing the business gap.  As a result safety culture suffers only a modest reduction before stabilizing.  This scenario may be more typical of an organization with a fairly strong safety culture: under sufficient pressure it will make modest tradeoffs in priorities but will resist a significant compromise of safety.

Sunday, July 15, 2012

Modeling Safety Culture (Part 3): Simulation Results 1

As promised in our June 29, 2012 post, we are taking the next step: incorporating our mental models of safety culture and decision making in a simple simulation program.  The performance dynamic we described treats safety culture as a “level,” and that level determines the organization’s ability to resist pressure associated with competing business priorities. If business performance is not meeting goals, pressure on management is created, which can be offset by a sufficiently strong safety culture. However, if business pressure exceeds the threshold for a given safety culture level, management decision making can be affected, resulting in a shift of resources from safety to business needs. This may relieve some business pressure but creates a safety gap that can degrade safety culture, making it potentially even more vulnerable to business pressure.

It is worth expanding on the concept of safety culture as a “level” or, in system dynamics terms, a “stock.”  An analogy is the level of liquid in a reservoir, which rises or falls with the flows into and out of it.  This representation causes safety culture to respond less quickly to changes in system conditions than other factors.  For example, an abrupt cut in an organization’s budget, and the resulting pressure on management to respond, may occur quite rapidly; its impact on organizational safety culture, however, will play out more gradually.  Thus “...stocks accumulate change.  They are kind of a memory, storing the results of past actions...stocks cannot be adjusted instantaneously no matter how great the organizational pressures…This vital inertial characteristic of stock and flow networks distinguishes them from simple causal links.”*
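The “stock” idea is easy to express in code.  The fragment below is a trivial illustration with hypothetical numbers, not an excerpt from our model: the pressure input can step abruptly, but culture, being a stock, changes only by integrating its flow over time.

```python
dt = 1.0                    # one time step (hypothetical units)
business_pressure = 1.0     # a causal input: it can step abruptly
safety_culture = 1.0        # a stock: it changes only via its flows

for _ in range(10):
    erosion = 0.02 * business_pressure   # outflow driven by pressure
    safety_culture -= erosion * dt       # the stock integrates the flow
    # even a step change in pressure reaches culture only gradually
```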

Let’s see this in action in the following highly simplified model.  The model considers just two competing priorities: safety and business.  When performance in these categories differs from goals, pressure is created on management, which may act to ameliorate the pressure.  In this model management action is limited to shifting resources from one priority to the other.  Safety culture, per our June 29, 2012 post, is an organization’s ability to resist and then respond to competing priorities.  At time zero, a reduction in authorized budget is imposed, creating a gap (current spending versus authorized spending) and with it business pressure on management to respond.
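For concreteness, here is one way the loop just described might be coded.  Again, this is a sketch under stated assumptions rather than the actual simulation: the threshold rule, gains and initial values are all hypothetical.

```python
# Sketch of the two-priority model (hypothetical constants throughout).
BUDGET_CUT = 0.5    # reduction in authorized budget imposed at time zero

def step(culture, pressure, safety_share, dt=0.1):
    # the budget gap is the cut minus resources already shifted to business
    gap = max(BUDGET_CUT - (1.0 - safety_share), 0.0)
    # business pressure follows the gap with a short lag
    pressure += (gap - pressure) * dt / 0.5
    # safety culture sets the threshold management can resist (assumed form)
    threshold = 0.5 * culture
    if pressure > threshold:
        # shift resources from safety to business to relieve the pressure
        safety_share = max(safety_share - 0.1 * (pressure - threshold) * dt, 0.0)
    # the resulting safety gap slowly degrades culture (a stock), which in
    # turn lowers the threshold (the feedback loop discussed under Figure 3)
    culture -= 0.05 * (1.0 - safety_share) * dt
    return culture, pressure, safety_share

state = (0.8, 0.0, 1.0)   # initial culture, business pressure, safety resources
for _ in range(300):
    state = step(*state)
```

The structure, not the numbers, is the point: the threshold term is what lets a strong culture resist moderate pressure, and the last line is what makes repeated compromises self-reinforcing.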

Figure 1
Figure 1 shows the response of management.  Actions are initiated very quickly and start to reduce safety resources to relieve budget pressure.  The plot tracks the initial response, a plateau while the actions’ effectiveness is gauged, and then an escalation of action to further reduce the budget gap.

Figure 2
Figure 2 overlays the effect of the management actions on the budget gap and the business pressure associated with the gap.  Immediately following the budget reduction, business pressure rapidly increases and quickly reaches a level sufficient to cause management to start to shift priorities.  The first set of management actions brings some pressure relief; the second set of actions reduces pressure further.  As expected, there is some time lag in the response of business pressure to the actions of management.

Figure 3
In Figure 3, the impact of these changes in business pressure and management actions is accumulated in safety culture.  Note first the gradual changes in culture versus the faster and sharper changes in management actions and business pressure.  As management takes action there is a loss of safety priority and safety culture slowly degrades. When management action escalates further, culture is already lower, making the organization more susceptible to compromising safety priorities, and safety culture declines further. This type of response is indicative of a feedback loop, an important dynamic feature of the system: business pressure causes management actions; those actions degrade safety culture; degraded culture reduces resistance to further actions.

We invite comments and questions from our readers.


*  John Morecroft, Strategic Modelling and Business Dynamics (John Wiley & Sons, 2007) pp. 59-61.

Monday, March 7, 2011

Culture Wars

We wanted to bring to our readers’ attention an article from the McKinsey Quarterly (March 2011) that highlights the ability of management simulators to be powerful business tools.  The context is the use of such “war games” to help management teams accomplish their business goals, but we would allow that their utility extends to other challenges, such as managing safety culture.

“Well-designed war games, though not a panacea, can be powerful learning experiences that allow managers to make better decisions.”

“...the company designed a game to answer the more strategic question: how can we win market share given the budget pressures on the Department of Defense and the moves of competitors? The game tested levers such as pricing, contracting, operational improvements, and partnerships.  The outcome wasn’t a tactical playbook—a list of things to execute and monitor—but rather strategic guidance on the industry’s direction, the most promising types of moves, the company’s competitive strengths and weaknesses, and where to focus further analysis.” (p. 3)  We have often used the term “levers” to call attention to the need for managers to understand when and how to take actions to bring about a desired safety culture result.  Levers connote control and, as with any control system, control must be based on an understanding of the system’s dynamics.  Importantly, the above quote distinguishes the outcome of the simulated experience: not a “playbook” but “guidance” (we would add a deeper understanding and developed skills) that can be applied in the real world.

Interestingly, the article mentions the use of games to facilitate or achieve organizational alignment around a strategic decision.  This treads very close to our contention that a safety culture simulator offers a powerful environment within which managers can interact, developing common mental models and a shared understanding of culture dynamics.  As noted in the article, “This shared experience...has continued to stimulate discussions across the company…” (p. 4)  What could be more valuable for reinforcing safety culture than informed and broad-based discussion within the organization?  As Horn says, “It’s often beneficial, however, to repeat a game for the sake of organizational alignment ... usually, the wider group of employees who will implement the decision. Most people learn better by doing, and when they have shared experiences, they are more likely to embrace change.”

Wednesday, September 22, 2010

Games Theory

In the September 15, 2010 New York Times there is an interesting article* about the growing recognition within school environments that game-based learning has great potential.  We cite this article as further food for thought about our initiatives to bring simulation-based games to training for nuclear safety management.

The benefit of using games as learning spaces rests on the insight that games are systems, and systems thinking is really the curriculum, bringing a nuanced and rich way of looking at real-world situations.

“Games are just one form of learning from experience. They give learners well-designed experiences that they cannot have in the real world (like being an electron or solving the crisis in the Middle East). The future for learning games, in my view, is in games that prepare people to solve problems in the world.” **

“A game….is really just a “designed experience,” in which a participant is motivated to achieve a goal while operating inside a prescribed system of boundaries and rules.” ***  The analogy in nuclear safety management is to have the game participants manage a nuclear operation - with defined budgets and performance goals - in a manner that achieves certain safety culture attributes, even as achievement of those attributes comes into conflict with other business needs.  The game context brings an experiential dimension that is far more participatory and immersive than traditional training environments.  In the NuclearSafetySim simulation, the players’ actions and decisions also feed back into the system, impacting other factors such as organizational trust and the willingness of personnel to identify deviations.  Experiencing the loss of trust in the simulation is likely to be a much more powerful lesson than the simple admonition to “walk the talk” burned into a PowerPoint slide.

* Sara Corbett, "Learning by Playing: Video Games in the Classroom," New York Times (Sep 15, 2010).

** J.P. Gee, "Part I: Answers to Questions About Video Games and Learning," New York Times (Sep 20, 2010).

*** "Learning by Playing," p. 3 of retrieved article.