Friday, August 28, 2009

Bernhard Wilpert

As mentioned in a prior post, we will be highlighting some of the work of the late Bernhard Wilpert, a leading figure in research on the role of human behavior in high reliability organizations.


Professor Wilpert emphasized the interaction of human, technological, and organizational dynamics.  His tools for human factors event analysis have become standard practice in German and Swiss nuclear plants.  He is the author of several leading books, including Safety Culture in Nuclear Power Operations; System Safety: Challenges and Pitfalls of Intervention; Emerging Demands for the Safety of Nuclear Power Operations: Challenge and Response; and Nuclear Safety: A Human Factors Perspective.

Professor Wilpert was also a principal contributor to the LearnSafe project conducted in Europe from 2001 – 2004.  See the following link for information about the project team and its results, and look for future posts here on the LearnSafe research.

Link to LearnSafe project.

Wednesday, August 26, 2009

Can Assessments Identify Complacency? Can Assessments Breed Complacency?

To delve a little deeper into this question, consider Slide 10 of the NEI presentation, which shows a typical summary graphic of assessment results.  The chart catalogs the responses of members of the organization against the eight INPO principles of safety culture.  The summary indicates a variety of responses to the individual principles – for 3 or 4 of the principles there seems to be a fairly strong consensus that the right things are happening.  But 5 of the 8 principles show negative response scores greater than 20, and 2 of the principles show negative scores greater than 40.

First, what can or should one conclude about the overall state of safety culture in this organization given these results?  One wonders whether, if these results were shown to a number of experts, their interpretations would be consistent, or whether they would even be willing to associate the results with a definite finding.  As discussed in a prior post, this issue is fundamental to the nature of safety culture: whether it is amenable to direct measurement, and whether assessment results really say anything about the safety health of the organization.

But the more particular question for this post is whether an assessment can detect complacency in an organization and the latent risk it poses to the organization’s safety performance.  In a post dated July 30, 2009 I referred to the problems presented by complacency, particularly in organizations experiencing few operational challenges.  That environment can be ripe for a weak culture to develop or be sustained.  Could that environment also bias the responses to assessment questions, reinforcing the incorrect perception that safety culture is healthy?  This type of situation may be of most relevance in today’s nuclear industry, where the vast majority of plants are operating at high capacity factors and experiencing few significant operational events.  It is not clear to this commentator that assessments can be designed to explicitly detect complacency, and even assessment results used in conjunction with other data (data likely to look normal when overall performance is good) may not be credible in raising an alarm.

Link to NEI presentation.

Monday, August 24, 2009

Assessment Results – A Rose is a Rose

The famous words of Gertrude Stein are most often associated with the notion that, when all is said and done, a thing is what it is.  We offer this idea as we continue to look at the meaning of safety culture assessment results – are the results just the results, or do they carry some meaning or interpretation beyond the results themselves?

To illustrate some of the issues I will use an NEI presentation made to the NRC on February 3, 2009.  On Slide 2 there is a statement that the USA methodology (for safety culture surveys and assessments) has been used successfully for five years.  One question is what it means for an assessment to be “successful.”  The intent is not to pick on this particular methodology but to open the question of exactly what the expected result of performing an assessment should be.

It may be that “successful” means that the organizations being assessed have found the process and results to be useful or interesting, e.g., by stimulating discussion or furthering exploration of issues associated with the results.  There are many, myself included, who believe anything that stimulates an organization to discuss and contemplate safety management issues is beneficial.  On the other hand, it may be that organizations (and regulators??) believe assessments are successful because they can use the results to make a determination that a safety culture is “acceptable” or “strong” or “needs improvement”.  Can assessments really carry the weight of this expectation?  Or is a rose just a rose?

Slide 11 highlights these questions by indicating that a validation of the assessment methodology is to be carried out.  “Validation” seems to suggest that assessments mean something beyond their immediate results.  It may also suggest that assessment results can be compared to some “known” value to determine whether the assessment accurately measured or predicted that value.  We will have to wait and see what is intended and how the validation is performed.  At the same time we will be keeping in mind the observation of Professor Wilpert in my post of August 17, 2009 that “culture is not a quantifiable phenomenon”.
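To make the idea concrete, here is one way such a validation might be framed – a hypothetical sketch of ours, not the methodology’s actual validation plan, and all numbers are invented: correlate assessment scores with an independently measured outcome and see whether the scores carry any predictive content.

```python
# Hypothetical sketch: one way to "validate" an assessment methodology is
# to test whether its scores predict an independently measured outcome.
# All data below are invented for illustration.
from statistics import correlation  # requires Python 3.10+

# Assessment composite score for each plant (higher = "stronger" culture)
scores = [72, 85, 64, 90, 58, 77, 69, 81]
# Significant events per plant over a subsequent follow-up period
events = [4, 1, 6, 0, 7, 3, 5, 2]

r = correlation(scores, events)
print(f"Pearson r between scores and later events: {r:.2f}")
# A strongly negative r would at least suggest the scores carry predictive
# content; a near-zero r would call the "validation" claim into question.
```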

Link to presentation.

Monday, August 17, 2009

Safety Culture Assessment

A topic that we will visit regularly is the use of safety culture assessments to assign quantitative values to the condition of a specific organization, and even of the individual departments and working groups within the organization.  One reason for this focus is the emphasis on safety culture assessments as a response to situations where organizational performance does not meet expectations and “culture” is believed to be a factor.  Both the NRC and the nuclear industry appear aligned on the use of assessments as a response to performance issues, and even as an ongoing prophylactic tool.  But are these assessments useful?  Or accurate?  Do they provide insights into the origins of cultural deficiencies?

One question that frequently comes to mind is, can safety culture be separated from the manifestation of culture in terms of the specific actions and decisions taken by an organization?  For example, if an organization makes some decisions that are clearly at odds with “safety being the overriding priority”, can the culture of the organization not be deficient?  But if an assessment of the culture is performed, and the espoused beliefs and priorities are generally supportive of safety, what is to be made of those responses? 

The reference material for this post comes from some work led by the late Bernhard Wilpert of the Berlin University of Technology.  (We will sample a variety of his work in the safety management area in future posts.)  It is a brief slide presentation titled “Challenges and Opportunities of Assessing Safety Culture”.  Slide 3, for example, revisits E. H. Schein’s multi-level formulation of culture, which suggests that assessments must be able to expose all levels of culture and their integrated effect.

Two observations from these slides seem of particular note; both appear under Item 4, Methodological Challenges.  The first is that culture is not a quantifiable phenomenon and does not lend itself easily to benchmarking.  This bears consideration, as most assessment methods in use today employ statistical comparisons to assessments at other plants, including percentile-type rankings.

The second observation is that culture results from the learning experience of its members.  This is of particular interest to us as it supports some of the thinking associated with a system dynamics approach.  A systems view involves the development of shared “mental models” of how safety management “works”, the goal being that individual actions and decisions can be understood within a commonly understood framework.  The systems process becomes, in essence, the mechanism for translating beliefs into actions.
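To make the benchmarking observation concrete, here is a minimal sketch of the percentile-type ranking such methods produce.  The `percentile_rank` helper and all scores are invented for illustration; the point is that the arithmetic is exact even when the underlying survey responses may not support this kind of quantification.

```python
# Minimal sketch of percentile-type ranking against a peer fleet.
# All numbers are invented for illustration.

def percentile_rank(value, peer_values):
    """Percentage of peer values at or below the given value."""
    at_or_below = sum(1 for v in peer_values if v <= value)
    return 100.0 * at_or_below / len(peer_values)

fleet_scores = [61, 66, 70, 72, 74, 75, 78, 80, 83, 88]  # peer plants
our_score = 74

print(f"Plant ranks at the {percentile_rank(our_score, fleet_scores):.0f}th percentile")
# Exact arithmetic on inexact inputs -- Wilpert's caution is that the
# culture being "measured" may not support this apparent precision.
```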


Link to slide presentation.

Thursday, August 13, 2009

Primer on System Dynamics

System Dynamics is a concept for seeing the world in terms of inputs and outputs, where internal feedback loops and time delays can affect system behavior and lead to complex, non-linear changes in system performance.
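For readers who prefer code to diagrams, here is a minimal sketch of the core mechanics – a stock adjusted through a negative feedback loop, integrated with simple Euler steps.  This is our own illustration with invented parameters, not any particular tool’s notation.  The perception delay alone is enough to turn smooth goal-seeking into overshoot and oscillation.

```python
# Minimal stock-and-flow simulation: a stock is adjusted toward a goal
# through a negative feedback loop, but the decision maker perceives the
# stock's level only after a delay. The delay produces overshoot and
# oscillation. All parameter values are invented.

GOAL = 100.0             # target level for the stock
ADJUST_TIME = 4.0        # how quickly the perceived gap is closed
PERCEPTION_DELAY = 8.0   # time to perceive the stock's true level
DT = 0.25                # Euler integration step

stock = 40.0
perceived = stock

for step in range(int(60 / DT)):
    # First-order delay: perception drifts toward the actual level.
    perceived += (stock - perceived) / PERCEPTION_DELAY * DT
    # Negative feedback: inflow proportional to the perceived gap.
    inflow = (GOAL - perceived) / ADJUST_TIME
    stock += inflow * DT
    if step % 20 == 0:  # report every 5 time units
        print(f"t={step * DT:5.1f}  stock={stock:6.1f}  perceived={perceived:6.1f}")
```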

The System Dynamics worldview was originally developed by Prof. Jay Forrester at MIT. Later work by other thinkers, e.g., Peter Senge, author of The Fifth Discipline, expanded the original concepts and made them available to a broader audience. An overview of System Dynamics can be found on Wikipedia.

Our NuclearSafetySim program uses System Dynamics to model managerial behavior in an environment where maintaining the nuclear safety culture is a critical element. NuclearSafetySim is built using isee Systems’ iThink software. isee Systems has educational materials available on their website that explain some basic concepts.

There are other vendors in the System Dynamics software space, including Ventana Systems and their Vensim program. They also provide some reference materials, available here.

Thursday, August 6, 2009

Signs of a Reactive Organization (MIT #6)

One of the most important insights to be gained from a systems perspective of safety management is the relative effectiveness of various responses to changes in system conditions.  Recall that in our post #3 on the MIT paper, we talked about single versus double loop learning.  Single loop responses are short term, local fixes to perceived problems, while double loop learning means understanding the underlying reasons for the problems and finding long term solutions.  As you might guess, single loop responses tend to be reactive.  “An oscillating incident rate is the hallmark of a reactive organization, where successive crises lead to short term fixes that persist only until the next crisis.” [pg 22]  We can use our NuclearSafetySim model to illustrate differing approaches to managing problems.

The figure below illustrates how the number of problems/issues (we use the generic term "challenges" in NuclearSafetySim) might vary with time when the response is reactive.  The blue line indicates the total number of issues, the pink line the number of new issues being identified, and the green line the resolution rate for issues, e.g., through a corrective action program.  Note that the blue line initially increases and then oscillates while the pink line is relatively constant.  The oscillation derives from the management response, reflected in the green line: there is an initial delay in responding to an increased number of issues, then resolution rates are greatly increased to address higher backlogs, then reduced (due to budgetary pressures and other priorities) when backlogs start to fall, precipitating another cycle of increasing issues.




Compare the oscillatory response above to the next figure, where an increase in issues immediately results in higher resolution rates that are maintained over a period sufficient to return the system to a lower level of backlogs.  In parallel, budgets are increased to address the underlying causes of issues, driving down the occurrence rate of new issues and ultimately bringing the backlog down to a long-term sustainable level.



The last figure shows some of the ramifications of system management on safety culture and employee trust.  The significant increase in the issues backlog initially leads to a degradation of employee trust (the pink line) and an erosion in safety culture (blue line).  However, the nature and effectiveness of the management response in bringing down backlogs and reducing new issues reverses the trust trend line and rebuilds safety culture over time.  Note that the red line, representing plant performance, is relatively unchanged over the same period, indicating that safety management issues may exist under the cover of a consistently operating plant.
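NuclearSafetySim itself is built in iThink, but the reactive pattern in the first figure can be caricatured in a few lines of Python.  This is our simplified sketch with invented parameters, not the actual model: resolution capacity chases the backlog only after a delay, and that delay alone is enough to reproduce the oscillation.

```python
# Caricature of the reactive-management pattern described above.
# New issues arrive at a roughly constant rate; resolution capacity
# responds to backlog only after a delay, producing the oscillating
# backlog of a reactive organization. All parameters are invented.

NEW_ISSUES_RATE = 10.0   # new issues identified per month (constant)
RESPONSE_DELAY = 6.0     # months for management to adjust capacity
DT = 0.5                 # simulation step, months

backlog = 60.0
resolution_rate = 8.0    # issues resolved per month

for step in range(int(120 / DT)):
    # Reactive policy: capacity drifts toward a level proportional to
    # the backlog, but only with a long first-order delay.
    target_rate = backlog / 6.0
    resolution_rate += (target_rate - resolution_rate) / RESPONSE_DELAY * DT
    backlog += (NEW_ISSUES_RATE - resolution_rate) * DT
    if step % 12 == 0:  # report every 6 months
        print(f"month {step * DT:5.1f}: backlog={backlog:6.1f} "
              f"resolution={resolution_rate:5.1f}/mo")
```

A proactive policy, by contrast, would raise the resolution rate promptly when the backlog grows and hold it there, removing the delay that drives the cycle.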

Tuesday, August 4, 2009

The Economist on Computer Simulation

The Economist has occasional articles on the practical applications of computer simulation. Following are a couple of items that have appeared in the last year.

Agent-based simulation is used to model the behavior of crowds. "Agent-based" means that each individual has some capacity to ascertain what is going on in its environment and act accordingly. This approach is being used to simulate the movement of people in a railroad station or during a building fire. On a much larger scale, each of the computer-generated orcs in the "Lord of the Rings" battle scenes moved independently based on its immediate surroundings.
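A toy version of the idea (our own minimal sketch, not the models described in the article): each agent follows purely local rules – move toward the exit, but never crowd the agent directly ahead – and congestion emerges from the interaction.

```python
# Toy agent-based sketch: agents on a line move toward an exit at x=0,
# but each agent refuses to close within MIN_GAP of the agent ahead.
# Aggregate congestion emerges from purely local decisions.
import random

random.seed(1)
positions = sorted(random.uniform(5, 50) for _ in range(20))  # 20 agents

SPEED = 1.0        # preferred movement per step
MIN_GAP = 1.0      # minimum spacing an agent will tolerate

for step in range(60):
    new_positions = []
    for i, x in enumerate(positions):
        ahead = positions[i - 1] if i > 0 else None  # nearest agent toward exit
        step_size = SPEED
        if ahead is not None:
            # Local rule: never move closer than MIN_GAP to the agent ahead.
            step_size = min(step_size, max(0.0, x - ahead - MIN_GAP))
        new_positions.append(max(0.0, x - step_size))
    # Agents that reach the exit (x == 0) leave the simulation.
    positions = [x for x in new_positions if x > 0.0]
    if step % 10 == 0:
        print(f"step {step:2d}: {len(positions)} agents still inside")
```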

Link to article.

The second article is a brief review of simulation's use in business applications, including large-scale systems (e.g., an airline), financial planning, forecasting, process mapping and Monte Carlo analysis. This is a quick read on the ways simulation is used to illustrate and analyze a variety of complex situations.

Link to article.

Other informational resources that discuss simulation are included on our References page.

Monday, August 3, 2009

Reading List: Just Culture by Sidney Dekker

Thought I would share with you a relatively recent addition to the safety management system bookshelf: Just Culture by Sidney Dekker, Professor of Human Factors and System Safety at Lund University in Sweden.  In Dekker’s view a “just culture” is critical to the creation of a safety culture.  A just culture will not simply assign blame in response to a failure or problem; it will seek to use accountability as a means to understand the system-based contributors to failure and to resolve them in a manner that avoids recurrence.  One of the reasons we believe so strongly in safety simulation is its emphasis on system-based understanding, including a shared organizational mental model of how safety management happens.  One reviewer (D. Sillars) of this book on the amazon.com website summarizes: “’Just culture’ is an abstract phrase, which in practice, means . . . getting to an account of failure that can both satisfy demands for accountability while contributing to learning and improvement.”


Question for nuclear professionals:  Does your organization maintain a library of resources such as Just Culture or Diane Vaughan’s book, The Challenger Launch Decision, that provide deep insights into organizational performance and culture?  Are materials like this routinely the subject of discussions in training sessions and topical meetings?