Thursday, August 30, 2012

Failure to Learn

In this post we call your attention to a current research paper* and Wall Street Journal summary article** that shed some light on how people make decisions to protect against risk.  The specific subject of the research is how people respond to the imminent risk of house damage from hurricanes.  As the author of the paper states, “The purpose of this paper is to attempt to resolve the question of whether there are, in fact, inherent limits to our ability to learn from experience about the value of protection against low-probability, high-consequence, events.” (p.3)  Also of interest is how the researchers used several simulations to gain insight into, and quantify, how participants’ decisions compared to optimal risk mitigation.

Are these results directly applicable to nuclear safety decisions?  We think not.  But they are far from irrelevant.  They illustrate the value of careful and thoughtful research into the how and why of decisions, the impact of the decision environment and the opportunities for learning to produce better decisions.  They also raise the question: where is the nuclear industry on this subject?  Nuclear managers routinely make what are probably the most safety significant decisions of any industry.  But how good are these decisions, and what determines their quality?  The industry might contend that the emphasis on safety culture (meaning values and traits) is the sine qua non for assuring decisions that adequately reflect safety.  Bad decision?  Must have been bad culture.  Reiterate culture, assume better decisions will follow.  Is this right?  Or is safety culture the wrong blanket, or just too small a blanket, to cover a decision process evolving from a complex adaptive system?

The basic construct for the first simulation was a contest among participants (college students) with the potential to earn a small cash bonus based on achieving certain performance results.  Each participant was made the owner of a house in a coastal area subject to hurricanes.  During the simulation animation, a series of hurricanes would materialize in the ocean and approach land.  The position, track and strength of each hurricane were continuously updated.  Prior to landfall, participants had the choice of purchasing either partial or full protection against damage for that specific storm.  The objective was to maximize total net asset value, i.e., the value of the house less any uncompensated damage and less the cost of any purchased protection.
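To make the participants’ decision concrete, here is a minimal sketch of the expected-value comparison a “rational” owner would perform before each storm.  All of the numbers are our own illustrative assumptions, not parameters from the paper:

```python
# Toy expected-value comparison for the storm-protection decision.
# House value, damage, probability and protection costs are all invented
# for illustration; they are not the paper's actual parameters.

def expected_net_asset(house_value, damage_if_hit, p_hit,
                       protection_cost, coverage):
    """Expected net asset value for one approaching storm.

    coverage: fraction of damage compensated (0 = none, 0.5 = partial, 1 = full)
    """
    uncompensated = damage_if_hit * (1.0 - coverage)
    expected_loss = p_hit * uncompensated
    return house_value - expected_loss - protection_cost

house = 100_000
damage = 40_000     # assumed damage if the storm hits this house
p_hit = 0.30        # assumed probability of a damaging landfall

options = {
    "none":    expected_net_asset(house, damage, p_hit, 0,     0.0),
    "partial": expected_net_asset(house, damage, p_hit, 2_000, 0.5),
    "full":    expected_net_asset(house, damage, p_hit, 5_000, 1.0),
}
best = max(options, key=options.get)
for name, value in options.items():
    print(f"{name:8s} expected net asset: {value:,.0f}")
print("rational choice:", best)
```

With these particular numbers the rational choice is full protection; the research participants, relying on recent experience rather than this kind of objective assessment, systematically chose less.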

While the first simulation focused on recurrent short term mitigation decisions, in the second simulation participants had the option to purchase protection that would last at least for the full season but had to be purchased prior to a storm occurring.  (A comprehensive description of the simulations and test data is provided in the referenced paper.)

The results indicated that participants significantly under-protected their homes, leading to actual losses higher than a “rational” approach to purchasing protection would have produced.  While part of the losses was due to purchasing protection unnecessarily, most was due to under-protection.  The main driver, according to the researchers, appeared to be that participants over-relied on their most recent experience instead of an objective assessment of current risk.  In other words, if in a prior hurricane they experienced no damage, either due to the track of the hurricane or because they had purchased protection, they were less inclined to purchase protection for the next hurricane.

The simulations reveal limitations in the ability to achieve improved decisions in what was, in essence, a trial and error environment.  Feedback occurred after each storm, but participants did not necessarily use the feedback in an optimal manner “due to a tendency to excessively focus on the immediate disutility of cost outlays.” (p.10)  In any event it is clear that the nuclear safety decision making environment is “not ideal for learning—…[since] feedback is rare and noisy…” (p.5)  In fact, most feedback in nuclear operations might appear to be affirming since decisions to take short term risks rarely result in bad outcomes.  It is an environment more susceptible to complacency than to learning.

The author concludes with a final question: can non-optimal decision making, such as that observed in the simulations, be overcome?  He writes, “This may be difficult since the psychological mechanisms that lead to the biases may be hard-wired; as long as we remain present-focused, prone to chasing short-term rewards and avoiding short term punishment, it is unlikely that individuals and institutions will learn to undertake optimal levels of protective investment by experience alone. The key, therefore, is introducing decision architectures that allow individuals to overcome these biases through, for example, creative use of defaults…” (pp. 30-31)

*  R.J. Meyer, “Failing to Learn from Experience about Catastrophes: The Case of Hurricane Preparedness,” The Wharton School, University of Pennsylvania Working Paper 2012-05 (March 2012).

** C. Shea, “Failing to Learn From Hurricane Experience, Again and Again,” Wall Street Journal (Aug. 17, 2012).

Tuesday, August 28, 2012

Confusion of Properties and Qualities

Dave Snowden
In this post we highlight a provocative, and we believe accurate, criticism of the approach taken by many management scientists in focusing on behaviors as the determinant of desired outcomes.  The source is Dave Snowden, a Welsh lecturer, consultant and researcher in the field of knowledge management.  For those of you interested in finding out more about him, the website of Cognitive Edge, the firm Snowden founded, contains an abundance of accessible content.

Snowden is a proponent of applying complexity science to inform managers’ decision making and actions.  He is perhaps best known for developing the Cynefin framework, which is designed to help managers understand their operational context based on four archetypes: simple, complicated, complex and chaotic.  In considering the archetypes one can see how various aspects of nuclear operations might fit within the simple or complicated frameworks, where tools such as best practices and root cause analysis are applicable.  But one can also see the limitations of these frameworks in more complex situations, particularly those involving nuanced safety decisions which are at the heart of nuclear safety culture.  Snowden describes “complex adaptive systems” as ones where the system and its participants evolve together through ongoing interaction and influence, and system behavior is “emergent” from that process.  Perhaps most provocative for nuclear managers is his contention that complex adaptive systems are “non-causal” in nature, meaning one shouldn’t think in terms of linear cause and effect and shouldn’t expect that root cause analysis will provide the needed insight into system failures.

With all that said, we want to focus on a quote from one of Snowden’s 2008 lectures, “Complexity Applied to Systems.”*  At approximately the 15:00 mark, he comments on a “fundamental error of logic” he calls “confusion of properties and qualities”.  He says:

“...all of management science, they observe the behaviors of people who have desirable properties, then try to achieve those desirable properties by replicating the behaviors”.

By way of a pithy illustration Snowden says, “...if I go to France and the first ten people I see are wearing glasses, I shouldn’t conclude that all Frenchmen wear glasses.  And I certainly shouldn’t conclude if I put on glasses, I will become French.”

For us Snowden’s observation generated an immediate connection to the approach being implemented around the nuclear enterprise.  Think about the common definitions of safety culture adopted by the NRC and industry.  The NRC definition specifies “... the core values and behaviors…” and “Experience has shown that certain personal and organizational traits are present in a positive safety culture. A trait, in this case, is a pattern of thinking, feeling, and behaving that emphasizes safety, particularly in goal conflict situations, e.g., production, schedule, and the cost of the effort versus safety.”**

INPO defines safety culture as “An organization's values and behaviors – modeled by its leaders and internalized by its members…”***

In keeping with these definitions, the NRC and industry rely heavily on the results of safety culture surveys to ascertain areas in need of improvement.  These surveys overwhelmingly focus on whether nuclear personnel are “modeling” the definitional traits, values and behaviors.  This falls squarely in the realm described by Snowden of looking to replicate behaviors in hopes of achieving the desired culture and results.  Most often, identified deficiencies are addressed through retraining to reinforce the desired safety culture traits.  But what seems to be lacking is a determination of why the traits were not exhibited in the first place.  Follow-up surveys may be conducted periodically, again to measure compliance with traits.  This recipe is considered sufficient until the next time there are suspect decisions or actions by the licensee.

Bottom Line

The nuclear enterprise - NRC and industry - appear to be locked into a simplistic and linear view of safety culture.  Values and traits produce desired behaviors; desired behaviors produce appropriate safety management.  Bad results?  Go back to values and traits and retrain.  Have management reiterate that safety is their highest priority.  Put up more posters. 

But what if Snowden’s concept of complex adaptive systems is really the applicable model, and the safety management system is a much more complicated, continuously self-evolving process?  It is a question well worth pondering, and one that may have far more impact than many of the hardware-centric issues currently being pursued.

Footnote: Snowden is an immensely informative and entertaining lecturer and a large number of his lectures are available via podcasts on the Cognitive Edge website and through YouTube videos.  They could easily provide a stimulating input to safety culture training sessions.

*  Podcast available at 

**  NRC Safety Culture Policy Statement (June 14, 2011).

***  INPO Definition of Safety Culture (2004).

Tuesday, July 31, 2012

Regulatory Influence on Safety Culture

In September 2011 the Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA) held a workshop for regulators and industry on oversight of licensee management.  “The principal aim of the workshop was to share experience and learning about the methods and approaches used by regulators to maintain oversight of, and influence, nuclear licensee leadership and management for safety, including safety culture.”*

Representatives from several countries made presentations.  For example, the U.S. presentation by NRC’s Valerie Barnes and INPO’s Ken Koves discussed work to define safety culture (SC) traits and correlate them to INPO principles and ROP findings (we previously reviewed this effort here).  Most other presentations also covered familiar territory. 

However, we were very impressed by Prof. Richard Taylor’s keynote address.  He is from the University of Bristol and has studied organizational and cultural factors in disasters and near-misses in both nuclear and non-nuclear contexts.  His list of common contributors includes issues with leadership, attitudes, environmental factors, competence, risk assessment, oversight, organizational learning and regulation.  He expounded on each factor with examples and additional detail. 

We found his conclusion most encouraging:  “Given the common precursors, we need to deepen our understanding of the complexity and interconnectedness of the socio-political systems at the root of organisational accidents.”  He suggests using system dynamics modeling to study archetypes including “maintaining visible convincing leadership commitment in the presence of commercial pressures.”  This is totally congruent with the approach we have been advocating for examining the effects of competing business and safety pressures on management. 

Unfortunately, this was the intellectual high point of the proceedings.  Topics that we believe are important to assessing and understanding SC got short shrift thereafter.  In particular, goal conflict, CAP and management compensation were not mentioned by any of the other presenters.

Decision-making was mentioned by a few presenters but there was no substantive discussion of the topic (the U.K. presenter offered a motherhood statement that “Decisions at all levels that affect safety should be rational, objective, transparent and prudent”; the Barnes/Koves presentation appeared to focus on operational decision making).  A bright spot was the meeting summary, where better insight into licensees’ decision making processes was mentioned as desirable and necessary by regulators.  And one suggestion for future research was “decision making in the face of competing goals.”  Perhaps there is hope after all.

(If this post seems familiar, last Dec 5 we reported on a Feb 2011 IAEA conference for regulators and industry that covered some of the same ground.  Seven months later the bureaucrats had inched the football a bit down the field.)

*  Proceedings of an NEA/IAEA Workshop, Chester, U.K. 26-28 Sept 2011, “Oversight and Influencing of Licensee Leadership and Management for Safety, Including Safety Culture – Regulatory Approaches and Methods,” NEA/CSNI/R(2012)13 (June 2012).

Friday, July 27, 2012

Modeling Safety Culture (Part 4): Simulation Results 2

As we introduced in our prior post on this subject (Results 1), we are presenting some safety culture simulation results based on a highly simplified model.  In that post we illustrated how management might react to business pressure caused by a reduction in authorized budget dollars.  The actions of management result in shifting of resources from safety to business and lead to changes in the state of safety culture.

In this post we continue with the same model and some other interesting scenarios.  In each of the following charts three outputs are plotted: safety culture in red, management action level in blue and business pressure in dark green.  The situation is an organization with a somewhat lower initial safety culture and confronted with a somewhat smaller budget reduction than the example in Results 1. 

Figure 1
Figure 1 shows an overly reactive management.  The blue line shows management’s actions in response to the changes in business pressure (green) associated with the budget change.  Note that management’s actions are reactive, shifting priorities immediately and directly in response.  This behavior leads to a cyclic outcome where management actions temporarily alleviate business pressure, but when actions are relaxed, pressure rises again, followed by another cycle of management response.  This could be a situation where management is not addressing the source of the problem, instead shifting priorities back and forth between business and safety.  Also of interest is that the magnitude of the cycles actually increases with time, indicating that the system is essentially unstable and unsustainable.  Safety culture (red) declines throughout the time frame.

Figure 2
Figure 2 shows the identical conditions but where management implements a more restrained approach, delaying its response to changes in business.  The overall system response is still cyclic, but now the magnitude of the cycles is decreasing and converging on a stable outcome.

Figure 3
Figure 3 is for the same conditions, but the management response is restrained further.  Management takes more time to assess the situation and respond to business pressure conditions.  This approach starts to filter out the cyclic type of response seen in the first two examples and will eventually result in a lower business gap.

Perhaps the most important takeaway from these three simulations is that the total changes in safety culture are not significantly different.  A certain price is being paid for shifting priorities away from safety; however, the ability to reduce and maintain lower business pressure is much better with the last management strategy.

Figure 4
The last example in this set is shown in Figure 4.  This is a situation where business pressure is gradually ramped up by a series of small step reductions in budget levels.  Within the simulation we have also set a limit on the extent of management actions.  Initially management takes no action to shift priorities - business pressure is within a range that safety culture can resist.  Consequently safety culture remains stable.  After the third “bump” in business pressure, the threshold resistance of safety culture is broken and management starts to modestly shift priorities.  Even though business pressure continues to ramp up, management response is capped and does not “chase” closing the business gap.  As a result safety culture suffers only a modest reduction before stabilizing.  This scenario may be more typical of an organization with a fairly strong safety culture - under sufficient pressure it will make modest tradeoffs in priorities but will resist a significant compromise of safety.

Friday, July 20, 2012

Cognitive Dissonance at Palisades

“Cognitive dissonance” is the tension that arises from holding two conflicting thoughts in one’s mind at the same time.  Here’s a candidate example, a single brief document that presents two different perspectives on safety culture issues at Palisades.

On June 26, 2012, the NRC requested information on Palisades’ safety culture issues, including the results of a 2012 safety culture assessment conducted by an outside firm, Conger & Elsea, Inc. (CEI).  In reply, on July 9, 2012 Entergy submitted a cover letter and the executive summary of the CEI assessment.*  The cover letter says “Areas for Improvement (AFls) identified by CEI overlapped many of the issues already identified by station and corporate leadership in the Performance Recovery Plan. Because station and corporate management were implementing the Performance Recovery Plan in April 2012, many of the actions needed to address the nuclear safety culture assessment were already under way.”

Further, “Gaps identified between the station Performance Recovery Plan and the safety culture assessment are being addressed in a Safety Culture Action Plan. . . . [which is] a living document and a foundation for actively engaging station workers to identify, create and complete other actions deemed to be necessary to improve the nuclear safety culture at PNP.”

Seems like management has matters in hand.  But let’s look at some of the issues identified in the CEI assessment.

“. . . important decision making processes are governed by corporate procedures. . . .  However, several events have occurred in recent Palisades history in which deviation from those processes contributed to the occurrence or severity of an event.”

“. . . there is a lack of confidence and trust by the majority of employees (both staff and management) at the Plant in all levels of management to be open, to make the right decisions, and to really mean what they say. This is indicated by perceptions [of] the repeated emphasis of production over safety exhibited through decisions around resources.” [emphasis added]

“There is a lack in the belief that Palisades Management really wants problems or concerns reported or that the issues will be addressed. The way that CAP is currently being implemented is not perceived as a value added process for the Plant.”

The assessment also identifies issues related to Safety Conscious Work Environment and accountability throughout the organization.

So management is implying things are under control but the assessment identified serious issues.  As our Bob Cudlin has been explaining in his series of posts on decision making, pressures associated with goal conflict permeate an entire organization and the problems that arise cannot be fixed overnight.  In addition, there’s no reason for a plant to have an ineffective CAP but if the CAP isn’t working, that’s not going to be quickly fixed either.

*  Letter, A.J. Vitale to NRC, “Reply to Request for Information” (July 9, 2012) ADAMS ML12193A111.

Sunday, July 15, 2012

Modeling Safety Culture (Part 3): Simulation Results 1

As promised in our June 29, 2012 post, we are taking the next step of incorporating our mental models of safety culture and decision making in a simple simulation program.  The performance dynamic we described views safety culture as a “level”, and the level of safety culture determines the organization’s ability to resist pressure from competing business priorities.  If business performance is not meeting goals, pressure on management is created which can be offset by a sufficiently strong safety culture.  However, if business pressure exceeds the threshold for a given safety culture level, management decision making can be affected, resulting in a shift of resources from safety to business needs.  This may relieve some business pressure but create a safety gap that can degrade safety culture, making it potentially even more vulnerable to business pressure.

It is worth expanding on the concept of safety culture as a “level” or in systems dynamics terms, a “stock” - an analogy might be the level of liquid in a reservoir which may increase or decrease due to flows into and out of the reservoir.  This representation causes safety culture to respond less quickly to changes in system conditions than other factors.  For example, an abrupt cut in an organization’s budget and its pressure on management to respond may occur quite rapidly - however its impact on organizational safety culture will play out more gradually.  Thus “...stocks accumulate change.  They are kind of a memory, storing the results of past actions...stocks cannot be adjusted instantaneously no matter how great the organizational pressures…This vital inertial characteristic of stock and flow networks distinguishes them from simple causal links.”* 

Let’s see this in action in the following highly simplified model.  The model considers just two competing priorities: safety and business.  When performance in these categories differs from goals, pressure is created on management and may result in actions to ameliorate the pressure.  In this model management action is limited to shifting resources from one priority to the other.  Safety culture, per our June 29, 2012 post, is an organization’s ability to resist and then respond to competing priorities.  At time zero, a reduction in authorized budget is imposed resulting in a gap (current spending versus authorized spending) and creating business pressure on management to respond.

Figure 1
Figure 1 shows the response of management.  Actions are initiated very quickly and start to reduce safety resources to relieve budget pressure.  The plot tracks the initial response, a plateauing to allow effectiveness to be gauged, followed by escalation of action to further reduce the budget gap.

Figure 2
Figure 2 overlays the effect of the management actions on the budget gap and the business pressure associated with the gap.  Immediately following the budget reduction, business pressure rapidly increases and quickly reaches a level sufficient to cause management to start to shift priorities.  The first set of management actions brings some pressure relief; the second set of actions further reduces pressure.  As expected there is some time lag in the response of business pressure to the actions of management.

Figure 3
In Figure 3, the impacts of these changes in business pressure and management actions are accumulated in safety culture.  Note first the gradual changes that occur in culture versus the faster and sharper changes in management actions and business pressure.  As management takes action there is a loss of safety priority and safety culture slowly degrades.  When further escalation of management action occurs, it comes at a point where culture is already lower, making the organization more susceptible to compromising safety priorities.  Safety culture declines further.  This type of response is indicative of a feedback loop, an important dynamic feature of the system: business pressure causes management actions, those actions degrade safety culture, and degraded culture reduces resistance to further actions.
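The feedback loop just described can be sketched in a few lines of stock-and-flow code.  This is a deliberately crude caricature of our model, and every coefficient below is invented for illustration, but it reproduces the qualitative behavior: pressure triggers action, action relieves pressure but erodes the culture stock, and a weaker stock offers less resistance to the next round of pressure.

```python
# Minimal stock-and-flow caricature of the safety culture feedback loop.
# All coefficients are illustrative assumptions, not calibrated values.

dt = 0.1       # time step for simple Euler integration
steps = 600

culture = 0.8      # stock: safety culture level (0..1); changes only gradually
pressure = 0.0     # business pressure generated by the budget gap
budget_gap = 1.0   # budget reduction imposed at time zero

history = []
for _ in range(steps):
    # Management acts only when pressure exceeds what culture can resist.
    action = max(0.0, pressure - culture)
    # Pressure builds with the gap and is relieved by management action.
    pressure = max(0.0, pressure + dt * (0.5 * budget_gap - 0.8 * action))
    # Action shifts resources away from safety, shrinking the budget gap...
    budget_gap = max(0.0, budget_gap - dt * 0.3 * action)
    # ...and slowly erodes the culture stock (a stock accumulates change).
    culture = max(0.0, culture - dt * 0.2 * action)
    history.append((pressure, action, culture))

final_pressure, final_action, final_culture = history[-1]
print(f"final pressure={final_pressure:.2f} culture={final_culture:.2f}")
```

Even in this toy version, culture ends lower than it started whenever action is triggered, which is the self-reinforcing dynamic the figures display.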

We invite comments and questions from our readers.

*  John Morecroft, Strategic Modelling and Business Dynamics (John Wiley & Sons, 2007) pp. 59-61.

Tuesday, July 3, 2012

NRC Non-Regulation of Safety Culture: Second Quarter Update

NRC SC poster, ADAMS ML120810464.
On March 17th we published a post on NRC safety culture (SC) related activities with individual licensees since the SC policy statement was issued in June 2011.  This post is an update, highlighting selected NRC actions from mid-March through June.

Our earlier post mentioned Browns Ferry, Fort Calhoun and Palisades as plants where the NRC was undertaking SC related activities.  It looks like none of those plants has resolved its SC issues. 

For Browns Ferry we reported that the NRC was reviewing the plant’s 2011 SC surveys.  Turns out that was just the tip of the iceberg.  A recent PI&R inspection report indicates that the plant’s SC problems have existed for years and are deep-rooted.  Over time, Browns Ferry has reported SC issues including production and schedule taking priority over safety (2008), “struggling” with SC issues (2010) and a decline in SC (2011).  All of this occurred in spite of multiple licensee interventions and corrective actions.  The NRC’s current view is “Despite efforts to address SC issues at the site, the inspectors concluded that the lack of full confidence in the CAP has contributed to a decline in the SC since the last PI&R inspection.”*  We don’t expect this one to go away anytime soon.

Fort Calhoun management had said that SC deficiencies had contributed to problems in their CAP.  During the quarter, they presented actions planned or taken to remediate their SC deficiencies.  On June 11th, the NRC issued a Confirmatory Action Letter with a lengthy list of actions to be completed prior to plant restart.  One item is “OPPD will conduct a third-party safety culture assessment . . . and implement actions to address the results . . . .”**  It looks like Fort Calhoun is making acceptable progress on the SC front and we’d be surprised if SC ends up being an item that prevents restart.  Last April we provided some additional information on Fort Calhoun here.

In Palisades’ case, the NRC is asking for an extensive set of information on the actions being taken to improve SC at the site.  The last item on the long list requests the latest SC assessment for Entergy’s corporate office.  (This is not simply a fishing expedition.  Entergy is in trouble at other nuclear sites for problems that also appear related to SC deficiencies.)  After the information is provided and reviewed, the NRC “believe[s] that a public meeting on the safety culture assessment and your subsequent actions would be beneficial to ensure a full understanding by the NRC, your staff, and the public.”***  Back in January, we provided our perspective on Palisades here and here.

New NRC SC activity occurred at Susquehanna as part of a supplemental inspection related to a White finding and a White performance indicator.  The NRC conducted an “assessment of whether any safety culture component caused or significantly contributed to the white finding and PI.”  The assessment was triggered by PPL’s report that SC issues may have contributed to the plant’s performance problems.  The NRC inspectors reviewed documents and interviewed focus groups, individual managers and groups involved in plant assessments.  They concluded “components of safety culture identified by PPL did not contribute to the White PI or finding, and that the recently implemented corrective actions appear to being well received by the work force.”****  We report this item because it illustrates the NRC’s willingness and ability to conduct its own SC assessments where the agency believes they are warranted.

Our March post concluded: “It’s pretty clear the NRC is turning the screw on licensee safety culture effectiveness, even if it’s not officially “regulating” safety culture.”  That still appears to be the case.

*  V.M. McCree (NRC) to J.W. Shea (TVA), Browns Ferry Nuclear Plant - NRC Problem Identification and Resolution Inspection Report 05000259/2012007, 05000260/2012007 and 05000296/2012007 and Exercise of Enforcement Discretion (May 28, 2012) ADAMS ML12150A219.

**  E.E. Collins (NRC) to D.J. Bannister (OPPD), Confirmatory Action Letter – Fort Calhoun Station (June 11, 2012)  ADAMS ML12163A287.

***  G.L. Shear (NRC) to A. Vitale (Entergy), Request for Information on SC Issues at Palisades Nuclear Plant (June 26, 2012) ADAMS ML12179A155.

**** D.J. Roberts (NRC) to T.S. Rausch (PPL Susquehanna), Susquehanna Steam Electric Station – Assessment Follow-Up Letter and Interim NRC 95002 Supplemental Inspection Report 05000387/2012008 (May 7, 2012) ADAMS ML12125A374.

Friday, June 29, 2012

Modeling Safety Culture (Part 2): Safety Culture as Pressure Boundary

No, this is not an attempt to incorporate safety culture into the ASME code.  As introduced in Part 1 we want to offer a relatively simple construct for safety culture - hoping to provide a useful starting point for a model of safety culture and a bridge between safety culture as amorphous values and beliefs, and safety culture that helps achieve desired balances in outcomes.

We propose that safety culture be considered “the willingness and ability of an organization to resist undue pressure on safety from competing business priorities”.  Clearly this is a 30,000 foot view of safety culture and does not try to address the myriad ways in which it materializes within the organization. This is intentional since there are so many possible moving parts at the individual level making it too easy to lose sight of the macro forces. 

The following diagram conceptualizes the boundary between safety priorities (i.e., safety culture) and other organizational priorities (business pressure).  The plotted line is essentially a threshold where the pressure for maintaining safety priorities (created by culture) may start to yield to increasing amounts of pressure to address other business priorities.  In the region to the left of the plotted line, safety and business priorities exist in equilibrium.  To the right of the line, business pressure exceeds that of the safety culture and can lead to compromises.  Note that this construct supports the view that strong safety performance is consistent with strong overall performance.  Strong overall performance, in areas such as production, cost and schedule, ensures that business pressures are relatively low and in equilibrium with a reasonably strong safety culture.  (A larger figure with additional explanatory notes is available here.)

The arc of the plot line suggests that the safety/business threshold increases (requires greater business pressure) as safety culture becomes stronger.  It also illustrates that safety priorities may be maintained even at lower safety culture strengths when there is little competing business pressure.  This aspect seems particularly consistent with determinations at certain plants that safety culture is “adequate” but still requires strengthening.  It also provides an appealing explanation for how complacency can, over time, erode a relatively strong safety culture.  If overall performance is good, resulting in minimal business pressures, the culture might not be “challenged”, or even noticed, as it degrades.

Another perspective on safety culture as a pressure boundary is what happens when business pressure elevates to the point where the threshold is crossed.  One reason that organizations with strong cultures may be able to resist more pressure is a greater ability to manage business challenges as they arise and/or a willingness to adjust business goals before they become overwhelming.  And even at the threshold, such organizations may be better able to identify compensatory actions that have only minimal and short term safety impacts.  For organizations with weaker safety cultures, crossing the threshold may lead to more immediate and direct tradeoffs of safety priorities.  In addition, the feedback effects of safety compromises (e.g., larger backlogs of unresolved problems) can compound business performance deficiencies and further increase business pressure.  One possible insight from the pressure model is that in some cases, perceived safety culture issues may really be a situation of a reasonably strong safety culture being overmatched by excessive business pressures.  The solution may be more about relieving business pressures than exclusively trying to reinforce culture.
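One way to make the boundary concrete is a simple threshold function in which the pressure an organization can absorb grows with culture strength.  The functional form and numbers below are our own invention for illustration, not anything derived from data:

```python
# Illustrative pressure-boundary check; the threshold curve is an assumed
# form chosen only to mimic the arc of the plotted line in the figure.

def pressure_threshold(culture: float) -> float:
    """Business pressure an organization can absorb before safety
    priorities start to yield; increasing and convex in culture
    strength (culture on a 0..1 scale)."""
    return 0.2 + 1.5 * culture ** 2

def safety_at_risk(culture: float, business_pressure: float) -> bool:
    """True when the organization is to the right of the boundary."""
    return business_pressure > pressure_threshold(culture)

# A weak culture under modest pressure can already be past the boundary,
# while a strong culture absorbs the same pressure with room to spare.
print(safety_at_risk(culture=0.3, business_pressure=0.5))   # → True
print(safety_at_risk(culture=0.9, business_pressure=0.5))   # → False
```

The same comparison also captures the complacency story: if `culture` quietly declines while `business_pressure` stays low, the check keeps returning False right up until pressure rises and the weakened threshold is suddenly crossed.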

In Part 3 we hope to further develop this approach through some simple simulations that illustrate the interaction of managing resources and balancing pressures.  In the meantime we would like to hear reactions from readers to this concept.