Thursday, May 29, 2014

A Systems View of Two Industries: Nuclear and Air Transport

We have long promoted a systems view of nuclear facilities and the overall industry.  One consequence of that view is an openness to systemic problems as possible root causes of incidents, in addition to the usual search for malfunctioning components, both physical and human.

One system where we see this openness is the air transport industry—the air carriers and the Federal Aviation Administration (FAA).  The FAA has two programs for self-reporting of incidents and problems: the Voluntary Disclosure Reporting Program (VDRP) and the Aviation Safety Action Program (ASAP).  These programs are discussed in a recent report* by the FAA’s Office of Inspector General (OIG) and are at least superficially similar to the NRC’s Licensee Event Reporting and Employee Concerns Program.

What’s interesting is that VDRP is receptive to the reporting of both individual and systemic issues.  The OIG report says the difference between individual and systemic is “important because if the issue is systemic, the carrier will have to develop a detailed fix to address the system as a whole—whereas if the issue is more isolated or individual, the fix will be focused more at the employee level, such as providing counseling or training.” (p. 7)  In addition, it appears both FAA programs are imbued with the concept of a “just culture,” another topic we have posted about on several occasions; it is often associated with a systems view.  A just culture is one where people are encouraged to provide essential safety-related information, the blame game is aggressively avoided, and a clear line exists between acceptable and unacceptable behavior.

Now the implementation of the FAA programs is far from perfect.  As the OIG points out, the FAA does not ensure that root causes are identified or that corrective actions are sufficient and long-lived, and it does not analyze safety data to identify trends that represent risks.  Systemic issues may not always be reported by the carriers or recognized by the FAA.  But overall, there appears to be an effort at open, comprehensive communication between the regulator and the regulated.

So why does the FAA encourage a just culture while the nuclear industry seems fixated on a culture of blame?  One factor might be the NRC’s focus on hardware-centric performance measures.  If these are improving over time, one might infer that any incidents are more likely caused by non-hardware, i.e., humans. 

But perhaps we can gain greater insight into why one industry is more accepting of systemic issues by looking at system-level factors, specifically the operational (or actual) coupling among industry participants versus their coupling as perceived by external observers.**

As a practical matter, the nuclear industry is loosely coupled, i.e., each plant operates more or less independently of the others (even though plants with a common owner are subject to the same policies as other members of the fleet).  There is seldom any direct competition between plants.  However, the industry is viewed by many external observers, especially anti-nukes, as a singular whole, i.e., tightly coupled.  Insiders reinforce this view when they say things like “an accident at one plant is an accident for all.”  And, in fact, one incident (e.g., Davis-Besse) can have industry-wide implications although the physical risk may be entirely local.  In such a socio-political environment, there is implicit pressure to limit or encapsulate the causes of any incidents or irregularities to purely local sources and to avoid mentioning possible systemic issues.  This leads to a search for the faulty component, the bad employee, a failure to update a specific procedure or some other local problem that can be fixed by improved leadership and oversight, clearer expectations, more attention to detail, training, etc.  The result of this approach (plus other industry-wide factors, e.g., the lack of transparency in certain oversight practices*** and the “special and unique” mantra) is basically a closed system whose client, i.e., the beneficiary of system efforts, is itself.

In contrast, the FAA’s world has two parts: the set of air carriers, whose relationships with one another are loosely coupled, similar to the nuclear industry, and the air traffic control (ATC) sub-system, which is more tightly coupled because all the carriers share the same airspace and ATC.  Because of loose coupling, a systemic problem at a single carrier affects only that carrier and does not infect the rest of the industry.  What is most interesting is that a single airline accident (in the tightly coupled portion of the system) does not lead to calls to shut down the industry.  Air transport has no organized opposition to its existence.  Air travel is such an integral part of so many people’s lives that pressure exists to keep the system running even in the face of possible hazards.  As a consequence, the FAA has to occasionally reassert its interest in keeping safety risks from creeping into the system.  Overall, we can say the air transport industry is relatively open, able to admit the existence of problems, even systemic ones, without running an inadvertent existential risk.

The foregoing is not intended to be a comprehensive comparison of the two industries.  Rather, it is meant to illustrate how one can apply a simple systems concept to gain some insight into why participants in different industries behave differently.  While both the FAA and the NRC are responsible for identifying systemic issues in their respective industries, it appears the FAA has an easier time of it.  This is not likely to change given the top-level factors described above.


*  FAA Office of Inspector General, “Further Actions are Needed to Improve FAA’s Oversight of the Voluntary Disclosure Reporting Program,” Report No. AV-2014-036 (April 10, 2014).  Thanks to Bill Mullins for pointing out this report to us.

“VDRP provides air carriers the opportunity to voluntarily report and correct areas of non-compliance without civil penalty. The program also provides FAA important safety information that might not otherwise come to its attention.” (p. 1)  ASAP “allows individual aviation employees to disclose possible safety violations to air carriers and FAA without fear that the information will be used to take enforcement or disciplinary action against them.” (p. 2)

**  “Coupling” refers to the amount of slack, buffer or give between two items in a system.

***  For example, INPO’s board of directors comprises nuclear industry CEOs, INPO evaluation reports are delivered in confidence to its members and INPO has basically unfettered access to the NRC.  This is not exactly a recipe for gaining public trust.  See J.O. Ellis Jr. (INPO CEO), Testimony before the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling (Aug. 25, 2010), retrieved from the NEI website May 27, 2014.

Sunday, April 10, 2011

On the Other Hand

Our prior post on the award of safety performance bonuses at Transocean may have left you, and us, wondering about the ability of large corporations to walk the talk.  Well, better news today with an article from the Wall Street Journal* recounting the decision by Southwest Airlines to preemptively ground its 737s after a fuselage tear on one of the planes.  

As told in the article, Southwest management appears to have responded rapidly to the event (over a weekend) with a technical assessment, including advice from Boeing.  The bottom line on the technical side was uncertainty regarding the cause of the failure and its implications for other similar 737s.  It was also clear that Southwest placed the burden on an affirmative showing that the planes were safe, rather than requiring evidence that they weren’t.  With the issue “up in the air,” the CEO acted quickly and decisively, ordering the grounding and the inspections recommended by Boeing.

The decision resulted in the cancellation of over 600 flights, no doubt inconvenienced many Southwest passengers, and imposed a substantial cost on the airline.  The action was described as “unusual” because Southwest did not wait for a directive from the government or Boeing before removing planes from service.

(Ed. note:  Southwest’s current approach is even more remarkable in light of how recently their practices were not exactly on the side of the angels.  In 2008, the FAA fined Southwest $7.5 million for knowingly flying planes that were overdue for mandatory structural inspections.)


*  T.W. Martin, A. Pasztor and P. Sanders, "Southwest's Solo Flight in Crisis," wsj.com (Apr 8, 2011).

Thursday, October 28, 2010

Safety Culture Surveys in Aviation

Like nuclear power, commercial aviation is a high-reliability industry whose regulator (the FAA) is interested in knowing the state of safety culture.  At an air carrier, the safety culture needs to support cooperation, coordination, consistency and integration across departments and at multiple physical locations.

And, like nuclear power, employee surveys are used to assess safety culture.  We recently read a report* on how one aviation survey process works.  The report is somewhat lengthy so we have excerpted and summarized points that we believe will be interesting to you.

The survey and analysis tool is called the Safety Culture Indicator Scale Measurement System (SCISMS), “an organizational self-assessment instrument designed to aid operators in measuring indicators of their organization’s safety culture, targeting areas that work particularly well and areas in need of improvement.” (p. 2)  SCISMS provides “an integrative framework that includes both organizational level formal safety management systems, and individual level safety-related behavior.” (p. 8)

The framework addresses safety culture in four main factors:  Organizational Commitment to Safety, Operations Interactions, Formal Safety Indicators, and Informal Safety Indicators.  Each factor is further divided into three sub-factors.  A typical survey contains 100+ questions and the questions usually vary for different departments.
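
To make that structure concrete, here is a minimal sketch in Python of how item responses might roll up into sub-factor and factor scores.  The four factor names come from the report; the sub-factor labels, item groupings and simple averaging rule are our own illustrative assumptions, not the report’s scoring method.

    # Sketch: roll survey item responses up into sub-factor and factor scores.
    # Factor names are from the SCISMS report; sub-factor labels and the
    # averaging rule are illustrative assumptions, not the report's method.
    from statistics import mean

    FACTORS = {
        "Organizational Commitment to Safety": ["sub1", "sub2", "sub3"],
        "Operations Interactions": ["sub1", "sub2", "sub3"],
        "Formal Safety Indicators": ["sub1", "sub2", "sub3"],
        "Informal Safety Indicators": ["sub1", "sub2", "sub3"],
    }

    def score_factors(responses):
        # responses maps (factor, sub-factor) to a list of item ratings (e.g., 1-5).
        scores = {}
        for factor, subs in FACTORS.items():
            sub_means = [mean(responses[(factor, s)]) for s in subs]
            scores[factor] = mean(sub_means)  # factor score = mean of sub-factor means
        return scores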

In addition to assessing the main factors, “The SCISMS contains two outcome scales: Perceived Personal Risk/Safety Behavior and Perceived Organizational Risk . . . . It is important to note that these measures reflect employees’ perceptions of the state of safety within the airline, and as such reflect the safety climate. They should not be interpreted as absolute or objective measures of safety behavior or risk.” (p. 15)  In other words, the outcome scales are not tied to external measurements of safety performance; they capture only the survey-takers’ perceptions of risk in their work environment.

Summary results are communicated back to participating companies in the form of a two-dimensional Safety Culture Grid.  The two dimensions are employees’ perceptions of safety vs management’s perceptions of safety.  The grid displays summary measures from the surveys; the measures can be examined for consistency (one factor or department vs others), direction (relative strength of the safety culture) and concurrence of employee and management survey responses.
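
To see the idea in miniature, the sketch below places a few invented factor scores on such a grid.  The axis meanings follow the report’s description; the numbers, the 1-5 scale and the use of matplotlib are our assumptions.

    # Sketch of a Safety Culture Grid: employee vs. management perceptions.
    # The axes follow the report's description; all data points are invented.
    import matplotlib.pyplot as plt

    factor_scores = {  # (employee mean, management mean), assumed 1-5 scale
        "Organizational Commitment": (3.8, 4.4),
        "Operations Interactions": (3.2, 4.1),
        "Formal Safety Indicators": (4.0, 4.2),
        "Informal Safety Indicators": (3.5, 4.3),
    }

    fig, ax = plt.subplots()
    for name, (emp, mgmt) in factor_scores.items():
        ax.scatter(emp, mgmt)
        ax.annotate(name, (emp, mgmt))
    ax.plot([1, 5], [1, 5], linestyle="--")  # concurrence line: both groups agree
    ax.set_xlabel("Employee perception of safety")
    ax.set_ylabel("Management perception of safety")
    plt.show()

Points above the dashed concurrence line would show managers rating safety more favorably than their employees do, a divergence worth investigating.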

Our Take on SCISMS

We have found summary-level graphics to be very important in communicating key information to clients, and the Safety Culture Grid looks like it could be effective.  One look at the grid shows the degree to which the various factors have similar or different scores, the relative strength of the safety culture, and how well managers’ and employees’ perceptions of the organization’s safety culture align.  Grids can be constructed to show findings across factors or departments within one company, or across multiple companies for an industry comparison.

Our big problem is with the outcome variables.  Given that the survey contains perceptions of both what’s going on and what it means in terms of creating safety risks, it is no surprise that the correlations between factor and outcome data are moderate to strong.  “Correlations with Safety Behavior range from r = .32 - .60 . . . . [and] Correlations between the subscales and Perceived Risk are generally even stronger, ranging from r = -.38 to -.71” (p. 25).  Given the structure of the instrument, one might ask why the correlations are not even larger.  We’d like to see some intelligent linkage between safety culture results and measures of safety performance, either objective measures or expert evaluations.
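
For the statistically curious, a Pearson r like those quoted above can be computed in a few lines.  In this minimal sketch the per-respondent scores are invented; only the correlation statistic itself corresponds to the report’s analysis.

    # Sketch: Pearson correlation between a factor score and an outcome scale.
    # The sample data are invented; the negative r mirrors the Perceived Risk result.
    from statistics import mean, stdev

    def pearson_r(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return cov / (stdev(x) * stdev(y))

    factor_scores = [3.1, 3.8, 2.9, 4.2, 3.5, 4.0]   # hypothetical respondents
    perceived_risk = [2.6, 2.4, 3.1, 2.2, 1.8, 2.0]

    print(round(pearson_r(factor_scores, perceived_risk), 2))  # about -0.69 here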

The Socio-Anthropological and Organizational Psychological Perspectives

We have commented on the importance of mental models (here, here and here) when viewing or assessing safety culture.  While not essential to understanding SCISMS, this report describes fairly clearly two different perspectives on safety culture: the socio-anthropological and the organizational psychological.  The former “highlights the underlying structure of symbols, myths, heroes, social drama, and rituals manifested in the shared values, norms, and meanings of groups within an organization . . . . the deeper cultural structure is often not immediately interpretable by outsiders. This perspective also generally considers that the culture is an emergent property of the organization . . . and therefore cannot be completely understood through traditional analytical methods that attempt to break down a phenomenon in order to study its individual components . . . .”

In contrast, “The organizational psychological perspective . . . . assumes that organizational culture can be broken down into smaller components that are empirically more tractable and more easily manipulated . . . and in turn, can be used to build organizational commitment, convey a philosophy of management, legitimize activity and motivate personnel.” (pp. 7-8)

The authors characterize the difference between the two viewpoints as qualitative vs quantitative and we think that is a fair description.


*  T.L. von Thaden and A.M. Gibbons, “The Safety Culture Indicator Scale Measurement System (SCISMS)” (Jul 2008) Technical Report HFD-08-03/FAA-08-02. Savoy, IL: University of Illinois, Human Factors Division.

Thursday, September 3, 2009

FAA Moves Away from Blame and Punishment

The Federal Aviation Administration (FAA) took another step toward a new safety culture by reducing the emphasis on blame in the reporting of operational errors by air traffic controllers.  “We’re moving away from a culture of blame and punishment,” said FAA Administrator Randy Babbitt. “It’s important to note that controllers remain accountable for their actions, but we’re moving toward a new era that focuses on why these events occur and what can be done to prevent them.” 
 
Effective immediately, the names of controllers will not be included in reports sent to FAA headquarters on operational errors…. Removing names on the official report will allow investigators to focus on what happened rather than who was at fault.

Link to FAA press release.