We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC). It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it. Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).
History
In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity. In the same time frame, practicing managers also began to show an interest in culture. A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors. The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies. Because of the sense of urgency, practical applications (usually developed and delivered by consultants) took precedence over the development of a consistent, unified theory of OC. Practitioners got ahead of researchers, and the academic world has yet to fully catch up.
Consultant models only needed a plausible, saleable relationship between culture and organizational performance. In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity. Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture. A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.
Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC.
Culture assessment methods
The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies). We’ll spare you the gory details but, unsurprisingly, the authors find shortcomings in all the approaches, even their own.
Some of this criticism is sour grapes over the more popular methods. However, in their overall conclusion about the methods, the authors mix their criticism with an acknowledgement of functional usefulness: without a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)
Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16) As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm). Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).
Our Perspective
So what are we to make of all this? For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space. The Schein model of culture is accepted world-wide. Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate. And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.
However, we may still make mistakes. Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables. We must acknowledge that OC can be a multidimensional subsystem, with complex internal relationships, that interacts with a complicated socio-technical system, which is itself embedded in a larger legal-political environment. At the end of the day, we will probably still have some unknown unknowns.
Even if we follow the authors’ advice and focus on norms, the situation remains complicated. For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture. But what about safety, production, and cost norms in a context that also includes an intensely held norm about maintaining good relations with and among long-serving coworkers? That mix could make specific behaviors harder to predict. However, people might be more likely to align their behavior around the safety norm if there were general consensus across the other norms. Even if safety is first among equals, consensus on the other norms is key to a stronger overall safety culture, one that is more likely to sanction deviant behavior.
The authors claim culture, as defined by Schein, has not been well investigated. Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices, and behavior (one’s own and others’) with organizational effectiveness, with the goal of identifying areas for improvement initiatives that will increase effectiveness. The manager in the field may not care whether diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood.
Bottom line: This is an academic paper with over 200 citations, but it is quite readable, although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.** Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up. There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods. However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate.
* J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016). Retrieved May 28, 2019.
** “Normal factor analysis, called ‘R method,’ involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few ‘factors,’ which are claimed to represent shared ways of thinking.” Wikipedia, “Q methodology.” Retrieved May 28, 2019.
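For readers who want a more concrete feel for the R vs. Q distinction, here is a minimal sketch in Python. It is our illustration, not from the article or Wikipedia, and assumes a made-up matrix of six survey respondents (rows) rating four culture attributes (columns).

import numpy as np

# Hypothetical ratings: 6 respondents (rows) x 4 culture attributes (columns).
# The numbers are invented purely for illustration.
data = np.array([
    [5, 3, 4, 2],
    [4, 3, 5, 1],
    [2, 5, 2, 4],
    [1, 5, 1, 5],
    [5, 2, 4, 2],
    [2, 4, 2, 5],
], dtype=float)

# R method: correlate attributes with each other across respondents (4 x 4 matrix).
r_corr = np.corrcoef(data, rowvar=False)

# Q method: correlate respondents with each other across attributes (6 x 6 matrix).
# Q methodology then factor-analyzes this matrix to group similar viewpoints.
q_corr = np.corrcoef(data, rowvar=True)

print("R (attribute x attribute):", r_corr.round(2), sep="\n")
print("Q (respondent x respondent):", q_corr.round(2), sep="\n")

The only difference is which dimension of the data matrix is treated as the “variables”; the subsequent factor analysis then operates on the resulting correlation matrix.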