Wednesday, January 12, 2011

ESP and Safety Culture

A recent New York Times article* on an extrasensory perception (ESP) study and the statistical methods used therein caught our attention.  The article’s focus is the controversy surrounding statistical significance testing: “A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered ‘significant’ if its probability of occurring by chance is less than 5 percent.”  We have all seen such analyses.

However, critics of classical significance testing say a finding based on such a test “could overstate the significance of the finding by a factor of 10 or more,” a sort of super false positive.  The critics claim a better approach is to apply the methods of Bayesian analysis, which incorporates known probabilities, if available, from outside the study.  Check out the comments on the article, especially the reader-recommended ones, for more information on statistical methods and issues.  (You can ignore the ESP-related comments unless you have a special interest in the topic.)
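To see how a “significant” result can overstate the evidence, here is a minimal sketch of the Bayesian argument.  The numbers below (a 1-in-100 prior that a tested effect is real, 80% statistical power) are our own illustrative assumptions, not figures from the article or the study:

```python
# Hypothetical illustration: why a p < 0.05 result can overstate the
# evidence when true effects are rare.  All inputs are assumptions.
prior = 0.01   # assumed prior probability the tested effect is real
power = 0.80   # assumed probability a real effect yields a significant result
alpha = 0.05   # conventional significance threshold (false positive rate)

# Bayes' rule: P(effect is real | result is "significant")
posterior = (power * prior) / (power * prior + alpha * (1 - prior))

print(f"P(real effect | significant result) = {posterior:.3f}")
```

Under these assumptions the posterior probability is only about 0.14, i.e., roughly 86 percent of “significant” findings would be false positives, which is the flavor of overstatement the critics describe.  Change the prior and the answer changes, which is precisely the Bayesians’ point.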

What has this got to do with safety culture?

Recall that last October we reported on an INPO study that, among other things, calculated correlations between safety culture survey factors and various safety-related performance measures.  We expressed reservations about the overall approach and results even though a few correlations supported points we have been making in our blog.

The controversy over the ESP study and its associated statistical methods reminds us that analysts in many fields are under pressure to find something “significant.”  This pressure comes from bosses, funding agencies, editors and tenure committees.  Studies that find no effects, or ones not aligned with higher-level organizational objectives, are less likely to be publicized and their authors rewarded.  In addition, we fear some (many?) social science researchers don’t fully understand the statistical methods they are using, i.e., their built-in biases and limitations.  So, once again, caveat emptor.

By the way, we are not saying or implying the INPO study was biased in any way; we have no information on it other than what was presented at the NRC meeting referenced in our original blog post. 

*  B. Carey, “You Might Already Know This ...,” New York Times (Jan 11, 2011).
