Incentives, Assessment, and the Reliability of Statistical Significance Assessments of Evidence

Bill Cready (First Author, Presenting Author)
University of Texas at Dallas

Tuesday, Aug 5: 3:05 PM - 3:20 PM
1025 
Contributed Papers 
Music City Center 

Description

This analysis evaluates the implications of researcher hypothesis-selection incentives for the inferential value of empirical analyses. It illustrates how strongly a "statistically significant" outcome objective incentivizes researcher aversion to testing possibly true null hypotheses. Mechanically, such aversion reduces the number of true nulls selected for testing, which in turn reduces the incidence of Type I errors (i.e., erroneous rejections of true null hypotheses). Left unfettered, this aversion leads to settings in which researchers almost always opt to test false null hypotheses. That is, studies routinely produce reliable "falsifications" of a priori false hypotheses, a practice that transparently lacks inferential relevance. Collectively, the analysis illustrates the importance of a comprehensive understanding of researcher incentives and research assessment practices when evaluating the reliability and relevance of findings obtained from Null Hypothesis Significance Test (NHST)-based examinations of evidence.
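To make the selection mechanism concrete, the following is a minimal simulation sketch (not part of the paper; the study mix, effect size, sample size, and two-sided z test at alpha = 0.05 are assumptions chosen for illustration). It shows that as the share of true nulls selected for testing falls, Type I errors become rare and nearly every "significant" result is a rejection of an a priori false null.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(share_true_nulls, n_studies=100_000, n=100, effect=0.5):
    """Simulate a literature in which a given share of tested hypotheses
    have a true null (zero effect); the rest have a genuine effect."""
    null_true = rng.random(n_studies) < share_true_nulls
    true_mean = np.where(null_true, 0.0, effect)
    # One-sample z test of the mean of n unit-variance observations.
    sample_mean = rng.normal(loc=true_mean, scale=1.0 / np.sqrt(n))
    z = sample_mean * np.sqrt(n)
    reject = np.abs(z) > 1.96          # two-sided test at alpha = 0.05
    n_type_i = (reject & null_true).sum()     # erroneous rejections
    fdp = n_type_i / max(reject.sum(), 1)     # share of rejections that are errors
    return n_type_i, fdp, reject.mean()

# As hypothesis-selection incentives push researchers away from possibly true
# nulls, the share of true nulls tested falls; Type I errors all but vanish and
# almost every "significant" finding rejects an a priori false hypothesis.
for share in (0.5, 0.2, 0.05, 0.0):
    errors, fdp, rej = simulate(share)
    print(f"true nulls tested: {share:4.2f}  Type I errors: {errors:6d}  "
          f"erroneous share of rejections: {fdp:.3f}  overall rejection rate: {rej:.3f}")
```

Under these assumed parameters, the per-test Type I error rate conditional on a true null stays near alpha, but the count of erroneous rejections, and their share of all "significant" results, shrinks mechanically as fewer true nulls are tested.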

Keywords

Priors, Incentives, Statistical Significance, Error 

Main Sponsor

Health Policy Statistics Section