Pedde: The Danger of Statistics
By Jonathan Pedde, Staff Columnist
Published on Thursday, May 24, 2012
In our day and age, empirical evidence from scientific study is held in high regard. Perhaps as a result, supposedly “scientific” data are often used in political debates to show how one political position is “better” than another. One psychology study purported to show that as people become drunker, they become more politically conservative. Another claimed to show that conservatives better understand liberals’ opinions than vice versa. However, when one uses empirical studies to justify these kinds of claims, one must make implicit assumptions that can often be very wrong. Especially in the social sciences, there are several problems that can arise when empirical papers are reported in popular media and then used as fodder in political debates.
First, problems can arise due to the nature of the academic publication process. Statistical inference is, by nature, probabilistic. Even if academic researchers make no mistakes, there is always some chance that a given paper will reject a true hypothesis or fail to reject a false one. Given the sheer volume of academic research being produced today, there will be many papers that come to false conclusions. Thus, as George Mason University economist Alex Tabarrok put it, “Every crackpot theory will have at least one scientific study that it can cite in its support.”
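Tabarrok’s point is just arithmetic: even when a theory is false, each independent study testing it at the conventional 5 percent significance level has a 5 percent chance of “confirming” it, so false confirmations become nearly certain as studies accumulate. A minimal sketch (the study counts are illustrative, not from any cited paper):

```python
def prob_at_least_one_false_positive(n, alpha=0.05):
    """Chance that at least one of n independent tests of a FALSE
    hypothesis nonetheless comes back 'significant' at level alpha."""
    return 1 - (1 - alpha) ** n

for n in (1, 20, 100):
    print(n, round(prob_at_least_one_false_positive(n), 3))
# With 20 studies the chance exceeds 64 percent; with 100 it is above 99 percent.
```

This is why a single supportive study, on its own, tells us very little.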
Given the 5 percent significance levels that are common in social science, it is tempting to conclude that only one in every 20 published empirical papers has erroneous conclusions. This conclusion, however, would probably be incorrect. In fact, as economists Bradford DeLong and Kevin Lang argued in a paper provocatively titled “Are All Economic Hypotheses False?,” this conclusion is most certainly incorrect with regard to published papers that fail to reject their “null” hypothesis. DeLong and Lang showed that it is highly probable that more than two out of three of these kinds of papers come to erroneous conclusions.
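The flaw in the “one in 20” intuition can be made concrete with a back-of-the-envelope calculation: the share of significant findings that are wrong depends on how many of the hypotheses being tested are true in the first place, not just on the significance level. The share of true hypotheses and the statistical power below are assumed numbers for illustration, not figures from DeLong and Lang:

```python
def false_discovery_rate(share_true, power=0.8, alpha=0.05):
    """Among results that come back 'significant', the fraction that are
    actually false positives, given the share of tested hypotheses that
    are true, the test's power, and its significance level."""
    true_pos = share_true * power          # true hypotheses correctly detected
    false_pos = (1 - share_true) * alpha   # false hypotheses wrongly 'confirmed'
    return false_pos / (true_pos + false_pos)

# If only 10 percent of tested hypotheses are true, 36 percent of
# significant findings are false positives, not 5 percent.
print(round(false_discovery_rate(0.1), 2))
```

The fewer true hypotheses researchers are testing, the worse the published record looks, even when every individual test is run correctly.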
Second, especially when laboratory experiments are used in social science, experimental flaws can sometimes be significant. Consider a famous psychology study titled, “Automaticity of Social Behavior.” In this study, the experimenters had the treatment group read words associated with old people and then timed how long it took the study participants to walk down the hall. The experimenters determined that the subjects who read words associated with old people took longer to walk down the hall than the control group. However, several psychologists recently failed to replicate these findings using experimenters who were unaware of the expected result. The real twist, however, is that when these psychologists told the experimenters the results they were expecting, they were able to replicate the original results. Thus, it appears that laboratory-based evidence in social science can sometimes produce results that occur solely because the experimenter is expecting them to occur, not because the results are actually valid.
Third, media reports sometimes overstate the conclusions that can reasonably be drawn from an academic paper. Consider a research paper titled “Exporting Obesity,” recently mentioned in The Dartmouth (“NAFTA enables export of obesity, report finds,” May 3). The research paper noted three facts: First, the North American Free Trade Agreement was implemented in 1994. Second, American food exports to and investment in Mexico have increased since 1994. Third, obesity rates in Mexico have increased since 1994. You don’t need to take an advanced class in statistics or econometrics to see that one cannot reasonably conclude anything from these three facts alone. Nonetheless, reports of this paper in popular media made it seem as if research had shown that NAFTA caused an increase in Mexican obesity rates, a conclusion that cannot be drawn from the evidence in this paper alone.
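The statistical trap here is that any two quantities that both trend upward over the same period will be strongly correlated, whether or not either causes the other. A toy sketch with made-up data (the series below are random trends, not actual trade or obesity figures):

```python
import random

random.seed(0)
years = range(1994, 2014)
# Two unrelated series that each drift upward over time, plus noise.
exports = [(t - 1994) + random.random() for t in years]
obesity = [0.5 * (t - 1994) + random.random() for t in years]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(corr(exports, obesity), 2))  # very close to 1 despite no causal link
```

Establishing causation would require something more, such as a comparison group of similar countries that did not sign the agreement.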
In short, no single academic paper can reasonably be used to settle a political debate. Instead, it would be wise to remember two things. First, understanding an individual paper’s methodology is often just as important as understanding its results. Second, looking at the broader literature on a topic can often give a more accurate view of that topic than looking at a single paper in isolation.