
Bad Science Hurts Us All: A Call to End ‘Man Bites Dog’-Style Publication

Too much research ignores a basic lesson: Correlation is not the same as causation.

Photo: A crowd in New York City (Mark Lennihan/The Associated Press)

Say you had a neighborhood with high levels of crime and a high degree of police presence — in social science research terms, a high “correlation” between police presence and crime. Would you immediately conclude that police officers cause crime? You probably would not, because you would recognize that the police presence in the neighborhood at issue was more likely to be a response to, and not a cause of, elevated crime in the area.

Let’s say you then did a study that found a strong correlation between ice cream sales and shark sightings. Would you think that either one causes the other? Clearly not. You would realize that during summertime, people buy more ice cream and also spend more time at the beach where they occasionally spot sharks.

Both of these examples demonstrate an adage that should be ingrained in the minds of every social science researcher: correlation is not the same as causation. If a researcher is analyzing a data set and all she sees in the data is that A and B are correlated, that researcher cannot possibly draw any further meaning from that correlation. A might be causing B, B might be causing A, both A and B might be caused by a third factor, or the correlation could just be an accident.
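
To make that concrete, here is a small illustrative simulation in Python. Everything in it is invented: the “summer heat” variable simply stands in for any hidden third factor. Two quantities that have no effect on each other end up strongly correlated because both respond to the same lurking cause.

import numpy as np

rng = np.random.default_rng(0)

# A hidden third factor ("summer heat") drives both quantities; all numbers are invented.
heat = rng.normal(size=5000)
ice_cream_sales = 2.0 * heat + rng.normal(size=5000)   # responds to heat, not to sharks
shark_sightings = 1.5 * heat + rng.normal(size=5000)   # responds to heat, not to ice cream

# Prints a correlation around 0.75 despite zero causal link between the two series.
print(np.corrcoef(ice_cream_sales, shark_sightings)[0, 1])

Nothing in the two series alone reveals that the heat variable is doing all the work, which is exactly the position a researcher is in when staring at a single correlation in a data set.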

Unfortunately, it is a recurring — and even daily — phenomenon to see research studies that seem to ignore this basic lesson. These studies are then promoted and publicized by journals, universities and the studies’ own authors, often resulting in headlines that announce causal findings the studies cannot possibly support.

Such headlines are typical of what has become a recurring and dangerous problem in social science: purely correlational studies touted as demonstrating causation.

The ‘Bullying’ Study

A recent article from Seokjin Jeong, a criminology professor at the University of Texas at Arlington, along with a graduate student at Michigan State, makes the headline-grabbing claim that anti-bullying programs could very well do more harm than good. In the study, the researchers relied on a survey that, in 2005–06, asked 7,001 students in 195 schools in the United States a number of questions, including whether they had experienced bullying. Their schools’ leaders were also surveyed, with questions that included whether their school had an anti-bullying program.

The study then does a bit of econometric modeling and ultimately concludes that “students attending schools with bullying prevention programs were more likely to have experienced peer victimization, compared to those attending schools without bullying prevention programs.” The authors then promote the study and, with the authoritative air of academics, assert that “this study raises an alarm … [that there] is a possibility of negative impact from anti-bullying programs.”

There is one fundamental problem with that statement: There is absolutely no way that a survey done in a single year can tell us anything whatsoever about the causal effect of an anti-bullying program. The only thing such a survey can tell us is the correlation between anti-bullying programs and actual bullying levels. But if anti-bullying programs and actual bullying are found to coexist, a very likely explanation is that schools with high levels of actual bullying feel the need to adopt anti-bullying programs, whereas schools with little or no bullying might not think it worth the bother.

Even if the study had been longitudinal — tracking schools and their rates of bullying over time — that would not have been enough to show that anti-bullying programs backfire. Even if the data showed that reports of bullying skyrocket in the years after a school adopts an anti-bullying program, we still would not know whether the program caused more bullying. Why? Because after a good anti-bullying program is adopted in a given school, students will hopefully be both more aware of what constitutes bullying and more willing to report bullying to teachers or on surveys. When those good things occur, reports of bullying could stay the same or even go up, even if actual bullying has dropped. Indeed, this factor can complicate even a well-designed experiment.
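
A quick back-of-the-envelope calculation, using entirely made-up rates, shows how this reporting effect can make a successful program look like a failure in survey data:

# Invented numbers, purely to illustrate the reporting effect described above.
actual_before, report_rate_before = 0.30, 0.50   # 30% of students bullied; half of them report it
actual_after,  report_rate_after  = 0.20, 0.90   # actual bullying falls by a third; reporting rises

reported_before = actual_before * report_rate_before   # 15% of students report being bullied
reported_after  = actual_after * report_rate_after     # 18% of students report being bullied

# The survey number goes up even though actual bullying went down.
print(reported_before, reported_after)

On these hypothetical numbers, reported bullying rises from 15 percent to 18 percent even as actual bullying falls by a third, which is why rising reports alone can never prove that a program backfired.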

What the researchers in this study did is the exact equivalent of saying that police cause crime. It is nothing short of absurd.

The Divorce and Preventable Accidents Study

A recent article from Rice University sociologist Justin Denney, along with a graduate student from the University of Pennsylvania, claimed to have found that divorced people are more likely to perish in preventable accidents (including fires, accidental poisoning, drowning, falls, and auto accidents) than are married people.

The study’s finding, drawn from a long-running survey, is of course purely a matter of correlation. It could not be otherwise, given that obviously people are not randomly assigned to be either married or divorced. Yet the study repeatedly uses causal language, as if divorce literally causes greater incidence of fires, poisoning and other preventable accidents. For example, the study claims to show that accidents are indeed “influenced by social status,” that marital status “independently impact[s] accidental death,” and that both socioeconomic status and marital status have been “elevate[d]” from important contributors to “essential features of accident mortality risk.” A Rice University press release quotes the study’s lead author as suggesting that “social relationships … prolong life.”

It might somehow be the case that marriage can protect against accidental risks, but this study cannot show anything of the sort. As the study itself concedes in a brief paragraph, people select into — and out of — marriage. Thus, if married and divorced people seem to have different risk levels, that could well be because people who already lead riskier lives in any number of unobserved ways are also more likely to get divorced, not because marriage inherently prevents fires. Hence, the causal language sprinkled throughout the study and press release is unjustified and at best misleading.

The Peanut Butter Study

This study, published in the journal Breast Cancer Research and Treatment, arose from a long-term project that has tracked more than 9,000 girls and young women from 1996 (when they were all nine to 15 years old) through today. All study participants had filled out multiple surveys about their lives, diet, and health. After reviewing the responses, researchers concluded that women who ate more vegetable fat and protein (specifically including peanut butter) in 1996–98 seemed to have fewer non-cancerous lumps in more recent surveys.

But the mere fact that some women who ate more peanut butter in 1996–98 had fewer benign lumps in 2010 in no way means that eating peanut butter protected them from benign lumps. Unlike in a randomized trial, any number of other unknown factors could differ about such women, and this study has no way of identifying what those factors might be.

This did not stop the researchers from heavily insinuating that they had found causation. Indeed, they went even further: Even though they had studied only benign lumps, they actually told news media in an official press release that peanut butter “might reduce the risk of breast cancer.”

Worse, the researchers arguably should not even have claimed to have found a valid correlation in the first place. The survey given to girls in 1996, for example, gave them five to seven choices to rank how often they consumed 141 types of food and drink, including everything from chocolate milk to alcohol to salami to celery to Twinkies. Then the survey given to women in 2010 asked them about 18 specific health conditions, plus an “other” box at the end. Oddly enough, the women were specifically asked about cancer, so if there had been any correlation between peanut butter and actual cancer, we can be sure the researchers would have reported on that rather than on benign lumps.

Taking all of those possibilities into account, the researchers could have had their pick of 30,000+ correlations, making it virtually guaranteed that some would seem significant just by chance. Yet the researchers issued a press release suggesting that peanut butter “could help reduce the risk of breast cancer.” If they had made this overstatement in a press release about an FDA-regulated drug, they would have found themselves facing federal prosecution.
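
To see how reliably flukes appear, consider a deliberately simplified sketch in Python. It uses pure noise and a far smaller grid of comparisons than the researchers actually had (just the 141 foods crossed with 19 outcomes), so the specifics are assumptions for illustration only. Even when nothing is related to anything, running enough tests turns up plenty of “significant” correlations.

import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 9,000 people, 141 food items, 19 health outcomes, no real relationships at all.
n_people, n_foods, n_outcomes = 9000, 141, 19
foods = rng.normal(size=(n_people, n_foods))
outcomes = rng.normal(size=(n_people, n_outcomes))

# Correlate every food with every outcome: 141 * 19 = 2,679 comparisons.
corrs = np.corrcoef(foods.T, outcomes.T)[:n_foods, n_foods:]

# With n = 9,000, any |r| above roughly 1.96 / sqrt(n) clears the usual p < .05 bar.
threshold = 1.96 / np.sqrt(n_people)
print(int((np.abs(corrs) > threshold).sum()), "correlations look 'significant' by chance alone")

In a typical run, roughly five percent of those comparisons (on the order of 130) clear the bar on noise alone, and the real menu of 30,000+ possible comparisons only multiplies the opportunities for flukes.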

* * *

The above are only three recent examples of what has become an endemic problem in academia — bad research designed to grab headlines. Scholars should pursue the truth as accurately as they can. One of their paramount truth-seeking duties is not to oversell their work, not to claim to have proven something they cannot possibly have proven. But many scholars are sucked into the pressure to produce scholarship that draws public attention, which is easier to get with a “man bites dog”-style story than with a “dog bites man” one. Thus, the temptation is to highlight counterintuitive findings without careful attention to whether they are true or false. And the temptation of media outlets is to publish these results as though they were undisputed facts.

Our society depends on academic scholars to provide accurate information that can guide everything from our education policies to our personal dietary choices. If we cannot trust scholars and journalists to acknowledge the difference between correlation and causation, we are doomed to fall prey to at best misleading, and at worst severely damaging, policy, health and lifestyle recommendations.