The debunking of an implausible study shows the need for viewpoint diversity in the academy.
By Robert P. George
March 5, 2019 6:23 pm ET
Humility can be hard to come by in professional research, which is why it’s worth noting the retraction last month of a major study on the social effects of attitudes toward sexuality. The journal Social Science & Medicine withdrew a 2014 analysis purporting to show that widespread traditional beliefs about sexual morality—or “structural stigma”—gravely imperil the health of people who don’t identify as straight, whom the study classified as “sexual minorities.” Lead author Mark Hatzenbuehler, a professor at Columbia’s Mailman School of Public Health, is a renowned expert on stigma and health. The study has been widely reported and frequently cited.
It was yanked because its key claim—that stigma reduces life expectancy for sexual minorities in the United States by an average of 12 years—came to naught. It was entirely the result of a coding error to which Mr. Hatzenbuehler himself, to his credit, owned up. The admission came after the same journal published an article online in November 2016, in which sociologist Mark Regnerus showed that there was no feasible way to replicate Mr. Hatzenbuehler’s key result. Ten different attempts yielded no effect whatever of stigma on life expectancy.
Yet, oddly, Mr. Hatzenbuehler’s original article remained in print, unretracted, for well over two years. In fact, it has been cited in published research over 100 times since November 2016. In contrast, the Regnerus correction has been cited a mere five times. As Jonathan Swift observed, “Falsehood flies, and the truth comes limping after it.”
How did all the researchers in this field except Mr. Regnerus fail to question how the critical attitudes of others could be worse for one’s health than, say, smoking a pack of cigarettes every day for decades (something that has been shown to shorten life by an average of 10 years)? Why was the error not caught in the original vetting process? And why, once the error was discovered, was the study not retracted immediately?
There are a few reasons for concern. First, Mr. Hatzenbuehler was an associate editor of the same journal in which his now-retracted study was published, and he co-edited the special issue in which it appeared. One must assume that the study was scrutinized in accordance with formal peer-review procedures that ordinarily can be counted on to raise red flags. If it was, however, then the question is how the error nevertheless escaped detection. Is the review process itself insufficiently rigorous and in need of reform?
Second, in 2017 the National Institutes of Health awarded a multiyear, $2.8 million grant to Columbia University (including subcontracts to other institutions), with Mr. Hatzenbuehler as co-principal investigator, for further research on structural stigma. According to the NIH Reporter website, that grant paid out $1.2 million after Mr. Regnerus’s correction of Mr. Hatzenbuehler was published but before the latter’s article was retracted. Had the study been retracted immediately, perhaps the entire $2.8 million grant proposal would have come under greater scrutiny.
This case has the marks of confirmation bias—a problem that bedevils social science, especially when research concerns controversial issues. Remember when UCLA graduate student Michael LaCour’s study indicated that people’s minds could be changed about same-sex marriage merely by gay canvassers engaging them in a simple conversation? Columbia political-science professor Donald Green signed on as a co-author of that study without closely scrutinizing the data. When those data were exposed as having been fabricated by Mr. LaCour, Mr. Green commendably called for the study’s retraction, which came swiftly.
Mr. Regnerus, who blew the whistle on the Hatzenbuehler study, is no stranger to controversy. He is a fine but beleaguered sociologist, much disliked for his conservative personal beliefs. He endured torrents of abuse and two separate inquiries by his own university over his 2012 study of adult children of parents who had been in a same-sex relationship. Critics failed in their efforts to force the publication to retract the study. Refusing to be intimidated, the editor stood by it even while he was the subject of a lawsuit, because Mr. Regnerus’s critics had no case. They critiqued his modeling decisions but made no substantial claim that his results were not replicable.
Confirmation bias—and its converse, the aggravated denial of unfavored results—flourishes when there is a lack of viewpoint diversity in scholarship. As such diversity has waned in the American academy, scholarly journals and federal funding agencies have too often become intellectually inbred. They sometimes constitute an academic version of interlocking directorates on corporate boards, in which decision makers who share the same outlook tend to view each other’s work with an insufficiently critical eye. Research that pleases everyone in the club sometimes doesn’t get enough scrutiny, even when its results are strikingly implausible.
“Prudent” scholars are often afraid even to mention the rise of confirmation bias, much less try to do anything about it. Yet any hope of rescuing social-science research from further disrepute will require scholars to follow Mr. Regnerus’s example: a little less “prudence” and more guts.
Mr. George is a professor of jurisprudence at Princeton.