Lies, Damned Lies, and Statistics

COVID-19 Research Is Published Quickly but With Shaky Accuracy in Findings

In the past few months, COVID-19 research has been released at a breakneck pace: MedRxiv (the preprint server) has 4,602 papers mentioning "COVID" as of 6/25/2020, while PubMed has archived over 25,000 such papers. Unfortunately, as we've seen, quite a few of the high-profile findings covered in the news turn out to be shaky or even deserve retraction.

Some observers have criticized the rise of preprints, assuming that the lack of peer review makes them unreliable or even dangerous. But is that really so? Some of the most egregious COVID-19 papers so far have appeared in the top five peer-reviewed medical journals. And perhaps that is no surprise. In a Wired column, psychology professor Simine Vazire argues that "journals don't even pretend to ensure the validity of scientific findings" by reexamining data or code; instead, peer reviewers are left judging the surface-level write-up, which is "like asking a mechanic to evaluate a car without looking under the hood."

Perhaps worse, journals are often swayed by factors that have no bearing on scientific quality, such as the anticipated publicity a study might attract or how famous its author is. Vazire has an alarming story from her time as an editor at a top psychology journal: she rejected a submission from a famous psychologist because of "serious methodological flaws." The famous scholar complained to the journal committee that had hired Vazire, and the committee then warned her about stepping on toes.

The jury is still out on how much value (if any) peer review adds once all the costs and benefits are taken into account. But that fact is itself telling: with so little evidence that peer review actually works, why not experiment with new models of scientific publication?