Jennifer Raymond (Credit: Steve Fisch, stevefischphoto.com)

Gender bias is present in a variety of scientific settings, including faculty hiring, scientific publishing, and grant funding. Yet the review processes that generate these skewed outcomes largely happen behind closed doors, making them difficult to study.

A preprint posted on bioRxiv on August 29 takes a peek behind the curtain, providing insight into how bias creeps into the review process at the journal eLife—and suggests ways in which the review process could be improved to mitigate biases in the future.

The Scientist spoke with coauthors Jennifer Raymond, a neurobiologist at Stanford University and a reviewing editor at eLife, and Andrew Collings, executive editor at eLife, to learn more about their recent study.

TS: What motivated you to look for gender biases in peer review?

Jennifer Raymond: Recently, there was a meeting at AAAS on implicit bias...

I came back realizing a couple of things. The first is that it is just really hard to get your hands on data to assess how bad the problem might be. Most journals and funders are worried about their reputations, so they hold their data quite close. The second thing is that other publishers were interested in the eLife review model, which is innovative because after independent reviews come in, the reviewers consult with each other. They know each other’s identity, and that holds them accountable a little bit. The editor then synthesizes the feedback into a consensus review, along with the decision to accept, reject, or revise. Many of the other publishers were interested in whether this might be a way to mitigate implicit bias. But we didn’t have any data on any of those things at eLife. 

TS: What were the main findings?

JR: We see that authors with different demographic characteristics [gender and nationality] have different success rates. The ideal is that excellent science would be published regardless of where it comes from. The question of whether that science is all getting a fair shake in peer review is what we are trying to get at. An important point to make is that unequal success rates don’t necessarily mean that there’s bias, because maybe researchers from some countries have better funding, and you might expect them to have better success rates. 

But what we found suggested that there is bias in the peer review process, because the success rates depended on who reviewed the papers. Men were more successful than women when the reviewers were all male, and men's and women's success rates were more similar when the review panel was mixed. Similarly, we saw that an author was more likely to have their paper accepted if at least one of the reviewers was from the same country. In each case, we call this homophily: a preference of reviewers for authors who share their demographic characteristics.
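To make this kind of analysis concrete, here is a minimal sketch, not the authors' actual pipeline, of how one might tabulate acceptance rates by author gender and reviewer-panel composition. The file name and column names (submissions.csv, author_gender, reviewer_genders, accepted) are hypothetical; the real eLife data are structured differently.

```python
# Minimal sketch (hypothetical data and columns, not the study's code):
# tabulate acceptance rates by author gender and whether the reviewer
# panel was all-male or mixed.
import pandas as pd

# Assumed columns: author_gender ("M"/"F"),
# reviewer_genders (e.g. "M;M;F"), accepted (0/1).
df = pd.read_csv("submissions.csv")

def panel_type(reviewer_genders: str) -> str:
    genders = set(reviewer_genders.split(";"))
    return "all-male" if genders == {"M"} else "mixed/other"

df["panel"] = df["reviewer_genders"].apply(panel_type)

# Acceptance rate for each (panel composition, author gender) cell.
rates = df.groupby(["panel", "author_gender"])["accepted"].mean().unstack()
print(rates)

# A homophily signal: the male-female gap is larger under all-male panels.
print("M-F acceptance gap by panel type:\n", rates["M"] - rates["F"])
```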

Andrew Collings (Credit: Michael Sim)

Our study is probably underestimating bias, because there's lots of evidence that people from the group that is discriminated against often share the same biases. So women tend to under-rate women to some extent, just as men do. The ideal study would be a randomized, controlled trial of double-blind review. The idea would be to have manuscripts come into a journal, randomly assign them to reviewers who either do or do not know who the authors are, and then look for differences in success rates between authors from the different groups. It's the obvious way to look for bias in the peer review process. Nobody has done that, to my knowledge.
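For illustration only, here is a small simulation of the trial design Raymond describes. All numbers (baseline acceptance probability, bias size) are invented assumptions; the point is the structure of the comparison: bias would appear as a larger male-female acceptance gap in the unblinded arm than in the blinded arm.

```python
# Illustrative simulation of a randomized trial of double-blind review.
# Assumed parameters (baseline rate, bias magnitude) are hypothetical.
import random

random.seed(0)

def review(author_gender: str, blinded: bool) -> bool:
    base = 0.50  # assumed baseline acceptance probability
    # In the unblinded arm, reviewers see the authors, so a gender
    # bias can act; in the blinded arm it cannot.
    bias = 0.0 if blinded else (0.05 if author_gender == "M" else -0.05)
    return random.random() < base + bias

def gap(blinded: bool, n: int = 20000) -> float:
    """Male minus female acceptance rate within one arm of the trial."""
    rate = {g: sum(review(g, blinded) for _ in range(n)) / n
            for g in ("M", "F")}
    return rate["M"] - rate["F"]

print("gap, unblinded arm:", round(gap(blinded=False), 3))
print("gap, blinded arm:  ", round(gap(blinded=True), 3))
```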

TS: Do you think similar findings would apply to other journals, too?

JR: We’d love to have other journals follow suit and see if there are examples where the biases are smaller. That will provide important clues about how common this is across review processes. Maybe there are certain types of peer review, certain editorial policies, or even administrative policies that reduce these effects.

We’re going to do more analysis of the eLife data we have to see what’s happening in the process of review and consultation among reviewers. We’d like to see whether we can find evidence that the reviewer consultation process mitigates bias. It is also possible that it could go the other way. 

TS: What is eLife planning to do to remedy gender biases in its peer review process?

Andrew Collings (via email): We’ll need more time to reflect on the findings, but in the first instance we are writing to all our editors in the next few days, so they can consider the findings as they assess submissions and select reviewers. We will certainly start to encourage editors to consider selecting a diverse group of reviewers whenever possible for the papers they handle. 

In addition, we will place a greater emphasis on increasing the gender balance and international representation on the editorial board. When we looked in early July this year, 29 percent of current Reviewing Editors were female, with 43 percent from outside of the US. While we haven’t set targets at this stage, we’d like to improve upon this as we recruit new editors.

Editor’s note: The interviews were edited for brevity. 

Update (September 17): We corrected the credit for Raymond’s photo. The Scientist regrets the error.
