Gender bias in peer review: opening up the black box

Peer review is supposed to judge ideas on their merits. But if the people submitting, reviewing, and editing papers are not gender-balanced, the process can quietly reproduce the gaps it claims to ignore. In a piece published on International Women's Day in March 2019, Alex Holmes and Sally Hardy of the Regional Studies Association opened up six years of submission data from Regional Studies to find out what the numbers actually showed.

The data

Holmes and Hardy pulled ScholarOne submission and peer review records for Regional Studies from 2011 to 2016. They used Genderize.io to assign gender to authors and reviewers based on first names, successfully classifying 83% of authors and 81% of reviewers in the deduplicated dataset.
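The classification step can be sketched in a few lines. This is a hypothetical offline stand-in, not the authors' actual pipeline: Genderize.io is a real API that returns a gender label with a probability for a given first name, but the small `NAME_GENDER` lookup table and the `classify` helper below are invented for illustration, and the 0.8 confidence threshold is an assumption.

```python
# Hypothetical sketch of name-based gender assignment, in the spirit of
# the study's Genderize.io step. The lookup table is a made-up stand-in
# for API responses so the example runs offline.

NAME_GENDER = {
    "alex": ("male", 0.87),    # (label, probability) as Genderize.io reports
    "sally": ("female", 0.98),
    "maria": ("female", 0.99),
    "kim": None,               # ambiguous first name: no confident label
}

def classify(first_names, threshold=0.8):
    """Assign a gender label only where the lookup is confident enough."""
    results = {}
    for name in first_names:
        hit = NAME_GENDER.get(name.lower())
        if hit is not None and hit[1] >= threshold:
            results[name] = hit[0]
        else:
            results[name] = None  # left unclassified, as some authors were
    return results

authors = ["Alex", "Sally", "Maria", "Kim"]
labels = classify(authors)
classified_rate = sum(v is not None for v in labels.values()) / len(labels)
print(labels)
print(f"classified: {classified_rate:.0%}")  # here 3 of 4 names get a label
```

The unclassified remainder is why the study could only cover 83% of authors and 81% of reviewers: first-name inference fails for ambiguous names, initials, and names outside the reference corpus.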

What they found

The submission gap was stark. Men accounted for 67% of all submissions; women, 33%. Among corresponding authors the split was almost identical — 68% male, 32% female. For multi-author papers, all-male teams submitted 47% of papers, all-female teams just 16%, with mixed-gender teams making up the remaining 37%.

The acceptance gap was harder to explain away. Of papers submitted by men, 31% were accepted. For women, the acceptance rate was 24% — a seven-percentage-point difference.
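As a quick sanity check on these figures, the published percentages can be combined to re-derive a few summary numbers. Only the rates reported in the article are used here; the raw submission counts were not released, so no significance test is attempted.

```python
# Re-derive summary figures from the article's published percentages.

share_male, share_female = 0.67, 0.33  # submission shares
acc_male, acc_female = 0.31, 0.24      # acceptance rates by gender

gap_pp = (acc_male - acc_female) * 100              # gap in percentage points
relative = acc_male / acc_female                    # relative acceptance rate
overall = share_male * acc_male + share_female * acc_female

print(f"gap: {gap_pp:.0f} percentage points")        # 7
print(f"relative acceptance: {relative:.2f}x")       # ~1.29x
print(f"implied overall acceptance: {overall:.1%}")  # ~28.7%
```

Put differently, a paper submitted by a man was accepted roughly 1.29 times as often as one submitted by a woman, and the blended acceptance rate implied by these numbers is about 28.7%.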

What it meant — and what it didn't

Holmes and Hardy were careful not to leap to conclusions. They acknowledged the findings were "only a snapshot" and laid out several alternative explanations: women in the field skewed toward earlier career stages, which could mean less polished submissions; geographic and linguistic factors might correlate with gender in ways the data couldn't untangle; and women made up only about 35% of the Regional Studies Association's membership, so the submission gap partly reflected the pipeline.

But the acceptance gap was not so easily dismissed. Even if fewer women submitted, the question of why their papers were accepted at a lower rate demanded further investigation — into reviewer assignment patterns, author seniority, institutional prestige, and whether the topics women tended to study were valued differently by reviewers.

Why it resonated

The piece was published on the LSE Impact Blog, one of the most widely read platforms for academic publishing policy. By putting real numbers from a real journal into a public forum, Holmes and Hardy gave the peer review bias debate something it often lacks: specificity. The 67/33 submission split and the 31/24 acceptance gap became reference points for a conversation that had previously relied heavily on anecdote.

Authors

Alex Holmes and Sally Hardy

Year

2019

Categories

Academia & Research, Peer Review

Original article

https://blogs.lse.ac.uk/impactofsocialsciences/2019/03/08/gender-bias-in-peer-review-opening-up-the-black-box/