Landmark Study on Gender Bias in Science Challenged by New Research
A widely cited 2012 study suggesting that men receive an unfair advantage in scientific hiring has been contradicted by a large-scale replication effort, which found a surprising reversal: female applicants were rated as more competent and more hireable. The findings raise critical questions about the reproducibility of research and about potential biases within the peer-review process.
The 2012 study, published to significant acclaim, has been referenced over 4,600 times – a remarkably high figure in academic circles, where most papers garner fewer than 20 citations. In the original experiment, researchers sent science professors a fictional CV from a candidate applying for a lab manager position. The CVs were identical except for the applicant’s name: “John” or “Jennifer.” The professors consistently rated the male applicant as more competent, more likely to be hired, and more deserving of mentorship and a higher salary. The study became a cornerstone of discussions about the underrepresentation of women in STEM (science, technology, engineering, and mathematics) fields.
Now, a new paper led by Nathan Honeycutt and Lee Jussim of Rutgers University in New Jersey has challenged these conclusions. The researchers replicated the experiment on a far larger scale, contacting nearly 1,300 professors at more than 50 American research institutions. Using the same application materials and evaluation criteria, they found the opposite result: the female applicant, “Jennifer,” was rated marginally more capable, more appealing to work with, and more hireable than “John.” She was also recommended a slightly higher salary – $35,550 compared with $34,150 for the male applicant. Though the differences were small, they were consistent and statistically significant.
The reversal is particularly striking given the difficulties the research team faced in getting their replication study published. According to the authors, resistance to repeating the original experiment may be as revealing as the results themselves. Honeycutt explained that they were “taken aback” when their proposal for a registered replication report (RRR) – a formal proposal to repeat a study – was rejected by Nature Human Behaviour, a leading scientific journal.
The rejection stemmed from concerns raised by peer reviewers, who, according to Honeycutt, appeared to favor the original findings and feared they wouldn’t hold up under scrutiny. “We can’t know for certain—but [that is our suspicion] given just the nature of their feedback and pushback,” he said. Lee Jussim further elaborated on the issue, writing on Substack that reviewers sometimes “concoct serious scientific-sounding reasons to reject the RRR” if they anticipate a study won’t replicate, effectively shielding favored research from challenge. He characterized this process as “a profoundly antiscientific attitude.”
One objection from reviewers was that professors might be aware of the original 2012 study, potentially influencing their responses. Honeycutt and Jussim proposed addressing this by asking participants about their familiarity with the earlier research and analyzing the data accordingly. This suggestion was dismissed. Jussim argued that the reviewers’ objections implied the original study was effectively “immune to replication,” ensuring its conclusions would remain unchallenged “for ever.”
Undeterred, the team preregistered their methods and conducted the study independently. Their findings, ultimately accepted by the journal Meta-Psychology, revealed that most professors had, in fact, not been familiar with the original 2012 paper.
The implications of these findings for the broader landscape of gender equality in science remain open to debate. While women now earn roughly half of all doctoral degrees in the life sciences and a growing share in fields such as chemistry and engineering, they remain underrepresented in senior academic positions; in the United Kingdom, for example, only about a quarter of STEM professors are women. It is also worth noting that both studies measured perceptions of candidates for a relatively junior laboratory manager position, not for senior roles.
The debate extends beyond this single study. As Erika Pastrana, vice-president of the Nature Research Journals portfolio, stated, editorial decisions regarding replication studies are based on “methodological rigour,” not on whether original conclusions are replicated. She added that scientific scrutiny takes many forms, including replication, commentary, and new research.
The case highlights the critical importance of replication in the scientific process and raises concerns about potential biases that can hinder the pursuit of objective truth. The initial study’s widespread acceptance, coupled with the resistance to replicating it, underscores the need for continued vigilance and a commitment to rigorous scientific inquiry.
