Longterm Wiki

Gender, Race, and Intersectional Bias in AI Resume Screening via Language Model Retrieval

web

Credibility Rating

4/5
High(4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Brookings Institution

Relevant to AI safety discussions around real-world deployment harms, fairness evaluation methodologies, and the governance of high-stakes AI systems in employment contexts; useful for policy and responsible deployment sections.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This Brookings Institution study examines how AI-powered resume screening systems using large language models exhibit measurable gender and racial biases, with intersectional effects that compound disadvantages for certain demographic groups. The research demonstrates that retrieval-based LLM hiring tools can systematically rank candidates differently based on protected characteristics, raising concerns about fairness and legal compliance in automated hiring. It calls for greater scrutiny, auditing standards, and governance frameworks for AI deployment in high-stakes employment decisions.

Key Points

  • LLM-based resume screening systems show statistically significant bias against women and racial minorities, with intersectional combinations producing amplified disparate outcomes.
  • Retrieval-augmented AI hiring tools can embed and perpetuate historical hiring biases present in training data without explicit discriminatory intent.
  • Intersectional bias (e.g., Black women vs. white men) is often worse than additive individual biases, highlighting gaps in single-axis fairness testing.
  • The study recommends mandatory algorithmic audits and transparency requirements for AI tools used in employment screening decisions.
  • Findings have direct policy relevance for regulators, employers, and AI developers navigating emerging AI governance frameworks like the EU AI Act.
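The gap the third bullet identifies — that single-axis testing can miss intersectional disparities — can be made concrete with a small audit sketch. The code below is purely illustrative (it is not the study's methodology or data): it computes per-group selection rates and flags groups whose impact ratio falls below the four-fifths (80%) threshold commonly used as a rule of thumb in U.S. disparate-impact analysis. Group labels and numbers are invented for the example.

```python
# Illustrative sketch (not the study's actual code or data): auditing
# screening outcomes for intersectional disparate impact via the
# four-fifths rule.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, selected) pairs, where `group` is an
    intersectional label like ("Black", "woman") and `selected` is a bool.
    Returns each group's selection rate divided by the highest group's rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Toy data: audited on race alone or gender alone, each axis looks
# borderline; only the intersectional (Black, woman) group clearly fails.
data = (
    [(("white", "man"), True)] * 60 + [(("white", "man"), False)] * 40 +
    [(("Black", "man"), True)] * 55 + [(("Black", "man"), False)] * 45 +
    [(("white", "woman"), True)] * 50 + [(("white", "woman"), False)] * 50 +
    [(("Black", "woman"), True)] * 30 + [(("Black", "woman"), False)] * 70
)
ratios = impact_ratios(data)
flagged = {g for g, r in ratios.items() if r < 0.8}  # four-fifths threshold
```

In this toy data the (Black, woman) group's impact ratio is 0.30/0.60 = 0.5, well under the 0.8 threshold, even though each single axis taken alone sits near or above it — the pattern the bullet describes.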

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 30 KB
Gender, race, and intersectional bias in AI resume screening via language model retrieval | Brookings 
Kyra Wilson, Ph.D. Student, University of Washington
Aylin Caliskan, Nonresident Fellow, Governance Studies, Center for Technology Innovation (CTI)
 April 25, 2025

 
 
 
 Though the use of AI in the hiring process has continued to grow, few laws have been passed that require auditing of these systems to ensure they do not discriminate against some applicants.

 In a simulation of resume screening, some systems resulted in significant gender and racial discrimination, especially for Black men. 

 Increased protections and transparency with these systems could protect against harmful effects, especially with intersectional identities, and empower applicants to act in the event of discrimination. 
The seal of the United States Equal Employment Opportunity Commission (EEOC) is seen at its headquarters in Washington, D.C., U.S., on May 14, 2021. REUTERS/Andrew Kelly
 19 min read 
... (truncated, 30 KB total)
Resource ID: aa9bd39c247651f0 | Stable ID: sid_bLG5bUFP7O