Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Brookings Institution

A 2025 Brookings policy commentary by an MIT economist proposing that AI fairness frameworks explicitly address 'algorithmic exclusion' (cases where systems fail to produce any output for data-sparse individuals) as a harm distinct from bias but equal to it in importance.

Metadata

Importance: 52/100 · homepage · commentary

Summary

Catherine Tucker (MIT Sloan) introduces 'algorithmic exclusion' as a class of AI harm distinct from bias: cases where AI systems lack sufficient data on certain individuals to produce any output at all. The paper argues that this form of exclusion disproportionately affects underrepresented populations and should be incorporated into AI fairness regulations alongside bias and discrimination.

Key Points

  • AI systems can fail by producing no meaningful output for certain individuals, not just biased outputs — termed 'algorithmic exclusion'.
  • Algorithmic exclusion occurs when insufficient data exists on an individual for the system to return a result.
  • This failure mode disproportionately harms already-marginalized or data-sparse populations.
  • The proposal calls for policy and regulatory frameworks to recognize algorithmic exclusion as a formal harm equal in importance to bias and discrimination.
  • Published by Brookings Institution as part of its AI governance and economic studies research agenda.

Cited by 1 page

Page | Type | Quality
AI-Induced Enfeeblement | Risk | 91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Artificial intelligence and algorithmic exclusion | Brookings
Downloads
  • Full proposal
  • Download the summary

Contact

Media inquiries: Este Griffith, [email protected], 202-238-3088
 
 
Commentary

Artificial intelligence and algorithmic exclusion

Catherine Tucker, Sloan Distinguished Professor of Management, MIT Sloan School of Management

December 4, 2025

 
 
Key takeaways:

  • AI systems can fail not only because they make biased predictions, but also because they make no meaningful predictions at all for certain individuals or populations.
  • Algorithmic exclusion formally describes failure when an AI-driven system lacks enough data on an individual to return an output about them.
  • This proposal suggests a concrete, policy-relevant addition to regulations and proposals on AI fairness: incorporate algorithmic exclusion as a class of algorithmic harm equal in importance to bias and discrimination.

Image: Shutterstock / CineVI

3 min read
... (truncated, 7 KB total)
Resource ID: 6d2a9aac6117b683 | Stable ID: sid_tEjPH4zzko