Longterm Wiki

Algorithmic Justice League - Wikipedia

reference

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Describes the Algorithmic Justice League, a key organization in AI fairness and bias research, whose work on facial recognition bias directly informs AI safety discussions around deployment harms and accountability.

Metadata

Importance: 45/100 · wiki page · reference

Summary

The Algorithmic Justice League (AJL) is a nonprofit founded in 2016 by Joy Buolamwini to combat bias and harms in AI systems through research, art, and policy advocacy. Notable work includes the 'Gender Shades' study revealing racial and gender bias in commercial facial recognition systems, leading to policy changes at major tech companies. AJL also advocates for federal regulation of facial recognition and broader algorithmic accountability.

Key Points

  • Founded in 2016 by Joy Buolamwini after discovering facial detection software failed to recognize dark-skinned faces, highlighting systemic bias in AI.
  • The 'Gender Shades' study (with Timnit Gebru) showed IBM and Microsoft facial recognition was less accurate for dark-skinned and feminine faces.
  • AJL's advocacy contributed to Amazon and IBM temporarily banning police use of their facial recognition products in 2020.
  • Featured in the 2020 Netflix documentary 'Coded Bias,' raising public awareness of algorithmic bias in facial recognition.
  • Advocates for algorithmic auditing, equitable AI governance, and federal regulation of facial recognition technology.

3 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 19 KB
Algorithmic Justice League - Wikipedia 
 From Wikipedia, the free encyclopedia 
 Digital advocacy non-profit organization 
 Algorithmic Justice League
   • Abbreviation: AJL
   • Formation: 2016
   • Founder: Joy Buolamwini
   • Purpose: AI activism
   • Location: Cambridge, Massachusetts
   • Website: www.ajl.org
 The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to raise societal awareness of the harms and biases that artificial intelligence (AI) can pose to society. [1] The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action to mitigate the creation and deployment of biased AI systems. In 2021, Fast Company named the AJL one of the 10 most innovative AI companies in the world. [2][3]

 
 History

 Buolamwini founded the Algorithmic Justice League in 2016 as a graduate student in the MIT Media Lab. While experimenting with facial detection software in her research, she found that the software could not detect her "highly melanated" face until she donned a white mask. [4] After this incident, Buolamwini was inspired to found the AJL to draw public attention to the existence of bias in artificial intelligence and the threat it can pose to civil rights. [4] Early AJL campaigns focused primarily on bias in face recognition software; recent campaigns have dealt more broadly with questions of equitability and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.

 Additionally, there is a community of other organizations working toward similar goals, including Data and Society, Data for Black Lives, the Distributed Artificial Intelligence Research Institute (DAIR), and Fight for the Future. [5][6][7]

 Notable work

 Facial recognition

 AJL founder Buolamwini collaborated with AI ethicist Timnit Gebru to release a 2018 study on racial and gender bias in facial recognition algorithms used by commercial systems from Microsoft, IBM, and Face++. Their research, entitled "Gender Shades", determined that machine learning models released by IBM and Microsoft were less accurate when analyzing dark-skinned and feminine faces than when analyzing light-skinned and masculine faces. [8][9][10] The "Gender Shades" paper was accompanied by the launch of the Safe Face Pledge, an initiative designed with the Georgetown Cent

... (truncated, 19 KB total)
Resource ID: kb-a1a16e8abf276cdb