Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: AI Now Institute

Published by the AI Now Institute at NYU, this annual report is a key reference for understanding sociotechnical AI risks and governance gaps, especially relevant for those working on the broader societal dimensions of AI safety beyond technical alignment.

Metadata

Importance: 52/100 · organizational report · analysis

Summary

The AI Now 2017 Report is an annual assessment from the AI Now Institute examining the social implications of artificial intelligence, focusing on labor and automation, bias and inclusion, rights and liberties, and safety and critical infrastructure. It synthesizes research findings and policy recommendations across these domains, highlighting urgent challenges posed by AI deployment in consequential sectors. The report calls for greater accountability, transparency, and interdisciplinary oversight of AI systems.

Key Points

  • Warns against deploying AI in high-stakes domains (criminal justice, healthcare, welfare) without adequate testing, transparency, or accountability mechanisms.
  • Highlights systemic bias and discrimination risks in AI systems, particularly affecting marginalized communities, and calls for diversity in AI research and development.
  • Recommends that government agencies conduct impact assessments before deploying AI systems affecting public rights and services.
  • Argues that AI safety must encompass social and political harms, not just technical failure modes, broadening the scope of what 'safety' means.
  • Calls for stronger worker protections and labor rights frameworks in response to automation-driven displacement and surveillance in workplaces.

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 6, 2026 · 5 KB
AI Now 2017 Report - AI Now Institute 
## AI Now Institute Announces 2017 Report With Key Recommendations for the Field of Artificial Intelligence

 *Second annual report calls for an end to black box predictive systems in core public institutions like the criminal justice system, and outlines specific approaches needed to address bias in AI and related technologies*

New York, NY – October 18, 2017 – The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its [second annual research report](https://ainowinstitute.org/AI_Now_2017_Report.pdf). In advance of AI Now's official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

 “The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

The 2017 report comes at a time when AI technologies are being introduced in critical areas like criminal justice, finance, education, and the workplace. And the consequences of incomplete or biased systems can be very real. A team of journalists and technologists at ProPublica demonstrated how an algorithm used by courts and law enforcement to predict recidivism in criminal defendants was measurably biased against African Americans. In a different setting, a study at the University of Pittsburgh Medical Center observed that an AI system used to triage pneumonia patients was missing a major risk factor for severe complications. And there are many other high-stakes domains where these systems are currently being used without being tested and assessed for bias and inaccuracy. Indeed, standardized methods for conducting such testing have yet to be developed.

 The 2017 report calls for all core public institutions – such as those responsible for criminal j

... (truncated, 5 KB total)
Resource ID: kb-f605d7c236fe33b0 | Stable ID: sid_uiaNMGy9SB