Longterm Wiki

EPIC: Comments to NIST on Managing the Risks of Misuse with AI Foundation Models


This is a formal regulatory comment from EPIC to NIST, relevant to those tracking civil society input into U.S. AI governance frameworks, particularly around foundation model risk management standards.

Metadata

Importance: 42/100 · policy brief · primary source

Summary

EPIC (Electronic Privacy Information Center) submitted formal comments to NIST addressing the risks of misuse associated with AI foundation models, advocating for stronger regulatory frameworks and risk management practices. The comments focus on how large-scale AI models can be exploited for harmful purposes and what governance mechanisms should be established. EPIC argues for accountability measures, transparency requirements, and meaningful oversight of foundation model development and deployment.

Key Points

  • EPIC calls for NIST to establish robust misuse risk management guidelines specifically tailored to foundation models and their broad downstream applications.
  • Comments emphasize the need for transparency and accountability from developers of large-scale AI models regarding potential harms and misuse vectors.
  • EPIC advocates for privacy protections and civil liberties considerations to be central to any NIST framework on foundation model risk.
  • The submission highlights that foundation models' general-purpose nature creates unique dual-use risks that require proactive governance rather than reactive measures.
  • EPIC urges NIST to coordinate with other agencies and stakeholders to ensure consistent, enforceable standards rather than purely voluntary guidelines.

Cited by 1 page

Page                 Type           Quality
NIST and AI Safety   Organization   63.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 22 KB
EPIC Comments to NIST on Managing the Risks of Misuse with AI Foundation Models – EPIC – Electronic Privacy Information Center
APA Comments

2024-09824

DOWNLOAD: EPIC Comments on NIST AI 800-1 09.09.24 (PDF, 243.4 KB)
Contents

 Introduction 

 I. Sociotechnical AI Risks Are Misuse Risks 

 II. Preserving Consumer Privacy is Crucial to Managing the Misuse Risks of Foundation Models 

Conclusion

 COMMENTS OF THE ELECTRONIC PRIVACY INFORMATION CENTER

 to the

 NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY

 Request for Comments on the U.S. Artificial Intelligence Safety Institute’s Draft Document: Managing Misuse Risk for Dual-Use Foundation Models

 No. 2024-17614

 September 9, 2024

 

 Introduction

The Electronic Privacy Information Center (EPIC) submits these comments in response to the National Institute of Standards and Technology’s (NIST’s) Request for Comments on the U.S. Artificial Intelligence Safety Institute’s Draft Document on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1).[1]

EPIC is a public interest research center in Washington, D.C., established in 1994 to secure the fundamental right to privacy in the digital age for all people through advocacy, research, and litigation.[2] We advocate for a human-rights-based approach to AI policy that ensures new technologies are subject to democratic governance.[3] Over the last decade, EPIC has consistently advocated for the adoption of clear, commonsense, and actionable AI regulations across the country.[4] EPIC has also published extensive research on emerging AI technologies like generative AI,[5] as well as the ways that government agencies develop, procure, and use AI systems around the country.[6] EPIC is a member of NIST’s U.S. Artificial Intelligence Safety Institute Consortium (AISIC).

As the U.S. Artificial Intelligence Safety Institute (AISI) considers updates to Draft Document NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models, EPIC reemphasizes our call for NIST and its affiliated entities to implement actionable AI risk mitigation strategies with strong incentive structures and accountability mechanisms—steps that will ensure that AI developers and deployers faithfully adopt the practices and implementation recommendations within NIST AI 800-1.[7] At the same time, EPIC encourages AISI to view the misuse risks of generative AI technologies, including dual-use and multimodal foundation models, as extensions of traditional AI and automated decision-making risks, rather than qualitativ

... (truncated, 22 KB total)
Resource ID: e2d1acdd85129b22 | Stable ID: sid_dwlYYwX9Jk