[2307.03718] Frontier AI Regulation: Managing Emerging Risks to Public Safety
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
An arXiv preprint (ID 2307.03718) proposing a regulatory framework for "frontier AI" models — highly capable foundation models that could pose severe risks to public safety. It identifies three building blocks for regulation (standard-setting, registration and reporting requirements, and compliance mechanisms) and proposes an initial set of safety standards for frontier AI development and deployment.
Paper Details
Metadata
Abstract
Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Structured Access / API-Only | Approach | 91.0 |
1 FactBase fact citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| GovAI | publication | Frontier AI Regulation: Managing Emerging Risks to Public Safety — proposes three regulatory building blocks: standards, registration/reporting, compliance mechanisms | Jul 2023 |
Cached Content Preview
[2307.03718] Frontier AI Regulation: Managing Emerging Risks to Public Safety
Markus Anderljung 1,2∗†, Joslyn Barnhart 3∗∗, Anton Korinek 4,5,1∗∗†, Jade Leung 6∗, Cullen O’Keefe 6∗, Jess Whittlestone 7∗∗, Shahar Avin 8, Miles Brundage 6, Justin Bullock 9,10, Duncan Cass-Beggs 11, Ben Chang 12, Tantum Collins 13,14, Tim Fist 2, Gillian Hadfield 15,16,17,6, Alan Hayes 18, Lewis Ho 3, Sara Hooker 19, Eric Horvitz 20, Noam Kolt 15, Jonas Schuett 1, Yonadav Shavit 14∗∗∗, Divya Siddarth 21, Robert Trager 1,22, Kevin Wolf 18

Affiliations: 1 Centre for the Governance of AI; 2 Center for a New American Security; 3 Google DeepMind; 4 Brookings Institution; 5 University of Virginia; 6 OpenAI; 7 Centre for Long-Term Resilience; 8 Centre for the Study of Existential Risk, University of Cambridge; 9 University of Washington; 10 Convergence Analysis; 11 Centre for International Governance Innovation; 12 The Andrew W. Marshall Foundation; 13 GETTING-Plurality Network, Edmond & Lily Safra Center for Ethics; 14 Harvard University; 15 University of Toronto; 16 Schwartz Reisman Institute for Technology and Society; 17 Vector Institute; 18 Akin Gump Strauss Hauer & Feld LLP; 19 Cohere For AI; 20 Microsoft; 21 Collective Intelligence Project; 22 University of California, Los Angeles
... (truncated, 98 KB total)