Longterm Wiki
Back

Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems | FAR.AI

web

Data Status

Not fetched

Cited by 1 page

Page: FAR AI | Type: Organization | Quality: 76.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 4 KB
 Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

 
 May 10, 2024

David A. Dalrymple (davidad), Joar Max Viktor Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum
 
 
Abstract

 
 
 
 
 Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.
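The interplay of the three core components can be made concrete with a minimal sketch. This is an illustration only, not code from the paper: the names (`WorldModel`, `SafetySpecification`, `verify`) and the toy numeric world are assumptions chosen to mirror the abstract's structure, where a verifier checks the AI's behaviour against a safety specification relative to a world model and emits an auditable certificate.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class WorldModel:
    """Mathematical description of how an action affects the world
    (here, a toy map from a scalar action to a predicted outcome)."""
    transition: Callable[[float], float]

@dataclass
class SafetySpecification:
    """Mathematical description of which effects are acceptable
    (here, a predicate over predicted outcomes)."""
    is_acceptable: Callable[[float], bool]

def verify(model: WorldModel, spec: SafetySpecification,
           actions: list[float]) -> Optional[dict]:
    """Check every action the system may take against the spec,
    relative to the world model. Returns an auditable 'certificate'
    (here, just the record of checks) on success, else None."""
    checks = {a: spec.is_acceptable(model.transition(a)) for a in actions}
    if all(checks.values()):
        return {"verified": True, "checks": checks}
    return None

# Hypothetical example: outcome = 2 * action; safe if outcome <= 10.
model = WorldModel(transition=lambda a: 2 * a)
spec = SafetySpecification(is_acceptable=lambda x: x <= 10)
cert = verify(model, spec, actions=[1.0, 3.0, 5.0])
print(cert is not None)  # every predicted outcome satisfies the spec
```

In the framework proper, the "verifier" is a formal proof system rather than exhaustive checking, and the certificate is a machine-checkable proof; the sketch above only shows how the three components fit together.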

 
... (truncated, 4 KB total)
Resource ID: 66d05ccd31d3b5d8 | Stable ID: NDE0MzlmNm