Longterm Wiki

ARIA Safeguarded AI Programme

web

ARIA's Safeguarded AI programme is a £59m UK government-backed initiative developing formal mathematical safety guarantees for AI systems, representing a significant institutional effort to create verifiable AI safety through proof-based methods rather than empirical approaches.

Metadata

Importance: 72/100

Summary

The Safeguarded AI programme, funded by £59m through ARIA (UK), aims to build a mathematical assurance toolkit enabling AI agents to produce formally verified outputs at scale. It combines scientific world models and mathematical proofs to create a 'gatekeeper' AI system providing quantitative safety guarantees analogous to those in nuclear power and aviation. The programme is structured across three technical areas: formal scaffolding, machine learning for verification, and real-world cyber-physical applications.

Key Points

  • £59m UK government-backed programme developing formal mathematical safety guarantees for AI systems, targeting quantitative assurance rather than empirical safety methods.
  • Three technical areas: TA1 (formal language/platform scaffolding), TA2 (ML-assisted mathematical modeling), TA3 (real-world cyber-physical deployment).
  • Programme leadership transitioned from David 'davidad' Dalrymple to Nora Ammann as Programme Director, with focus shifting toward application in cybersecurity and microelectronics.
  • Core goal is a 'gatekeeper' AI system that understands and reduces risks of other AI agents using formal verification and proof certificates.
  • Aims to mature the toolsuite into open, usable infrastructure and expand its scope over time, with an updated programme thesis forthcoming.

Cited by 2 pages

3 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 8 KB
Safeguarded AI
Opportunity space: Mathematics for Safe AI

 Backed by £59m, this programme sits within the Mathematics for Safe AI opportunity space and is building a mathematical assurance toolkit that lets fleets of AI agents produce formally verified artefacts at unprecedented speed and scale.

 Leadership update 

 After launching the Safeguarded AI programme and establishing its technical foundations, David ‘davidad’ Dalrymple has decided to transition to a new role as Technical Advisor. We are delighted to announce that Nora Ammann – who has helped run the programme as Technical Specialist since before it began – is stepping into the role of Programme Director. Nora has effectively been running Safeguarded AI alongside davidad. She has deep technical context, established relationships with our Creators, and a clear vision for the programme's next phase.

 There are no changes to TA1 Creator contracts, funding, or objectives. Work on the toolsuite continues as planned. The core mission – building a mathematical assurance toolkit for AI – remains the same. An updated thesis will be published to reflect the programme’s shift toward application, with initial efforts likely focusing on cybersecurity and microelectronics.

 Alongside this shift, the programme will mature the toolsuite into open, usable infrastructure, publish an updated thesis, and expand the team, including hiring a new Technical Specialist. We will also continue to explore opportunities to extend the programme’s scope over time.

 Programme progress and ambition

 Nora and davidad sat down with our CEO, Kathleen Fisher, to reflect on the programme’s progress under davidad’s leadership and its ambitions as it transitions to the next phase.

 
 Our goal 

 This programme aims to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.

 As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it’s deployed wisely. Current techniques for mitigating the risks of advanced AI systems have serious limitations and cannot be relied upon empirically to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because they’re considered impossible or impractical.

 By combining scientific world models and mathematical proofs, we aim to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks of other AI agents. In doing so, we’ll develop quantitative safety guarantees for AI in the way we have come to expect for nuclear power and passenger aviation.
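
 To make the gatekeeper pattern concrete, the minimal Python sketch below shows the workflow in miniature: a proposed action is admitted only if a safety check over a modelled state space succeeds. Everything here is a hypothetical illustration – the State fields, the world_model dynamics, and the exhaustive certify check are toy stand-ins for the scientific world models and machine-checked proofs the programme targets – and is not ARIA's design or code.

 from dataclasses import dataclass
 from itertools import product


 @dataclass(frozen=True)
 class State:
     temperature: float  # hypothetical plant temperature in degrees C
     pressure: float     # hypothetical vessel pressure in bar


 def safety_spec(state: State) -> bool:
     """Formal safety property: the system must stay inside this envelope."""
     return state.temperature <= 100.0 and state.pressure <= 5.0


 def world_model(state: State, action: float) -> State:
     """Toy 'scientific world model': how an action changes the state."""
     return State(
         temperature=state.temperature + 2.0 * action,
         pressure=state.pressure + 0.1 * action,
     )


 def certify(action: float, initial_states: list[State]) -> bool:
     """Stand-in for a proof certificate: exhaustively check that the spec
     holds for every modelled initial state (a real system would rely on a
     theorem prover or model checker rather than enumeration)."""
     return all(safety_spec(world_model(s, action)) for s in initial_states)


 def gatekeeper(proposed_action: float, initial_states: list[State]) -> bool:
     """Admit the action only if a valid safety certificate exists."""
     return certify(proposed_action, initial_states)


 if __name__ == "__main__":
     # Bounded set of initial states the certificate must cover.
     states = [State(t, p) for t, p in product([80.0, 90.0], [4.0, 4.5])]
     for action in (2.0, 10.0):
         verdict = "admitted" if gatekeeper(action, states) else "rejected"
         print(f"action={action}: {verdict}")

 In the programme's framing, the certify step would be replaced by a proof certificate produced and checked with formal tools, so the guarantee covers all states satisfying the assumptions rather than only an enumerated sample.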

 

 Read the programme thesis 



... (truncated, 8 KB total)
Resource ID: kb-993eacf1d62c61ae