[Safeguarded AI TA2] Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) was abandoned in the Nov 2025 pivot, as frontier AI advances made dedicated ML capability development less valuable; funds were redirected to expand TA1.
[Safeguarded AI TA3] GBP 5.4M Phase 1 across 9 teams (continuing to completion), with applications in the energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) was cancelled in Nov 2025 and replaced by a cybersecurity focus on formally verified firewalls for critical infrastructure.
[Safeguarded AI TA1] GBP 3.5M Phase 1 across 22 projects: mathematical representations and formal semantics for world-models, specifications, and proofs, covering category theory, probabilistic logic, and formal verification foundations. Scope expanded in the Nov 2025 pivot.
ARIA's flagship AI safety programme, led by Programme Director David 'davidad' Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024); GBP 59M committed. The Nov 2025 pivot expanded TA1's scope to broader 'mathematical assurance and auditability', abandoned TA2 Phase 2, and cancelled TA3 Phase 2 in favour of a cybersecurity focus.
22 projects on mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, string diagrams, and verification foundations.
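As an illustrative aside (my own sketch, not drawn from any listed project): the "categorical probability" and "string diagram" foundations above treat probabilistic processes as composable boxes. For finite state spaces this is concrete, since a Markov kernel is a row-stochastic matrix, sequential composition of diagrams is matrix multiplication, and parallel (side-by-side) composition is the Kronecker product:

```python
import numpy as np

# A finite Markov kernel X -> Y is a row-stochastic matrix:
# entry [x, y] gives P(y | x), so every row sums to 1.
f = np.array([[0.7, 0.3],
              [0.2, 0.8]])
g = np.array([[0.5, 0.5],
              [0.1, 0.9]])

seq = f @ g          # sequential composition: run f, then g
par = np.kron(f, g)  # parallel (monoidal) composition: f alongside g

# Both composites are again row-stochastic, i.e. valid kernels.
assert np.allclose(seq.sum(axis=1), 1.0)
assert np.allclose(par.sum(axis=1), 1.0)
print(seq)
```

String-diagram axioms (e.g. that sequential and parallel composition interchange) become checkable matrix identities in this representation.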
[Safeguarded AI TA2] Phase 1: development teams for ML approaches to safeguarded AI. Phase 2 (a GBP 18M single award) was abandoned in the Nov 2025 pivot, as frontier AI advances made dedicated ML capability development less valuable.
[Safeguarded AI TA3] Real-world demonstrations of safeguarded AI in the energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) was cancelled in the Nov 2025 pivot and replaced by a cybersecurity focus on formally verified firewalls for critical infrastructure.
Backend infrastructure (TA1.2) and human-computer interface (TA1.3) for the Safeguarded AI programme. Proof checking, automated reasoning, collaborative modeling, and UX.
[Safeguarded AI TA1.4] Field Building for Better Formal Models of Society. Lead(s): Joe Edelman, Ryan Lowe. Institutions: Meaning Alignment Institute. Status: active.
[Safeguarded AI TA1.3] Safeguarded Collaboration with AI Agents in a Type-Theoretic Computational Environment. Lead(s): Cyrus Omar, Andrew Blinn, Thomas Porter. Institutions: University of Michigan. Status: active.
ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verification
University of Birmingham (Organization): Public research university in Birmingham, UK, with research programs in computer science, formal methods, and AI safety.
[Safeguarded AI TA1.2] CatColab: Collaborative modeling, specification, and verification. Lead(s): Evan Patterson, Tim Hosgood, Kevin Carlson, Brendan Fong. Institutions: Topos Institute. Status: active.
[Safeguarded AI TA3] Digital Custodians for Ageing Infrastructure. Lead(s): Nathan Korda, Julia Bush, Mark McLeod. Institutions: Mind Foundry / WSP. Status: closed.
University of Birmingham (Organization): Public research university in Birmingham, UK, with research programs in computer science, formal methods, and AI safety.
[Safeguarded AI TA3] Safeguarded AI-Enabled Biopharmaceutical Manufacturing. Lead(s): Mirco Giacobbe, Leonardo Stella, Paul Devine, Jared Delmar. Institutions: University of Birmingham / AstraZeneca. Status: active.
[Safeguarded AI TA1.1] Modal Types for Quantitative Analysis. Lead(s): Vineet Rajani, Dominic Orchard. Institutions: University of Kent. Status: active.
[Safeguarded AI TA1.2] From string diagrams to GPU optimisation. Lead(s): Paolo Perrone, Nikolaj Jensen. Institutions: Adjoint Labs Limited. Status: active.
ARIA TA1.4: Privacy-preserving AI Safety Verification
University of Birmingham (Organization): Public research university in Birmingham, UK, with research programs in computer science, formal methods, and AI safety.
[Safeguarded AI TA1.4] Privacy-preserving AI Safety Verification. Lead(s): Pascal Berrang, Mirco Giacobbe, Yang Zhang. Institutions: University of Birmingham / CISPA Helmholtz Center. Status: active.
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML
Heriot-Watt University (Organization): Public research university in Edinburgh, Scotland, with research programs in computer science and robotics.
[Safeguarded AI TA1.1] Quantitative Predicate Logic as a Foundation for Verified ML. Lead(s): Ekaterina Komendantskaya, Robert Atkey, Radu Mardare, Matteo Capucci. Institutions: Heriot-Watt University / University of Strathclyde. Status: closed.
ARIA TA1.1: Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes
University College London (Organization): Public research university in London, one of the UK's leading institutions for computer science and AI research.
[Safeguarded AI TA1.1] Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes. Lead(s): Fabio Zanasi. Institutions: University College London. Status: active.
Association for Long Term Existence and Resilience (ALTER) (Organization): Academic research and advocacy organization investigating AI alignment and existential risk. Funded by SFF and Open Philanthropy.
[Safeguarded AI TA1.1] Learning-Theoretic AI Safety. Lead(s): Vanessa Kosoy, David Manheim, Alexander Appel, Gergely Szucs. Institutions: ALTER. Status: closed.
[Safeguarded AI TA1.3] UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator). Lead(s): Dei Vilkinsons, Ciaran Morinan. Institutions: HASH. Status: active.
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories
University College London (Organization): Public research university in London, one of the UK's leading institutions for computer science and AI research.
ARIA TA1.1: Supermartingale Certificates for Temporal Logic
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA1.1] Supermartingale Certificates for Temporal Logic. Lead(s): Mirco Giacobbe, Diptarko Roy, Alessandro Abate. Institutions: University of Birmingham / University of Oxford. Status: closed.
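For readers unfamiliar with the technique named in this project's title: a supermartingale certificate is a nonnegative function V over states whose expected value does not increase along transitions; by Ville's inequality, V(start) then bounds the probability of ever reaching a bad set. A toy numerical check on a biased random walk (my own sketch, not this project's method; the walk, V, and all constants are invented for illustration):

```python
# Biased random walk on the integers: from x, go to x+1 with prob p,
# to x-1 with prob 1-p.  Bad set: reaching state B from start x0.
p, B, x0 = 0.3, 5, 1
a = p / (1 - p)          # a < 1 because p < 1/2

def V(x):
    # Candidate certificate: V(B) = 1, V nonnegative, shrinks away from B.
    return a ** (B - x)

# Supermartingale condition E[V(next)] <= V(x), checked on a window of states.
for x in range(-5, B):
    expected_next = p * V(x + 1) + (1 - p) * V(x - 1)
    assert expected_next <= V(x) + 1e-12, f"condition fails at state {x}"

bound = V(x0)            # Ville's inequality: P(ever reach B from x0) <= V(x0)
print(f"P(reach {B} from {x0}) <= {bound:.4f}")
```

The point of certificate-based verification is that checking the one-step inequality (here, a loop; in general, an SMT or convex-optimisation query) is far easier than reasoning about whole trajectories.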
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA1.1] SAINT: Safe AI ageNTs. Lead(s): Alessandro Abate, Virginie Debauche, Niko Vertovec. Institutions: University of Oxford. Status: active.
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems
University College London (Organization): Public research university in London, one of the UK's leading institutions for computer science and AI research.
ARIA TA1.1: Employing Categorical Probability Towards Safe AI
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA1.1] Employing Categorical Probability Towards Safe AI. Lead(s): Sam Staton, Pedro Amorim, Elena Di Lavore, Paolo Perrone, Mario Roman, Ruben Van Belle, Younesse Kaddar, Jack Liell-Cock, Owen Lynch. Institutions: University of Oxford. Status: active.
ARIA TA1.1: Probabilistic Protocol Specification for Distributed Autonomous Processes
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA1.1] Probabilistic Protocol Specification for Distributed Autonomous Processes. Lead(s): Nobuko Yoshida, Adrian Puerto Aubel, Burak Ekici, Joseph Paulus, Dylan McDermott. Institutions: University of Oxford. Status: active.
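As background on protocol specification in the session-types tradition (a generic sketch, not this project's formalism; the protocol, labels, and `Session` class are invented): a protocol can be written as an explicit state machine that each endpoint must follow, so out-of-order messages are rejected before they reach application code:

```python
# A request/response protocol: 'quote' may repeat until 'accept' or
# 'reject' ends the session.  Keys are states; values map message
# labels to successor states.
PROTOCOL = {
    "start":   {"request": "quoting"},
    "quoting": {"quote": "quoting", "accept": "done", "reject": "done"},
    "done":    {},
}

class Session:
    def __init__(self):
        self.state = "start"

    def send(self, label):
        if label not in PROTOCOL[self.state]:
            raise ValueError(f"{label!r} not allowed in state {self.state!r}")
        self.state = PROTOCOL[self.state][label]

s = Session()
for msg in ["request", "quote", "quote", "accept"]:
    s.send(msg)
print(s.state)  # session completed in state 'done'
```

Session-type systems push this check from runtime to the type level; the probabilistic variants this project studies additionally annotate branches with probabilities.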
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI
Conjecture (Organization): A 30-40 person London-based AI safety org founded in 2022, pursuing Cognitive Emulation (CoEm), building interpretable AI from the ground up rather than aligning LLMs, with $30M+ Series A ...
[Safeguarded AI TA3] Safeguarded AI for Energy Savings in Radio Access Networks. Lead(s): Marco Fiore, Paul Patras. Institutions: Net AI. Status: active.
[Safeguarded AI TA2] Recursive Safeguarding. Lead(s): Younesse Kaddar, Rob Cornish, Pedro Amorim, Jacek Kaworski, Nikolaj Jensen, Paolo Perrone, Sam Staton. Institutions: Recursive Safeguarding Limited. Status: active.
[Safeguarded AI TA1.1] Syntax and Semantics for Multimodal Petri Nets. Lead(s): Amar Hadzihasanovic, Diana Kessler. Institutions: Tallinn University of Technology. Status: active.
[Safeguarded AI TA2] SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing. Lead(s): Mohammed Begg. Institutions: Manufacturing Technology Centre. Status: active.
[Safeguarded AI TA3] SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving. Lead(s): Simon Burton, Radu Calinescu, Kester Clegg, Jie Zou, Ioannis Stefanakos. Institutions: University of York. Status: active.
[Safeguarded AI TA1.1] Double Categorical Systems Theory for Safeguarded AI. Lead(s): David Jaz Myers, Owen Lynch, Sophie Libkind, David Spivak, James Fairbanks. Institutions: Topos Research UK / University of Florida. Status: active.
[Safeguarded AI TA1.1] String Diagrammatic Probabilistic Logic. Lead(s): Pawel Sobocinski, Eigil Rischel. Institutions: Tallinn University of Technology. Status: active.
ARIA TA1.4: Deliberative AI Specifications and Infrastructure
Massachusetts Institute of Technology (Organization): Private research university in Cambridge, Massachusetts. A leading center for AI and machine learning research.
[Safeguarded AI TA1.4] Deliberative AI Specifications and Infrastructure. Lead(s): Aviv Ovadya, Luke Thorburn, Andrew Konya, Kyle Redman. Institutions: AI & Democracy Foundation / UW / MIT. Status: active.
[Safeguarded AI TA1.1] Profunctors: A unified semantics for safeguarded AI. Lead(s): Nicola Gambino. Institutions: University of Manchester. Status: active.
[Safeguarded AI TA1.4] AI-enabled Governance Models for Advanced AI R&D Organisations. Lead(s): Alex Petropoulos, Bengüsu Ozcan, David Janku, Max Reddel. Institutions: Centre for Future Generations. Status: active.
[Safeguarded AI TA3] SAINTES: Safe and scalable AI decision support for Energy Systems. Lead(s): Dawei Qiu, Zhong Fan, Qiong Liu, Zhanhua Pan. Institutions: University of Exeter. Status: active.
ARIA TA3: Large-Scale Validation of Business Process AI (BPAI)
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA3] Large-Scale Validation of Business Process AI (BPAI). Lead(s): Nobuko Yoshida, David Parker, Adrian Puerto Aubel, Joseph Paulus. Institutions: University of Oxford. Status: active.
ARIA TA3: SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility
University of Oxford (Organization): Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
[Safeguarded AI TA3] SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility. Lead(s): Thomas Morstyn, Jakob Foerster, Yihong Zhou, Sofia Sampaio. Institutions: University of Oxford. Status: active.