Structured Facts
- Total Funding Raised: £59M (as of Nov 2025)
- Founded Date: 2022
- Key People: 1YB
Divisions
| Name | DivisionType | Status | StartDate | EndDate | Source | Notes | Source check |
|---|---|---|---|---|---|---|---|
| TA2 — Machine Learning | program-area | inactive | 2024 | 2025-11 | aria.org.uk | Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned in Nov 2025 pivot — frontier AI advances made dedicated ML capability development less valuable. Funds redirected to expand TA1. | |
| TA3 — Real-World Applications | program-area | active | 2024 | — | aria.org.uk | GBP 5.4M Phase 1 across 9 teams (continuing to completion). Applications in energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) cancelled Nov 2025; replaced by cybersecurity pivot to formally-verified firewalls for critical infrastructure. | |
| TA1.1 — Theory (Scaffolding) | program-area | active | 2024-04 | — | aria.org.uk | GBP 3.5M Phase 1 across 22 projects. Mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, and formal verification foundations. Scope expanded in Nov 2025 pivot. | |
| Safeguarded AI Programme | program-area | active | 2023 | — | aria.org.uk | ARIA's flagship AI safety programme, led by Programme Director David 'davidad' Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024). GBP 59M committed. Nov 2025 pivot expanded TA1 scope to broader 'mathematical assurance and auditability', abandoned TA2 Phase 2, cancelled TA3 Phase 2 in favor of cybersecurity focus. | |
| TA1.2 + TA1.3 — Platform (Backend + HCI) | program-area | active | 2024 | — | aria.org.uk | GBP 14.2M across 8 projects. TA1.2 (backend): proof checking, automated reasoning, GPU optimization. TA1.3 (human-computer interface): collaborative modeling, type-theoretic environments. | |
| TA1.4 — Sociotechnical Integration | program-area | active | 2024 | — | aria.org.uk | GBP 3.4M across 6 teams. Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications. | |
Funding Programs
| Name | ProgramType | Description | DivisionId | TotalBudget | Currency | Status | Source | Notes | Source check |
|---|---|---|---|---|---|---|---|---|---|
| Safeguarded AI TA1.4 — Sociotechnical Integration | solicitation | Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications. | 7PTcCdLnoC | £3.4M | GBP | awarded | aria.org.uk | GBP 3.4M across 6 teams, Phase 1 (up to 18 months). | |
| Safeguarded AI TA1.1 — Theory | solicitation | 22 projects on mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, string diagrams, and verification foundations. | UO9LvMlj_x | £3.5M | GBP | awarded | aria.org.uk | GBP 3.5M Phase 1 across 22 projects. Call opened April 2024. | |
| Safeguarded AI TA2 — Machine Learning | solicitation | Phase 1: development teams for ML approaches to safeguarded AI. Phase 2 (GBP 18M single award) abandoned in Nov 2025 pivot — frontier AI advances made dedicated ML capability development less valuable. | zAJlIJiXxB | £1M | GBP | closed | aria.org.uk | Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned Nov 2025. | |
| Safeguarded AI TA3 — Real-World Applications | solicitation | Real-world demonstrations of safeguarded AI in energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) cancelled in Nov 2025 pivot; replaced by cybersecurity focus on formally-verified firewalls for critical infrastructure. | uRPFXhBwBY | £5.4M | GBP | awarded | aria.org.uk | GBP 5.4M Phase 1 across 9 teams (continuing). Phase 2 (GBP 8.4M) cancelled Nov 2025. | |
| Safeguarded AI TA1.2 + TA1.3 — Platform | solicitation | Backend infrastructure (TA1.2) and human-computer interface (TA1.3) for the Safeguarded AI programme. Proof checking, automated reasoning, collaborative modeling, and UX. | Cf0dXNt8tu | £14.2M | GBP | awarded | aria.org.uk | GBP 14.2M across 8 projects. | |
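The Phase 1 budgets above can be sanity-checked against the programme's £59M headline commitment with a quick sum. A minimal sketch — the dictionary below simply restates the TotalBudget figures from the table, in GBP millions:

```python
# Phase 1 solicitation budgets from the table above (GBP millions).
phase1_awards = {
    "TA1.1 — Theory": 3.5,
    "TA1.2 + TA1.3 — Platform": 14.2,
    "TA1.4 — Sociotechnical Integration": 3.4,
    "TA2 — Machine Learning": 1.0,
    "TA3 — Real-World Applications": 5.4,
}

total_phase1 = round(sum(phase1_awards.values()), 1)
print(f"Phase 1 total: GBP {total_phase1}M")  # Phase 1 total: GBP 27.5M
```

The gap between this £27.5M and the £59M committed presumably covers Phase 2 commitments and programme-level costs not itemised in the table.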
Grants
| Name | Recipient | Date | Source | Notes | ProgramId | Amount | Source check |
|---|---|---|---|---|---|---|---|
| ARIA TA1.4: Field Building for Better Formal Models of Society | Meaning Alignment Institute | 2025-02 | aria.org.uk | [Safeguarded AI TA1.4] Field Building for Better Formal Models of Society. Lead(s): Joe Edelman, Ryan Lowe. Institutions: Meaning Alignment Institute. Status: active. | W1Z6qeahXY | — | |
| ARIA TA1.3: Safeguarded Collaboration with AI Agents in a Type-Theoretic Computational Environment | University of Michigan | 2024-09 | aria.org.uk | [Safeguarded AI TA1.3] Safeguarded Collaboration with AI Agents in a Type-Theoretic Computational Environment. Lead(s): Cyrus Omar, Andrew Blinn, Thomas Porter. Institutions: University of Michigan. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verification | University of Birmingham | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Hyper-optimised Tensor Contraction for Neural Networks Verification. Lead(s): Stefano Gogioso, Mirco Giacobbe. Institutions: Hashberg Ltd / University of Birmingham. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Monoidal Coalgebraic Metrics | University of Pisa | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Monoidal Coalgebraic Metrics. Lead(s): Filippo Bonchi. Institutions: University of Pisa. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.2: CatColab: Collaborative modeling, specification, and verification | Topos Institute | 2024-09 | aria.org.uk | [Safeguarded AI TA1.2] CatColab: Collaborative modeling, specification, and verification. Lead(s): Evan Patterson, Tim Hosgood, Kevin Carlson, Brendan Fong. Institutions: Topos Institute. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA3: Digital Custodians for Ageing Infrastructure | Mind Foundry / WSP | 2024-09 | aria.org.uk | [Safeguarded AI TA3] Digital Custodians for Ageing Infrastructure. Lead(s): Nathan Korda, Julia Bush, Mark McLeod. Institutions: Mind Foundry / WSP. Status: closed. | OdiZdS7PvJ | — | |
| ARIA TA1.1: ULTIMATE: Universal Stochastic Modelling, Verification and Synthesis Framework | University of York | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] ULTIMATE: Universal Stochastic Modelling, Verification and Synthesis Framework. Lead(s): Radu Calinescu, Simos Gerasimou, Sinem Getir Yaman, Gricel Vazquez. Institutions: University of York. Status: active. | VpM42-Oye3 | — | |
| ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing | University of Birmingham | 2024-09 | aria.org.uk | [Safeguarded AI TA3] Safeguarded AI-Enabled Biopharmaceutical Manufacturing. Lead(s): Mirco Giacobbe, Leonardo Stella, Paul Devine, Jared Delmar. Institutions: University of Birmingham / AstraZeneca. Status: active. | OdiZdS7PvJ | — | |
| ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain | HASH | 2024-09 | aria.org.uk | [Safeguarded AI TA3] SAILS: Safeguarded AI for Logistics and Supply chain. Lead(s): Leah Pickering. Institutions: HASH. Status: active. | OdiZdS7PvJ | — | |
| ARIA TA1.1: Modal Types for Quantitative Analysis | University of Kent | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Modal Types for Quantitative Analysis. Lead(s): Vineet Rajani, Dominic Orchard. Institutions: University of Kent. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Doubly Categorical Systems Logic | Matteo Capucci (Independent) | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Doubly Categorical Systems Logic. Lead(s): Matteo Capucci. Institutions: Independent Researcher. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.2: From string diagrams to GPU optimisation | Adjoint Labs Limited | 2024-09 | aria.org.uk | [Safeguarded AI TA1.2] From string diagrams to GPU optimisation. Lead(s): Paolo Perrone, Nikolaj Jensen. Institutions: Adjoint Labs Limited. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification | Zeroth Research | 2024-09 | aria.org.uk | [Safeguarded AI TA1.2] Automated Reasoning Technologies for AI Safety Verification. Lead(s): Mirco Giacobbe, Luca Arnaboldi, Pascal Berrang. Institutions: Zeroth Research / Fondazione Bruno Kessler. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.4: Privacy-preserving AI Safety Verification | University of Birmingham | 2025-02 | aria.org.uk | [Safeguarded AI TA1.4] Privacy-preserving AI Safety Verification. Lead(s): Pascal Berrang, Mirco Giacobbe, Yang Zhang. Institutions: University of Birmingham / CISPA Helmholtz Center. Status: active. | W1Z6qeahXY | — | |
| ARIA TA1.1: Philosophical Applied Category Theory | David Corfield (Independent) | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Philosophical Applied Category Theory. Lead(s): David Corfield. Institutions: Independent Researcher. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML | Heriot-Watt University | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Quantitative Predicate Logic as a Foundation for Verified ML. Lead(s): Ekaterina Komendantskaya, Robert Atkey, Radu Mardare, Matteo Capucci. Institutions: Heriot-Watt University / University of Strathclyde. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.1: Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes | University College London | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes. Lead(s): Fabio Zanasi. Institutions: University College London. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Learning-Theoretic AI Safety | Association for Long Term Existence and Resilience (ALTER) | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Learning-Theoretic AI Safety. Lead(s): Vanessa Kosoy, David Manheim, Alexander Appel, Gergely Szucs. Institutions: ALTER. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.1: Safety: Core representation underlying safeguarded AI | University of Edinburgh | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Safety: Core representation underlying safeguarded AI. Lead(s): Ohad Kammar, Justus Matthiesen, Jesse Sigal. Institutions: University of Edinburgh. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.1: True Categorical Programming for Composable Systems | GLAIVE | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] True Categorical Programming for Composable Systems. Lead(s): Jade Master, Zans Mihejevs, Andre Videla, Dylan Braithwaite. Institutions: GLAIVE. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.3: UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator) | HASH | 2024-09 | aria.org.uk | [Safeguarded AI TA1.3] UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator). Lead(s): Dei Vilkinsons, Ciaran Morinan. Institutions: HASH. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories | University College London | 2024-09 | aria.org.uk | [Safeguarded AI TA1.2] Data-Parallel Proof Checking for Monoidal Theories. Lead(s): Fabio Zanasi, Paul Wilson. Institutions: UCL / Hellas AI. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.1: Supermartingale Certificates for Temporal Logic | University of Oxford | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Supermartingale Certificates for Temporal Logic. Lead(s): Mirco Giacobbe, Diptarko Roy, Alessandro Abate. Institutions: University of Birmingham / University of Oxford. Status: closed. | VpM42-Oye3 | — | |
| ARIA TA1.1: SAINT: Safe AI ageNTs | University of Oxford | 2024-06 | cs.ox.ac.uk | [Safeguarded AI TA1.1] SAINT: Safe AI ageNTs. Lead(s): Alessandro Abate, Virginie Debauche, Niko Vertovec. Institutions: University of Oxford. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Computational Mechanics Approach to World Models | University of Sussex | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Computational Mechanics Approach to World Models. Lead(s): Fernando Rosas. Institutions: University of Sussex. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems | University College London | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Unified Automated Reasoning for Randomised Distributed Systems. Lead(s): Alexandra Silva, Robin Piedeleu, Noam Zilberstein. Institutions: UCL / Cornell. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Employing Categorical Probability Towards Safe AI | University of Oxford | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Employing Categorical Probability Towards Safe AI. Lead(s): Sam Staton, Pedro Amorim, Elena Di Lavore, Paolo Perrone, Mario Roman, Ruben Van Belle, Younesse Kaddar, Jack Liell-Cock, Owen Lynch. Institutions: University of Oxford. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Probabilistic Protocol Specification for Distributed Autonomous Processes | University of Oxford | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Probabilistic Protocol Specification for Distributed Autonomous Processes. Lead(s): Nobuko Yoshida, Adrian Puerto Aubel, Burak Ekici, Joseph Paulus, Dylan McDermott. Institutions: University of Oxford. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.1: Event Structures as World Models | University of Bristol | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Event Structures as World Models. Lead(s): Alex Kavvos. Institutions: University of Bristol. Status: active. | VpM42-Oye3 | — | |
| ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI | Conjecture | 2025-04 | aria.org.uk | [Safeguarded AI TA2] Cognitive Emulation: Our Path to Safeguarded AI. Lead(s): Connor Leahy, Jean-Gabriel Bechard. Institutions: Conjecture. Status: closed. | tDrsgSLi8J | — | |
| ARIA TA1.4: Law-following AI | Institute for Law & AI | 2025-02 | aria.org.uk | [Safeguarded AI TA1.4] Law-following AI. Lead(s): Cullen O'Keefe, Janna Tay. Institutions: Institute for Law & AI. Status: active. | W1Z6qeahXY | — | |
| ARIA TA3: Safeguarded AI for Energy Savings in Radio Access Networks | Net AI | 2024-09 | aria.org.uk | [Safeguarded AI TA3] Safeguarded AI for Energy Savings in Radio Access Networks. Lead(s): Marco Fiore, Paul Patras. Institutions: Net AI. Status: active. | OdiZdS7PvJ | — | |
| ARIA TA2: Recursive Safeguarding | Recursive Safeguarding Limited | 2025-04 | aria.org.uk | [Safeguarded AI TA2] Recursive Safeguarding. Lead(s): Younesse Kaddar, Rob Cornish, Pedro Amorim, Jacek Kaworski, Nikolaj Jensen, Paolo Perrone, Sam Staton. Institutions: Recursive Safeguarding Limited. Status: active. | tDrsgSLi8J | — | |
| ARIA TA1.1: Syntax and Semantics for Multimodal Petri Nets | Tallinn University of Technology | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Syntax and Semantics for Multimodal Petri Nets. Lead(s): Amar Hadzihasanovic, Diana Kessler. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3 | — | |
| ARIA TA2: SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing | Manufacturing Technology Centre | 2025-04 | aria.org.uk | [Safeguarded AI TA2] SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing. Lead(s): Mohammed Begg. Institutions: Manufacturing Technology Centre. Status: active. | tDrsgSLi8J | — | |
| ARIA TA3: SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving | University of York | 2024-09 | york.ac.uk | [Safeguarded AI TA3] SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving. Lead(s): Simon Burton, Radu Calinescu, Kester Clegg, Jie Zou, Ioannis Stefanakos. Institutions: University of York. Status: active. | OdiZdS7PvJ | £460K | |
| ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI | Topos Institute | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Double Categorical Systems Theory for Safeguarded AI. Lead(s): David Jaz Myers, Owen Lynch, Sophie Libkind, David Spivak, James Fairbanks. Institutions: Topos Research UK / University of Florida. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.2: TA1.2 Technical Coordinator | Obsidian Systems | 2024-09 | aria.org.uk | [Safeguarded AI TA1.2] TA1.2 Technical Coordinator. Lead(s): Colin Hobbins. Institutions: Obsidian Systems. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA1.1: String Diagrammatic Probabilistic Logic | Tallinn University of Technology | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] String Diagrammatic Probabilistic Logic. Lead(s): Pawel Sobocinski, Eigil Rischel. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.4: Deliberative AI Specifications and Infrastructure | Massachusetts Institute of Technology | 2025-02 | aria.org.uk | [Safeguarded AI TA1.4] Deliberative AI Specifications and Infrastructure. Lead(s): Aviv Ovadya, Luke Thorburn, Andrew Konya, Kyle Redman. Institutions: AI & Democracy Foundation / UW / MIT. Status: active. | W1Z6qeahXY | — | |
| ARIA TA1.1: Profunctors: A unified semantics for safeguarded AI | University of Manchester | 2024-06 | aria.org.uk | [Safeguarded AI TA1.1] Profunctors: A unified semantics for safeguarded AI. Lead(s): Nicola Gambino. Institutions: University of Manchester. Status: active. | VpM42-Oye3 | — | |
| ARIA TA1.4: AI-enabled Governance Models for Advanced AI R&D Organisations | Centre for Future Generations | 2025-02 | aria.org.uk | [Safeguarded AI TA1.4] AI-enabled Governance Models for Advanced AI R&D Organisations. Lead(s): Alex Petropoulos, Bengüsu Ozcan, David Janku, Max Reddel. Institutions: Centre for Future Generations. Status: active. | W1Z6qeahXY | — | |
| ARIA TA3: SAINTES: Safe and scalable AI decision support for Energy Systems | University of Exeter | 2024-01 | aria.org.uk | [Safeguarded AI TA3] SAINTES: Safe and scalable AI decision support for Energy Systems. Lead(s): Dawei Qiu, Zhong Fan, Qiong Liu, Zhanhua Pan. Institutions: University of Exeter. Status: active. | OdiZdS7PvJ | — | |
| ARIA TA1.3: GAIOS | University of Cambridge | 2024-01 | aria.org.uk | [Safeguarded AI TA1.3] GAIOS. Lead(s): Peter van Hardenberg, Martin Kleppmann. Institutions: Ink & Switch / Cambridge University. Status: active. | -C8O_gr9Bj | — | |
| ARIA TA3: Large-Scale Validation of Business Process AI (BPAI) | University of Oxford | 2024-01 | aria.org.uk | [Safeguarded AI TA3] Large-Scale Validation of Business Process AI (BPAI). Lead(s): Nobuko Yoshida, David Parker, Adrian Puerto Aubel, Joseph Paulus. Institutions: University of Oxford. Status: active. | OdiZdS7PvJ | — | |
| ARIA TA3: SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility | University of Oxford | 2024-01 | aria.org.uk | [Safeguarded AI TA3] SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility. Lead(s): Thomas Morstyn, Jakob Foerster, Yihong Zhou, Sofia Sampaio. Institutions: University of Oxford. Status: active. | OdiZdS7PvJ | — | |
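Since the grant records above are plain Markdown table rows, a small helper can load them into dicts for counting or integrity checks (e.g. grouping recipients by technical area). A minimal sketch, assuming the column order shown in the table header; `parse_row` and the sample rows are illustrative, not part of any ARIA tooling:

```python
# Column order taken from the Grants table header above.
GRANT_COLUMNS = ["Name", "Recipient", "Date", "Source",
                 "Notes", "ProgramId", "Amount", "SourceCheck"]

def parse_row(line: str) -> dict:
    """Split one '| a | b | ... |' Markdown row into a column dict."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    return dict(zip(GRANT_COLUMNS, cells))

# Two rows copied from the table (Notes elided).
sample = [
    "| ARIA TA1.1: Monoidal Coalgebraic Metrics | University of Pisa | 2024-06 | aria.org.uk | ... | VpM42-Oye3 | — | |",
    "| ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain | HASH | 2024-09 | aria.org.uk | ... | OdiZdS7PvJ | — | |",
]

grants = [parse_row(r) for r in sample]

# Group recipients by the technical-area prefix (e.g. 'TA1.1') in each name.
by_area: dict[str, list[str]] = {}
for g in grants:
    area = g["Name"].split(":")[0].removeprefix("ARIA ").strip()
    by_area.setdefault(area, []).append(g["Recipient"])

print(by_area)  # {'TA1.1': ['University of Pisa'], 'TA3': ['HASH']}
```

The same parser applies unchanged to the Divisions and Funding Programs tables by swapping in their column lists.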
Internal Metadata
| ID: | sid_XqjV4mbMXQ |
| Stable ID: | sid_XqjV4mbMXQ |
| Wiki ID: | E1997 |
| Type: | organization |
| YAML Source: | packages/factbase/data/fb-entities/aria-uk.yaml |
| Facts: | 7 structured (12 total) |
| Records: | 101 in 4 collections |