Longterm Wiki

Advanced Research and Invention Agency (ARIA)

Government
Founded 2022 · HQ: London, UK · aria.org.uk
Structured Facts

Total Funding Raised: £59M (as of Nov 2025)
Founded Date: 2022

Key People

1

Yoshua Bengio — Scientific Director, Safeguarded AI (Aug 2024 – present)

All Facts

Financial

Grant Count: 48 (Nov 2025)
Total Funding Raised: £59M (Nov 2025)

Organization

Founded Date: 2022
Headquarters: London, UK

General

Website: https://www.aria.org.uk

Divisions

6

Name | Type | Status | Start | End | Source | Notes
TA2 — Machine Learning | program-area | inactive | 2024 | 2025-11 | aria.org.uk | Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned in Nov 2025 pivot — frontier AI advances made dedicated ML capability development less valuable. Funds redirected to expand TA1.
TA3 — Real-World Applications | program-area | active | 2024 | – | aria.org.uk | GBP 5.4M Phase 1 across 9 teams (continuing to completion). Applications in energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) cancelled Nov 2025; replaced by cybersecurity pivot to formally-verified firewalls for critical infrastructure.
TA1.1 — Theory (Scaffolding) | program-area | active | 2024-04 | – | aria.org.uk | GBP 3.5M Phase 1 across 22 projects. Mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, and formal verification foundations. Scope expanded in Nov 2025 pivot.
Safeguarded AI Programme | program-area | active | 2023 | – | aria.org.uk | ARIA's flagship AI safety programme, led by Programme Director David 'davidad' Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024). GBP 59M committed. Nov 2025 pivot expanded TA1 scope to broader 'mathematical assurance and auditability', abandoned TA2 Phase 2, cancelled TA3 Phase 2 in favor of cybersecurity focus.
TA1.2 + TA1.3 — Platform (Backend + HCI) | program-area | active | 2024 | – | aria.org.uk | GBP 14.2M across 8 projects. TA1.2 (backend): proof checking, automated reasoning, GPU optimization. TA1.3 (human-computer interface): collaborative modeling, type-theoretic environments.
TA1.4 — Sociotechnical Integration | program-area | active | 2024 | – | aria.org.uk | GBP 3.4M across 6 teams. Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications.

Funding Programs

5
Name | Type | Description | DivisionId | Budget | Status | Source | Notes
Safeguarded AI TA1.4 — Sociotechnical Integration | solicitation | Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications. | 7PTcCdLnoC | GBP 3.4M | awarded | aria.org.uk | GBP 3.4M across 6 teams, Phase 1 (up to 18 months).
Safeguarded AI TA1.1 — Theory | solicitation | 22 projects on mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, string diagrams, and verification foundations. | UO9LvMlj_x | GBP 3.5M | awarded | aria.org.uk | GBP 3.5M Phase 1 across 22 projects. Call opened April 2024.
Safeguarded AI TA2 — Machine Learning | solicitation | Phase 1: development teams for ML approaches to safeguarded AI. Phase 2 (GBP 18M single award) abandoned in Nov 2025 pivot — frontier AI advances made dedicated ML capability development less valuable. | zAJlIJiXxB | GBP 1M | closed | aria.org.uk | Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned Nov 2025.
Safeguarded AI TA3 — Real-World Applications | solicitation | Real-world demonstrations of safeguarded AI in energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) cancelled in Nov 2025 pivot; replaced by cybersecurity focus on formally-verified firewalls for critical infrastructure. | uRPFXhBwBY | GBP 5.4M | awarded | aria.org.uk | GBP 5.4M Phase 1 across 9 teams (continuing). Phase 2 (GBP 8.4M) cancelled Nov 2025.
Safeguarded AI TA1.2 + TA1.3 — Platform | solicitation | Backend infrastructure (TA1.2) and human-computer interface (TA1.3) for the Safeguarded AI programme. Proof checking, automated reasoning, collaborative modeling, and UX. | Cf0dXNt8tu | GBP 14.2M | awarded | aria.org.uk | GBP 14.2M across 8 projects.
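As a quick arithmetic cross-check (my calculation, not a figure published by ARIA), the five Phase 1 solicitation budgets listed above sum to well under the GBP 59M committed to the Safeguarded AI programme as a whole, consistent with the larger figure also covering the (since abandoned or cancelled) Phase 2 awards and other programme costs:

```python
# Phase 1 solicitation budgets from the table above (GBP, millions).
phase1_budgets_m = {
    "TA1.4 — Sociotechnical Integration": 3.4,
    "TA1.1 — Theory": 3.5,
    "TA2 — Machine Learning": 1.0,
    "TA3 — Real-World Applications": 5.4,
    "TA1.2 + TA1.3 — Platform": 14.2,
}
total_m = round(sum(phase1_budgets_m.values()), 1)
print(f"Phase 1 total: GBP {total_m}M")  # GBP 27.5M
```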

Grants

88
Name | Recipient | Date | Source | Notes | ProgramId | Amount
ARIA TA1.4: Field Building for Better Formal Models of Society | Meaning Alignment Institute | 2025-02 | aria.org.uk | Lead(s): Joe Edelman, Ryan Lowe. Institutions: Meaning Alignment Institute. Status: active. | W1Z6qeahXY
ARIA TA1.3: Safeguarded Collaboration with AI Agents in a Type-Theoretic Computational Environment | University of Michigan | 2024-09 | aria.org.uk | Lead(s): Cyrus Omar, Andrew Blinn, Thomas Porter. Institutions: University of Michigan. Status: active. | -C8O_gr9Bj
ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verification | University of Birmingham | 2024-06 | aria.org.uk | Lead(s): Stefano Gogioso, Mirco Giacobbe. Institutions: Hashberg Ltd / University of Birmingham. Status: active. | VpM42-Oye3
ARIA TA1.1: Monoidal Coalgebraic Metrics | University of Pisa | 2024-06 | aria.org.uk | Lead(s): Filippo Bonchi. Institutions: University of Pisa. Status: active. | VpM42-Oye3
ARIA TA1.2: CatColab: Collaborative modeling, specification, and verification | Topos Institute | 2024-09 | aria.org.uk | Lead(s): Evan Patterson, Tim Hosgood, Kevin Carlson, Brendan Fong. Institutions: Topos Institute. Status: active. | -C8O_gr9Bj
ARIA TA3: Digital Custodians for Ageing Infrastructure | Mind Foundry / WSP | 2024-09 | aria.org.uk | Lead(s): Nathan Korda, Julia Bush, Mark McLeod. Institutions: Mind Foundry / WSP. Status: closed. | OdiZdS7PvJ
ARIA TA1.1: ULTIMATE: Universal Stochastic Modelling, Verification and Synthesis Framework | University of York | 2024-06 | aria.org.uk | Lead(s): Radu Calinescu, Simos Gerasimou, Sinem Getir Yaman, Gricel Vazquez. Institutions: University of York. Status: active. | VpM42-Oye3
ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing | University of Birmingham | 2024-09 | aria.org.uk | Lead(s): Mirco Giacobbe, Leonardo Stella, Paul Devine, Jared Delmar. Institutions: University of Birmingham / AstraZeneca. Status: active. | OdiZdS7PvJ
ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain | HASH | 2024-09 | aria.org.uk | Lead(s): Leah Pickering. Institutions: HASH. Status: active. | OdiZdS7PvJ
ARIA TA1.1: Modal Types for Quantitative Analysis | University of Kent | 2024-06 | aria.org.uk | Lead(s): Vineet Rajani, Dominic Orchard. Institutions: University of Kent. Status: active. | VpM42-Oye3
ARIA TA1.1: Doubly Categorical Systems Logic | Matteo Capucci (Independent) | 2024-06 | aria.org.uk | Lead(s): Matteo Capucci. Institutions: Independent Researcher. Status: closed. | VpM42-Oye3
ARIA TA1.2: From string diagrams to GPU optimisation | Adjoint Labs Limited | 2024-09 | aria.org.uk | Lead(s): Paolo Perrone, Nikolaj Jensen. Institutions: Adjoint Labs Limited. Status: active. | -C8O_gr9Bj
ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification | Zeroth Research | 2024-09 | aria.org.uk | Lead(s): Mirco Giacobbe, Luca Arnaboldi, Pascal Berrang. Institutions: Zeroth Research / Fondazione Bruno Kessler. Status: active. | -C8O_gr9Bj
ARIA TA1.4: Privacy-preserving AI Safety Verification | University of Birmingham | 2025-02 | aria.org.uk | Lead(s): Pascal Berrang, Mirco Giacobbe, Yang Zhang. Institutions: University of Birmingham / CISPA Helmholtz Center. Status: active. | W1Z6qeahXY
ARIA TA1.1: Philosophical Applied Category Theory | David Corfield (Independent) | 2024-06 | aria.org.uk | Lead(s): David Corfield. Institutions: Independent Researcher. Status: active. | VpM42-Oye3
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML | Heriot-Watt University | 2024-06 | aria.org.uk | Lead(s): Ekaterina Komendantskaya, Robert Atkey, Radu Mardare, Matteo Capucci. Institutions: Heriot-Watt University / University of Strathclyde. Status: closed. | VpM42-Oye3
ARIA TA1.1: Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes | University College London | 2024-06 | aria.org.uk | Lead(s): Fabio Zanasi. Institutions: University College London. Status: active. | VpM42-Oye3
ARIA TA1.1: Learning-Theoretic AI Safety | Association for Long Term Existence and Resilience (ALTER) | 2024-06 | aria.org.uk | Lead(s): Vanessa Kosoy, David Manheim, Alexander Appel, Gergely Szucs. Institutions: ALTER. Status: closed. | VpM42-Oye3
ARIA TA1.1: Safety: Core representation underlying safeguarded AI | University of Edinburgh | 2024-06 | aria.org.uk | Lead(s): Ohad Kammar, Justus Matthiesen, Jesse Sigal. Institutions: University of Edinburgh. Status: closed. | VpM42-Oye3
ARIA TA1.1: True Categorical Programming for Composable Systems | GLAIVE | 2024-06 | aria.org.uk | Lead(s): Jade Master, Zans Mihejevs, Andre Videla, Dylan Braithwaite. Institutions: GLAIVE. Status: closed. | VpM42-Oye3
ARIA TA1.3: UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator) | HASH | 2024-09 | aria.org.uk | Lead(s): Dei Vilkinsons, Ciaran Morinan. Institutions: HASH. Status: active. | -C8O_gr9Bj
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories | University College London | 2024-09 | aria.org.uk | Lead(s): Fabio Zanasi, Paul Wilson. Institutions: UCL / Hellas AI. Status: active. | -C8O_gr9Bj
ARIA TA1.1: Supermartingale Certificates for Temporal Logic | University of Oxford | 2024-06 | aria.org.uk | Lead(s): Mirco Giacobbe, Diptarko Roy, Alessandro Abate. Institutions: University of Birmingham / University of Oxford. Status: closed. | VpM42-Oye3
ARIA TA1.1: SAINT: Safe AI ageNTs | University of Oxford | 2024-06 | cs.ox.ac.uk | Lead(s): Alessandro Abate, Virginie Debauche, Niko Vertovec. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA1.1: Computational Mechanics Approach to World Models | University of Sussex | 2024-06 | aria.org.uk | Lead(s): Fernando Rosas. Institutions: University of Sussex. Status: active. | VpM42-Oye3
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems | University College London | 2024-06 | aria.org.uk | Lead(s): Alexandra Silva, Robin Piedeleu, Noam Zilberstein. Institutions: UCL / Cornell. Status: active. | VpM42-Oye3
ARIA TA1.1: Employing Categorical Probability Towards Safe AI | University of Oxford | 2024-06 | aria.org.uk | Lead(s): Sam Staton, Pedro Amorim, Elena Di Lavore, Paolo Perrone, Mario Roman, Ruben Van Belle, Younesse Kaddar, Jack Liell-Cock, Owen Lynch. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA1.1: Probabilistic Protocol Specification for Distributed Autonomous Processes | University of Oxford | 2024-06 | aria.org.uk | Lead(s): Nobuko Yoshida, Adrian Puerto Aubel, Burak Ekici, Joseph Paulus, Dylan McDermott. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA1.1: Event Structures as World Models | University of Bristol | 2024-06 | aria.org.uk | Lead(s): Alex Kavvos. Institutions: University of Bristol. Status: active. | VpM42-Oye3
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI | Conjecture | 2025-04 | aria.org.uk | Lead(s): Connor Leahy, Jean-Gabriel Bechard. Institutions: Conjecture. Status: closed. | tDrsgSLi8J
ARIA TA1.4: Law-following AI | Institute for Law & AI | 2025-02 | aria.org.uk | Lead(s): Cullen O'Keefe, Janna Tay. Institutions: Institute for Law & AI. Status: active. | W1Z6qeahXY
ARIA TA3: Safeguarded AI for Energy Savings in Radio Access Networks | Net AI | 2024-09 | aria.org.uk | Lead(s): Marco Fiore, Paul Patras. Institutions: Net AI. Status: active. | OdiZdS7PvJ
ARIA TA2: Recursive Safeguarding | Recursive Safeguarding Limited | 2025-04 | aria.org.uk | Lead(s): Younesse Kaddar, Rob Cornish, Pedro Amorim, Jacek Kaworski, Nikolaj Jensen, Paolo Perrone, Sam Staton. Institutions: Recursive Safeguarding Limited. Status: active. | tDrsgSLi8J
ARIA TA1.1: Syntax and Semantics for Multimodal Petri Nets | Tallinn University of Technology | 2024-06 | aria.org.uk | Lead(s): Amar Hadzihasanovic, Diana Kessler. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3
ARIA TA2: SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing | Manufacturing Technology Centre | 2025-04 | aria.org.uk | Lead(s): Mohammed Begg. Institutions: Manufacturing Technology Centre. Status: active. | tDrsgSLi8J
ARIA TA3: SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving | University of York | 2024-09 | york.ac.uk | Lead(s): Simon Burton, Radu Calinescu, Kester Clegg, Jie Zou, Ioannis Stefanakos. Institutions: University of York. Status: active. | OdiZdS7PvJ | $460K
ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI | Topos Institute | 2024-06 | aria.org.uk | Lead(s): David Jaz Myers, Owen Lynch, Sophie Libkind, David Spivak, James Fairbanks. Institutions: Topos Research UK / University of Florida. Status: active. | VpM42-Oye3
ARIA TA1.2: TA1.2 Technical Coordinator | Obsidian Systems | 2024-09 | aria.org.uk | Lead(s): Colin Hobbins. Institutions: Obsidian Systems. Status: active. | -C8O_gr9Bj
ARIA TA1.1: String Diagrammatic Probabilistic Logic | Tallinn University of Technology | 2024-06 | aria.org.uk | Lead(s): Pawel Sobocinski, Eigil Rischel. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3
ARIA TA1.4: Deliberative AI Specifications and Infrastructure | Massachusetts Institute of Technology | 2025-02 | aria.org.uk | Lead(s): Aviv Ovadya, Luke Thorburn, Andrew Konya, Kyle Redman. Institutions: AI & Democracy Foundation / UW / MIT. Status: active. | W1Z6qeahXY
ARIA TA1.1: Profunctors: A unified semantics for safeguarded AI | University of Manchester | 2024-06 | aria.org.uk | Lead(s): Nicola Gambino. Institutions: University of Manchester. Status: active. | VpM42-Oye3
ARIA TA1.4: AI-enabled Governance Models for Advanced AI R&D Organisations | Centre for Future Generations | 2025-02 | aria.org.uk | Lead(s): Alex Petropoulos, Bengüsu Ozcan, David Janku, Max Reddel. Institutions: Centre for Future Generations. Status: active. | W1Z6qeahXY
ARIA TA1.1: Event Structures as World Models | University of Bristol | 2024-01 | aria.org.uk | Lead(s): Alex Kavvos. Institutions: University of Bristol. Status: active. | VpM42-Oye3
ARIA TA1.1: Employing Categorical Probability Towards Safe AI | University of Oxford | 2024-01 | aria.org.uk | Lead(s): Sam Staton, Pedro Amorim, Elena Di Lavore, Paolo Perrone, Mario Roman, Ruben Van Belle, Younesse Kaddar, Jack Liell-Cock, Owen Lynch. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA3: SAINTES: Safe and scalable AI decision support for Energy Systems | University of Exeter | 2024-01 | aria.org.uk | Lead(s): Dawei Qiu, Zhong Fan, Qiong Liu, Zhanhua Pan. Institutions: University of Exeter. Status: active. | OdiZdS7PvJ
ARIA TA1.3: Safeguarded Collaboration with AI Agents in a Type-Theoretic Computational Environment | University of Michigan | 2024-01 | aria.org.uk | Lead(s): Cyrus Omar, Andrew Blinn, Thomas Porter. Institutions: University of Michigan. Status: active. | -C8O_gr9Bj
ARIA TA1.1: Safety: Core representation underlying safeguarded AI | University of Edinburgh | 2024-01 | aria.org.uk | Lead(s): Ohad Kammar, Justus Matthiesen, Jesse Sigal. Institutions: University of Edinburgh. Status: closed. | VpM42-Oye3
ARIA TA1.2: CatColab: Collaborative modeling, specification, and verification | Topos Institute | 2024-01 | aria.org.uk | Lead(s): Evan Patterson, Tim Hosgood, Kevin Carlson, Brendan Fong. Institutions: Topos Institute. Status: active. | -C8O_gr9Bj
ARIA TA1.1: Supermartingale Certificates for Temporal Logic | University of Oxford | 2024-01 | aria.org.uk | Lead(s): Mirco Giacobbe, Diptarko Roy, Alessandro Abate. Institutions: University of Birmingham / University of Oxford. Status: closed. | VpM42-Oye3
ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI | Topos Institute | 2024-01 | aria.org.uk | Lead(s): David Jaz Myers, Owen Lynch, Sophie Libkind, David Spivak, James Fairbanks. Institutions: Topos Research UK / University of Florida. Status: active. | VpM42-Oye3
ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing | University of Birmingham | 2024-01 | aria.org.uk | Lead(s): Mirco Giacobbe, Leonardo Stella, Paul Devine, Jared Delmar. Institutions: University of Birmingham / AstraZeneca. Status: active. | OdiZdS7PvJ
ARIA TA2: Recursive Safeguarding | Recursive Safeguarding Limited | 2024-01 | aria.org.uk | Lead(s): Younesse Kaddar, Rob Cornish, Pedro Amorim, Jacek Kaworski, Nikolaj Jensen, Paolo Perrone, Sam Staton. Institutions: Recursive Safeguarding Limited. Status: active. | tDrsgSLi8J
ARIA TA1.2: From string diagrams to GPU optimisation | Adjoint Labs Limited | 2024-01 | aria.org.uk | Lead(s): Paolo Perrone, Nikolaj Jensen. Institutions: Adjoint Labs Limited. Status: active. | -C8O_gr9Bj
ARIA TA1.4: Deliberative AI Specifications and Infrastructure | Massachusetts Institute of Technology | 2024-01 | aria.org.uk | Lead(s): Aviv Ovadya, Luke Thorburn, Andrew Konya, Kyle Redman. Institutions: AI & Democracy Foundation / UW / MIT. Status: active. | W1Z6qeahXY
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI | Conjecture | 2024-01 | aria.org.uk | Lead(s): Connor Leahy, Jean-Gabriel Bechard. Institutions: Conjecture. Status: closed. | tDrsgSLi8J
ARIA TA3: SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving | University of York | 2024-01 | york.ac.uk | Lead(s): Simon Burton, Radu Calinescu, Kester Clegg, Jie Zou, Ioannis Stefanakos. Institutions: University of York. Status: active. | OdiZdS7PvJ | $460K
ARIA TA1.1: Monoidal Coalgebraic Metrics | University of Pisa | 2024-01 | aria.org.uk | Lead(s): Filippo Bonchi. Institutions: University of Pisa. Status: active. | VpM42-Oye3
ARIA TA1.1: ULTIMATE: Universal Stochastic Modelling, Verification and Synthesis Framework | University of York | 2024-01 | aria.org.uk | Lead(s): Radu Calinescu, Simos Gerasimou, Sinem Getir Yaman, Gricel Vazquez. Institutions: University of York. Status: active. | VpM42-Oye3
ARIA TA1.1: Learning-Theoretic AI Safety | Association for Long Term Existence and Resilience (ALTER) | 2024-01 | aria.org.uk | Lead(s): Vanessa Kosoy, David Manheim, Alexander Appel, Gergely Szucs. Institutions: ALTER. Status: closed. | VpM42-Oye3
ARIA TA1.1: Syntax and Semantics for Multimodal Petri Nets | Tallinn University of Technology | 2024-01 | aria.org.uk | Lead(s): Amar Hadzihasanovic, Diana Kessler. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3
ARIA TA1.4: Law-following AI | Institute for Law & AI | 2024-01 | aria.org.uk | Lead(s): Cullen O'Keefe, Janna Tay. Institutions: Institute for Law & AI. Status: active. | W1Z6qeahXY
ARIA TA1.1: Doubly Categorical Systems Logic | Matteo Capucci (Independent) | 2024-01 | aria.org.uk | Lead(s): Matteo Capucci. Institutions: Independent Researcher. Status: closed. | VpM42-Oye3
ARIA TA1.4: Privacy-preserving AI Safety Verification | University of Birmingham | 2024-01 | aria.org.uk | Lead(s): Pascal Berrang, Mirco Giacobbe, Yang Zhang. Institutions: University of Birmingham / CISPA Helmholtz Center. Status: active. | W1Z6qeahXY
ARIA TA1.1: Philosophical Applied Category Theory | David Corfield (Independent) | 2024-01 | aria.org.uk | Lead(s): David Corfield. Institutions: Independent Researcher. Status: active. | VpM42-Oye3
ARIA TA1.3: UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator) | HASH | 2024-01 | aria.org.uk | Lead(s): Dei Vilkinsons, Ciaran Morinan. Institutions: HASH. Status: active. | -C8O_gr9Bj
ARIA TA2: SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing | Manufacturing Technology Centre | 2024-01 | aria.org.uk | Lead(s): Mohammed Begg. Institutions: Manufacturing Technology Centre. Status: active. | tDrsgSLi8J
ARIA TA1.1: True Categorical Programming for Composable Systems | GLAIVE | 2024-01 | aria.org.uk | Lead(s): Jade Master, Zans Mihejevs, Andre Videla, Dylan Braithwaite. Institutions: GLAIVE. Status: closed. | VpM42-Oye3
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories | University College London | 2024-01 | aria.org.uk | Lead(s): Fabio Zanasi, Paul Wilson. Institutions: UCL / Hellas AI. Status: active. | -C8O_gr9Bj
ARIA TA1.3: GAIOS | University of Cambridge | 2024-01 | aria.org.uk | Lead(s): Peter van Hardenberg, Martin Kleppmann. Institutions: Ink & Switch / Cambridge University. Status: active. | -C8O_gr9Bj
ARIA TA1.1: Computational Mechanics Approach to World Models | University of Sussex | 2024-01 | aria.org.uk | Lead(s): Fernando Rosas. Institutions: University of Sussex. Status: active. | VpM42-Oye3
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML | Heriot-Watt University | 2024-01 | aria.org.uk | Lead(s): Ekaterina Komendantskaya, Robert Atkey, Radu Mardare, Matteo Capucci. Institutions: Heriot-Watt University / University of Strathclyde. Status: closed. | VpM42-Oye3
ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification | Zeroth Research | 2024-01 | aria.org.uk | Lead(s): Mirco Giacobbe, Luca Arnaboldi, Pascal Berrang. Institutions: Zeroth Research / Fondazione Bruno Kessler. Status: active. | -C8O_gr9Bj
ARIA TA1.4: AI-enabled Governance Models for Advanced AI R&D Organisations | Centre for Future Generations | 2024-01 | aria.org.uk | Lead(s): Alex Petropoulos, Bengüsu Ozcan, David Janku, Max Reddel. Institutions: Centre for Future Generations. Status: active. | W1Z6qeahXY
ARIA TA1.1: Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes | University College London | 2024-01 | aria.org.uk | Lead(s): Fabio Zanasi. Institutions: University College London. Status: active. | VpM42-Oye3
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems | University College London | 2024-01 | aria.org.uk | Lead(s): Alexandra Silva, Robin Piedeleu, Noam Zilberstein. Institutions: UCL / Cornell. Status: active. | VpM42-Oye3
ARIA TA1.2: TA1.2 Technical Coordinator | Obsidian Systems | 2024-01 | aria.org.uk | Lead(s): Colin Hobbins. Institutions: Obsidian Systems. Status: active. | -C8O_gr9Bj
ARIA TA1.1: String Diagrammatic Probabilistic Logic | Tallinn University of Technology | 2024-01 | aria.org.uk | Lead(s): Pawel Sobocinski, Eigil Rischel. Institutions: Tallinn University of Technology. Status: active. | VpM42-Oye3
ARIA TA1.1: Profunctors: A unified semantics for safeguarded AI | University of Manchester | 2024-01 | aria.org.uk | Lead(s): Nicola Gambino. Institutions: University of Manchester. Status: active. | VpM42-Oye3
ARIA TA1.1: Probabilistic Protocol Specification for Distributed Autonomous Processes | University of Oxford | 2024-01 | aria.org.uk | Lead(s): Nobuko Yoshida, Adrian Puerto Aubel, Burak Ekici, Joseph Paulus, Dylan McDermott. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain | HASH | 2024-01 | aria.org.uk | Lead(s): Leah Pickering. Institutions: HASH. Status: active. | OdiZdS7PvJ
ARIA TA3: Safeguarded AI for Energy Savings in Radio Access Networks | Net AI | 2024-01 | aria.org.uk | Lead(s): Marco Fiore, Paul Patras. Institutions: Net AI. Status: active. | OdiZdS7PvJ
ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verification | University of Birmingham | 2024-01 | aria.org.uk | Lead(s): Stefano Gogioso, Mirco Giacobbe. Institutions: Hashberg Ltd / University of Birmingham. Status: active. | VpM42-Oye3
ARIA TA1.4: Field Building for Better Formal Models of Society | Meaning Alignment Institute | 2024-01 | aria.org.uk | Lead(s): Joe Edelman, Ryan Lowe. Institutions: Meaning Alignment Institute. Status: active. | W1Z6qeahXY
ARIA TA3: Large-Scale Validation of Business Process AI (BPAI) | University of Oxford | 2024-01 | aria.org.uk | Lead(s): Nobuko Yoshida, David Parker, Adrian Puerto Aubel, Joseph Paulus. Institutions: University of Oxford. Status: active. | OdiZdS7PvJ
ARIA TA3: SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility | University of Oxford | 2024-01 | aria.org.uk | Lead(s): Thomas Morstyn, Jakob Foerster, Yihong Zhou, Sofia Sampaio. Institutions: University of Oxford. Status: active. | OdiZdS7PvJ
ARIA TA3: Digital Custodians for Ageing Infrastructure | Mind Foundry / WSP | 2024-01 | aria.org.uk | Lead(s): Nathan Korda, Julia Bush, Mark McLeod. Institutions: Mind Foundry / WSP. Status: closed. | OdiZdS7PvJ
ARIA TA1.1: SAINT: Safe AI ageNTs | University of Oxford | 2024-01 | cs.ox.ac.uk | Lead(s): Alessandro Abate, Virginie Debauche, Niko Vertovec. Institutions: University of Oxford. Status: active. | VpM42-Oye3
ARIA TA1.1: Modal Types for Quantitative Analysis | University of Kent | 2024-01 | aria.org.uk | Lead(s): Vineet Rajani, Dominic Orchard. Institutions: University of Kent. Status: active. | VpM42-Oye3
Internal Metadata
ID: sid_XqjV4mbMXQ
Stable ID: sid_XqjV4mbMXQ
Wiki ID: E1997
Type: organization
YAML Source: packages/factbase/data/fb-entities/aria-uk.yaml
Facts: 7 structured (12 total)
Records: 101 in 4 collections