Programme Director, Advanced Research and Invention Agency (ARIA)
Programme
Safeguarded AI (£59M R&D programme)
Previous Role
Research Fellow, Future of Humanity Institute, Oxford University (2021–2023)
Key Focus
Formal verification, provably safe AI, mathematical guarantees for critical infrastructure
Notable Work
Safeguarded AI framework, flexHEG, C. elegans nervous system simulation, Filecoin co-invention
Key Collaborators
Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia (co-authors); Evan Hubinger (MATS co-mentor)