Longterm Wiki

GovAI

Safety Organization
Founded 2016 (10 years old) · governance.ai
Structured Facts

Revenue: $5.7M (as of 2024)
Headcount: 40–45 (as of 2025)
Total Funding Raised: $13M (as of 2025)
Founded Date: 2016

Key People (2)

Anton Korinek: Economics of AI Lead. Also Professor at the University of Virginia.
Ben Garfinkel: Director (2021–present). Became Director after GovAI transitioned to independence from FHI.

All Facts

Financial

Annual Expenses: $920K (2024)
Grant Received: $3M (Feb 2025), 4 data points:
  Feb 2025: $3M
  2025: $756K
  2024: $2.5M
  2023: $2.8M
Headcount: 40–45 (2025)
Revenue: $5.7M (2024)
Total Funding Raised: $13M (2025)

Products & Usage

Publication Count: 50 (2025)

Organization

Country: United Kingdom
Founded Date: 2016

General

Website: https://www.governance.ai, 2 data points:
  https://www.governance.ai
  https://governance.ai

Other

Advisory Board: Ajeya Cotra, Allan Dafoe, Helen Toner, Tasha McCauley, Toby Ord (2025)
Independence Date: 2021
Key Person: Markus Anderljung (2024), 2 data points
Policy Influence: Vice-Chair role on the EU GPAI Code of Practice drafting process (2024–2025)
Program: GovAI Fellowship — competitive research fellowship program bringing early-career researchers to work on AI governance for 3–12 months; 100+ alumni placed across DeepMind, OpenAI, Anthropic, government agencies, and think tanks (2025)
Publication: 4 data points:
  Feb 2024: Computing Power and the Governance of Artificial Intelligence — argues compute is the most governable AI pillar; proposes international monitoring mechanisms
  2024: Safety Cases for Frontier AI — argues for structured safety arguments analogous to safety cases in other high-risk industries
  2024: Risk Thresholds for Frontier AI — proposes a framework for when frontier AI capabilities warrant regulatory intervention
  Jul 2023: Frontier AI Regulation: Managing Emerging Risks to Public Safety — proposes three regulatory building blocks: standards, registration/reporting, and compliance mechanisms

Divisions (2)

Name | Type | Status | Source | Notes
GovAI Policy | team | active | governance.ai | Policy engagement and fellowships program. Places fellows in government offices and international organizations to work on AI policy.
GovAI Research | team | active | governance.ai | AI governance research at the University of Oxford. Publishes research on international AI governance, compute governance, and AI policy design.

Publications (23)

Title | Type | Authors | Published | Source | Notes
Frontier AI Auditing: Toward Rigorous Third-Party Assessment | paper | Brundage, Dreksler, Homewood, McGregor et al. | 2026-01 | governance.ai
Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards | paper | Williams, Righetti, Rosenberg et al. | 2025-07 | governance.ai
Third-Party Compliance Reviews for Frontier AI Safety Frameworks | paper | Homewood, Williams, Dreksler, Lidiard, Garfinkel, Schuett et al. | 2025-05 | governance.ai
Infrastructure for AI Agents | paper | Chan, Wei, Huang, Rajkumar, Perrier, Lazar, Hadfield, Anderljung | 2025-01 | governance.ai
IDs for AI Systems | paper | Chan, Kolt, Wills, Anwar, Schroeder de Witt, Rajkumar, Hammond, Krueger, Heim, Anderljung | 2024-10 | governance.ai
Safety Cases for Frontier AI | paper | Buhl, Sett, Koessler, Schuett, Anderljung | 2024-10 | governance.ai
A Grading Rubric for AI Safety Frameworks | paper | Alaga, Schuett, Anderljung | 2024-09 | governance.ai
From Principles to Rules: A Regulatory Approach for Frontier AI | paper | Schuett, Anderljung, Carlier, Koessler, Garfinkel | 2024-08 | governance.ai
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of LLMs | paper | Eloundou, Manning, Mishkin, Rock | 2024-06 | governance.ai | Published in Science. Widely cited labor market impact analysis.
Visibility into AI Agents | paper | Chan, Ezell, Kaufmann, Wei, Hammond, Bradley, Bluemke, Rajkumar, Krueger, Kolt, Heim, Anderljung | 2024-06 | governance.ai
Risk Thresholds for Frontier AI | paper | Koessler, Schuett, Anderljung | 2024-06 | governance.ai
Societal Adaptation to Advanced AI | paper | Bernardi, Mukobi, Greaves, Heim, Anderljung | 2024-05 | governance.ai
Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation | paper | Heim, Fist, Egan, Huang, Zekany, Trager, Osborne, Zilberman | 2024-03 | governance.ai
Computing Power and the Governance of Artificial Intelligence | paper | Sastry, Heim, Anderljung et al. | 2024-02 | arxiv.org | Venue: arXiv
Computing Power and the Governance of AI | paper | Lennart Heim et al. | 2024-02 | governance.ai
What Should Be Internationalised in AI Governance? | paper | Robert Trager, Ben Garfinkel, et al. | 2024 | governance.ai
Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Markus Anderljung, Joslyn Barnhart, Anton Korinek, et al. | 2023-11 | governance.ai
Three Lines of Defense Against Risks from AI | paper | Jonas Schuett | 2023-10 | governance.ai
Open-Sourcing Highly Capable Foundation Models | paper | Elizabeth Seger, Noemi Dreksler, Richard Moulange, et al. | 2023-09 | governance.ai
International Governance of Civilian AI: A Jurisdictional Certification Approach | paper | Trager, Harack, Reuel, Carnegie, Heim, Ho et al. | 2023-08 | governance.ai
Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Anderljung et al. | 2023-07 | arxiv.org | Venue: arXiv
Model Evaluation for Extreme Risks | paper | Shevlane, Farquhar, Garfinkel et al. | 2023-05 | arxiv.org | Venue: arXiv
Model Evaluation for Extreme Risks | paper | Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, et al. | 2023-05 | arxiv.org
Internal Metadata
ID: sid_XLLyzaEaCA
Stable ID: sid_XLLyzaEaCA
Wiki ID: E153
Type: organization
YAML Source: packages/factbase/data/fb-entities/govai.yaml
Facts: 26 structured (27 total)
Records: 27 in 3 collections
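The metadata above points at a YAML-backed entity record carrying structured facts with as-of dates. As a purely illustrative sketch (the actual schema of govai.yaml is not shown on this page, so every field name below is an assumption), such a record might be modeled and queried like this:

```python
# Hypothetical model of a structured-facts entity record.
# Field names ("key", "value", "as_of", etc.) are assumptions,
# not the real schema used by the factbase package.
from dataclasses import dataclass, field

@dataclass
class Fact:
    key: str     # e.g. "revenue"
    value: str   # e.g. "$5.7M"
    as_of: str   # e.g. "2024"

@dataclass
class Entity:
    stable_id: str
    wiki_id: str
    entity_type: str
    facts: list = field(default_factory=list)

# Populate with values taken from this page.
govai = Entity("sid_XLLyzaEaCA", "E153", "organization")
govai.facts.append(Fact("revenue", "$5.7M", "2024"))
govai.facts.append(Fact("headcount", "40-45", "2025"))
govai.facts.append(Fact("total-funding-raised", "$13M", "2025"))

# Look up a fact by key.
revenue = next(f for f in govai.facts if f.key == "revenue")
print(f"{revenue.value} (as of {revenue.as_of})")  # prints "$5.7M (as of 2024)"
```

This shows only the shape of the data; a real implementation would load the facts from the YAML source rather than constructing them inline.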