Structured Facts

Database Records

| Field | Value | As Of |
|---|---|---|
| Revenue | $5.7M | 2024 |
| Headcount | 40–45 | 2025 |
| Total Funding Raised | $13M | 2025 |
| Founded Date | 2016 | — |
Key People

| Name | Role | Tenure | Notes |
|---|---|---|---|
| Ben Garfinkel | Director | 2021 – present | Became Director after GovAI transitioned to independence from FHI |
key-person

| As Of | Value |
|---|---|
| 2024 | Markus Anderljung |
| 2024 | Ben Garfinkel |

legal-identifier (Aug 2024): UK Companies House #15883729 (Company Limited by Guarantee)

policy-influence (2024): Vice-Chair role on the EU GPAI Code of Practice drafting process (2024–2025)

program (2025): GovAI Fellowship — competitive research fellowship program bringing early-career researchers to work on AI governance for 3–12 months. 100+ alumni placed across DeepMind, OpenAI, Anthropic, government agencies, and think tanks.
publication

| As Of | Value |
|---|---|
| Feb 2024 | Computing Power and the Governance of Artificial Intelligence — argues compute is the most governable AI pillar; proposes international monitoring mechanisms |
| 2024 | Safety Cases for Frontier AI — argues for structured safety arguments analogous to safety cases in other high-risk industries |
| 2024 | Risk Thresholds for Frontier AI — proposes a framework for when frontier AI capabilities warrant regulatory intervention |
| Jul 2023 | Frontier AI Regulation: Managing Emerging Risks to Public Safety — proposes three regulatory building blocks: standards, registration/reporting, and compliance mechanisms |
Divisions

| Name | Division Type | Status | Source | Notes |
|---|---|---|---|---|
| GovAI Policy | team | active | governance.ai | Policy engagement and fellowships program. Places fellows in government offices and international organizations to work on AI policy. |
| GovAI Research | team | active | governance.ai | AI governance research at the University of Oxford. Publishes research on international AI governance, compute governance, and AI policy design. |
Publications

| Title | Type | Authors | Published | Source | Flagship | Notes |
|---|---|---|---|---|---|---|
| Frontier AI Auditing: Toward Rigorous Third-Party Assessment | paper | Brundage, Dreksler, Homewood, McGregor et al. | 2026-01 | governance.ai | — | — |
| Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards | paper | Williams, Righetti, Rosenberg et al. | 2025-07 | governance.ai | — | — |
| Third-Party Compliance Reviews for Frontier AI Safety Frameworks | paper | Homewood, Williams, Dreksler, Lidiard, Garfinkel, Schuett et al. | 2025-05 | governance.ai | — | — |
| Infrastructure for AI Agents | paper | Chan, Wei, Huang, Rajkumar, Perrier, Lazar, Hadfield, Anderljung | 2025-01 | governance.ai | — | — |
| IDs for AI Systems | paper | Chan, Kolt, Wills, Anwar, Schroeder de Witt, Rajkumar, Hammond, Krueger, Heim, Anderljung | 2024-10 | governance.ai | — | — |
| Safety Cases for Frontier AI | paper | Buhl, Sett, Koessler, Schuett, Anderljung | 2024-10 | governance.ai | ✓ | — |
| A Grading Rubric for AI Safety Frameworks | paper | Alaga, Schuett, Anderljung | 2024-09 | governance.ai | — | — |
| From Principles to Rules: A Regulatory Approach for Frontier AI | paper | Schuett, Anderljung, Carlier, Koessler, Garfinkel | 2024-08 | governance.ai | — | — |
| GPTs are GPTs: An Early Look at the Labor Market Impact Potential of LLMs | paper | Eloundou, Manning, Mishkin, Rock | 2024-06 | governance.ai | ✓ | Published in Science. Widely cited labor market impact analysis. |
| Visibility into AI Agents | paper | Chan, Ezell, Kaufmann, Wei, Hammond, Bradley, Bluemke, Rajkumar, Krueger, Kolt, Heim, Anderljung | 2024-06 | governance.ai | ✓ | — |
| Risk Thresholds for Frontier AI | paper | Koessler, Schuett, Anderljung | 2024-06 | governance.ai | — | — |
| Societal Adaptation to Advanced AI | paper | Bernardi, Mukobi, Greaves, Heim, Anderljung | 2024-05 | governance.ai | — | — |
| Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation | paper | Heim, Fist, Egan, Huang, Zekany, Trager, Osborne, Zilberman | 2024-03 | governance.ai | ✓ | — |
| Computing Power and the Governance of Artificial Intelligence | paper | Sastry, Heim, Anderljung et al. | 2024-02 | arxiv.org | ✓ | — |
| What Should Be Internationalised in AI Governance? | paper | Trager, Garfinkel, et al. | 2024 | governance.ai | — | — |
| Three Lines of Defense Against Risks from AI | paper | Schuett | 2023-10 | governance.ai | — | — |
| Open-Sourcing Highly Capable Foundation Models | paper | Seger, Dreksler, Moulange, et al. | 2023-09 | governance.ai | — | — |
| International Governance of Civilian AI: A Jurisdictional Certification Approach | paper | Trager, Harack, Reuel, Carnegie, Heim, Ho et al. | 2023-08 | governance.ai | — | — |
| Frontier AI Regulation: Managing Emerging Risks to Public Safety | paper | Anderljung, Barnhart, Korinek, et al. | 2023-07 | arxiv.org | ✓ | — |
| Model Evaluation for Extreme Risks | paper | Shevlane, Farquhar, Garfinkel et al. | 2023-05 | arxiv.org | ✓ | — |
Internal Metadata

| Field | Value |
|---|---|
| ID | sid_XLLyzaEaCA |
| Stable ID | sid_XLLyzaEaCA |
| Wiki ID | E153 |
| Type | organization |
| YAML Source | packages/factbase/data/fb-entities/govai.yaml |
| Facts | 26 structured (27 total) |
| Records | 27 in 3 collections |
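The metadata above points to a YAML source file for this entity. As a purely illustrative sketch — the field names below are assumptions for illustration, not the actual factbase schema — a record of this shape could carry the entity's structured facts:

```yaml
# Hypothetical sketch only; field names are assumed, not the real fb-entities schema.
id: sid_XLLyzaEaCA
wiki_id: E153
type: organization
name: GovAI
facts:
  - kind: key-person
    as_of: "2024"
    values:
      - Markus Anderljung
      - Ben Garfinkel
  - kind: legal-identifier
    as_of: "2024-08"
    value: "UK Companies House #15883729 (Company Limited by Guarantee)"
```

The values shown are taken from the facts listed above; only the structure is conjectural.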