GovAI


The Centre for the Governance of AI (GovAI) is one of the most influential AI policy research organizations globally, combining rigorous research with direct policy engagement at the highest levels. Originally founded in 2018 as part of the Future of Humanity Institute (FHI) at Oxford, GovAI became an independent nonprofit in 2023, shortly before FHI's 2024 closure, and relocated to London in 2024 to strengthen its policy engagement capabilities.

GovAI’s theory of impact centers on producing foundational research that shapes how governments and industry approach AI governance, while simultaneously training the next generation of AI governance professionals. Their 2018 research agenda helped define the nascent field of AI governance, and their subsequent work on compute governance has become a cornerstone of regulatory thinking in the US, UK, and EU. The organization receives substantial support from Coefficient Giving, with grants totaling over $1.8 million in 2023-2024 alone.

The organization’s influence extends beyond research: GovAI alumni now occupy key positions across the AI governance landscape—in frontier AI labs (DeepMind, OpenAI, Anthropic), major think tanks (CSET, RAND), and government positions in the US, UK, and EU. Perhaps most significantly, GovAI’s Director of Policy Markus Anderljung currently serves as Vice-Chair of the EU’s General-Purpose AI Code of Practice drafting process, directly shaping how the world’s first comprehensive AI law will be implemented.

| Attribute | Details |
| --- | --- |
| Founded | 2018 (as part of FHI); independent since 2023 |
| Location | London, UK (moved from Oxford in 2024) |
| Structure | Independent nonprofit |
| Staff size | ≈15-20 researchers and staff |
| Annual budget | ≈$1-4M (estimated from grants) |
| Primary funder | Coefficient Giving ($1.8M+ in 2023-2024) |
| Affiliations | US AI Safety Institute Consortium member |

| Metric | Value | Notes |
| --- | --- | --- |
| Publications in peer-reviewed venues | 50+ | Nature, Science, NeurIPS, International Organization |
| Fellowship alumni placed | 100+ | Since 2018 |
| Government advisory engagements | UK, US, EU | Direct policy input |
| Current policy roles | EU GPAI Code Vice-Chair | Markus Anderljung |

GovAI’s research spans four interconnected domains, with particular depth in compute governance where they have produced foundational work cited by policymakers globally.


GovAI’s signature contribution is the compute governance framework—the idea that computing power, unlike data or algorithms, is physical, measurable, and therefore governable. Their February 2024 paper “Computing Power and the Governance of AI” (Anderljung, Heim, et al.) has become the definitive reference, cited in policy discussions from Washington to Brussels.

| Research Stream | Key Papers | Policy Impact |
| --- | --- | --- |
| Compute thresholds | Training Compute Thresholds (2024) | Informed EU 10^25 FLOP threshold |
| Cloud governance | Governing Through the Cloud (2024) | Know-Your-Customer proposals |
| Hardware controls | Chip Tracking Mechanisms (2023) | Export control discussions |
| Verification | AI Verification (2023) | International monitoring concepts |
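The regulatory thresholds in this research stream are expressed in training FLOP. A widely used scaling-law heuristic (not specific to GovAI's papers) estimates total training compute as C ≈ 6·N·D for N parameters and D training tokens. A minimal sketch of checking a training run against the EU AI Act's 10^25 FLOP presumption, with illustrative function names:

```python
# Rough training-compute estimate using the common C ~= 6 * N * D
# approximation (N = parameters, D = training tokens). Illustrative
# sketch only; real threshold determinations follow regulators'
# own definitions of covered compute.

EU_AI_ACT_THRESHOLD_FLOP = 1e25  # systemic-risk presumption in the EU AI Act


def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP via the 6*N*D heuristic."""
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float,
                      threshold: float = EU_AI_ACT_THRESHOLD_FLOP) -> bool:
    """True if the estimated run would meet or cross the threshold."""
    return estimated_training_flop(n_params, n_tokens) >= threshold


# Example: a 70B-parameter model trained on 15T tokens lands at ~6.3e24 FLOP,
# just under the 1e25 line.
flop = estimated_training_flop(70e9, 15e12)
print(f"{flop:.2e}", exceeds_threshold(70e9, 15e12))
```

This is part of why compute is attractive as a "governance surface": the inputs (chip counts, training duration, utilization) are physical and auditable, so an estimate like this can be checked without access to model weights or algorithms.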

Lennart Heim, formerly GovAI’s compute governance lead (now at RAND), regularly advises governments on implementation. His work demonstrates how compute provides a “governance surface”—a point where regulators can observe and influence AI development without requiring access to proprietary algorithms.

GovAI researches how nations can coordinate on AI governance despite competitive pressures. Their work on “AI Race Dynamics” examines why rational actors might collectively produce suboptimal outcomes, and what mechanisms might enable cooperation.

| Research Topic | Key Finding | Policy Relevance |
| --- | --- | --- |
| Race dynamics | Competitive pressure degrades safety investments | Supports international coordination |
| Standards harmonization | Technical standards can enable verification | Informs AI safety summits |
| Information sharing | Incident reporting reduces collective risk | Model for international registries |

Recent GovAI work focuses specifically on governing frontier AI—systems at or near the capability frontier that pose novel safety and security risks.

| Publication | Year | Contribution |
| --- | --- | --- |
| Frontier AI Regulation: Managing Emerging Risks | 2023 | Proposed tiered regulatory framework |
| Safety Cases for Frontier AI | 2024 | Framework for demonstrating system safety |
| Coordinated Pausing Scheme | 2024 | Evaluation-based pause mechanism for dangerous capabilities |

GovAI collaborated with UK AISI on safety case sketches for offensive cyber capabilities, demonstrating practical application of their theoretical frameworks.
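The coordinated-pausing proposal gates continued development on capability evaluations: if a model crosses a dangerous-capability threshold, participating developers pause until mitigations bring results back under it. As a loose illustration of that core logic (the names, scores, and the 0.8 threshold below are invented for the sketch, not taken from the paper):

```python
# Toy sketch of an evaluation-gated pause, loosely inspired by GovAI's
# coordinated-pausing proposal. All identifiers and thresholds here are
# illustrative assumptions, not the paper's specification.

from dataclasses import dataclass


@dataclass
class EvalResult:
    capability: str   # e.g. "offensive_cyber"
    score: float      # normalized 0-1 benchmark score


def should_pause(results: list[EvalResult],
                 danger_threshold: float = 0.8) -> list[str]:
    """Return the capabilities whose eval scores trigger a pause."""
    return [r.capability for r in results if r.score >= danger_threshold]


results = [EvalResult("offensive_cyber", 0.85),
           EvalResult("bio_uplift", 0.40)]
triggered = should_pause(results)
if triggered:
    print("Pause pending mitigation:", triggered)
```

The coordination problem is that this check only works if all frontier developers apply the same gate; a single defector erodes the incentive for others to pause, which is why the scheme is framed as coordinated rather than unilateral.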

GovAI runs competitive fellowship programs that have trained 100+ AI governance researchers since 2018. The fellowship provides mentorship from leading experts and has become a primary talent pipeline for the field.


GovAI’s leadership combines academic rigor with policy experience. Several former team members have moved to positions of significant influence.

Current Leadership

- Ben Garfinkel (Director)
- Markus Anderljung (Director of Policy & Research)
- Emma Bluemke (Research Manager)
- Anton Korinek (Economics of AI Lead)

| Person | Role | Background | Notable Contributions |
| --- | --- | --- | --- |
| Ben Garfinkel | Director | DPhil Oxford (IR); former OpenAI consultant | Sets organizational direction; security implications research |
| Markus Anderljung | Director of Policy | EY Sweden; UK Cabinet Office secondee | EU GPAI Code Vice-Chair; compute governance |
| Allan Dafoe | President (now at DeepMind) | Yale PhD; founded GovAI in 2018 | Foundational research agenda; field definition |
| Lennart Heim | Adjunct Fellow (at RAND) | Technical AI policy | Compute governance lead; OECD expert group |

GovAI’s impact extends through its alumni network, which now spans the AI governance ecosystem:

| Sector | Organizations | Significance |
| --- | --- | --- |
| Frontier labs | DeepMind, OpenAI, Anthropic | Policy and governance roles |
| Government | UK Cabinet Office, US OSTP, EU AI Office | Direct policy influence |
| Think tanks | CSET, RAND, CNAS | Research leadership |
| Academia | Oxford, Cambridge | Academic positions |

GovAI has published extensively in peer-reviewed venues and policy outlets. Their work is notable for bridging academic rigor with practical policy relevance.

| Title | Year | Authors | Venue | Impact |
| --- | --- | --- | --- | --- |
| Computing Power and the Governance of AI | 2024 | Anderljung, Heim, et al. | GovAI | Foundational compute governance reference |
| Safety Cases for Frontier AI | 2024 | GovAI/AISI | GovAI | Framework for demonstrating AI safety |
| Coordinated Pausing: An Evaluation-Based Scheme | 2024 | GovAI | GovAI | Proposes pause mechanism for dangerous capabilities |
| Training Compute Thresholds | 2024 | Heim, Koessler | White paper | Informs regulatory threshold-setting |
| Governing Through the Cloud | 2024 | Fist, Heim, et al. | Oxford | Cloud provider regulatory role |
| Frontier AI Regulation | 2023 | GovAI | GovAI | Tiered regulatory framework proposal |
| Standards for AI Governance | 2023 | GovAI | GovAI | International standards analysis |

GovAI researchers have published in leading journals and conferences:

| Venue Type | Examples |
| --- | --- |
| Academic journals | Nature, Nature Machine Intelligence, Science, International Organization |
| CS conferences | NeurIPS, AAAI AIES, ICML |
| Policy outlets | Journal of Strategic Studies |

GovAI’s influence operates through multiple channels: direct government advisory, regulatory participation, talent placement, and intellectual framework-setting.

| Engagement | Role | Significance |
| --- | --- | --- |
| EU GPAI Code of Practice | Vice-Chair (Anderljung) | Drafting Safety & Security chapter for AI Act implementation |
| UK Cabinet Office | Secondment (Anderljung, past) | Senior AI Policy Specialist |
| US AI Safety Institute Consortium | Member organization | Contributing to US AI safety standards |
| OECD AI Expert Group | Member (Heim) | AI Compute and Climate |

GovAI’s conceptual frameworks have shaped regulatory thinking:

| Framework | Adoption |
| --- | --- |
| Compute governance | Referenced in EU AI Act (10^25 FLOP threshold); US Executive Order |
| Tiered frontier regulation | Informs UK, EU, US approaches to frontier AI |
| Safety cases | Adopted by UK AISI as assessment framework |
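A safety case is a structured argument, backed by evidence, that a system is acceptably safe in a given context. As an illustrative sketch only, the claim-argument-evidence tree below follows the general pattern of safety-case notations such as GSN; the structure and example content are assumptions, not GovAI's or AISI's specific template:

```python
# Minimal claim-argument-evidence tree in the general style of
# safety-case notations (e.g. GSN). Structure and example content are
# illustrative assumptions, not GovAI's or AISI's actual template.

from dataclasses import dataclass, field


@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if all subclaims hold, or it has direct evidence."""
        if self.subclaims:
            return all(c.supported() for c in self.subclaims)
        return bool(self.evidence)


case = Claim(
    "Model poses no significant offensive-cyber uplift",
    subclaims=[
        Claim("Capability evals score below danger threshold",
              evidence=["eval_report_v2.pdf"]),
        Claim("Misuse safeguards withstand red-team attack",
              evidence=[]),  # open obligation -> case not yet supported
    ],
)
print(case.supported())  # False until red-team evidence is attached
```

The design point is that the top-level safety claim decomposes into auditable sub-claims, so a regulator or evaluator can see exactly which piece of evidence is missing rather than judging a monolithic assurance.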
| Organization | Focus | Size | Budget | Policy Access |
| --- | --- | --- | --- | --- |
| GovAI | AI governance research + field building | ≈20 | ≈$1-4M | High (EU, UK, US) |
| CSET (Georgetown) | Security + emerging tech | ≈50 | ≈$10M+ | High (US focus) |
| RAND AI | Broad AI policy | ≈30 | ≈$1M+ | High (US focus) |
| Oxford AI Governance | Academic research | ≈10 | ≈$1M | Medium |

GovAI is distinctive for combining research depth with direct regulatory participation—particularly through Anderljung’s Vice-Chair role in EU AI Act implementation.


GovAI is primarily funded by Coefficient Giving, which has provided substantial support for AI governance work.

| Grant | Year | Amount | Purpose |
| --- | --- | --- | --- |
| General Support | 2024 | $1,800,000 | Core operations |
| General Support | 2023 | $1,000,000 | Core operations |
| Field Building | 2021 | $141,613 | Fellowship programs |

GovAI occupies a distinctive niche: producing rigorous, policy-relevant research while maintaining direct access to regulatory processes. Key strengths include:

  1. Compute governance expertise: arguably the leading research group on this topic globally
  2. Talent pipeline: the fellowship program has trained a significant portion of the AI governance workforce
  3. Policy access: direct participation in EU AI Act implementation; alumni in key government roles
  4. Academic credibility: publications in top venues; historical Oxford affiliation

Key challenges include:

  1. Funding concentration: heavy reliance on Coefficient Giving creates potential vulnerability
  2. Geographic focus: primarily UK/US/EU, with limited Global South engagement
  3. Implementation gap: research excellence doesn't always translate into implementation capacity
  4. Scale constraints: small team relative to its policy influence ambitions
Open questions going forward:

| Question | Significance |
| --- | --- |
| Will compute governance prove tractable? | GovAI's signature bet |
| EU AI Act implementation success | Test of direct policy influence |
| Talent pipeline sustainability | Central to long-term impact |
| Funding diversification | Reduces single-funder risk |