Safety Research & Resources
- AI safety research has only ~1,100 FTE researchers globally compared to an estimated 30,000-100,000 capabilities researchers, a 1:50-100 ratio that is worsening as capabilities research grows 30-40% annually versus safety's 21-25% growth.
- Capabilities investment exceeds $100 billion annually while safety research receives only $250-400M globally (roughly 0.0004% of global GDP); measured against the ~$10M in annual philanthropic safety funding, the spending ratio is approximately 10,000:1.
- Despite rapid ~25% annual growth, the AI safety field has roughly tripled from ~400 to ~1,100 FTEs between 2022 and 2025, yet the pipeline remains thin, with only ~200-300 new researchers entering annually through structured programs.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Total Safety Researchers | ≈1,100 FTEs globally (2025) | AI Safety Field Growth Analysis: 600 technical + 500 governance |
| Annual Funding | $150-400M total; $10M Coefficient Giving (2024) | Coefficient Giving 2024 Report |
| Safety:Capabilities Ratio | 1:50-100 researcher ratio; 1:10,000 funding ratio | Stuart Russell estimates |
| Field Growth Rate | 21-24% annually (safety) vs 30-40% (capabilities) | EA Forum analysis |
| Government Investment | $160M+ combined (UK AISI: £240M, US AISI: $10M) | UK AISI grants, NIST budget |
| Training Pipeline | ≈300 new researchers/year via structured programs | MATS (98 scholars), SPAR (50+), ERA (30+) |
| Industry Safety Index | D average for existential safety across all labs | FLI AI Safety Index 2025 |
Overview
This page tracks the size, growth, and resource allocation of the AI safety research field. Understanding these metrics helps assess whether safety research is keeping pace with capabilities development and identify critical capacity gaps. The analysis encompasses researcher headcount, funding flows, publication trends, and educational programs.
Key finding: Despite rapid growth, AI safety research remains severely under-resourced relative to capabilities development, with spending ratios estimated at 1:10,000 or worse. The field has tripled from ~400 to ~1,100 FTEs (2022-2025), but capabilities research is growing faster, creating a widening absolute gap. Current safety funding represents just 0.0004% of global GDP, while AI capabilities investment exceeds $100 billion annually. This raises serious questions about whether AI safety research can develop adequate solutions before transformative AI capabilities emerge.
Risk Assessment
| Dimension | Assessment | Evidence | Trend |
|---|---|---|---|
| Researcher Shortage | Critical | 1:50-100 safety:capabilities ratio | Worsening |
| Funding Gap | Severe | 1:10,000 spending ratio | Stable disparity |
| Experience Gap | High | Median 2-5 years experience | Slowly improving |
| Growth Rate Mismatch | Concerning | 21% vs 30-40% annual growth | Gap widening |
Current Safety Research Capacity
Field Structure Overview
Full-Time Researcher Headcount (2025)
| Category | Count | Organizations | Growth Rate |
|---|---|---|---|
| Technical AI Safety | ≈600 FTEs | 68 active orgs | 21% annually |
| AI Governance/Policy | ≈500 FTEs | Various | 30% annually |
| Total Safety Research | ≈1,100 FTEs | 70+ orgs | 25% annually |
Data source: AI Safety Field Growth Analysis 2025 (EA Forum), which tracks organizations explicitly branded as “AI safety.”
Key limitations: 80,000 Hours estimates “several thousand people” work on major AI risks when including researchers at major labs and academia, suggesting significant undercounting of part-time and embedded safety researchers.
Field Composition by Research Area
Top technical research areas by organization count:
- Miscellaneous technical AI safety research
- LLM safety
- Interpretability
- Alignment research
Historical growth trajectory:
- 2022: ~400 FTE researchers total
- 2023: ~650 FTE researchers
- 2024: ~900 FTE researchers
- 2025: ~1,100 FTE researchers
This represents rapid growth, roughly a tripling in three years, though the year-over-year rate has slowed from over 60% (2022-2023) to about 22% (2024-2025) and still lags the estimated 30-40% annual expansion of capabilities research.
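A quick arithmetic check on the trajectory above (a minimal sketch in Python; the headcounts are the rounded estimates quoted in this section):

```python
# Year-over-year growth and compound annual growth rate (CAGR) implied by the
# reported FTE trajectory (rounded estimates from the field growth analysis).
headcounts = {2022: 400, 2023: 650, 2024: 900, 2025: 1100}

years = sorted(headcounts)
for prev, curr in zip(years, years[1:]):
    growth = headcounts[curr] / headcounts[prev] - 1
    print(f"{prev}->{curr}: {growth:.0%} year-over-year")

n_years = years[-1] - years[0]
cagr = (headcounts[years[-1]] / headcounts[years[0]]) ** (1 / n_years) - 1
print(f"2022-2025 CAGR: {cagr:.0%}")  # ~40% overall, while the most recent year is ~22%
```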
Funding Analysis
Annual Safety Research Funding (2024-2025)
| Funding Source | Amount | Focus Area | Reliability |
|---|---|---|---|
| Coefficient Giving | ≈$10M (2024); $10M RFP (2025) | Technical safety (21 research areas), governance | High |
| Long-Term Future Fund | ≈$1-10M annually | Individual grants, upskilling | Medium |
| Government Programs | ≈$160M+ | UK AISI (£240M), US AISI ($10M), Canada ($10M) | Growing |
| Corporate Labs | Undisclosed | Internal safety teams | Unknown |
| Total Estimated | $150-400M | Global safety research | Medium confidence |
Coefficient Giving context: Since 2017, Coefficient Giving (then Open Philanthropy) has donated ≈$136 million to AI safety (~12% of their $1.8B total grants). They acknowledged their 2024 spending rate was “too slow” and are “more aggressively expanding support for technical AI safety work.” Their 2025 RFP covers 21 research directions including adversarial testing, model transparency, and theoretical alignment.
Government Safety Investment
| Country/Region | Program | Funding | Key Initiatives | Timeline |
|---|---|---|---|---|
| United Kingdom | UK AI Security Institute | £240M total | £15M Alignment Project, £8.5M Systemic Safety Grants, £200K Challenge Fund | 2023+ |
| United States | US AISI (renamed CAISI 2025) | $10M (chronically underfunded) | Model evaluation partnerships with Anthropic/OpenAI | 2024+ |
| Canada | Canada AISI | $10M | Research coordination | 2024+ |
| European Union | AI Act implementation | €100M+ | Regulatory infrastructure | 2024+ |
Note: The UK-US AI Safety Institutes signed a landmark agreement in 2024 to jointly test advanced AI models, share research insights, and enable expert talent transfers. However, US funding remains substantially lower than UK investment—the NIST budget that hosts AISI has faced congressional budget cuts rather than the expansion requested by the Biden administration.
Capabilities vs Safety Spending
Critical disparity metrics (a rough arithmetic reproduction follows this list):
- 10,000:1 ratio of capabilities to safety investment (Stuart Russell, UC Berkeley)
- Companies spend more than $100 billion building AGI vs ≈$10 million philanthropic safety research annually
- AI safety funding: 0.0004% of global GDP vs $131.5B in AI startup VC funding (2024)
- Only 2% of AI publications concern safety issues despite 312% growth in safety research (2018-2023)
- External safety organizations operate on budgets smaller than a frontier lab’s daily burn
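The headline ratios above follow directly from the quoted figures; a minimal sketch (the dollar amounts are the rounded estimates used in this section, and the ≈$100 trillion global GDP figure is an assumption included only for the percentage check):

```python
# Rough reproduction of the capabilities-vs-safety spending ratios quoted above.
capabilities_spend = 100e9            # > $100B annual capabilities investment
philanthropic_safety = 10e6           # ≈ $10M annual philanthropic safety funding
total_safety_low, total_safety_high = 250e6, 400e6   # $250-400M total safety research
global_gdp = 100e12                   # assumed ≈ $100T global GDP (for the % check only)

print(f"Capabilities vs philanthropic safety: {capabilities_spend / philanthropic_safety:,.0f}:1")
print(f"Capabilities vs total safety: {capabilities_spend / total_safety_high:,.0f}:1 "
      f"to {capabilities_spend / total_safety_low:,.0f}:1")
print(f"Total safety funding as share of GDP: {total_safety_high / global_gdp:.4%} (upper end)")
```

Run as written, this prints the 10,000:1 ratio against philanthropic funding, roughly 250-400:1 against total safety funding, and about 0.0004% of GDP at the upper funding estimate.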
Capability researcher growth comparison (AI Safety Field Growth Analysis 2025):
| Metric | Safety Field | Capabilities Field | Gap |
|---|---|---|---|
| Annual growth rate | 21-24% | 30-40% | Widening |
| OpenAI headcount | N/A | 300 → 3,000 (2021-2025) | 10x growth |
| Anthropic, DeepMind | N/A | Each grown more than 3x | Rapid expansion |
| ML papers | ≈45,000 safety-related (cumulative 2018-2023) | Total output doubling every ≈2 years | Exponential |
For context: Global philanthropic climate funding reaches $1-15 billion annually, making climate funding roughly 20-40x larger than AI safety funding. Prominent AI safety advocates recommend increasing safety investment to at least 30% of compute resources, a level far above current allocations.
Research Output & Quality
Publication Trends (2024-2025)
Major alignment research developments:
| Research Area | Notable 2024-2025 Papers | Impact |
|---|---|---|
| Alignment Foundations | “AI Alignment: A Comprehensive Survey” (RICE framework) | Comprehensive taxonomy |
| Mechanistic Interpretability | “Mechanistic Interpretability Benchmark (MIB)” | Standardized evaluation |
| Safety Benchmarks | WMDP Benchmark (ICML 2024) | Dangerous capability assessment |
| Training Methods | “Is DPO Superior to PPO for LLM Alignment?” | Training optimization |
Industry research contributions:
- Anthropic: Circuit tracing research revealing Claude’s “shared conceptual space” (March 2025)
- Google DeepMind: Announced deprioritizing sparse autoencoders (March 2025)
- CAIS: Supported 77 safety papers through compute cluster (2024)
Field debates: Discussion of the value of mechanistic interpretability has intensified, with Dario Amodei advocating continued focus while other labs shift priorities.
Research Quality Indicators
Positive signals:
- Research “moving beyond raw performance to explainability, alignment, legal and ethical robustness”
- Standardized benchmarks emerging (MIB, WMDP)
- Industry-academic collaboration increasing
Concerning signals:
- OpenAI disbanded its super-alignment team (May 2024)
- Departing safety leaders citing that safety “took a back seat to shiny products”
- 56.4% surge in AI incidents from 2023 to 2024
FLI AI Safety Index: Company Comparison (Winter 2025)
The Future of Life Institute AI Safety Index evaluates leading AI companies across six domains using 33 indicators. Scores use the US GPA system (A+ to F).
| Company | Overall | Risk Assessment | Current Harms | Safety Framework | Existential Safety | Governance | Information Sharing |
|---|---|---|---|---|---|---|---|
| Anthropic | B- | B | B+ | B | D | B | B- |
| OpenAI | C+ | B- | B | B- | D | C+ | C |
| Google DeepMind | C+ | B- | B- | B- | D | C | C |
| xAI | D+ | D | C | D | D- | D | D |
| Meta | D | D+ | C+ | D- | D- | D- | D |
| Zhipu AI | D- | D | C | D- | F | D- | D- |
| DeepSeek | D- | D- | C- | D- | F | F | D- |
Key finding: No company achieved above a D in Existential Safety, indicating industry-wide structural failure to prevent catastrophic misuse or loss of control. The top three performers (Anthropic, OpenAI, DeepMind) show substantially stronger practices than others, particularly in risk assessment and safety frameworks.
Educational Pipeline & Training
PhD Programs and Fellowships
| Program | Funding | Duration | Focus |
|---|---|---|---|
| Vitalik Buterin PhD Fellowship (Future of Life Institute) | $10K/year + tuition | 5 years | AI safety PhD research |
| Google PhD Fellowship | $85K/year | Variable | AI research including safety |
| Global AI Safety Fellowship | Up to $30K | 6 months | Career transitions |
| Anthropic Fellows Program | $2,100/week | Flexible | Mid-career transitions |
Training Program Capacity
| Program | Annual Capacity | Target Audience | Outcomes | Support |
|---|---|---|---|---|
| MATS | 98 scholars (Summer 2025) | Aspiring safety researchers | 80% now work in AI safety; 10% co-founded startups | $15K stipend, $12K compute, housing |
| SPAR | 50+ participants | Undergraduate to professional | Research publications | Mentorship, resources |
| ERA Fellowship | 30+ fellows | Early-career researchers | Career transitions | Funding, network |
| LASR Labs | Variable | Research transitions | Lab placements | Project-based |
MATS program details: The MATS Summer 2025 cohort supported 98 scholars with 57 mentors across interpretability, governance, and security research tracks. Alumni outcomes show ~80% continue in AI safety/security roles, with ~75% continuing in fully-funded 6-12 month extensions. Notable alumni have published award-winning papers (ACL 2024 Outstanding Paper) and joined frontier labs like Anthropic. Program satisfaction averages 9.4/10.
Estimated field pipeline: ~300 new safety researchers entering annually through structured programs, plus unknown numbers through academic and industry pathways.
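As a rough consistency check, the ~300 structured-program entrants can be compared against the net additions implied by the field's reported growth rate (a minimal sketch; the attrition rate is an illustrative assumption, not a measured figure):

```python
# Cross-check: do ~300 annual entrants roughly account for 21-24% growth on ~1,100 FTEs?
current_ftes = 1100
growth_low, growth_high = 0.21, 0.24       # reported annual growth of the safety field
entrants_per_year = 300                     # structured-program pipeline estimate
assumed_attrition = 0.05                    # assumption: ~5% of researchers leave the field yearly

net_needed_low = current_ftes * growth_low          # ≈ 231 net additions per year
net_needed_high = current_ftes * growth_high        # ≈ 264 net additions per year
net_from_pipeline = entrants_per_year - assumed_attrition * current_ftes  # ≈ 245

print(f"Net additions implied by growth rate: {net_needed_low:.0f}-{net_needed_high:.0f} per year")
print(f"Net additions from pipeline (under assumed attrition): {net_from_pipeline:.0f} per year")
```

Under these assumptions the structured pipeline alone roughly accounts for the observed growth; higher attrition or additional academic and industry pathways would shift the balance.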
Conference Participation & Community
Major AI Conference Attendance (2024-2025)
| Conference | Total Submissions | Attendance | AI Safety Content | Growth |
|---|---|---|---|---|
| NeurIPS 2025 | 21,575 valid submissions (5,290 accepted, 24.5%) | 16,000+ | 8 safety-focused social sessions | 61% submission increase |
| NeurIPS 2024 | N/A | ≈16,000-19,756 participants | Safety workshops, CAIS papers | 27% increase |
| ICML 2024 | N/A | 9,095 participants | “Next Generation of AI Safety” workshop | 15% increase |
| ICLR 2024 | N/A | ≈8,000 participants | Alignment research track | 12% increase |
NeurIPS 2025 context: The conference saw a 61% increase in submissions over 2024, supported by 20,518 reviewers, 1,663 area chairs, and 199 senior area chairs. This massive growth reflects the global surge in AI research productivity, though safety-specific research remains a small fraction of total submissions.
Safety-specific events:
- CAIS online course: 240 participants (2024)
- AI safety conference workshops and socials organized by multiple organizations
- NeurIPS 2025 split between Mexico City and Copenhagen due to capacity constraints
Community Growth Indicators
Positive trends:
- Safety workshops becoming standard at major AI conferences
- Industry participation in safety research increasing
- Graduate programs adding AI safety coursework
Infrastructure constraints:
- Major conferences approaching venue capacity limits
- Competition for safety researcher talent intensifying
- Funding concentration creating bottlenecks
Field Trajectory & Projections
Current Growth Rates vs Requirements
| Metric | Current State (2025) | Current Growth | Required Growth | Gap Assessment |
|---|---|---|---|---|
| Safety researchers | ≈1,100 FTEs | 21-24% annually | 50%+ (to catch up) | Critical: widening |
| Safety funding | $150-400M | ≈25% annually | 100%+ (recommended 30% of compute) | Severe |
| Safety publications | ≈2% of AI papers | ≈20% annually (312% growth 2018-2023) | Unknown | Moderate |
| Training pipeline | ≈300/year | Growing | ≈1,000/year needed | Significant |
2-5 Year Projections
Based on current exponential growth models from the AI Safety Field Growth Analysis 2025 (a compound-growth sketch follows the scenarios below):
Optimistic scenario (current 21-24% growth continues):
- ~2,500-3,000 FTE safety researchers by 2030 (extrapolating from current 1,100)
- ≈$100M-1B annual safety funding by 2028
- Mature graduate programs producing 500+ researchers annually
- UK AISI Alignment Project produces breakthrough research
Concerning scenario (capabilities growth accelerates to 50%+):
- Safety research remains under 5% of total AI research
- Racing dynamics intensify as AGI timelines compress
- 30-40% capabilities growth vs 21-24% safety growth creates widening absolute gap
- External safety organizations continue operating on budgets smaller than frontier lab daily burn
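A minimal compound-growth sketch of these scenarios in Python (assuming the 2025 baselines and growth rates quoted above simply continue; actual trajectories are far more uncertain):

```python
# Compound-growth projection of safety vs. capabilities researcher headcounts, 2025 -> 2030.
def project(baseline: float, annual_growth: float, years: int) -> float:
    """Project a headcount forward under constant exponential growth."""
    return baseline * (1 + annual_growth) ** years

years_ahead = 5  # 2025 -> 2030

safety_2030 = [project(1_100, g, years_ahead) for g in (0.21, 0.24)]
capabilities_2030 = [project(30_000, 0.30, years_ahead), project(100_000, 0.40, years_ahead)]

print(f"Safety researchers in 2030: ~{safety_2030[0]:,.0f} to ~{safety_2030[1]:,.0f}")
# ≈ 2,900-3,200, broadly in line with the ~2,500-3,000 figure in the optimistic scenario

print(f"Capabilities researchers in 2030: ~{capabilities_2030[0]:,.0f} to ~{capabilities_2030[1]:,.0f}")
# ≈ 111,000-538,000, so the absolute gap widens even while safety keeps growing
```

The same function with a 50% capabilities growth rate reproduces the concerning scenario, in which the safety share of total researchers falls rather than rises.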
Key Uncertainties & Research Gaps
Critical Unknowns
Capability researcher count: No comprehensive database exists for AI capabilities researchers. Estimates suggest 30,000-100,000 globally based on:
- OpenAI growth: 300→3,000 employees (2021-2025)
- Similar expansion at Anthropic and Google DeepMind
- ML conference attendance doubling every 2-3 years
Industry safety spending: Most AI labs don’t disclose safety vs capabilities budget breakdowns. Known examples:
- IBM: 2.9%→4.6% of AI budgets (2022-2024)
- OpenAI: Super-alignment team disbanded (May 2024)
- Anthropic: Constitutional AI research ongoing but budget undisclosed
Expert Disagreements
Field size adequacy:
- Optimists: Current growth sufficient if focused on highest-impact research
- Pessimists: Need 10x more researchers given AI risk timelines
Research prioritization:
- Technical focus: Emphasize interpretability, alignment
- Governance focus: Prioritize policy interventions, international coordination
Funding allocation:
- Large grants to established organizations vs distributed funding for diverse approaches
- Academic vs industry vs independent researcher support ratios
Data Quality Assessment
| Metric | Data Quality | Primary Limitations | Improvement Needs |
|---|---|---|---|
| FTE researchers | Medium | Undercounts independents, part-time contributors | Comprehensive workforce survey |
| Total funding | Medium | Many corporate/government grants undisclosed | Disclosure requirements |
| Spending ratios | Low | Labs don’t publish safety budget breakdowns | Industry transparency standards |
| Publication trends | Medium | No centralized safety research database | Standardized taxonomy and tracking |
| Experience levels | Very Low | No systematic demographic data collection | Regular field census |
| Researcher ratios | Low | No capability researcher baseline count | Comprehensive AI workforce analysis |
Most critical data gaps:
- Industry safety spending: Mandatory disclosure of safety vs capabilities R&D budgets
- Researcher demographics: Experience, background, career transition patterns
- Research impact assessment: Citation analysis and influence tracking for safety work
- International coordination: Non-English language safety research and global South participation
Sources & Resources
Primary Field Analysis
- AI Safety Field Growth Analysis 2025 (EA Forum)
- 80,000 Hours: AI Safety Researcher Career Review
- An Overview of the AI Safety Funding Situation (LessWrong)
- International AI Safety Report 2025 - 96 AI experts contributing, nominated by 30 countries
- ETO AI Safety Research Almanac - Comprehensive research statistics
Funding & Investment Data
- Coefficient Giving: Progress in 2024 and Plans for 2025
- Coefficient Giving Technical AI Safety RFP - $10M across 21 research areas
- Coefficient Giving Grants Database
- Center for AI Safety 2024 Year in Review (EA Forum)
- UK AISI Grants Programs - £240M total funding
Research Output & Quality
- AI Alignment: A Comprehensive Survey (arXiv)
- Anthropic: Recommended Directions for AI Safety Research
- NeurIPS 2024 Fact Sheet
- ICML 2024 Statistics
- ETO: Still a drop in the bucket - AI safety research
Training & Educational Programs
- Future of Life Institute: PhD Fellowships
- MATS Research Program - 98 scholars, 57 mentors (Summer 2025)
- Anthropic Fellows Program
- SPAR - Research Program for AI Risks
Safety Assessment & Monitoring
- FLI AI Safety Index Winter 2025 - 33 indicators across 6 domains
- FLI AI Safety Index Summer 2025
- Our World in Data: AI Conference Attendance
Government Programs
- UK AI Security Institute - Research agenda and publications
- UK AISI Year in Review 2025
- UK AISI Alignment Project - £15M alignment research initiative
Last updated: January 30, 2026
Note: This analysis synthesizes data from multiple sources with varying quality and coverage. Quantitative estimates should be interpreted as order-of-magnitude indicators rather than precise counts. The field would benefit significantly from standardized data collection and reporting practices.