AI Safety Talent Supply/Demand Gap Model
- Current AI safety training programs show dramatically different cost-effectiveness: MATS-style programs produce researchers for $30-50K versus $200-400K for PhD programs, with placement rates of 70-80% versus 90-95%.
- The AI safety talent shortage could expand from the current 30-50% of positions unfilled to 50-60% gaps by 2027 under scaling scenarios, with training pipelines producing only 220-450 researchers annually when 500-1,500 are needed.
- Competition from capabilities research creates severe salary disparities that worsen with seniority, ranging from 2-3x premiums at entry level to 4-25x at leadership levels, with senior capabilities roles offering $600K-2M+ versus $200-300K for safety roles.
Safety Researcher Gap Model
Overview
This model analyzes the persistent mismatch between AI safety researcher supply and organizational demand, with critical implications for alignment research progress timelines. The analysis reveals a structural talent shortage that represents one of the most binding constraints on AI safety progress.
Current estimates show 300-800 unfilled safety research positions (30-50% of total demand), with training pipelines producing only 220-450 qualified researchers annually when 500-1,500 are needed. Under scaling scenarios where AI safety becomes prioritized, this gap could expand to 50-60% by 2027, fundamentally limiting the field’s ability to address alignment difficulty before advanced systems deployment.
The model identifies four critical bottlenecks: insufficient training pathways, funding constraints, coordination failures, and competing demand from capabilities development, with intervention analysis suggesting targeted programs could cost-effectively expand supply.
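As a sanity check on the headline gap, the arithmetic can be reproduced directly from the ranges quoted in this model. The sketch below (Python, illustrative only; the pairing of interval extremes is an assumption, not published model code) derives the unfilled-position range from the demand and fill-rate figures in the Demand Assessment section:

```python
# Back-of-envelope check on the headline gap, using ranges from the
# Demand Assessment section below. Illustrative only.

demand = (850, 1_700)       # total open safety positions, 2024
fill_rate = (0.50, 0.70)    # share of positions actually filled

unfilled_lo = demand[0] * (1 - fill_rate[1])   # 850 * 0.30 = 255
unfilled_hi = demand[1] * (1 - fill_rate[0])   # 1700 * 0.50 = 850
print(f"Unfilled positions: ~{unfilled_lo:.0f}-{unfilled_hi:.0f}")
# ~255-850, consistent with the quoted 300-800 range
```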
Risk Assessment
| Dimension | Assessment | Evidence | Timeline |
|---|---|---|---|
| Severity | Critical - talent shortage limits all safety progress | 3-10x gap between needed and available researchers | Ongoing |
| Likelihood | Very High - structural problem worsening | 70-90% probability gap persists under AI scaling | 2025-2030 |
| Trend | Negative - gap widening faster than solutions | Pipeline growth 15-25%/year vs demand growth 30-100%/year | Deteriorating |
| Tractability | Medium-High - proven interventions available | MATS-style programs show 60-80% placement rates | Immediate opportunities |
Current Supply Analysis
Narrow Definition Supply (Technical AI Safety)
| Category | 2024 Estimate | Growth Rate | Quality Distribution |
|---|---|---|---|
| Full-time technical researchers | 300-500 | 20%/year | 20% A-tier, 50% B-tier, 30% C-tier |
| Safety-focused PhD students | 200-400 | 25%/year | 30% A-tier potential |
| Lab safety engineers | 500-1,000 | 30%/year | 10% A-tier, 60% B-tier |
| Total narrow supply | 1,000-1,900 | 25%/year | 15% A-tier overall |
Broader Definition Supply (Safety-Adjacent)
Organizations like Anthropic, OpenAI, and DeepMind employ researchers working on safety-relevant problems who don’t identify primarily as safety researchers.
| Category | 2024 Estimate | Conversion Rate to Safety |
|---|---|---|
| ML researchers with safety interest | 2,000-5,000 | 5-15% |
| Interpretability/robustness researchers | 1,000-2,000 | 20-40% |
| AI governance/policy researchers | 500-1,000 | 10-30% |
| Potential conversion pool | 3,500-8,000 | 10-25% |
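Multiplying the pool sizes by the conversion rates bounds what recruitment from adjacent fields could realistically contribute. A minimal sketch using the table's own numbers (the pairing of extremes is an assumption):

```python
# Potential recruits from the safety-adjacent pool, using the table above.
pool = (3_500, 8_000)        # safety-adjacent researchers
conversion = (0.10, 0.25)    # plausible transition fraction

print(f"Potential converts: {pool[0] * conversion[0]:.0f}-"
      f"{pool[1] * conversion[1]:.0f}")   # 350-2000
```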
Demand Assessment
Current Organizational Demand (2024)
| Organization Type | Open Positions | Fill Rate | Salary Range | Source |
|---|---|---|---|---|
| Frontier labs (safety teams) | 500-1,000 | 60-80% | $150-800K | Anthropic careers, OpenAI jobs |
| Academic safety groups | 200-400 | 40-60% | $80-200K | University job boards |
| Safety orgs (MIRI, CHAI, etc.) | 100-200 | 50-70% | $100-300K | 80,000 Hours job board |
| Government/policy roles (AISI) | 50-100 | 30-50% | $120-250K | USAjobs.gov |
| Total current demand | 850-1,700 | 50-70% | Varies | Multiple sources |
Projected Demand Under Scaling Scenarios
| Scenario | Description | 2027 Demand | Demand Multiple |
|---|---|---|---|
| Baseline | Current growth trajectory | 1,300-2,500 | 1.5x |
| Moderate Scaling | Safety becomes industry priority | 2,500-5,000 | 3x |
| Crisis Response | Government/industry mobilization | 4,000-17,000 | 5-10x |
| Manhattan Project | Wartime-level resource allocation | 10,000-30,000 | 12-18x |
Training Pipeline Bottlenecks
Pipeline Capacity Analysis
The training pipeline represents the most significant constraint on talent supply, with current pathways producing insufficient researchers to meet projected demand.
| Training Pathway | Annual Output | Time to Competence | Quality Level | Cost per Researcher |
|---|---|---|---|---|
| PhD programs (safety-focused) | 20-50 | 4-6 years | High | $200-400K total |
| MATS-style programs | 50-100 | 6-12 months | Medium-High | $30-50K |
| Self-study/independent | 100-200 | 1-3 years | Variable | $10-30K |
| Industry transition programs | 50-100 | 1-2 years | Medium | $50-100K |
| Total pipeline capacity | 220-450/year | 1-6 years | Mixed | $30-400K |
Pipeline Efficiency Metrics
Current training programs show significant variation in effectiveness and cost-efficiency:
| Program | Completion Rate | Placement Rate | Cost Efficiency | Success Factors |
|---|---|---|---|---|
| MATS | 85-90% | 70-80% | High | Mentorship, practical projects |
| SERI MATS | 80-85% | 60-70% | High | Research experience |
| PhD programs | 70-80% | 90-95% | Medium | Deep expertise, credentials |
| Bootcamps | 60-70% | 40-60% | Medium | Intensive format |
Bottleneck Deep Dive
Section titled “Bottleneck Deep Dive”Bottleneck 1: Training Pipeline Constraints
Problem: Current training capacity produces only 30-50% of needed researchers annually.
Quantitative Breakdown:
- Required new researchers (to close gap by 2027): 500-1,500/year
- Current pipeline output: 220-450/year
- Pipeline deficit: 280-1,050/year (roughly 55-70% shortfall)
Quality Distribution Issues:
- A-tier researchers needed: 200-400
- A-tier production: 50-100/year
- A-tier gap: 100-300 (50-75% of demand)
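The deficit figures above follow from pairing the low ends and the high ends of the need and output ranges. A one-function sketch (illustrative; the pairing convention is an assumption):

```python
# Pipeline deficit arithmetic, pairing low-with-low and high-with-high
# as the breakdown above does. Illustrative only.

def paired_deficit(need, output):
    return need[0] - output[0], need[1] - output[1]

lo, hi = paired_deficit(need=(500, 1_500), output=(220, 450))
print(f"Deficit: {lo}-{hi}/year "                      # 280-1050
      f"({lo / 500:.0%}-{hi / 1_500:.0%} shortfall)")  # 56%-70%
```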
Bottleneck 2: Funding Architecture
Organizations like Open Philanthropy provide substantial funding, but total resources remain insufficient for scaling scenarios.
| Funding Source | 2024 Allocation | Growth Rate | Sustainability |
|---|---|---|---|
| Coefficient Giving (formerly Open Philanthropy) | $50-100M | Stable | Medium-term |
| Frontier lab budgets | $100-300M | 20-30%/year | Market-dependent |
| Government funding | $20-50M | Slow | Policy-dependent |
| Other foundations | $10-30M | Variable | Uncertain |
| Total funding | $180-480M | 15-25%/year | Mixed |
Bottleneck 3: Competition from Capabilities Research
The racing dynamics between safety and capabilities research create severe talent competition, with capabilities roles offering substantially higher compensation.
| Experience Level | Safety Org Salary | Capabilities Lab Salary | Premium Ratio |
|---|---|---|---|
| Entry-level | $80-120K | $200-400K | 2-3x |
| Mid-level | $120-200K | $400-800K | 3-4x |
| Senior | $200-300K | $600K-2M+ | 3-7x |
| Leadership | $250-400K | $1M-10M+ | 4-25x |
Intervention Analysis
Section titled “Intervention Analysis”High-Impact Training Interventions
| Intervention | Annual Cost | Output Increase | Cost per Researcher | Implementation Timeline |
|---|---|---|---|---|
| Scale MATS programs 3x | $15-30M | +200/year | $75-150K | 6-12 months |
| New safety PhD programs | $40-80M | +80/year | $500K-1M | 2-3 years |
| Industry transition bootcamps | $20-40M | +100-200/year | $100-400K | 6-12 months |
| Online certification programs | $5-10M | +100-300/year | $17-100K | 3-6 months |
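The cost-per-researcher column divides annual cost by output increase, taking cross-extremes (lowest cost over highest output, and vice versa). A sketch under that assumption, using the table's own figures:

```python
# Cost per researcher = annual cost / output increase, cross-extremes.
# Figures are the table's own; the pairing convention is an assumption.

interventions = {
    "Scale MATS programs 3x":        ((15e6, 30e6), (200, 200)),
    "New safety PhD programs":       ((40e6, 80e6), (80, 80)),
    "Industry transition bootcamps": ((20e6, 40e6), (100, 200)),
    "Online certification programs": ((5e6, 10e6), (100, 300)),
}
for name, ((c_lo, c_hi), (n_lo, n_hi)) in interventions.items():
    best = c_lo / n_hi    # low cost, high output
    worst = c_hi / n_lo   # high cost, low output
    print(f"{name}: ${best / 1e3:.0f}K-${worst / 1e3:.0f}K per researcher")
```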
Retention and Quality Interventions
Current annual attrition rates of 16-32% represent significant talent loss that could be reduced through targeted interventions.
| Retention Strategy | Cost | Attrition Reduction | ROI Analysis |
|---|---|---|---|
| Competitive salary fund | $50-100M/year | 5-10 percentage points | 2-4x researcher replacement cost |
| Career development programs | $10-20M/year | 3-5 percentage points | 3-5x |
| Research infrastructure | $20-40M/year | 2-4 percentage points | 2-3x |
| Geographic flexibility | $5-10M/year | 2-3 percentage points | 4-6x |
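A rough way to size these effects: researchers retained per year equals headcount times the attrition-rate reduction. A hedged sketch using the narrow-definition supply range (the headcount choice is an assumption):

```python
# Researchers retained per year = headcount x attrition-rate reduction.
# Headcount uses the narrow-definition supply range; illustrative only.

headcount = (1_000, 1_900)
salary_fund_reduction = (0.05, 0.10)   # 5-10 percentage points

lo = headcount[0] * salary_fund_reduction[0]   # 50/year
hi = headcount[1] * salary_fund_reduction[1]   # 190/year
print(f"Salary fund retains ~{lo:.0f}-{hi:.0f} researchers/year")
```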
Scenario Modeling
Section titled “Scenario Modeling”Baseline Scenario: Current Trajectory
Under current trends, nominal supply keeps pace with demand through 2026, but demand growth overtakes pipeline output by 2027 (gap = demand − supply; negative values indicate nominal surplus, though matching frictions leave many positions unfilled even then):
| Year | Supply | Demand | Gap (Demand − Supply) | Gap % |
|---|---|---|---|---|
| 2024 | 1,500 | 1,300 | -200 | -15% |
| 2025 | 1,800 | 1,600 | -200 | -13% |
| 2026 | 2,100 | 2,000 | -100 | -5% |
| 2027 | 2,500 | 2,800 | +300 | +11% |
Crisis Response Scenario
If AI progress triggers safety prioritization, gaps could become critical:
| Year | Supply (Enhanced) | Demand (Crisis) | Gap (Demand − Supply) | Gap % |
|---|---|---|---|---|
| 2024 | 1,500 | 1,300 | -200 | -15% |
| 2025 | 2,200 | 3,000 | +800 | +27% |
| 2026 | 3,500 | 7,000 | +3,500 | +50% |
| 2027 | 6,000 | 15,000 | +9,000 | +60% |
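Both tables are consistent with smooth compound growth at different rates. The sketch below back-solves growth rates that roughly reproduce the 2027 endpoints (the rates themselves are fitted assumptions, not source estimates):

```python
# Scenario projector: compound growth from the 2024 midpoints. The growth
# rates are fitted to roughly reproduce the tables above, not sourced.

def project(start, rate, years=3):
    return [round(start * (1 + rate) ** t) for t in range(years + 1)]

scenarios = {
    "baseline": (project(1_500, 0.19), project(1_300, 0.29)),
    "crisis":   (project(1_500, 0.59), project(1_300, 1.26)),
}
for name, (supply, demand) in scenarios.items():
    for year, s, d in zip(range(2024, 2028), supply, demand):
        gap = d - s
        print(f"{name} {year}: supply {s:,}, demand {d:,}, "
              f"gap {gap:+,} ({gap / d:+.0%})")
```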
Historical Precedents
Section titled “Historical Precedents”Manhattan Project Comparison
The Manhattan Project provides insights into rapid scientific talent mobilization:
| Metric | Manhattan Project (1942-1945) | AI Safety (Current) | AI Safety (Mobilized) |
|---|---|---|---|
| Initial researcher pool | ≈100 nuclear physicists | ≈1,500 safety researchers | ≈1,500 |
| Peak workforce | ≈6,000 scientists/engineers | ≈2,000 (projected 2027) | ≈10,000 (potential) |
| Scaling factor | 60x in 3 years | 1.3x in 3 years | 6.7x in 3 years |
| Government priority | Maximum | Minimal | Hypothetical high |
| Resource allocation | $28B (2020 dollars) | ≈$500M annually | $5-10B annually |
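The scaling factors translate into very different required annual growth rates, which is worth making explicit (simple arithmetic on the table's figures):

```python
# Implied annual growth rate for each trajectory in the table above.

def annual_growth(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"Manhattan (100 -> 6,000 in 3y):    {annual_growth(100, 6_000, 3):.0%}/yr")     # ~291%
print(f"Current (1,500 -> 2,000 in 3y):    {annual_growth(1_500, 2_000, 3):.0%}/yr")   # ~10%
print(f"Mobilized (1,500 -> 10,000 in 3y): {annual_growth(1_500, 10_000, 3):.0%}/yr")  # ~88%
```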
Other Technology Mobilizations
| Program | Duration | Talent Scale-up | Success Factors |
|---|---|---|---|
| Apollo Program | 8 years | 20x | Clear goal, unlimited resources |
| COVID vaccine development | 1 year | 5x | Existing infrastructure, parallel efforts |
| Cold War cryptography | 10 years | 15x | Security priority, university partnerships |
Feedback Loop Analysis
Section titled “Feedback Loop Analysis”Positive Feedback Loops
Research Quality → Field Attraction:
- High-impact safety research increases field prestige
- Prestigious field attracts top-tier researchers
- Better researchers produce higher-impact research
Success → Funding → Scale:
- Visible safety progress builds funder confidence
- Increased funding enables program expansion
- Larger programs achieve economies of scale
Negative Feedback Loops
Capability Race → Brain Drain:
- AI race intensifies, driving higher capability salaries
- Safety researchers transition to better-compensated roles
- Reduced safety talent further slows progress
Progress Pessimism → Attrition:
- Slow safety progress relative to capabilities
- Researcher demoralization and career changes
- Talent loss further slows progress
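The brain-drain loop can be made concrete with a toy simulation in which attrition rises with the capabilities salary premium. Everything below is illustrative; the parameters are invented for exposition, not estimated:

```python
# Toy brain-drain loop: attrition scales with the capabilities salary
# premium, which the race bids up each year. All parameters are invented
# for illustration; only the qualitative dynamic matters.

def simulate(years=5, headcount=1_500.0, premium=3.0, pipeline=350,
             base_attrition=0.16, sensitivity=0.03, premium_growth=0.10):
    for _ in range(years):
        attrition = base_attrition + sensitivity * (premium - 1)
        headcount = headcount * (1 - attrition) + pipeline
        premium *= 1 + premium_growth   # race bids up capabilities pay
    return round(headcount)

print(simulate())                     # rising premium: headcount erodes
print(simulate(premium_growth=0.0))   # flat premium: headcount grows
```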
Geographic Distribution
Section titled “Geographic Distribution”Current Concentration
| Region | Safety Researchers | Major Organizations | Constraints |
|---|---|---|---|
| SF Bay Area | 40-50% | Anthropic, OpenAI, MIRI | High cost of living |
| Boston/Cambridge | 15-20% | MIT, Harvard | Limited industry positions |
| London | 10-15% | DeepMind, Oxford | Visa requirements |
| Other US | 15-20% | Various universities | Geographic dispersion |
| Other International | 10-15% | Scattered | Visa, funding constraints |
Geographic Bottlenecks
Visa and Immigration Issues:
- H-1B lottery system blocks international talent
- Security clearance requirements limit government roles
- Brexit complications affect EU-UK movement
Regional Capacity Constraints:
- Housing costs in AI hubs (SF, Boston) limit accessibility
- Limited remote work policies at some organizations
- Talent concentration reduces geographic resilience
Quality vs. Quantity Trade-offs
Section titled “Quality vs. Quantity Trade-offs”Researcher Tier Analysis
| Tier | Characteristics | Current Supply | Needed Supply | Impact Multiple |
|---|---|---|---|---|
| A-tier | Can lead research agendas, mentor others | 50-100 | 200-400 | 10-50x average |
| B-tier | Independent research, implementation | 200-500 | 800-1,200 | 3-5x average |
| C-tier | Execution, support roles | 500-1,000 | 1,000-2,000 | 1x baseline |
| D-tier | Adjacent skills, potential | 1,000+ | Variable | 0.3-0.5x |
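One way to formalize the leadership bottleneck below: weight each tier's headcount by its impact multiple and compare effective capacity with effective need. A sketch using table midpoints (the midpoint choices and the 30x A-tier multiple are assumptions within the table's stated ranges):

```python
# Impact-weighted capacity from the tier table, using rough midpoints.
# Multipliers come from the table's own ranges; treat output as directional.

tiers = {        # tier: (current midpoint, needed midpoint, impact multiple)
    "A": (75, 300, 30),      # midpoint of the 10-50x range
    "B": (350, 1_000, 4),
    "C": (750, 1_500, 1),
}

current = sum(n * mult for n, _, mult in tiers.values())   # 4,400
needed = sum(n * mult for _, n, mult in tiers.values())    # 14,500
print(f"Effective capacity: {current:,} vs {needed:,} needed "
      f"({current / needed:.0%}); A-tier dominates both totals")
```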
Strategic Implications
Leadership Bottleneck: The shortage of A-tier researchers who can set research directions and mentor others may be more critical than total headcount.
Optimal Resource Allocation:
- High-leverage: Develop A-tier researchers (long-term, high-cost)
- Medium-leverage: Scale B-tier production (medium-term, medium-cost)
- Low-leverage: Increase C-tier volume (short-term, low-cost)
Economic Impact Analysis
Section titled “Economic Impact Analysis”Opportunity Cost Assessment
The talent shortage imposes significant opportunity costs on AI safety progress:
| Lost Progress Type | Annual Value | Cumulative Impact |
|---|---|---|
| Research breakthroughs delayed | $100-500M | Compound delay in safety solutions |
| Interpretability progress | $50-200M | Reduced understanding of systems |
| Governance preparation | $20-100M | Policy lag behind technology |
| Total opportunity cost | $170-800M/year | Compounding safety lag |
Return on Investment
Talent development interventions show strong ROI compared to opportunity costs:
| Investment | Annual Cost | Researchers Added | ROI (5-year) |
|---|---|---|---|
| Training programs | $100M | 500 | 5-10x |
| Retention programs | $100M | 200 (net) | 3-7x |
| Infrastructure | $50M | 100 | 4-8x |
| Combined program | $250M | 800 | 4-9x |
Policy Recommendations
Immediate Actions (2025)
1. Scale Proven Programs:
- Expand MATS-style fellowships toward roughly 3x current capacity (see Intervention Analysis above)
- Launch industry transition bootcamps and online certification pilots
2. Remove Friction:
- Streamline H-1B process for AI safety roles
- Create safety-specific grant categories
- Establish talent-sharing agreements between organizations
Medium-term Reforms (2025-2027)
1. Institutional Development:
- Fund 10-20 new AI safety PhD programs
- Establish government AI safety research fellowships
- Create safety-focused postdoc exchange programs
2. Competitive Balance:
- Safety researcher salary competitiveness fund
- Equity/ownership programs at safety organizations
- Long-term career advancement pathways
Long-term Infrastructure (2027-2030)
1. National Capacity Building:
- AI Safety Corps (government service program)
- National AI Safety University Consortium
- International talent exchange agreements
2. Systemic Changes:
- Safety research requirements for AI development
- Academic tenure track positions in safety
- Industry safety certification programs
Key Uncertainties and Cruxes
Key Questions
- How much additional research progress would each marginal safety researcher actually produce?
- Can training time be compressed from years to months without quality loss?
- Will competition from capabilities research permanently prevent salary competitiveness?
- What fraction of the 'adjacent' researcher pool could realistically transition to safety focus?
- How much does geographic distribution matter for research productivity and coordination?
- What is the optimal ratio between A-tier, B-tier, and C-tier researchers?
Critical Research Questions
- Marginal Impact Assessment: Quantifying the relationship between researcher quantity/quality and safety progress
- Training Optimization: Identifying minimum viable training for productive safety research
- Retention Psychology: Understanding what motivates long-term commitment to safety work
- Coordination Effects: Measuring productivity gains from researcher collaboration and proximity
Model Limitations and Biases
Data Quality Issues
- Definition Ambiguity: No consensus on what constitutes “AI safety research”
- Hidden Supply: Many researchers work on safety-relevant problems without identifying as safety researchers
- Quality Assessment: Subjective researcher quality ratings introduce bias
- Rapid Change: Field dynamics evolve faster than data collection cycles
Methodological Limitations
- Linear Assumptions: Model assumes linear relationships between resources and outcomes
- Quality-Quantity Simplification: Real productivity relationships are complex and nonlinear
- Geographic Aggregation: Treats globally distributed talent as fungible
- Ignored Time Lags: Training investments and productivity gains have complex timing relationships the model simplifies away
Prediction Uncertainties
- Scenario Dependence: Projections highly sensitive to AI development trajectory
- Policy Response: Unknown government/industry response to demonstrated AI risks
- Technology Disruption: New training methods or research tools could change dynamics
- Field Evolution: Safety research priorities and methods continue evolving
Related Risk Models
This talent gap model connects to several other risks that could compound or mitigate the shortage:
- Expertise Atrophy: If AI tools replace human expertise, safety researcher skills may degrade
- Racing Dynamics: Competition between labs drives talent toward capabilities rather than safety
- Flash Dynamics: Rapid AI development could outpace even scaled talent pipelines
- Scientific Knowledge Corruption: Poor incentives could reduce effective research output per researcher
Strategic Implications
The talent shortage represents a foundational constraint on AI safety progress that could determine whether adequate safety research occurs before advanced AI deployment. Unlike funding or technical challenges, talent development has long lead times that make delays especially costly.
For Organizations: Talent competition will likely intensify, making retention strategies and alternative talent sources critical for organizational success.
For Policymakers: Early intervention in talent development could provide significant leverage over long-term AI safety outcomes, while delayed action may prove ineffective.
For Individual Researchers: Career decisions made in the next 2-3 years could have outsized impact on field development during a critical period.
Sources and Resources
Research and Analysis
| Source | Type | Key Findings |
|---|---|---|
| 80,000 Hours AI Safety Career Reviews | Career analysis | Talent bottlenecks, career pathways |
| Open Philanthropy AI Grant Database | Funding data | Investment patterns, organization capacity |
| MATS Program Outcomes | Training data | Completion rates, placement success |
| AI Safety Support Talent Survey | Field survey | Researcher demographics, career paths |
Training Programs and Organizations
| Program | Focus | Contact |
|---|---|---|
| MATS (ML Alignment & Theory Scholars) | Research training | applications@matsprogram.org |
| ARENA (Alignment Research Engineer Accelerator) | Technical bootcamps | contact@arena.education |
| AI Safety Support | Career guidance | advice@aisafetysupport.org |
| 80,000 Hours | Career planning | team@80000hours.org |
Policy and Governance Resources
| Organization | Focus | Link |
|---|---|---|
| Centre for the Governance of AI (GovAI) | Policy research | https://www.governance.ai/ |
| Partnership on AI | Industry coordination | https://www.partnershiponai.org/ |
| Future of Humanity Institute | Long-term research | https://www.fhi.ox.ac.uk/ |