Longterm Wiki

AI Safety Intervention Portfolio

intervention-portfolio (approach)
Path: /knowledge-base/responses/intervention-portfolio/
Entity ID (EID): E458
2 backlinks · Quality: 91 · Updated: 2026-03-13

Page Record — database.json, merged from MDX frontmatter + Entity YAML + computed metrics at build time
{
  "id": "intervention-portfolio",
  "numericId": null,
  "path": "/knowledge-base/responses/intervention-portfolio/",
  "filePath": "knowledge-base/responses/intervention-portfolio.mdx",
  "title": "AI Safety Intervention Portfolio",
  "quality": 91,
  "readerImportance": 61,
  "researchImportance": 49,
  "tacticalValue": null,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-03-13",
  "dateCreated": "2026-02-15",
  "llmSummary": "Provides a strategic framework for AI safety resource allocation by mapping 13+ interventions against 4 risk categories, evaluating each on ITN dimensions, and identifying portfolio gaps (epistemic resilience severely neglected, technical work over-concentrated in frontier labs). Total field investment ~\\$650M annually with 1,100 FTEs (21% annual growth), but 85% of external funding from 5 sources and safety/capabilities ratio at only 0.5-1.3%. Recommends rebalancing from very high RLHF investment toward evaluations (very high priority), AI control and compute governance (both high priority), with epistemic resilience increasing from very low to medium allocation.",
  "description": "Strategic overview of AI safety interventions analyzing ~\\$650M annual investment across 1,100 FTEs. Maps 13+ interventions against 4 risk categories with ITN prioritization. Key finding: 85% of external funding from 5 sources, safety/capabilities ratio at 0.5-1.3%, and epistemic resilience severely neglected (under 5% of portfolio). Recommends rebalancing toward evaluations, AI control, and compute governance.",
  "ratings": {
    "novelty": 7,
    "rigor": 7.5,
    "actionability": 8,
    "completeness": 7.5
  },
  "category": "responses",
  "subcategory": "alignment",
  "clusters": [
    "ai-safety",
    "governance",
    "community"
  ],
  "metrics": {
    "wordCount": 2836,
    "tableCount": 14,
    "diagramCount": 1,
    "internalLinks": 29,
    "externalLinks": 54,
    "footnoteCount": 0,
    "bulletRatio": 0.07,
    "sectionCount": 21,
    "hasOverview": true,
    "structuralScore": 15
  },
  "suggestedQuality": 100,
  "updateFrequency": 21,
  "evergreen": true,
  "wordCount": 2836,
  "unconvertedLinks": [
    {
      "text": "Coefficient Giving's 2025 RFP",
      "url": "https://www.openphilanthropy.org/request-for-proposals-technical-ai-safety-research/",
      "resourceId": "913cb820e5769c0b",
      "resourceTitle": "Open Philanthropy"
    },
    {
      "text": "AI Safety Field Growth Analysis",
      "url": "https://forum.effectivealtruism.org/posts/7YDyziQxkWxbGmF3u/ai-safety-field-growth-analysis-2025",
      "resourceId": "d5970e4ef7ed697f",
      "resourceTitle": "AI Safety Field Growth Analysis 2025"
    },
    {
      "text": "AI Safety Field Growth Analysis",
      "url": "https://forum.effectivealtruism.org/posts/7YDyziQxkWxbGmF3u/ai-safety-field-growth-analysis-2025",
      "resourceId": "d5970e4ef7ed697f",
      "resourceTitle": "AI Safety Field Growth Analysis 2025"
    },
    {
      "text": "International AI Safety Report 2025",
      "url": "https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025",
      "resourceId": "b163447fdc804872",
      "resourceTitle": "International AI Safety Report 2025"
    },
    {
      "text": "Coefficient Giving analysis",
      "url": "https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/",
      "resourceId": "0b2d39c371e3abaa",
      "resourceTitle": "AI Safety and Security Need More Funders"
    },
    {
      "text": "Coefficient Giving",
      "url": "https://www.openphilanthropy.org/",
      "resourceId": "dd0cf0ff290cc68e",
      "resourceTitle": "Open Philanthropy grants database"
    },
    {
      "text": "AI Safety Fund",
      "url": "https://www.frontiermodelforum.org/ai-safety-fund/",
      "resourceId": "6bc74edd147a374b",
      "resourceTitle": "AI Safety Fund"
    },
    {
      "text": "AI Safety Field Growth Analysis 2025",
      "url": "https://forum.effectivealtruism.org/posts/7YDyziQxkWxbGmF3u/ai-safety-field-growth-analysis-2025",
      "resourceId": "d5970e4ef7ed697f",
      "resourceTitle": "AI Safety Field Growth Analysis 2025"
    },
    {
      "text": "Redwood Research received \\$1.2M",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "GovAI",
      "url": "https://www.governance.ai/research",
      "resourceId": "571cb6299c6d27cf",
      "resourceTitle": "Governance research"
    },
    {
      "text": "MIT Technology Review named mechanistic interpretability a 2026 Breakthrough Technology",
      "url": "https://www.technologyreview.com/2026/01/12/1130003/mechanistic-interpretability-ai-research-models-2026-breakthrough-technologies/",
      "resourceId": "3a4cf664bf7b27a8",
      "resourceTitle": "Mechanistic interpretability: 10 Breakthrough Technologies 2026 | MIT Technology Review"
    },
    {
      "text": "METR",
      "url": "https://metr.org/",
      "resourceId": "45370a5153534152",
      "resourceTitle": "metr.org"
    },
    {
      "text": "NIST invested \\$20M",
      "url": "https://www.nist.gov/news-events/news/2025/12/nist-launches-centers-ai-manufacturing-and-critical-infrastructure",
      "resourceId": "563d1d66cd664c48",
      "resourceTitle": "NIST News: NIST Launches Centers for AI in Manufacturing and Critical Infrastructure"
    },
    {
      "text": "International AI Safety Report 2025",
      "url": "https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025",
      "resourceId": "b163447fdc804872",
      "resourceTitle": "International AI Safety Report 2025"
    },
    {
      "text": "\\$110-130 million in 2024",
      "url": "https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/",
      "resourceId": "0b2d39c371e3abaa",
      "resourceTitle": "AI Safety and Security Need More Funders"
    },
    {
      "text": "Coefficient Giving providing ~60%",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "Superalignment Fast Grants",
      "url": "https://openai.com/index/superalignment-fast-grants/",
      "resourceId": "82eb0a4b47c95d2a",
      "resourceTitle": "OpenAI Superalignment Fast Grants"
    },
    {
      "text": "CAIS (\\$1.5M)",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "Redwood Research (\\$1.2M)",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "UK/EU government initiatives (≈\\$14M total)",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "Coefficient Giving",
      "url": "https://www.openphilanthropy.org/",
      "resourceId": "dd0cf0ff290cc68e",
      "resourceTitle": "Open Philanthropy grants database"
    },
    {
      "text": "AI Safety Fund",
      "url": "https://www.frontiermodelforum.org/ai-safety-fund/",
      "resourceId": "6bc74edd147a374b",
      "resourceTitle": "AI Safety Fund"
    },
    {
      "text": "≈\\$100B in AI data center capex (2024)",
      "url": "https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/",
      "resourceId": "0b2d39c371e3abaa",
      "resourceTitle": "AI Safety and Security Need More Funders"
    },
    {
      "text": "Diversify funding sources",
      "url": "https://www.insidephilanthropy.com/home/whos-funding-ai-regulation-and-safety",
      "resourceId": "d16ff456256936fa",
      "resourceTitle": "Inside Philanthropy - AI Regulation Funding"
    },
    {
      "text": "Humanity AI (\\$100M)",
      "url": "https://www.insidephilanthropy.com/home/whos-funding-ai-regulation-and-safety",
      "resourceId": "d16ff456256936fa",
      "resourceTitle": "Inside Philanthropy - AI Regulation Funding"
    },
    {
      "text": "Over-optimized for researchers",
      "url": "https://forum.effectivealtruism.org/posts/m5dDrMfHjLtMu293G/ai-safety-s-talent-pipeline-is-over-optimised-for",
      "resourceId": "4a117e76e94af55d",
      "resourceTitle": "EA Forum analysis"
    },
    {
      "text": "limited effectiveness against deceptive alignment",
      "url": "https://arxiv.org/abs/2406.18346",
      "resourceId": "bf50045e699d0004",
      "resourceTitle": "AI Alignment through RLHF"
    },
    {
      "text": "Coefficient Giving provides ≈60% of external funding",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "US and UK receive majority of funding",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "MIRI (\\$1.1M)",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "Pipeline over-optimized for researchers",
      "url": "https://forum.effectivealtruism.org/posts/m5dDrMfHjLtMu293G/ai-safety-s-talent-pipeline-is-over-optimised-for",
      "resourceId": "4a117e76e94af55d",
      "resourceTitle": "EA Forum analysis"
    },
    {
      "text": "Coefficient Giving alone provides 60%",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "Coefficient Giving Progress 2024",
      "url": "https://www.openphilanthropy.org/research/our-progress-in-2024-and-plans-for-2025/",
      "resourceId": "7ca35422b79c3ac9",
      "resourceTitle": "Open Philanthropy: Progress in 2024 and Plans for 2025"
    },
    {
      "text": "AI Safety Funding Situation Overview",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "AI Safety Needs More Funders",
      "url": "https://coefficientgiving.org/research/ai-safety-and-security-need-more-funders/",
      "resourceId": "0b2d39c371e3abaa",
      "resourceTitle": "AI Safety and Security Need More Funders"
    },
    {
      "text": "AI Safety Field Growth Analysis 2025",
      "url": "https://forum.effectivealtruism.org/posts/7YDyziQxkWxbGmF3u/ai-safety-field-growth-analysis-2025",
      "resourceId": "d5970e4ef7ed697f",
      "resourceTitle": "AI Safety Field Growth Analysis 2025"
    },
    {
      "text": "International AI Safety Report 2025",
      "url": "https://internationalaisafetyreport.org/publication/international-ai-safety-report-2025",
      "resourceId": "b163447fdc804872",
      "resourceTitle": "International AI Safety Report 2025"
    },
    {
      "text": "Future of Life AI Safety Index 2025",
      "url": "https://futureoflife.org/ai-safety-index-summer-2025/",
      "resourceId": "df46edd6fa2078d1",
      "resourceTitle": "FLI AI Safety Index Summer 2025"
    },
    {
      "text": "Coefficient Giving Technical AI Safety RFP",
      "url": "https://www.openphilanthropy.org/request-for-proposals-technical-ai-safety-research/",
      "resourceId": "913cb820e5769c0b",
      "resourceTitle": "Open Philanthropy"
    },
    {
      "text": "80,000 Hours: AI Risk",
      "url": "https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/",
      "resourceId": "d9fb00b6393b6112",
      "resourceTitle": "80,000 Hours. \"Risks from Power-Seeking AI Systems\""
    },
    {
      "text": "RLHF Limitations Paper",
      "url": "https://arxiv.org/abs/2406.18346",
      "resourceId": "bf50045e699d0004",
      "resourceTitle": "AI Alignment through RLHF"
    },
    {
      "text": "ITU Annual AI Governance Report 2025",
      "url": "https://www.itu.int/epublications/en/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai/en/",
      "resourceId": "ce43b69bb5fb00b2",
      "resourceTitle": "ITU Annual AI Governance Report 2025"
    }
  ],
  "unconvertedLinkCount": 42,
  "convertedLinkCount": 0,
  "backlinkCount": 2,
  "hallucinationRisk": {
    "level": "low",
    "score": 25,
    "factors": [
      "no-citations",
      "high-rigor",
      "conceptual-content",
      "high-quality"
    ]
  },
  "entityType": "approach",
  "redundancy": {
    "maxSimilarity": 16,
    "similarPages": [
      {
        "id": "intervention-effectiveness-matrix",
        "title": "Intervention Effectiveness Matrix",
        "path": "/knowledge-base/models/intervention-effectiveness-matrix/",
        "similarity": 16
      },
      {
        "id": "intervention-timing-windows",
        "title": "Intervention Timing Windows",
        "path": "/knowledge-base/models/intervention-timing-windows/",
        "similarity": 14
      },
      {
        "id": "ai-risk-portfolio-analysis",
        "title": "AI Risk Portfolio Analysis",
        "path": "/knowledge-base/models/ai-risk-portfolio-analysis/",
        "similarity": 13
      },
      {
        "id": "capability-alignment-race",
        "title": "Capability-Alignment Race Model",
        "path": "/knowledge-base/models/capability-alignment-race/",
        "similarity": 13
      },
      {
        "id": "risk-interaction-matrix",
        "title": "Risk Interaction Matrix Model",
        "path": "/knowledge-base/models/risk-interaction-matrix/",
        "similarity": 13
      }
    ]
  },
  "changeHistory": [
    {
      "date": "2026-02-15",
      "branch": "claude/extract-wiki-interventions-WpOs4",
      "title": "Extract wiki proposals as structured data",
      "summary": "Created two new data layers:\n1. **Interventions** (broad categories): Extended `Intervention` schema with risk coverage matrix, ITN prioritization, funding data. Created `data/interventions.yaml` with 14 broad intervention categories. `InterventionCard`/`InterventionList` components.\n2. **Proposals** (narrow, tactical): New `Proposal` data type for specific, speculative, actionable items extracted from wiki pages. Created `data/proposals.yaml` with 27 proposals across 6 domains (philanthropic, financial, governance, technical, biosecurity, field-building). Each has cost/EV estimates, honest concerns, feasibility, stance (collaborative/adversarial). `ProposalCard`/`ProposalList` components.\n\nPost-review fixes: Fixed 13 incorrect wikiPageId E-codes in interventions.yaml (used numeric IDs instead of entity slugs). Added Intervention + Proposal to schema validator. Extracted shared badge color maps from 4 components into `badge-styles.ts`. Removed unused `client:load` prop and `fundingShare` destructure.",
      "pr": 141
    }
  ],
  "coverage": {
    "passing": 10,
    "total": 13,
    "targets": {
      "tables": 11,
      "diagrams": 1,
      "internalLinks": 23,
      "externalLinks": 14,
      "footnotes": 9,
      "references": 9
    },
    "actuals": {
      "tables": 14,
      "diagrams": 1,
      "internalLinks": 29,
      "externalLinks": 54,
      "footnotes": 0,
      "references": 19,
      "quotesWithQuotes": 0,
      "quotesTotal": 0,
      "accuracyChecked": 0,
      "accuracyTotal": 0
    },
    "items": {
      "llmSummary": "green",
      "schedule": "green",
      "entity": "green",
      "editHistory": "green",
      "overview": "green",
      "tables": "green",
      "diagrams": "green",
      "internalLinks": "green",
      "externalLinks": "green",
      "footnotes": "red",
      "references": "green",
      "quotes": "red",
      "accuracy": "red"
    },
    "editHistoryCount": 1,
    "ratingsString": "N:7 R:7.5 A:8 C:7.5"
  },
  "readerRank": 233,
  "researchRank": 287,
  "recommendedScore": 234.31
}
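The record above is described as a build-time merge of three sources (MDX frontmatter, Entity YAML, computed metrics). A minimal sketch of how such a merge could work, assuming later sources win on key collisions — the function and field names here are illustrative, not the wiki's actual build code:

```typescript
// Hypothetical sketch of the merge that produces a database.json record.
// The real pipeline's precedence rules and field names may differ.
type PageRecord = Record<string, unknown>;

function mergePageRecord(
  frontmatter: PageRecord, // authored in the .mdx file
  entityYaml: PageRecord,  // entity-level data (e.g. entityType)
  computed: PageRecord     // metrics derived at build time (wordCount, ...)
): PageRecord {
  // Object spread: keys in later sources override earlier ones,
  // so computed metrics take precedence over entity data,
  // which takes precedence over frontmatter.
  return { ...frontmatter, ...entityYaml, ...computed };
}

const record = mergePageRecord(
  { id: "intervention-portfolio", quality: 91 },
  { entityType: "approach" },
  { wordCount: 2836, backlinkCount: 2 }
);
console.log(record.entityType); // "approach"
console.log(record.wordCount);  // 2836
```

The spread-based merge keeps the layers independent: frontmatter stays human-edited, while computed fields can be regenerated on every build without touching the source files.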
External Links

No external links

Backlinks (2)
id | title | type
field-building-analysis | AI Safety Field Building Analysis | approach
__index__ (/knowledge-base/responses) | Safety Responses | concept