Longterm Wiki
Pre-TAI Capital Deployment: $100B-$300B+ Spending Analysis

Analysis of how frontier AI labs (Anthropic, OpenAI, Google DeepMind) could deploy \$100-300B+ before TAI. Compute infrastructure absorbs 50-65% of spending (\$200-400B+ across the industry), with Stargate alone at \$500B committed. Safety spending remains at 1-5% (\$1-15B) representing different allocation choices across labs. Historical analogies (Manhattan Project \$30B, Apollo \$200B) provide context for current AI investment levels. Key finding: the spending pattern—and especially the safety allocation—is a variable that other organizations, governments, and funders are actively planning around.


Overview

The frontier AI industry is deploying capital at historically large scales. In 2025 alone, the five largest AI-adjacent companies (Microsoft, Google, Amazon, Meta, and Oracle) guided for $355-400 billion in combined capital expenditure, with an estimated 50-80% directed toward AI infrastructure.[1][2] Individual AI labs are raising and spending at levels that would have seemed implausible two years earlier: OpenAI anchors the $500 billion Stargate project, Anthropic has raised $37B+ at a $350B valuation on $9B ARR, and Google has committed $75B in 2025 capex largely for AI.[3][4][5]

This analysis examines: How could frontier AI labs collectively deploy $100-300B+ before transformative AI (TAI) arrives, and what does this spending pattern mean for organizations trying to plan around it?

This question matters because the allocation decisions—how much goes to compute vs. safety, infrastructure vs. talent, proprietary development vs. open research—will shape the trajectory of AI development and the landscape in which every other actor (governments, philanthropies, startups, academia, civil society) must operate.

Scale of Capital Flows

Total AI Industry Investment (2024-2028 Projections)

| Category | 2024 Actual | 2025 Committed | 2026-2028 Projected | Cumulative 2024-2028 |
|---|---|---|---|---|
| Big Tech Capex (AI-related) | ≈$180B | ≈$250-280B | $250-400B/year | $1.2-2.0T |
| AI Lab Funding (VC + corporate) | ≈$80B | ≈$100B+ | $50-150B/year | $350-650B |
| Government AI Programs | ≈$30B | ≈$50B | $40-80B/year | $190-350B |
| Total AI-Related Capital | ≈$290B | ≈$470B | $340-630B/year | $1.7-3.0T |

Sources: Estimates based on company filings, announced commitments, and industry projections. Confidence intervals: ±20% for 2025-2026, ±40% for 2027-2028.
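As a sanity check on the cumulative column, the annual figures can be summed directly. This is an illustrative back-of-envelope sketch using the table's own low and high estimates (all values in $B):

```python
# Sum the table's annual totals (in $B): 2024 actual, 2025 committed,
# and three years (2026-2028) at the projected annual low and high.
low = 290 + 470 + 3 * 340    # low end of each column
high = 290 + 470 + 3 * 630   # high end of each column

print(f"Cumulative 2024-2028: ${low / 1000:.2f}T to ${high / 1000:.2f}T")
```

Simple summation gives roughly $1.8-2.7T, inside the stated $1.7-3.0T range; the published range is slightly wider because the endpoints carry independent uncertainty rather than moving together.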

For historical context, the Manhattan Project cost approximately $30 billion in 2024 dollars, the Apollo program roughly $200 billion, and the Human Genome Project about $5 billion. Current annual AI spending exceeds the total cost of each of these government megaprojects, though it operates under different organizational structures and objectives.

Individual Lab Capital Positions

| Lab | Total Raised / Available | Annual Revenue | Annual Burn Rate | Projected Spending (2025-2030) |
|---|---|---|---|---|
| OpenAI | $37B+ raised; Stargate $500B committed | $20B ARR | ≈$9B/year est. (2025) | $100-200B+ |
| Anthropic | $37B+ raised; Amazon $8B anchor | $9B ARR | $5-7B/year est. | $50-100B+ |
| Google DeepMind | Internal (Alphabet $75B capex 2025) | N/A (internal) | Substantial | $100-200B+ |
| Meta AI | Internal ($60-65B capex 2025) | N/A (internal) | Substantial | $80-150B+ |
| xAI | $12B raised (Dec 2024) | Early stage | Aggressive | $20-50B+ |

Note: Internal spending by Google and Meta is allocated across many projects; AI-specific figures are approximate based on public guidance that majority of capex is AI-related.

Spending Category Breakdown

Estimated Allocation for a Frontier AI Lab ($100B Budget)

(Diagram: allocation shares by category; the detailed breakdown follows in the table below.)

Detailed Category Analysis

| Category | Share | On $100B | On $300B | Key Constraints | Growth Rate |
|---|---|---|---|---|---|
| Compute Infrastructure | 50-65% | $50-65B | $150-195B | Power, land, TSMC capacity | 40-60%/year |
| Model Training Compute | 10-20% | $10-20B | $30-60B | GPU supply, algorithmic efficiency | 100%+/year |
| Talent | 10-15% | $10-15B | $30-45B | Researcher supply | 20-30%/year |
| R&D (Non-Compute) | 5-10% | $5-10B | $15-30B | Research direction clarity | 30-40%/year |
| Safety & Alignment | 1-5% | $1-5B | $3-15B | Absorptive capacity, talent | 30-50%/year |
| Acquisitions | 2-8% | $2-8B | $6-24B | Regulatory approval, targets | Variable |
| Operations | 3-5% | $3-5B | $9-15B | Scaling org complexity | 15-20%/year |

Source: Author estimates based on public spending announcements, company filings, and industry surveys. Confidence intervals: ±10-15% for each category.
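The share ranges above translate mechanically into dollar ranges for any budget level. A minimal sketch (category names and shares are taken from the table; the $100B budget is the hypothetical used throughout this section):

```python
# Dollar ranges ($B) implied by each category's share range on a given budget.
shares = {  # category: (low share, high share)
    "Compute Infrastructure": (0.50, 0.65),
    "Model Training Compute": (0.10, 0.20),
    "Talent": (0.10, 0.15),
    "R&D (Non-Compute)": (0.05, 0.10),
    "Safety & Alignment": (0.01, 0.05),
    "Acquisitions": (0.02, 0.08),
    "Operations": (0.03, 0.05),
}

def dollar_ranges(budget_b: float) -> dict:
    """Map each category to its (low, high) spend in $B."""
    return {k: (lo * budget_b, hi * budget_b) for k, (lo, hi) in shares.items()}

for category, (lo, hi) in dollar_ranges(100).items():
    print(f"{category}: ${lo:.0f}-{hi:.0f}B")
```

Note that the low shares sum to 0.81 and the high shares to 1.28: the ranges express per-category uncertainty, not a partition that must total 100%.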

Category 1: Compute Infrastructure (50-65%)

The majority of capital goes to building and operating data centers at frontier AI scale:

Data Center Construction: A single large AI data center costs $10-50 billion and takes 2-4 years to build. The Stargate project envisions a network of facilities across the U.S. totaling $500 billion over 4+ years.[6] Cost drivers include:

| Component | Cost Share | Key Constraint | Key Supplier |
|---|---|---|---|
| GPUs/Accelerators | 40-50% | TSMC fab capacity, HBM supply | NVIDIA (80-90% share) |
| Networking | 10-15% | InfiniBand/Ethernet at scale | NVIDIA (InfiniBand), Broadcom |
| Power Infrastructure | 15-20% | Grid connections, generation | Utilities, nuclear (SMR) |
| Construction/Land | 10-15% | Permitting, water cooling | Regional |
| Cooling Systems | 5-10% | Liquid cooling at density | Specialized vendors |

Power Requirements: Frontier AI data centers require 100MW-1GW+ of power each. U.S. data center power consumption is currently around 200 TWh/year, and Goldman Sachs Research projects data center power demand to grow roughly 160% by 2030.[7] This is driving investment in dedicated power generation, including nuclear capacity (such as small modular reactors), natural gas plants, and large-scale solar and battery installations.

See AI Megaproject Infrastructure for deeper analysis of infrastructure buildout economics.

Category 2: Model Training (10-20%)

Training costs scale with each model generation, though algorithmic efficiency improvements (approximately doubling every 8 months according to Epoch AI's analysis) partially offset raw compute scaling:

| Generation | Training Cost | Compute (FLOP) | Timeline | Examples |
|---|---|---|---|---|
| GPT-4 class (2023) | $50-100M | ≈10²⁵ | 2022-2023 | GPT-4, Claude 3 |
| GPT-5 class (2025) | $500M-2B | ≈10²⁶ | 2024-2025 | GPT-5, Claude Opus 4 |
| Next generation (2026-27) | $2-10B | ≈10²⁷ | 2025-2027 | Projected |
| Beyond (2028+) | $10-50B+ | ≈10²⁸+ | 2027+ | Speculative |

Sources: Training cost estimates based on public statements from OpenAI, Anthropic, and independent analysis by SemiAnalysis. Compute estimates from Epoch AI.

Training costs represent a smaller share of total spending than infrastructure because training runs, while expensive, are episodic—a frontier training run takes months, not years. The infrastructure to support continuous inference and serving typically costs more in aggregate.
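The efficiency offset mentioned above can be made concrete. Assuming the cited ~8-month doubling in algorithmic efficiency continues to hold (an extrapolation, not a guarantee), the compute needed to reach a fixed capability level shrinks as:

```python
# Compute reduction factor for a fixed capability target, under an assumed
# 8-month doubling of algorithmic efficiency (per Epoch AI's estimate).
def efficiency_gain(months: float, doubling_months: float = 8.0) -> float:
    """Return the factor by which required compute falls after `months`."""
    return 2 ** (months / doubling_months)

print(efficiency_gain(24))  # three doublings over two years -> 8x
```

On this assumption, a training run planned two years out needs roughly 8x less compute for the same target capability, partially offsetting the roughly 10x per-generation growth in raw compute shown in the table above.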

Category 3: Talent (10-15%)

The AI talent market is concentrated and compensation-intensive. Research by Stanford HAI and McKinsey suggests approximately 5,000-10,000 researchers globally are capable of contributing to frontier AI development, with perhaps 500-1,000 at the highest level.

| Role | Median Compensation | Range | Supply Constraint |
|---|---|---|---|
| Senior Research Scientist | $800K-1.5M | $500K-3M+ | ≈500 globally at frontier level |
| ML Engineer (Senior) | $400K-800K | $250K-1.2M | ≈5,000 at frontier level |
| Safety Researcher (Senior) | $400K-700K | $250K-1M | ≈200 at frontier level |
| Research Engineer | $250K-500K | $150K-700K | ≈10,000 at frontier level |

Sources: Compensation data from levels.fyi, Rora.ai, and industry surveys. Supply estimates based on conference attendance data, publication records, and surveys by McKinsey and Stanford HAI.

At 5,000-10,000 employees per major lab and $400K-1M+ average total compensation for technical staff, talent costs of $5-10B/year per lab are plausible at scale.
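The $5-10B/year figure follows from simple headcount arithmetic. A hedged sketch (headcount and compensation ranges are the estimates quoted above; real payrolls add equity refreshes and overhead this ignores):

```python
# Annual talent cost ($B) = headcount x average total compensation ($M).
def talent_cost_b(headcount: int, avg_comp_m: float) -> float:
    return headcount * avg_comp_m / 1000.0

low = talent_cost_b(5_000, 0.4)    # 5,000 staff at $400K average
high = talent_cost_b(10_000, 1.0)  # 10,000 staff at $1M average
print(f"${low:.0f}-{high:.0f}B/year")
```

The full implied range is $2-10B/year; the $5-10B figure corresponds to the larger-headcount, higher-compensation end of these estimates.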

See AI Talent Market Dynamics for detailed analysis of talent constraints and scaling.

Category 4: Safety & Alignment (1-5%)

Current safety spending across the industry is approximately $700M-1.25B/year, representing roughly 1-5% of total AI lab spending. This varies substantially by lab:

| Lab | Estimated Safety Spend | % of Total | Safety Researchers | Focus Areas |
|---|---|---|---|---|
| Anthropic | $400-700M/year | 5-8% | 100-200+ | Constitutional AI, interpretability, evals |
| OpenAI | $100-200M/year | 1-3% | Reduced (post-2024 departures) | Superalignment (disbanded), evals |
| Google DeepMind | $150-300M/year | 2-4% | 200-300 | Scalable oversight, robustness |
| Others | $50-100M/year | Variable | Variable | Various |

Sources: Safety spending estimates based on public team sizes, average compensation data, and analysis of published safety research output. Anthropic's allocation discussed in Anthropic Valuation Analysis.

The difference between a 1% allocation and a 5% allocation on a $200B budget represents $8 billion in additional safety investment, roughly 6-11x the current industry-wide annual total. Whether this difference represents under-investment, optimal allocation, or over-investment relative to research tractability remains uncertain and depends on absorptive capacity analysis.

See Safety Spending at Scale for analysis of what these budget levels could accomplish.

Historical Megaproject Comparison

| Project | Total Cost (2024 $) | Duration | Peak Annual Spend | Workforce | Outcome |
|---|---|---|---|---|---|
| Manhattan Project | $30B | 4 years | $12B | 125,000 | Nuclear weapons |
| Apollo Program | $200B | 11 years | $25B | 400,000 | Moon landing |
| Interstate Highway System | $600B | 35 years | $25B | Millions | 48,000 miles |
| Human Genome Project | $5B | 13 years | $500M | ≈3,000 | Genome sequenced |
| ITER Fusion | $35B+ | 20+ years | $3B | 5,000+ | Ongoing |
| Stargate AI | $500B committed | 4+ years | $125B+ | TBD | AI infrastructure |
| Total Big Tech AI Capex (2025) | $355-400B | 1 year | $355-400B | Millions | AI infrastructure |

The AI buildout differs from prior megaprojects in several ways:

  1. Speed: Capital is being deployed faster than prior megaprojects. The Interstate Highway System took 35 years; comparable capital is being committed to AI in 3-5 years.
  2. Private sector leadership: Prior megaprojects were government-led. AI investment is predominantly private, driven by competitive dynamics and profit incentives.
  3. Uncertain objective: Manhattan and Apollo had defined technical goals. AI labs are scaling toward transformative AI without consensus on definition or timeline.
  4. Compounding potential: Unlike physical infrastructure, AI capabilities may compound—each generation of models may accelerate development of the next.

For comparison with other technology buildouts (5G networks, fiber optic infrastructure, cloud data centers), the 5G network buildout globally is estimated at $1-1.5T over 10 years according to GSMA Intelligence, while global cloud infrastructure spending reached $200B+ annually by 2024 according to Gartner. AI infrastructure spending is comparable in scale but concentrated in a shorter timeframe.

Timeline-Dependent Spending Scenarios

Capital deployment depends critically on when TAI arrives. Below are three scenarios with different spending patterns and implications:

Scenario 1: Short Timeline (TAI by 2027-2028)

| Characteristic | Assessment |
|---|---|
| Total Industry Spend | $500B-1T |
| Spending Pattern | Sprint: maximize compute now, optimize efficiency later |
| Infrastructure | Repurpose existing data centers; shortage-driven premium pricing |
| Safety Allocation | Potentially compressed under time pressure (1-2% of total) |
| Key Variables | Rushed deployment vs. safety testing tradeoffs; limited preparation time |
| Planning Implication | Other orgs have limited time to prepare or influence outcomes |

Scenario 2: Medium Timeline (TAI by 2030-2032)

| Characteristic | Assessment |
|---|---|
| Total Industry Spend | $1-3T |
| Spending Pattern | Sustained buildout with multiple model generations |
| Infrastructure | Purpose-built campuses; power generation partnerships |
| Safety Allocation | Allocation patterns potentially shifting (3-5% if field matures) |
| Key Variables | Competitive dynamics vs. safety commitments over time |
| Planning Implication | Window exists for influence on allocation decisions |

Scenario 3: Long Timeline (TAI by 2035+)

| Characteristic | Assessment |
|---|---|
| Total Industry Spend | $3-10T+ |
| Spending Pattern | Multiple investment cycles; potential corrections and recoveries |
| Infrastructure | Global network; diversified power sources including potential fusion |
| Safety Allocation | Could shift substantially if absorptive capacity grows (5-10% possible) |
| Key Variables | Investment sustainability; talent pipeline development |
| Planning Implication | Time for institutional development and policy response |

Safety Allocation: Current State and Potential Scenarios


The ratio of capabilities spending to safety spending varies substantially across labs (roughly 20:1 to 100:1 depending on how categories are defined). What constitutes optimal allocation remains uncertain and depends on:

  1. Tractability: Whether marginal safety research dollar produces meaningful risk reduction
  2. Absorptive capacity: Whether the field can productively deploy larger budgets
  3. Urgency: Whether safety research needs to happen before or after certain capability thresholds
  4. Substitutability: Whether capabilities research is necessary for safety research progress

What Different Safety Allocations Could Fund

| Safety % | On $100B Budget | On $300B Budget | Potential Activities |
|---|---|---|---|
| 1% (current baseline at some labs) | $1B | $3B | Existing safety teams, basic evaluations |
| 3% (Anthropic's approximate level) | $3B | $9B | Expanded interpretability, red-teaming, governance research |
| 5% (increased allocation scenario) | $5B | $15B | Dedicated safety labs, academic partnerships, talent pipeline development |
| 10% (substantial increase scenario) | $10B | $30B | Comprehensive safety research ecosystem, public infrastructure |
| 20% (research parity scenario) | $20B | $60B | Safety research funding approaching capabilities investment |

Even a shift from 1% to 5% safety allocation on a $200B budget represents $8 billion in additional safety investment, roughly 6-11x the current industry-wide annual total. Arguments for increasing the allocation include the potential leverage of safety research; arguments for the current allocation include uncertainty about tractability and limited absorptive capacity in the near term.
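The arithmetic behind these figures, as a sketch (the current-spend range comes from the estimates in the Category 4 table):

```python
# Incremental safety spend from moving 1% -> 5% on a $200B budget,
# compared against estimated current industry-wide safety spending.
budget_b = 200.0
delta_b = (0.05 - 0.01) * budget_b          # additional safety dollars, $B
current_low_b, current_high_b = 0.7, 1.25   # current annual spend, $B

print(f"Delta: ${delta_b:.0f}B/year")
print(f"Multiple of current spend: {delta_b / current_high_b:.1f}x "
      f"to {delta_b / current_low_b:.1f}x")
```

The delta works out to $8B/year, about 6.4x to 11.4x the estimated current total.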

See Safety Spending at Scale for analysis of what these budgets could accomplish and AI Safety Research Value Model for economic analysis of marginal returns on safety investment.

Implications for Other Organizations

The scale of AI lab spending creates planning challenges for every other actor in the ecosystem:

For Philanthropic / EA Organizations

| Challenge | Description | Potential Response Options |
|---|---|---|
| Scale mismatch | EA safety funding ($400-500M/yr) is under 1% of industry spend | Focus on neglected interventions, not matching total spend |
| Talent competition | Labs pay 3-5x philanthropic salaries | Fund pipeline development, early-career positions, and academic research |
| Speed of change | Funding cycles (6-12 months) lag industry shifts (weeks to months) | Pre-committed flexible funding; rapid response mechanisms |
| Influence window | Pre-TAI period may represent key opportunity for external influence | Prioritize policy work, governance research, and allocation advocacy |

Sources: Philanthropic funding estimates from Coefficient Giving grants database and 80,000 Hours analysis.

For Governments

| Challenge | Description | Potential Response Options |
|---|---|---|
| Regulatory lag | Policy formation takes years; AI capabilities advance in months | Adaptive regulation frameworks; regulatory sandboxes |
| Sovereignty considerations | Critical infrastructure controlled by private actors | Public compute programs; domestic AI capacity development |
| Safety externalities | Potential under-investment in safety relative to social benefits | Mandatory safety spending requirements; public safety research funding |
| Workforce transition | AI-driven automation may accelerate with scale | Transition planning; education system adaptation |

For Academic Institutions

| Challenge | Description | Potential Response Options |
|---|---|---|
| Brain drain | Industry pays top researchers 5-10x academic compensation | Industry partnerships; joint appointments; focus on areas with academic advantage |
| Compute access | Frontier research requires $10M-1B+ compute budgets | National compute infrastructure; lab partnerships; focus on compute-efficient research |
| Publication velocity | Academic timelines (12-24 months) lag industry (weeks to months) | Preprint culture; closer industry collaboration; focus on foundational research |
| Training pipeline | Growing demand for AI researchers at all levels | Expand programs; interdisciplinary training; industry curriculum partnerships |

See Planning for Frontier Lab Scaling for comprehensive strategic frameworks for each actor type.

Key Uncertainties

| Uncertainty | Range | Impact on Analysis | Resolution Timeline |
|---|---|---|---|
| TAI timeline | 2027-2040+ | Determines total spending and urgency of allocation decisions | Uncertain |
| Scaling law persistence | Continues / plateaus / breaks down | Determines whether $10-100B+ training runs occur | 2-3 years |
| AI investment correction | 20-40% probability of 30-60% correction | Could substantially reduce available capital | 1-3 years |
| Regulatory intervention | Minimal to comprehensive | Could mandate safety allocations or slow deployment | 2-5 years |
| Algorithmic efficiency | 2-10x improvement possible over 3-5 years | Could reduce infrastructure needs substantially | Ongoing |
| Geopolitical competition | Cooperation to confrontation spectrum | Shapes government investment and export controls | Ongoing |

Uncertainty ranges represent author's subjective confidence intervals based on available evidence.

The AI Investment Sustainability Question

A key uncertainty is whether current AI investment levels are sustainable.

Historical technology investment cycles (dot-com bubble 2000-2002 with 80% Nasdaq decline, telecom overinvestment 1998-2002 with $500B+ in write-downs according to Federal Reserve analysis) provide context but limited predictive power given differences in underlying technology trajectories.

If an AI investment correction occurs, it could reduce capital available for deployment by 30-60%, potentially shrinking the $100-300B+ figure substantially. However, the underlying technology trajectory would likely continue, though at a different pace and with different capital structures. Whether current spending levels represent rational investment or misallocation remains uncertain and depends partly on TAI timeline.
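To make the correction scenario concrete, a minimal sketch of how a 30-60% capital pullback would rescale the headline per-lab range (purely illustrative):

```python
# Remaining deployable capital ($B) after a correction that removes
# between cut_low and cut_high of committed capital.
def corrected_range(low_b: float, high_b: float,
                    cut_low: float = 0.30, cut_high: float = 0.60) -> tuple:
    # Worst case pairs the low estimate with the deepest cut,
    # best case pairs the high estimate with the shallowest cut.
    return (low_b * (1 - cut_high), high_b * (1 - cut_low))

lo, hi = corrected_range(100, 300)
print(f"${lo:.0f}-{hi:.0f}B+")  # the $100-300B+ range shrinks toward $40-210B+
```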

Methodological Notes

Estimation Methodology: Projections in this analysis combine:

  • Public company filings and guidance (highest confidence)
  • Announced commitments and partnerships (moderate confidence)
  • Industry surveys and expert interviews (lower confidence)
  • Author estimates based on analogy to historical patterns (lowest confidence)

Confidence Levels:

  • 2024-2025 figures: ±20% confidence intervals
  • 2026-2028 projections: ±40% confidence intervals
  • 2029+ scenarios: ±60%+ confidence intervals

Key Assumptions:

  1. No major regulatory intervention limiting spending (uncertain)
  2. Scaling laws continue at historical rates (uncertain, 2-3 year resolution)
  3. No major geopolitical disruption to supply chains (uncertain)
  4. TAI timeline in 2027-2035 range (highly uncertain)

Data Source Hierarchy: Where conflicts exist, this analysis prioritizes: (1) SEC filings and earnings calls, (2) direct company announcements, (3) industry analyst reports, (4) journalism, (5) author estimates.

Summary: Current State of Pre-TAI Capital Deployment

Based on current commitments and trajectories, spending of $100-300B+ per major lab over the next 5-10 years appears plausible, though significant uncertainties remain:

  1. Scale: Total industry spending could reach $1-3T through 2028-2030 based on current commitments, though investment corrections could reduce this by 30-60%.

  2. Infrastructure allocation: 50-65% goes to data centers, chips, and power. This is largely determined by competitive dynamics and existing commitments.

  3. Safety allocation: Current spending ranges from 1-5% across labs. The difference between 1% and 5% on a $200B budget is $8 billion—a substantial change if deployed effectively, though optimal allocation remains uncertain.

  4. Allocation timing: Pre-TAI is the period when spending patterns are being established. Once infrastructure is built and organizational patterns are set, changing allocation becomes harder.

  5. Planning context: The speed and scale of AI lab spending creates a different planning environment for governments, philanthropies, academia, and civil society organizations relative to historical technology transitions.

Sources

Footnotes

  1. Citation rc-f745 (data unavailable — rebuild with wiki-server access)

  2. Bloomberg - Microsoft, Google, Amazon, Meta combined AI infrastructure commitments (2025)

  3. The Verge - Stargate: Trump announces $500B AI infrastructure project (January 2025)

  4. CNBC - Anthropic reaches $9B ARR, $350B valuation (2025)

  5. Alphabet Q4 2024 Earnings - $75B capex guidance for 2025 (January 2025)

  6. Reuters - Inside Stargate: the $500B AI data center plan (2025)

  7. Goldman Sachs Research - "AI, Data Centers, and the Coming U.S. Power Demand Surge" (2024)

  8. CNBC - OpenAI chair Bret Taylor says AI is 'probably' a bubble (January 2026)

  9. Carnegie Investments - Risks Facing OpenAI (2025)

  10. Fortune - HSBC Analysis: OpenAI $207B funding shortfall (November 2025)

  11. See <EntityLink id="E405" name="anthropic-valuation">Anthropic Valuation Analysis</EntityLink> for customer concentration details


Related Pages

Risks

  • Concentrated Compute as a Cybersecurity Risk
  • Financial Stability Risks from AI Capital Expenditure

Approaches

  • Constitutional AI

Analysis

  • AI Safety Research Value Model
  • Winner-Take-All Concentration Model
  • Anthropic Valuation Analysis
  • Racing Dynamics Impact Model
  • AI Compute Scaling Metrics
  • Projecting Compute Spending

Safety Research

  • Scalable Oversight
  • Interpretability

Organizations

  • Anthropic
  • OpenAI
  • 80,000 Hours
  • Google DeepMind
  • NVIDIA
  • AI Revenue Sources

Concepts

  • Transformative AI