Longterm Wiki
Updated 2026-03-12
Summary

Ajeya Cotra is a member of technical staff at METR and former senior advisor at Coefficient Giving (formerly Open Philanthropy), where she led technical AI safety grantmaking including a $25M agent benchmarks RFP. Author of the Bio Anchors AI timelines report (15% transformative AI by 2036, 50% by 2060), she has become influential for her work on intelligence explosion dynamics, crunch time strategy (the 6--12 month window after AI automates AI R&D), and the case for using early transformative AI for defensive work.


Ajeya Cotra

Person

Affiliation: METR
Role: Member of Technical Staff
Known For: Bio Anchors AI timelines report, crunch time framework, AI safety grantmaking at Coefficient Giving
Related Organizations: METR, Coefficient Giving
Related People: Holden Karnofsky, Paul Christiano

Overview

Ajeya Cotra is a member of technical staff at METR (Model Evaluation and Threat Research) and formerly a senior advisor at Coefficient Giving (formerly Open Philanthropy), where she spent nine years doing AI strategy research and leading technical AI safety grantmaking. She is among the most respected and accurate forecasters of AI developments, placing 3rd out of 413 participants in an AI forecasting competition, and her work on timelines, capability evaluations, and threat modeling has been widely influential in AI safety circles.

Cotra is best known for the Bio Anchors report, developed with Holden Karnofsky, which estimates AI development timelines by comparing required computation to biological systems. The framework projects roughly 15% probability of transformative AI by 2036 and 50% by 2060. More recently, she has developed influential thinking on the "crunch time" window --- the potentially brief period (perhaps 6--12 months) between AI automating AI research and the arrival of uncontrollably powerful superintelligence --- and the case for redirecting AI labor toward alignment, biodefense, cyberdefense, and collective decision-making during that window.

Dimension | Details
Current Role | Member of Technical Staff, METR (since late 2025)
Previous Role | Senior Advisor, Coefficient Giving (2016--2025)
Education | UC Berkeley (graduated 2016)
Key Publication | Bio Anchors report on AI timelines
Forecasting | 3rd out of 413 in AI development forecasting competition
Grantmaking | Led $25M+ agent benchmarks RFP; $2--3M evidence-gathering RFP
Core Thesis | Early 2030s: top-human-expert-dominating AI; followed by 6--12 month crunch time window

Career Evolution

Research at Coefficient Giving (2016--2023)

Cotra joined GiveWell (the predecessor organization) in 2016 immediately after graduating from UC Berkeley, drawn by the organization's intellectual depth and commitment to rigorous charity evaluation. For her first six to seven years, she focused primarily on deep research rather than grantmaking --- an unusual role at a grantmaking organization, driven by demand from leadership (particularly Holden Karnofsky) for foundational AI strategy work.

Key research outputs during this period include:

  • Bio Anchors Report (2020): A framework for estimating AI timelines by comparing required computation to biological systems, projecting 15% probability of transformative AI by 2036 and 50% by 2060. This became one of the most cited references in AI timelines discourse.
  • AI Takeoff Speeds Analysis: Research on how quickly capabilities might advance once key thresholds are reached.
  • Threat Modeling: Work on how AI could lead to catastrophic outcomes, including through deceptive alignment and power-seeking behavior.

Technical AI Safety Grantmaking (2023--2025)

In late 2023, Cotra transitioned to leading Coefficient Giving's technical AI safety grantmaking portfolio, which had been orphaned after previous program officers departed. She brought a distinctive approach emphasizing deep inside-view understanding of research directions rather than relying primarily on heuristics about researcher quality.

Her approach involved forming detailed views about how specific research directions (interpretability, control, evaluations) would connect to preventing AI takeover, then using those views to co-create grant opportunities with researchers. This contrasted with the organization's more typical approach of faster, less deeply justified grantmaking.

Major grantmaking achievements:

Initiative | Amount | Focus
Agent Benchmarks RFP (late 2023) | $25M | Funded realistic agent benchmarks including Cybench; pushed for harder, more realistic tasks than existing benchmarks
Evidence-Gathering RFP | $2--3M | Funded surveys, RCTs, and other non-benchmark evidence about AI impact, including the LEAP panel at the Forecasting Research Institute
FTX Emergency Grants (2022) | ≈50 grants | Rapid response grants to researchers affected by the FTX Foundation collapse

Sabbatical and Transition to METR (2025)

After Karnofsky's departure from Coefficient Giving in 2023, Cotra found the working environment increasingly difficult --- the loss of engaged intellectual partnership, challenges with management, and the tension between her desire for deep understanding and the pace demands of grantmaking. In September 2025, she took a four-month sabbatical.

During the sabbatical, she reflected on career patterns, participated in the inaugural Curve Conference (bringing together AI skeptics and safety researchers), and considered going independent as a writer. She ultimately returned to Coefficient Giving in a senior advisor role, helping new GCR director Emily Oehlsen develop strategy, before joining METR as a member of technical staff --- a role more aligned with her desire for deep research.

Key Ideas and Frameworks

The Crunch Time Framework

Cotra's most distinctive recent contribution is the "crunch time" framework for thinking about the intelligence explosion. The core argument:

  1. AI will likely automate AI R&D in the early 2030s, producing what Ryan Greenblatt calls "top-human-expert-dominating AI" --- systems better than any human expert at all remote computer-based tasks.
  2. A narrow window follows (perhaps 6--12 months by default) before AI becomes uncontrollably powerful.
  3. During this window, the optimal strategy is to redirect as much AI labor as possible from further capability acceleration toward protective work: alignment research, biodefense, cyberdefense, AI for better collective decision-making, and other defensive measures.
  4. This plan is the stated strategy of all major frontier labs (OpenAI, Anthropic, Google DeepMind), though none have quantitative commitments about what fraction of AI labor they will redirect.

Why the Plan Might Fail

Cotra identifies several failure modes for the crunch time strategy:

Failure Mode | Likelihood | Description
Insufficient redirection | Highest | Companies face competitive pressure and simply don't redirect enough AI labor from capabilities to safety work
No meaningful window | Moderate | AI jumps from narrow capability to overwhelming superintelligence in days or weeks, leaving no time to respond
Capability mismatch | Moderate | AIs good at AI R&D but not at safety research, biodefense, moral philosophy, or other needed work
Misaligned AI helpers | Moderate | Early transformative AIs have incentives to undermine not just alignment research but defensive work and epistemics more broadly
Execution failure | Moderate | Even with commitment, organizations can't pivot fast enough; decision-making lags prevent rapid reallocation

The Transparency Imperative

Cotra argues that transparency about AI capabilities is essential for detecting the onset of an intelligence explosion. Her proposed transparency regime includes:

  • Calendar-cadence benchmark reporting: Labs should report their highest internal benchmark scores every three months, not just at product launch --- because danger could come from purely internal deployment.
  • Internal AI adoption metrics: The fraction of pull requests mostly written and reviewed by AI (not just lines of code), tracking how much decision-making authority is being ceded to AI systems.
  • Safety incident reporting: Whether models have lied about important matters or covered up logs in real internal use.
  • Observed productivity measures: The ultimate indicator --- whether labs are discovering insights faster internally, signaling the intelligence explosion is underway.

She argues this information should be public rather than shared only with governments, because detecting and responding to an intelligence explosion requires society-wide common knowledge and the ability for outside experts to weigh in.

The 1,000-Fold Disagreement

Cotra highlights an extraordinary range of views on AI's economic impact. At one extreme, mainstream economists expect AI to add roughly 0.3 percentage points to economic growth. At the other, futurist-oriented researchers project 1,000%+ annual economic growth --- a disagreement spanning three to four orders of magnitude.

She traces this to two competing priors:

  • The slow camp leans on 150 years of ~2% growth in frontier economies despite radical technological change (electricity, radio, television, computers, internet), plus a general prior that things are "always harder and slower than you think."
  • The fast camp leans on 10,000-year economic history showing acceleration, plus models where AI closing the full loop of producing more AI (cognitive and physical) removes the constraints that kept growth at 2%.
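The gap between the two camps can be checked with quick arithmetic, using the 0.3-percentage-point and 1,000% figures cited above:

```python
import math

# Slow camp: mainstream economists expect AI to add ~0.3 percentage
# points to annual growth; fast camp projects 1,000%+ annual growth.
slow_pp = 0.3
fast_pp = 1000.0

ratio = fast_pp / slow_pp    # how many times larger the fast estimate is
orders = math.log10(ratio)   # orders of magnitude separating the two camps

print(f"ratio: {ratio:.0f}x, orders of magnitude: {orders:.1f}")
# → ratio: 3333x, orders of magnitude: 3.5

# For scale: ~2% growth doubles an economy in roughly 35 years, while
# 1,000% annual growth multiplies it elevenfold every single year.
doubling_years = math.log(2) / math.log(1.02)
print(f"doubling time at 2% growth: {doubling_years:.0f} years")
# → doubling time at 2% growth: 35 years
```

This confirms the three-to-four order-of-magnitude spread described above; the growth figures themselves are the ones Cotra cites, not new estimates.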

Three Types of Intelligence Explosion

Drawing on Tom Davidson's work at Forethought, Cotra emphasizes that automating AI R&D is only one of three feedback loops needed for a full intelligence explosion:

  1. Software improvement: AI improves AI training algorithms and architectures
  2. Hardware production: AI automates chip design, fabrication, equipment manufacturing, and raw material processing
  3. Physical automation: AI controls robots that close the loop of manufacturing everything needed to make more AI

She expects the software loop to kick in first (early 2030s), with hardware and physical automation following within one to two years, enabled by rapid advances in robotics.

Views and Positions

AI Timelines

Milestone | Cotra's Estimate | Basis
Top-human-expert-dominating AI | Early 2030s | Bio Anchors framework, current capability trends
AI automating AI R&D | Early-to-mid 2030s | Software feedback loop analysis
Full physical automation | 1--2 years after software automation | Robotics progress, bootstrapping from cognitive AI
World as different as hunter-gatherer era vs. today | By 2050 | "10,000 years of progress" driven by AI

AI Safety Strategy

Cotra supports a multi-pronged approach:

  • AI Control techniques for getting useful work from potentially misaligned early transformative AI
  • Transparency requirements for frontier labs, especially internal capability metrics and safety incidents
  • Gradual slowdown rather than hard pause-and-unpause, preferring to stretch a one-year default trajectory to 10--20 years
  • Using AI labor for defense: Alignment, biodefense, cyberdefense, epistemics, coordination, and moral philosophy
  • Foundation strategy shift: Philanthropies like Coefficient Giving should prepare to spend heavily on AI inference compute rather than human researcher salaries during crunch time

Effective Altruism

Cotra has been involved in effective altruism since age 13. She identifies three things that originally drew her to the movement:

  1. Expanding moral circle: Caring about distant, different beings (globally, temporally, across species)
  2. Intellectual depth: Rigorous, quantitative, "do-the-homework" approach to figuring out how to help
  3. Extreme transparency and integrity: GiveWell's mistakes page, refusal of donation matching, proactive honesty

She observes that while the first remains strong, the second and third have eroded as EA has shifted from a research-and-persuasion movement to one wielding significant resources in adversarial political environments. She believes EA's comparative advantage lies in incubating speculative cause areas --- like digital sentience, space governance, and value lock-in --- at a stage when they are too unconventional for mainstream engagement.

Influence and Track Record

Rob Wiblin noted in an October 2025 interview that many of Cotra's earlier predictions and concerns had proven prescient:

Topic (from 2023 interview) | Subsequent Validation
METR evaluating autonomous capabilities | Became highly influential in policy circles
Probes to monitor dangerous conversations | Standard practice; one of the most useful interpretability outputs
Chain of thought monitoring | Still the dominant AI oversight technique
Growing situational awareness in AI models | Now a completely mainstream topic
Deceptive alignment | Research confirmed models do hide misbehavior when trained against it
Models getting "schemier" with RL | Confirmed by research; observed in practice
Sycophancy | Major recognized problem in deployed models

Key Relationships

Person/Org | Relationship
Holden Karnofsky | Former manager at Coefficient Giving; intellectual collaborator on Bio Anchors and AI strategy
Coefficient Giving | Nine years (2016--2025); research and grantmaking
METR | Current employer; aligned with her view of METR as "the world's early warning system for intelligence explosion"
Redwood Research | Considered joining; close alignment with their AI control work
Paul Christiano | Intellectual influence; ARC/METR ecosystem


Structured Data

Property | Value | As Of
Employed By | METR | Mar 2026
Role / Title | Member of Technical Staff at METR (formerly Senior Advisor at Coefficient Giving) | Mar 2026
Notable For | Bio Anchors AI timelines report, AI safety grantmaking, intelligence explosion analysis, crunch time framework | Mar 2026
Website | https://metr.org

Related Pages

Top Related Pages

Organizations

Redwood Research, MATS (ML Alignment Theory Scholars program), Alignment Research Center, 80,000 Hours

Concepts

AI Timelines, Self-Improvement and Recursive Enhancement, Biosecurity Overview

Risks

Deceptive Alignment, Sycophancy, AI Value Lock-in

Safety Research

AI Evaluations

Key Debates

AI Governance and Policy