Researcher

Holden Karnofsky

Role: Former Co-CEO of Coefficient Giving (now at Anthropic)
Known For: Directing billions toward AI safety, effective altruism leadership, AI timelines work

Holden Karnofsky was co-CEO of Coefficient Giving (formerly Open Philanthropy), the most influential grantmaker in AI safety and existential risk. Through Coefficient, he directed over $300 million toward AI safety research and governance, helping transform the field from a fringe academic interest into a well-funded discipline with hundreds of researchers. In 2025, he joined Anthropic.

His strategic thinking has shaped how the effective altruism community prioritizes AI risk, most notably through the “Most Important Century” thesis, which argues that we may live in the century that determines humanity’s entire future trajectory because of transformative AI development.

Funding Achievement | Amount | Impact
Total AI safety grants | $300M+ | Enabled field growth from ~dozens to hundreds of researchers
Anthropic investment | $580M+ | Created major safety-focused AI lab
Field building grants | $50M+ | Established academic programs and research infrastructure

Risk Category | Karnofsky’s Assessment | Evidence | Timeline
Transformative AI | ~15% by 2036, ~50% by 2060 | Bio anchors framework | This century
Existential importance | “Most important century” | AI could permanently shape humanity’s trajectory | 2021-2100
Tractability | High enough for top priority | Open Phil’s largest focus area allocation | Current
Funding adequacy | Severely underfunded | Still seeking to grow field substantially | Ongoing
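
The two point estimates above (~15% by 2036, ~50% by 2060) can be turned into a rough cumulative curve by simple interpolation. The sketch below is purely illustrative: the linear interpolation between the two published points is an assumption of this sketch, not part of the underlying analysis.

```python
# Rough illustration only: linearly interpolate between Karnofsky's two published
# point estimates (~15% cumulative probability of transformative AI by 2036,
# ~50% by 2060). The linear interpolation is an assumption of this sketch,
# not part of the underlying bio anchors analysis.
ANCHORS = ((2036, 0.15), (2060, 0.50))

def p_transformative_ai_by(year):
    (y0, p0), (y1, p1) = ANCHORS
    if year <= y0:
        return p0          # no claim made for earlier years; treat the first anchor as a cap
    if year >= y1:
        return p1          # no extrapolation beyond the later anchor
    return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

for y in (2036, 2045, 2060):
    print(y, f"{p_transformative_ai_by(y):.0%}")   # 15%, ~28%, 50%
```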

Early Career (2007-2014): Building Effective Altruism

Period | Role | Key Achievements
2007-2011 | Co-founder, GiveWell | Pioneered rigorous charity evaluation methodology
2011-2014 | Launched Coefficient Giving (then Open Philanthropy) | Expanded beyond global health to cause prioritization
2012-2014 | EA movement building | Helped establish effective altruism as a global movement

Initial AI engagement:

  • 2014: First significant AI safety grants through Coefficient (then Open Philanthropy)
  • 2016: Major funding to Center for Human-Compatible AI (CHAI)
  • 2017: Early OpenAI funding (before pivot to for-profit)
  • 2018: Increased conviction leading to AI as top priority

Strategic Frameworks and Intellectual Contributions


Core argument structure:

Component | Claim | Implication
Technology potential | Transformative AI possible this century | Could exceed agricultural/industrial revolution impacts
Speed differential | AI transition faster than historical precedents | Less time to adapt and coordinate
Leverage moment | Our actions now shape outcomes | Unlike past revolutions where individuals had little influence
Conclusion | This century uniquely important | Justifies enormous current investment

Supporting evidence:

Developed with Ajeya Cotra, the biological anchors (“bio anchors”) framework estimates AI development timelines by comparing the computation required for transformative AI to biological systems:

Anchor Type | Computation Estimate | Timeline Implication
Human brain | ≈10^15 FLOP/s | Medium-term (2030s-2040s)
Human lifetime | ≈10^24 FLOP | Longer-term (2040s-2050s)
Evolution | ≈10^41 FLOP | Much longer-term if needed
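
As a rough illustration of the mechanics (not a reproduction of the report), the sketch below asks when the largest training runs would cross a given total-compute anchor if frontier training compute kept doubling at an assumed rate. The starting compute (~1e25 FLOP), the two-year doubling time, and the direct FLOP comparison are assumptions of this sketch; the actual report also models algorithmic progress, spending, and other factors, so these crude years will not match its timelines.

```python
import math

# Illustrative sketch only. The starting compute, the doubling time, and the
# direct FLOP comparison are assumptions here, not figures from the bio anchors
# report, so the crude years printed below will not match the report's timelines.
def year_anchor_crossed(anchor_flop, current_flop=1e25, current_year=2025, doubling_years=2.0):
    """Year when the largest training run reaches anchor_flop, assuming
    frontier training compute doubles every doubling_years years."""
    if current_flop >= anchor_flop:
        return current_year
    doublings_needed = math.log2(anchor_flop / current_flop)
    return current_year + doublings_needed * doubling_years

for name, anchor in [("Human lifetime (~1e24 FLOP)", 1e24),
                     ("Evolution (~1e41 FLOP)", 1e41)]:
    print(f"{name}: crossed around {year_anchor_crossed(anchor):.0f}")
```
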
Research Area | Approximate Funding | Key Recipients | Rationale
Technical alignment | $100M+ | Anthropic, Redwood Research | Direct work on making AI systems safer
AI governance | $80M+ | Center for Security and Emerging Technology, policy fellowships | Institutional responses to AI development
Field building | $50M+ | University programs, individual researchers | Growing research community
Compute governance | $20M+ | Compute monitoring research | Oversight of AI development resources

Key principles:

  • Hits-based giving: Expect most grants to have limited impact and a few to be transformative (see the sketch after this list)
  • Long time horizons: Patient capital for 5-10 year research projects
  • Active partnership: Strategic guidance beyond just funding
  • Portfolio diversification: Multiple approaches given uncertainty
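
A toy expected-value calculation, with made-up numbers, illustrates the hits-based logic: when a small fraction of grants is transformative, those rare hits can account for nearly all of the portfolio’s expected impact.

```python
# Toy numbers (assumptions for illustration, not Coefficient Giving data):
# most grants deliver modest impact, while roughly 1 in 20 is a transformative "hit".
typical_impact = 1.0      # arbitrary impact units for an ordinary grant
hit_impact = 500.0        # impact of a transformative grant
hit_probability = 0.05    # chance any given grant is a hit

expected_per_grant = (1 - hit_probability) * typical_impact + hit_probability * hit_impact
share_from_hits = (hit_probability * hit_impact) / expected_per_grant

print(f"Expected impact per grant: {expected_per_grant:.2f}")        # 25.95
print(f"Share of expected impact from hits: {share_from_hits:.0%}")  # ~96%
```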

Notable funding decisions:

  • Anthropic investment: $580M to create safety-focused competitor to OpenAI
  • MIRI funding: Early support for foundational AI alignment research
  • Policy fellowships: Placing AI safety researchers in government positions

Based on public statements and Coefficient Giving priorities from 2023-2024, Karnofsky’s views reflect a combination of timeline estimates derived from technical forecasting and strategic assessments about field readiness and policy urgency:

Topic | Assessment | Reasoning
Transformative AI timeline (2022) | 15% by 2036, 50% by 2060 | Derived from the bio anchors framework developed with Ajeya Cotra, which estimates AI development timelines by comparing required computation to biological systems. This central estimate suggests transformative AI is more likely than not within this century, though substantial uncertainty remains around both shorter and longer timelines.
Field adequacy (2024) | Still severely underfunded | Despite directing over $300M toward AI safety and growing the field from approximately 20 to 400+ FTE researchers, Coefficient Giving continues aggressive hiring and grantmaking. This assessment reflects the belief that the scale of the challenge (ensuring the safe development of transformative AI) far exceeds the resources and talent currently devoted to it.
Policy urgency (2024) | High priority | Coefficient has significantly increased its governance focus: funding policy research, placing fellows in government positions, and supporting regulatory frameworks. This shift recognizes that technical alignment work alone is insufficient; institutional and policy responses are critical to managing AI development trajectories and preventing racing dynamics.

Year | Key Update | Reasoning
2021 | “Most Important Century” series | Crystallized long-term strategic thinking
2022 | Increased policy focus | Recognition of need for governance alongside technical work
2023 | Anthropic model success | Validation of safety-focused lab approach
2024 | Accelerated timelines concern | Shorter timelines than bio anchors suggested

Metric | 2015 | 2024 | Growth Factor
FTE researchers | ≈20 | ≈400 | 20x
Annual funding | <$5M | >$200M | 40x
University programs | 0 | 15+ | New category
Major organizations | 2-3 | 20+ | 7x
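
For a sense of pace, the sketch below converts the table’s approximate start and end values into implied compound annual growth rates over the nine-year span (the inputs are the document’s rough estimates, not precise data).

```python
# Implied compound annual growth rates from the table's approximate figures.
# Start/end values are the document's rough estimates, not precise data.
years = 2024 - 2015  # nine years

for metric, start, end in [("FTE researchers", 20, 400),
                           ("Annual funding ($M)", 5, 200)]:
    cagr = (end / start) ** (1 / years) - 1
    print(f"{metric}: {end / start:.0f}x total, ~{cagr:.0%} per year")
```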

Academic legitimacy:

  • Funding enabled AI safety courses at major universities
  • Supported tenure-track positions focused on alignment research
  • Created pathway for traditional CS researchers to enter field

Policy influence:

  • Funded experts now advising US AI Safety Institute
  • Supported research informing EU AI Act
  • Built relationships between AI safety community and policymakers

Uncertainty | Stakes | Current Evidence
AI timeline accuracy | Entire strategy timing | Mixed signals from recent capabilities
Technical tractability | Funding allocation efficiency | Early positive results but limited validation
Governance effectiveness | Policy investment value | Unclear institutional responsiveness
Anthropic success | Large investment justification | Strong early results but long-term unknown

Within EA community:

  • Some argue for longtermist focus beyond AI
  • Others prefer global health and development emphasis
  • Debate over concentration vs. diversification of funding

With AI safety researchers:

  • Tension between technical alignment focus and governance approaches
  • Disagreement over open vs. closed development funding
  • Questions about emphasis on capabilities research safety benefits

Communication approach:

  • Transparent reasoning and uncertainty acknowledgment
  • Accessible explanations of complex topics
  • Regular updates as views evolve
  • Direct engagement with critics and alternative viewpoints

Platform | Reach | Impact
Congressional testimony | Direct policy influence | Informed AI regulation debate
Academic conferences | Research community | Shaped university AI safety programs
EA Global talks | Movement direction | Influenced thousands of career decisions
Podcast interviews | Public understanding | Mainstream exposure for AI safety ideas

Immediate priorities:

  1. Anthropic scaling: Supporting responsible development of powerful systems
  2. Governance acceleration: Policy research and implementation support
  3. Technical diversification: Funding multiple alignment research approaches
  4. International coordination: Supporting global AI safety cooperation

Emerging areas:

  • Compute governance infrastructure
  • AI evaluation methodologies
  • Corporate AI safety practices
  • Prediction market applications

Field development goals:

  • Self-sustaining research ecosystem independent of Coefficient Giving
  • Government funding matching or exceeding philanthropic support
  • Integration of safety research into mainstream AI development
  • International coordination mechanisms for AI governance

Criticism | Karnofsky’s Response | Counter-evidence
Over-concentration of power | Funding diversification, transparency | Multiple other major funders emerging
Field capture risk | Portfolio approach, external evaluation | Continued criticism tolerated and addressed
Timeline overconfidence | Explicit uncertainty, range estimates | Regular updating based on new evidence
Governance skepticism | Measured expectations, multiple approaches | Early policy wins demonstrate tractability

Resource allocation:

  • Should Coefficient Giving fund more basic research vs. applied safety work?
  • Optimal balance between technical and governance approaches?
  • Geographic distribution of funding (US-centric concerns)

Strategic approach:

  • Speed vs. care in scaling funding
  • Competition vs. cooperation with AI labs
  • Public advocacy vs. behind-the-scenes influence

Type | Source | Description
Blog | Cold Takes | Karnofsky’s strategic thinking and analysis
Organization | Coefficient Giving | Grant database and reasoning
Research | Bio Anchors Report | Technical forecasting methodology
Testimony | Congressional Hearing | Policy positions and recommendations

Type | Source | Focus
Academic | EA Research | Critical analysis of funding decisions
Journalistic | MIT Technology Review | External perspective on influence
Policy | RAND Corporation | Government research on philanthropic AI funding

  • Dario Amodei - CEO of Anthropic, major funding recipient
  • Paul Christiano - Technical alignment researcher, influenced Karnofsky’s views
  • Nick Bostrom - Author of “Superintelligence,” early influence on Coefficient AI focus
  • Eliezer Yudkowsky - MIRI founder, recipient of early Coefficient AI safety grants