Longterm Wiki
Updated 2026-03-12
Jaan Tallinn

Person

Profile of Jaan Tallinn documenting $150M+ in lifetime AI safety giving ($51M in 2024, roughly 86% of it to AI safety), primarily through SFF ($34.33M distributed in a 2025 grant round). Co-founded CSER (2012) and FLI (2014), led Anthropic's $124M Series A (2021), early DeepMind investor.

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Giving Scale | Major Individual Donor | $51M in 2024; $150M+ estimated lifetime |
| Primary Vehicle | Survival and Flourishing Fund (SFF) | S-process algorithmic allocation; $34.33M distributed in a 2025 grant round |
| AI Safety Focus | ≈86% of giving | Remainder: biosecurity (≈7%), forecasting, fertility, longevity, other GCR |
| Wealth Source | Tech Exits + Investments | Skype (sold 2005), Kazaa; DeepMind (acquired 2014); Anthropic Series A (2021) |
| Organizations Founded | CSER, FLI | Centre for the Study of Existential Risk (Cambridge, 2012); Future of Life Institute (2014) |

| Source | Link |
|---|---|
| Wikipedia | en.wikipedia.org |
| LessWrong | 2024 Philanthropy Overview |
| FLI Profile | futureoflife.org |

Personal Details

| Attribute | Details |
|---|---|
| Born | February 14, 1972, Tallinn, Estonia |
| Education | BSc in Theoretical Physics, University of Tartu (1996) |
| Family | Married with six children (spouse's name not publicly disclosed)[1] |
| Estimated Net Worth | Approximately $900 million (2019 estimate; significant crypto holdings)[2] |
| Board Positions | Center for AI Safety (Board), UN AI Advisory Body, Bulletin of the Atomic Scientists (Board of Sponsors) |

Overview

Jaan Tallinn is an Estonian programmer, entrepreneur, and philanthropist. He co-founded Skype (2003, acquired by Microsoft for $8.5B in 2011) and developed the FastTrack P2P protocol behind Kazaa. After reading Eliezer Yudkowsky's writings on AI risk in 2009, he redirected his philanthropy toward existential risk reduction, donating an estimated $150M+ to AI safety and related causes.[3]

He co-founded two major organizations — the Centre for the Study of Existential Risk (CSER) at Cambridge in 2012, and the Future of Life Institute (FLI) in 2014. He was an early DeepMind investor and board member (2011), and led Anthropic's $124M Series A at a $550M pre-money valuation (2021), taking a board observer role.[4]

Tallinn describes his AI investment rationale as "having a voice of concern from the inside," though he has acknowledged the tension: "On the one hand, it's great to have this safety-focused thing. On the other hand, this is AI Proliferation." Whether minority board observer positions translate into meaningful safety influence is not established by independent evidence.[3]

His 2024 giving of approximately $51M concluded a formal five-year pledge (2020–2024) denominated in ETH (20,000 ETH/year). As of early 2025, no successor multi-year pledge has been announced, though he has committed at least $10M to the 2025 SFF round.[5]
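The ETH denomination explains why the pledge's dollar value swung so widely year to year. A back-of-envelope conversion makes this concrete (the ETH prices below are illustrative assumptions, not sourced historical averages):

```python
# USD value of a 20,000 ETH/year pledge at different ETH prices.
# Prices are illustrative assumptions, not historical figures.
PLEDGE_ETH = 20_000

def pledge_usd(eth_price_usd: float) -> float:
    """Dollar value of one year's pledge at a given ETH price."""
    return PLEDGE_ETH * eth_price_usd

for label, price in [("low ETH price", 500), ("high ETH price", 2_500)]:
    print(f"{label} (${price}/ETH): ${pledge_usd(price) / 1e6:.0f}M")
```

At a few hundred dollars per ETH the pledge is worth around $10M; at a few thousand it approaches the $51M reported for 2024, with no change in the ETH commitment itself.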

Philanthropic Activities

Loading diagram...

Key Giving Milestones

| Year | Amount | Notes |
|---|---|---|
| 2012 | ≈$200K | CSER seed funding[6] |
| 2019 | ≈$2M | SFF established |
| 2020 | $10–15M | 5-year pledge began (20K ETH/year)[7] |
| 2021 | $15–20M | Also led Anthropic's $124M Series A (investment, not giving)[4] |
| 2022 | $25–30M | Lightspeed Grants began (formally launched June 2023)[8] |
| 2023 | $30–35M | Post-FTX expansion to fill funding gaps |
| 2024 | $51M+ | Concluded 5-year commitment[5] |
| 2025 | $34.33M | SFF grant round (86% to AI safety)[9] |

Primary Vehicles

Survival and Flourishing Fund (SFF) — Tallinn's primary giving vehicle since 2019. Uses the S-process algorithmic allocation with a network of recommenders (12 in the 2024 round). The 2025 round distributed $34.33M: 86% to AI safety, 7% to biosecurity, 7% to other causes. Notable recipients include MIRI, Center for AI Safety, Apollo Research, METR, FAR AI, Palisade Research, and SecureBio.[9]
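The intuition behind an S-process-style allocation can be sketched as a marginal-value auction: each recommender supplies a diminishing-returns value function per grantee, and the budget is allocated in small increments to whichever grantee currently offers the highest average marginal value across recommenders. The toy implementation below is a deliberate simplification (the function shapes, names, and averaging rule are assumptions), not SFF's actual algorithm:

```python
# Toy sketch of marginal-value-based grant allocation (assumption: not the
# real S-process, which is more elaborate).
def s_process_sketch(budget, marginal_value_fns, step=1.0):
    """Greedy incremental allocation.
    marginal_value_fns: {grantee: [fn(allocated_so_far) -> marginal value, ...]}
    Returns {grantee: allocated amount}."""
    alloc = {g: 0.0 for g in marginal_value_fns}

    def avg_mv(g):
        recs = marginal_value_fns[g]
        return sum(f(alloc[g]) for f in recs) / len(recs)

    remaining = budget
    while remaining >= step:
        best = max(alloc, key=avg_mv)
        if avg_mv(best) <= 0:  # no grantee has positive marginal value left
            break
        alloc[best] += step
        remaining -= step
    return alloc

# Hypothetical example: two grantees with diminishing returns.
recommender_views = {
    "org_a": [lambda x: 10 - x],       # steep diminishing returns
    "org_b": [lambda x: 6 - 0.5 * x],  # shallower curve
}
allocation = s_process_sketch(12.0, recommender_views)
print(allocation)
```

The key property this illustrates is that money flows to whichever organization has the highest remaining marginal value, so no grantee absorbs the whole budget once its curve flattens below a competitor's.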

Lightspeed Grants — Fast-turnaround grantmaking run by Lightcone Infrastructure, primarily funded by Tallinn. Approximately $8M distributed since 2022.[8]

AI Investments

Tallinn has invested over $100M in 100–200 technology startups through Metaplanet Holdings.[10]

| Investment | Year | Details |
|---|---|---|
| DeepMind | 2011 | Series A investor and board member; Google acquired for $400–650M (2014)[11] |
| Anthropic | 2021 | Led $124M Series A; board observer. See Anthropic (Funder) for stake analysis[4] |

Public Advocacy

Tallinn has been an active advocate for AI safety governance, serving on the UN AI Advisory Body and the EU Commission's High-Level Expert Group on AI. He has called for liability laws that hold "both the users and developers of AI technology accountable for harms and risks produced by AI, including near-miss incidents."[12]

Key public positions signed:

  • 2023: FLI open letter calling for 6-month pause on training beyond GPT-4 (30,000+ signatures)
  • 2023: CAIS extinction risk statement
  • 2025: FLI statement calling for prohibition on superintelligence development until provably safe[13]

Criticisms

Capabilities acceleration: Critics argue investing in AI companies like Anthropic accelerates the technologies Tallinn views as dangerous. Tallinn has acknowledged: "this is proliferation... creating Anthropic might add to the competitive landscape, thus speeding development."[3]

Near-term vs. speculative risk: AI ethics researchers Timnit Gebru and Margaret Mitchell argued the 2023 FLI pause letter ignored "active harms" from existing AI systems. Gebru and Émile P. Torres have characterized Tallinn as a subscriber to the "TESCREAL bundle" of ideologies, arguing these frameworks distort AI research priorities.[14][15]

Influence concentration: SFF's S-process uses a small network of recommenders (12 in the 2024 round), concentrating significant influence over the AI safety field in a tightly connected group.

Key Uncertainties

| Uncertainty | Description |
|---|---|
| Post-Pledge Giving | No formal multi-year pledge post-2024; committed at least $10M to 2025 SFF round[5] |
| Inside Influence | Whether the board observer role actually changes Anthropic's decisions — no independent verification |
| Wealth Variability | Significant crypto holdings mean giving capacity fluctuates with ETH/BTC prices |

Sources

Footnotes

  1. Wikipedia and Lifeboat Foundation profile confirm married with six children; spouse name not publicly disclosed.

  2. "He's Worried A.I. May Destroy Humanity", Fortune, November 2020.

  3. "Co-founder of Skype invested in hot AI startups but thinks he failed", Semafor, April 2023.

  4. Anthropic raises $124 million, Anthropic, May 2021.

  5. Jaan Tallinn, "2024 Philanthropy Overview", LessWrong, early 2025.

  6. Centre for the Study of Existential Risk — Wikipedia.

  7. Jaan Tallinn, "Philanthropic Pledge", LessWrong, February 2020.

  8. Lightspeed Grants, launched June 2023.

  9. SFF 2025 funding by cause area — EA Forum.

  10. "Skype co-founder reveals he's invested over $130 million into start-ups", CNBC, November 2020.

  11. Google DeepMind — Wikipedia; acquisition price reported between $400M and $650M.

  12. Tallinn's statements on AI liability and datacenter regulation, 2023.

  13. Jaan Tallinn — Wikipedia, updated 2025.

  14. Margaret Mitchell and others critiqued the FLI pause letter for ignoring present AI harms, 2023.

  15. Timnit Gebru and Émile P. Torres, TESCREAL critique, 2023.


Structured Data

All Facts

| Property | Value | As Of | Source |
|---|---|---|---|
| Net Worth | $900 million | 2025 | |
| Birth Year | 1972 | | |
| Education | University of Tartu; Gustav Adolf Grammar School | | |

Related Pages

Top Related Pages

Organizations

Anthropic · Palisade Research · Survival and Flourishing Fund · Future of Life Institute (FLI) · LessWrong · FAR AI

Approaches

AI Safety Intervention Portfolio · AI Safety Field Building Analysis

Analysis

Model Organisms of Misalignment · AI Risk Portfolio Analysis · Anthropic IPO

Concepts

EA Shareholder Diversification from Anthropic · Funders Overview

Risks

AI Proliferation

Other

Sam Bankman-Fried

Historical

Mainstream Era