Longterm Wiki
Updated 2026-03-13
## Summary

A comprehensive impact ledger of EA/longtermism's track record organized by year and topic, covering verified wins (GiveWell's $1.45B+ directed, ~100,000 lives saved through AMF, 10K GWWC pledges) and significant setbacks (FTX collapse, FHI closure, funding decline, internal fragmentation), with cumulative statistics, year-by-year highlights, a policy record table, and honest counterfactual attribution notes throughout.


# EA and Longtermist Wins and Losses

## Quick Assessment

| Dimension | Assessment |
|---|---|
| Scope | Tracks outcomes of EA and longtermist interventions across global health, existential risk, AI safety, animal welfare, and community building |
| Key Wins | GiveWell directing $1.45B+ to effective charities; Against Malaria Foundation saving ≈100,000 lives; 10,000 GWWC pledges; AI safety policy wins in California and New York; 92% cage-free commitment fulfillment |
| Key Losses | FTX collapse damaging credibility and funding; closure of Future of Humanity Institute in 2024; internal tensions between neartermist and longtermist factions; epistemological critiques of longtermist modeling; declining total grantmaking since 2022 peak |
| Movement Status | CEA reports 20–25% engagement growth in 2025, reversing moderate declines in 2023–2024; forum usage and EAG attendance had not returned to 2022 peak levels as of end-2024; funding landscape stabilizing but below 2022 peak |
| Primary Debate | Whether longtermist priorities (AI safety, existential risk) crowd out or complement neartermist work (global health, animal welfare) |

| Source | Link |
|---|---|
| Wikipedia | Longtermism |
| EA Forum | Celebrating Wins Discussion Thread |
| Centre for Effective Altruism | centreforeffectivealtruism.org |
| EA Forum (Funding Data) | Historical EA Funding Data: 2025 Update |

## Overview

Effective altruism (EA) is a philosophical and social movement that uses evidence and reason to identify the most effective ways to benefit others. Within EA, longtermism emphasizes positively influencing the long-term future — particularly by reducing existential risks from advanced AI, engineered pandemics, and other catastrophic threats.[1] Tracking the movement's successes and failures matters both for internal strategic learning and for external evaluation of whether EA's approach to philanthropy and cause prioritization delivers on its ambitious claims.

The movement has achieved substantial concrete wins: GiveWell raised $415 million and directed $397 million to cost-effective programs in metrics year 2024,[2] the Against Malaria Foundation — consistently one of GiveWell's top-recommended organizations — has protected over 667 million people,[3] and CEA reports strong engagement growth in 2025 after post-FTX declines.[4] On the longtermist side, hundreds of millions of dollars have flowed to existential risk research, AI safety work has gained mainstream policy traction, and California has enacted bills directly regulating AI risk.[5]

However, the movement has also suffered significant setbacks. The collapse of FTX and fraud conviction of Sam Bankman-Fried — who had publicly framed his financial activities in longtermist terms — severely damaged EA's credibility.[6] The closure of the Future of Humanity Institute at Oxford in April 2024 — the field's founding research institution — marked the largest single institutional loss in the longtermist research ecosystem.[7] Internal tensions between neartermist and longtermist factions persist, with some global health and animal welfare advocates reporting that the longtermist turn has made their work harder and worsened their reputations.[8] Critics from multiple directions challenge longtermism's epistemological foundations, arguing that its calculations are highly sensitive to speculative assumptions about the far future.[9]

EA grantmaking reached its peak around 2022 and has contracted since, though the 80,000 Hours 2021 estimate of approximately $46 billion committed to EA causes (growing at approximately 37% annually since 2015) reflects the scale of capital that had been pledged rather than deployed.[10] Open Philanthropy, the largest single EA-aligned funder, has directed more than $4 billion in total grants since its founding in 2017.[11]

## Cumulative Statistics

The table below aggregates self-reported impact figures from major EA-aligned organizations. Figures are drawn from organizational reports, EA Forum posts, and external analyses. Counterfactual attribution — the question of how much of this impact would have occurred without EA — is addressed in the notes column and in the dedicated attribution section below.

| Organization | Key Metric | Figures | Source / Notes |
|---|---|---|---|
| GiveWell | Total directed to top charities (since 2009) | $1.45B+ directed; est. 340,000 lives saved total; 74,000 in 2024 alone from $96.3M directed to AMF | GiveWell 2024 metrics; self-reported; methodology documented publicly. Counterfactual: a fraction of donors would have given comparably without GiveWell recommendations — contested. A specific "50–80% counterfactual" figure has circulated in EA discussions but does not appear in GiveWell's published methodology; treat as an illustration, not a published estimate. |
| Open Philanthropy | Total giving; farm animal welfare; AI safety; global health | Total giving exceeds $4B since founding in 2017 (as of June 2025); AI safety spending approximately $336M (≈12% of total) through 2023; 3B+ farm animals' lives improved via corporate commitments; 100,000+ lives saved via global health grants | Open Phil grants database; Open Philanthropy Wikipedia; LessWrong AI safety funding overview (2023). "3B animals" aggregates corporate commitment reach, not verified individual welfare improvements. Through 2022, roughly 70% of Open Phil total funding went to Global Health and Wellbeing, 30% to longtermist portfolio. |
| Giving What We Can | Pledges and estimated donation flow | 10,000+ members with 10% pledge; $2.5B+ in pledges made; $40M+ donated by members | GWWC 10,000 pledge milestone; Oxford press release (2022). Pledge totals are lifetime commitments, not yet-donated amounts. GWWC uses two complementary methodologies: a Lifetime Giving Method and a Realised Giving Method, both documented in its 2023–2024 impact evaluation. |
| Founders Pledge | Entrepreneurs pledged and donated | ≈1,900 entrepreneurs pledged ≈$10B; $1.1B donated as of April 2024 | Founders Pledge 2024 impact report; pledges contingent on exit events; donated figure is more reliable than pledge total. |
| 80,000 Hours | Career changes and hours redirected | 3,000+ significant career changes; ≈80M more hours on important problems | 80K impact page; self-reported; "significant career change" definition is methodologically contested. |
| Charity Entrepreneurship | Charities incubated and reach | 50 charities incubated in 6 years; $38M raised; 75M people and 1B animals served | Charity Entrepreneurship impact page; "served" figures aggregate program reach, not verified outcome counts. |
| Evidence Action / Deworm the World | Deworming treatments delivered | 198M children reached in 2024 alone (record); 2B+ treatments delivered since 2012; <$0.50 cost per child per treatment | GFDW update (May 2025); Evidence Action Tanzania expansion (Jan 2026); GiveWell-recommended $4.4M renewal grant secured programs through 2026. |

Counterfactual note on cumulative statistics: These figures represent gross reach and estimated impact, not net counterfactual impact. GiveWell's methodology attempts to estimate counterfactual value (would a donor have given elsewhere?), but for most metrics the counterfactual fraction is unknown. The EA Forum's 2025 funding data post notes that self-reported organizational metrics are difficult to compare across cause areas.[12]


## Year-by-Year Highlights

This section provides a scannable chronological record of major events in the EA and longtermist movements. Type classifications: Win = outcome broadly favorable to EA/longtermist goals; Loss = setback or failure; Mixed = ambiguous or contested outcome. Attribution confidence reflects how directly EA action caused the outcome versus broader trends.

| Year | Cause Area | Event | Type | Notes |
|---|---|---|---|---|
| 2006 | Infrastructure | GiveWell founded by Holden Karnofsky and Elie Hassenfeld | Win | First systematic charity evaluator using empirical evidence; catalyzed evidence-based philanthropy field. |
| 2009 | Infrastructure | Giving What We Can founded by Toby Ord; GiveWell begins directing significant funds | Win | Institutionalized the 10% pledge norm; early GiveWell grants began redirecting millions toward top charities. By January 2013, GWWC had welcomed its 300th member. |
| 2011 | Infrastructure | 80,000 Hours and CEA founded; term "effective altruism" coined | Win | Career-focused EA organization and coordination hub for EA movement; hosted EA Global conferences. |
| 2015 | AI Safety | Open Philanthropy begins major AI safety funding | Win | Credited with jumpstarting AI safety as a funded research field; seeded organizations including MIRI and later Redwood Research and ARC. Counterfactual: AI safety was growing independently via DeepMind and academic channels; Open Phil funding likely accelerated rather than originated the field. |
| 2019 | Animal Welfare | Target, Aramark, and Compass Group make cage-free commitments | Win | Part of broader corporate campaign wave; EA-funded groups (Open Wing Alliance) played documented role. Counterfactual: Broad consumer pressure and EU regulatory trajectory were also driving corporate policy shifts; EA-funded campaigns' distinctively attributable effect is stronger in emerging markets with less independent consumer pressure. |
| 2019 | Infrastructure | Approximately $416M donated to effective charities identified by EA movement (37% annual growth rate since 2015) | Win | Wikipedia / EA growth data. Reflects movement scaling through the late 2010s. |
| 2020 | Global Health | COVID-19 pandemic highlights gaps in biosecurity infrastructure | Mixed/Loss | Demonstrated relevance of EA biosecurity work; also revealed that EA-adjacent pandemic preparedness funding had not prevented the pandemic. Johns Hopkins Center for Health Security (Open Phil–funded) provided early warnings. |
| 2020 | Infrastructure | GiveWell exceeds $300M directed in a single year | Win | First time GiveWell surpassed this threshold; driven partly by COVID-era philanthropy surge. Open Philanthropy supported $100M in GiveWell recommendations in 2020, rising to $300M in 2021.[13] |
| 2021 | AI Safety | Open Philanthropy launches major AI safety push; funds Redwood Research and ARC | Win | Substantially expanded AI safety research capacity. Redwood focused on adversarial training; ARC on evaluations. Open Phil recommended over $400M in grants in 2021 total.[13] Counterfactual: Both organizations were founded by researchers with pre-existing safety interest; Open Phil funding enabled scaling. |
| 2021 | Animal Welfare | Successful corporate campaigns reach billions of animals via cage-free pledges | Win | Open Wing Alliance and allied campaigns documented commitments covering estimated 2B+ hens globally. Fulfillment rates remained uncertain at time of commitment. |
| 2021 | AI Safety | Anthropic founded by Dario Amodei and team departing OpenAI | Win | Created a major safety-focused AI lab. EA/longtermist network connections played a role in early fundraising; Open Philanthropy invested. Counterfactual: Anthropic's founders had independent safety motivations; the lab likely would have been founded without EA support, though perhaps with less early capital. |
| 2022 | Community | FTX collapse; Sam Bankman-Fried arrested for fraud | Loss | Most damaging single event for EA credibility. FTX Future Fund had committed approximately $160M in total grantee commitments before collapse; those commitments were voided. (Approximately $100M in grants had already been disbursed; the remaining commitments were cancelled. See FTX Future Fund section below.) Forum usage, EAG attendance, EA Funds donations, and virtual programming all declined in subsequent years.[14] |
| 2022 | Animal Welfare | Multiple EU cage-free policy wins; Spain, France, Germany commit to phase-outs | Win | Regulatory rather than corporate wins; EA animal welfare funders had supported European campaign infrastructure. |
| 2022 | Infrastructure | Will MacAskill's What We Owe the Future reaches bestseller status | Mixed | Brought longtermism to mainstream audience; also increased scrutiny and criticism, coinciding with FTX collapse three months after publication. |
| 2023 | AI Safety | Bletchley Declaration signed at UK AI Safety Summit; 28 nations commit to AI risk cooperation | Win | First multilateral government statement on frontier AI risks; UK government convened the summit under PM Rishi Sunak; agreed to support a "State of the Science" report led by Yoshua Bengio and announced the world's first AI Safety Institute. EA-aligned researchers contributed to technical agenda-setting. Counterfactual: Post-ChatGPT AI anxiety was mainstream by late 2023; governments were independently motivated to act. EA's marginal contribution to summit outcomes has not been documented with named individuals or government acknowledgments. |
| 2023 | Global Health | WHO approves R21/Matrix-M malaria vaccine; Open Philanthropy co-funded development | Win | R21 is a second malaria vaccine (alongside RTS,S) with high efficacy. Open Phil provided funding to the Jenner Institute's development work. Counterfactual: R21 development was primarily funded by Wellcome Trust and Serum Institute of India; Open Phil's role was contributory but not foundational. |
| 2023 | Policy | US Supreme Court upholds California Prop 12 animal welfare law in National Pork Producers v. Ross (May 2023) | Win | Upheld California's right to enforce space standards for farm animals sold in California. EA animal welfare groups supported the legal defense. Sets national precedent for state animal welfare laws. |
| 2023 | AI Safety | GPT-4 release makes AI safety concerns mainstream | Mixed | Public concern about AI capabilities validates EA's long-standing warnings; however, mainstream attention also means AI safety is no longer primarily an EA-driven field, reducing EA's comparative influence going forward. |
| 2023 | Cultivated Meat | USDA approves cultivated chicken for sale by UPSIDE Foods and Good Meat | Win | First US approval for commercially sold cultivated meat; Good Food Institute (EA-supported) contributed to the regulatory pathway. |
| 2024 | AI Safety | MIRI announces significant staff cuts and strategic pivot | Loss | MIRI scaled back its technical alignment research program, concluding in a January 2024 strategy update that it believes the alignment field is "very unlikely" to make progress quickly enough to prevent human extinction. The organization pivoted toward policy and public communications, with technical research "no longer our top priority, at least for the foreseeable future." MIRI's annual spending ranged from $5.4M–$7.7M (2019–2023).[15] |
| 2024 | AI Safety | California SB 1047 vetoed by Governor Newsom | Loss | SB 1047 would have imposed safety requirements on large AI model developers. Anthropic helped influence revisions to the bill and announced support for the amended version in August 2024, with CEO Dario Amodei writing to Governor Newsom that "the new SB 1047 is substantially improved to the point where we believe its benefits likely outweigh its costs." Opponents of the bill, including OpenAI, Meta, Y Combinator, and Andreessen Horowitz, argued the bill's thresholds and liability provisions could stifle innovation.[16] |
| 2024 | AI Safety | UK AI Safety Institute and US AI Safety Institute formally established | Win | Both institutes conduct frontier AI evaluations; UK AISI had EA-connected staff in founding roles. Counterfactual: Government AI safety interest was driven substantially by post-ChatGPT political pressure independent of EA advocacy. |
| 2024 | AI Safety | Seoul AI Summit; Seoul Declaration signed | Win | Follow-up to Bletchley; 16 additional nations joined commitments; focused on AI safety testing and information sharing among frontier labs. EA influence in technical agenda-setting noted by participants, though primary drivers were government-led. |
| 2024 | Institutional | Future of Humanity Institute closes after 19 years | Loss | Oxford's Faculty of Philosophy closed FHI on April 16, 2024, citing "increasing administrative headwinds." Founded by Nick Bostrom in 2005, FHI had coined much of the field's core terminology ("existential risk," "information hazard," "unilateralist's curse") and served as the primary academic home for longtermist research. The Faculty had imposed a freeze on FHI fundraising and hiring beginning in 2020; in late 2023 it announced contracts of remaining staff would not be renewed. Senior staff attempted multiple rescue options including restructuring, college affiliation, spinning out, and interdepartmental transfer — none succeeded.[7] FHI's final report noted that "FHI's mission has replicated and spread and diversified" into dozens of successor organizations, though the loss of the Oxford institutional anchor was significant. |
| 2024 | Animal Welfare | California and Washington state pass octopus farming bans | Win | First US legislation banning a specific type of cephalopod aquaculture; animal welfare advocates (some EA-aligned) supported the bills; cephalopod sentience research partly EA-funded. |
| 2024 | Global Health | Lead Exposure Elimination Project (LEEP) and allied funders launch $100M+ collaborative fund for global lead reduction | Win | Lead exposure is a GiveWell-recommended cause area; LEEP is a Charity Entrepreneurship incubatee with documented direct policy wins in Malawi, Madagascar, and other countries. Open Philanthropy launched the Lead Exposure Action Fund (LEAF), a collaborative fund exceeding $100M, with LEEP among its first grantees.[17] |
| 2024 | Global Health | GiveWell directs $96.3M to Against Malaria Foundation — largest single grant in GiveWell history | Win | Expected to distribute 17M+ nets across Chad, DRC, Nigeria, and Zambia; estimated to avert 20,000+ deaths.[5] GiveWell raised approximately $415M total in 2024, up from approximately $355M in 2023.[2] |
| 2024 | Community | EA Forum engagement metrics increase after post-FTX lows | Mixed | Forum activity showed signs of recovery but did not return to 2022 peak levels as of end-2024; CEA's own 2025 strategy report noted "forum usage metrics have been on a steady decline since FTX's collapse in late 2022."[14] |
| 2025 | AI Safety | California SB 53 signed into law | Win | SB 53 requires AI developers to maintain safety and security protocols; narrower than SB 1047 but represented a legislative foothold for AI safety regulation after the SB 1047 veto. EA-aligned advocates supported passage. |
| 2025 | Community | Giving What We Can hits 10,000 pledges | Win | Milestone reported in GWWC communications; each pledge estimated to generate $15,000 in counterfactual donations to high-impact charities over a lifetime, per GWWC's documented Lifetime Giving Method.[18] |
| 2025 | Community | EA Global London reaches approximately 1,600 attendees | Win | Indicator of community recovery post-FTX; attendance figures from CEA strategy reports.[14] |
| 2025 | Animal Welfare | 92% of corporate cage-free commitments with 2024 or earlier deadlines fulfilled | Win | Tracked by Open Wing Alliance; reported in EA Forum wins thread.[5] Represents a shift in hen welfare conditions across the food industry relative to prior years. Note: this figure tracks formal policy transitions, not verified supply chain welfare outcomes at the farm level. |
| 2025 | Animal Welfare | Lewis Bollard appearance on Dwarkesh Podcast raises $2M+ for animal charities, estimated to help ≈4M animals | Win | Demonstrates EA media strategy effectiveness for cause area fundraising.[5] |
| 2025 | Community | CEA reports 20–25% growth in engagement across all tiers | Win | Substantially exceeds 7.5–10% growth target set in CEA's 2025–26 strategy; reverses moderate declines in 2023–2024 without increasing spending. Growth is measured year-over-year from 2024 baselines, not from 2022 peak levels.[4] |
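As a back-of-the-envelope check, GWWC's per-pledge estimate above (≈$15,000 in lifetime counterfactual donations per pledge, per its Lifetime Giving Method) implies an aggregate figure for the 10,000-pledge milestone. The total below is purely an arithmetic implication of that published per-pledge estimate, not a separately reported number:

```python
# Back-of-the-envelope implication of GWWC's per-pledge estimate.
# ASSUMPTION: ~$15,000 lifetime counterfactual donations per pledge
# (GWWC's published Lifetime Giving Method estimate); the aggregate
# is derived here, not independently reported.

pledges = 10_000
counterfactual_per_pledge = 15_000  # USD per pledge, lifetime estimate

implied_total = pledges * counterfactual_per_pledge
print(f"${implied_total / 1e6:.0f}M")  # $150M
```

This kind of multiplication inherits all the uncertainty of the per-pledge estimate; it should be read as an order-of-magnitude figure, not a verified donation flow.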

## Notable Policy Record

This table covers major policy outcomes with documented EA involvement. "EA Attribution" rates how much of the outcome is plausibly attributable to EA advocacy versus independent forces, using a qualitative scale: High = EA was a leading driver; Moderate = EA was a significant but non-decisive contributor; Low = EA was a minor participant in a broader movement; Contested = attribution is disputed.

| Policy | Year | Outcome | EA Attribution | Notes |
|---|---|---|---|---|
| California Prop 12 (Farm Animal Confinement) | 2018 (passed), 2023 (upheld) | Win — US Supreme Court upholds in Nat'l Pork Producers v. Ross (2023) | Moderate | Humane Society led the ballot initiative; EA-aligned animal welfare funders supported defense. Sets national precedent for state animal welfare laws. |
| California SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) | 2024 | Loss — Vetoed by Governor Newsom, September 2024 | Moderate | EA-aligned advocates including Legal Priorities Project supported passage. Anthropic's position shifted from opposition (July 2024) to cautious support for the amended bill (August 2024) before the veto.[16] OpenAI and Google opposed throughout. Newsom cited chilling effect on AI innovation. |
| California SB 53 (AI Safety Protocols) | 2025 | Win — Signed into law | Moderate | Narrower successor to SB 1047; requires documented safety protocols for frontier AI developers. EA-aligned advocates supported. |
| New York RAISE Act (Responsible AI Safety and Education Act) | 2024–2025 | Partial / Pending | Low–Moderate | Bill introduced with EA-adjacent support; had not passed as of early 2026. Highlights limits of state-level AI safety legislation outside California. |
| Bletchley Declaration (UK AI Safety Summit) | November 2023 | Win — 28 nations sign | Moderate | UK government led under PM Rishi Sunak; EA-aligned researchers contributed to technical agenda-setting. Post-ChatGPT political momentum was the primary driver. No named individuals or government acknowledgments documenting EA's specific causal contributions have been published. |
| Seoul Declaration (Seoul AI Summit) | May 2024 | Win — 16 additional nations join | Moderate | Follow-up to Bletchley; advanced commitments on safety testing. EA influence in technical agenda-setting noted by participants; primary drivers were government-led processes. |
| EU AI Act | 2024 (passed) | Mixed | Low | Primarily driven by EU internal politics and civil society groups; EA-aligned groups contributed technical input on frontier model provisions. Final text modified frontier model provisions relative to EA advocates' preferences. |
| NIST AI Risk Management Framework (AI RMF) | 2023 | Win — Adopted as voluntary US standard | Low–Moderate | EA-adjacent researchers contributed comments during public consultation. Industry adoption is voluntary; enforcement limited. |
| WHO R21/Matrix-M Malaria Vaccine Approval | October 2023 | Win — WHO prequalification granted | Low–Moderate | Open Philanthropy co-funded Jenner Institute development; primary funders were Wellcome Trust and Serum Institute of India. |
| Lead Paint and Gasoline Regulation Progress (low-income countries) | 2022–2025 | Win (partial) — Multiple countries adopt stricter lead standards | Moderate | LEEP (Charity Entrepreneurship incubatee) documented direct policy wins in Malawi, Madagascar, and other countries. Open Philanthropy's 2024 grants included the Lead Exposure Action Fund.[17] |
| California/Washington Octopus Farming Bans | 2024 | Win — Signed into law in both states | Moderate | EA-aligned animal welfare advocates supported; cephalopod sentience research (partly EA-funded) informed legislative framing. |
| US USDA Cultivated Meat Approvals (UPSIDE Foods, Good Meat) | June 2023 | Win — First US commercial approvals | Low–Moderate | Good Food Institute (EA-supported) contributed to regulatory pathway; primary drivers were the companies themselves and USDA process. |

Policy record notes: The absence of biosecurity and pandemic preparedness policy entries reflects a genuine gap in the EA attribution record. EA-adjacent organizations including Johns Hopkins Center for Health Security contributed to pandemic preparedness frameworks, and Open Philanthropy has funded biosecurity work, but tracking specific policy outcomes attributable to EA advocacy in this domain requires further documentation. Similarly, nuclear risk reduction receives Open Philanthropy funding (grants to organizations including the Federation of American Scientists and Ploughshares Fund) but no specific policy wins have been tracked here due to attribution difficulty.


## History and Movement Trajectory

EA emerged from proto-communities in the late 2000s and early 2010s, coalescing around organizations like GiveWell (founded 2006), Giving What We Can (founded 2009 at Oxford by Toby Ord), and 80,000 Hours (founded 2011).[1] CEA was founded in 2011–2012 as an umbrella organization for GWWC and 80,000 Hours, the same period in which the term "effective altruism" was coined.[19] By January 2013, GWWC had welcomed its 300th member; by 2022, more than 7,000 people from 95 countries had taken a pledge, collectively representing over $2.5 billion in pledged lifetime donations.[19] An estimated $416 million was donated to effective charities identified by the movement in 2019, representing approximately 37% annual growth since 2015.[1]

By 2021, an estimated $46 billion in funding had been committed to EA causes — growing approximately 37% per year since 2015, with much of the growth concentrated in 2020–2021 — with around $420 million deployed annually (roughly 1% of committed capital).[10] In early 2020, roughly 60% of EA annual deployment flowed through Open Philanthropy, 20% from GiveWell, and 20% from other sources.[10]

The longtermist turn within EA accelerated in the late 2010s and early 2020s. Key markers of this shift include Open Philanthropy's first major AI safety grants in 2015; the growing influence of the Future of Humanity Institute at Oxford (founded 2005 by Nick Bostrom), which coined core longtermist terminology and hosted researchers including Anders Sandberg and Toby Ord; and Toby Ord's 2020 book The Precipice, which articulated a systematic framework for existential risk reduction. Will MacAskill's 2022 book What We Owe the Future brought longtermism to mainstream audiences, achieving bestseller status.[20] That same year, the FTX Future Fund had committed more than $130 million in grants, mostly to longtermist causes, within four months of launching in February 2022.[20][21]

Open Philanthropy recommended over $400 million in grants in 2021, including $300 million in support for GiveWell's recommendations (up from $100 million in 2020).[13] Through 2022, roughly 70% of Open Philanthropy's total funding went toward areas in its Global Health and Wellbeing portfolio, and 30% went toward areas in its longtermist portfolio — making the "EA = longtermism" characterization inaccurate as a description of funding distribution.[22] Following the FTX collapse in late 2022, Open Philanthropy paused most new longtermist funding commitments pending further review.[22]

EA grantmaking has been on a downward trend since the 2022 peak, though GiveWell has maintained strong funding levels; in metrics year 2024 (February 2024 to January 2025), GiveWell raised $415 million and directed $397 million to cost-effective programs.[2] Open Philanthropy expected to recommend over $700 million in grants in 2023; as of June 2025, it had directed more than $4 billion in total grants since founding.[11] No single public source provides a comprehensive aggregate of all EA-aligned giving for 2023 or 2024 across all funders, which limits precise quantification of the post-2022 decline.

By 2025, CEA reported a turnaround, with 20–25% year-over-year engagement increases across all tiers — substantially exceeding its 7.5–10% growth target and reversing moderate declines in 2023–2024.[4] This growth metric refers to year-over-year change from 2024 baselines, not recovery to 2022 peak levels; CEA's own strategy documents explicitly noted that engagement with its programs had declined year-over-year during 2023–2024 before beginning to recover.[14]

The FHI closed in April 2024 after the Faculty of Philosophy declined to renew staff contracts, following years of administrative friction and a hiring and fundraising freeze imposed in 2020.[19] The specific reasons the Faculty imposed constraints on FHI have not been made fully public; factors cited in reporting include bureaucratic disputes over FHI's operating style, personnel issues, and controversy over a resurfaced 1996 email from Bostrom. Senior researchers attempted multiple rescue options — none succeeded.

Community size and demographics: As of January 2023, there were 362 known active EA groups worldwide — up from 233 in late 2020 (approximately 55% growth over two years), with active groups in 56 countries.[23] CEA supports EA groups across many countries and has held EAGx conferences internationally.[24] In 2024, 189 organizers from 34 countries participated in CEA's Organizer Support Programme.[25] The 2024 EA Survey (conducted by Rethink Priorities) found that the USA (34.4%) and UK (13.5%) remained the countries with the largest proportions of EA respondents; 32% came from Europe (excluding the UK) and 20% from the rest of the world.[26] Local group membership rates varied substantially by country: US respondents (21.9%) and UK respondents (25.8%) had among the lowest rates of local group membership, while many countries showed 40–80% local group membership rates among survey respondents.

## Documented Wins

### Global Health and Development

EA's most verifiable successes lie in global health. GiveWell has directed $397 million to cost-effective programs in metrics year 2024 alone,[2] and the Against Malaria Foundation — consistently one of GiveWell's top-recommended organizations — has distributed enough insecticide-treated nets to protect over 667 million people, with an estimated 270,000 lives saved since its inception.[3] In 2024, GiveWell announced its largest single grant ever: $96.3 million to the Against Malaria Foundation.[5]

Other concrete outcomes include significant reductions in parasitic disease through deworming programs. Evidence Action's Deworm the World program launched in Tanzania in January 2026, targeting more than 10 million children currently at risk of or infected with soil-transmitted helminths and/or schistosomiasis, partnering with the government to provide deworming after external funding for the program was suspended in 2025.[27] Globally, Deworm the World reached 198 million children in India, Kenya, Nigeria, Pakistan, and Malawi in 2024 alone — a record number — and has delivered more than 2 billion treatments since its founding at an estimated cost of less than $0.50 per child per treatment. A GiveWell-recommended $4.4 million renewal grant secures programs in India, Kenya, Nigeria, and Pakistan until 2026.[28]

Counterfactual note: GiveWell's own methodology acknowledges that counterfactual attribution is difficult — some fraction of donors to top charities would have donated similarly without GiveWell's recommendations, and some fraction of lives saved via malaria nets would have occurred via alternative programs. GiveWell applies a "leverage and funging adjustment" to cost-effectiveness estimates based on the probability that other counterfactual funding scenarios would occur in the absence of their charities' philanthropic spending, and applies this methodology to specific grantees.29
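GiveWell's published adjustments are grantee-specific, but the basic shape of a leverage/funging discount can be sketched in a few lines. The function and all parameter values below are hypothetical illustrations, not GiveWell's actual model:

```python
# Illustrative sketch only: GiveWell's actual leverage/funging adjustments
# are grantee-specific and more detailed. All numbers here are hypothetical.

def funging_adjusted_cost_per_life(raw_cost_per_life: float,
                                   p_counterfactual_fill: float) -> float:
    """Discount impact by the probability that other funders would have
    filled the funding gap anyway (so the marginal donation changed nothing)."""
    marginal_share = 1.0 - p_counterfactual_fill
    if marginal_share <= 0.0:
        raise ValueError("no marginal impact under these assumptions")
    return raw_cost_per_life / marginal_share

# If a program naively costs $5,000 per life saved, but there is a 30% chance
# other funding would have covered it anyway, the adjusted cost rises:
print(round(funging_adjusted_cost_per_life(5_000, 0.30), 2))  # → 7142.86
```

The division captures the core intuition: the higher the chance the money merely displaced other funding, the more lives-per-dollar estimates must be discounted.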

No comprehensive independent peer-reviewed evaluations of GiveWell's aggregate impact claims have been published. GiveWell's methodology is documented publicly, and interventions like insecticide-treated nets and deworming have been studied via randomized controlled trials (e.g., through J-PAL-affiliated researchers), but external assessments of whether GiveWell-recommended programs achieve their stated impact in aggregate have not appeared in major peer-reviewed outlets.

Animal Welfare

Animal welfare has emerged as an increasingly prominent EA cause area. According to a 2025 EA Forum post tracking wins, 92% of corporate cage-free egg commitments with 2024 or earlier deadlines have been fulfilled.5 Lewis Bollard's appearance on the Dwarkesh Podcast raised over $2 million for effective animal welfare charities, estimated to help approximately 4 million animals.5 The 2024 EA Survey showed animal welfare joining the top tier of cause prioritization among community members.30 Open Wing Alliance secured 141 new cage-free commitments in 2024 alone, per Open Philanthropy's 2024 progress report.17

Counterfactual note: Corporate cage-free commitments began accelerating in 2015–2016, partly driven by Humane Society campaigns and EU regulatory trends that predated significant EA funding of animal welfare campaigns. Open Wing Alliance (the main EA-funded vehicle) estimates it played a more decisive role in accelerating commitments in Southeast Asia and Latin America specifically, where independent consumer pressure was lower — a more plausible counterfactual claim than for US/EU markets.5 No published study or comparative campaign analysis has been identified that quantitatively estimates the geographic differential in EA-attributed impact versus broader societal trends.

Quality of fulfillment data: The 92% cage-free fulfillment figure tracks whether companies made formal policy transitions, not verified supply chain welfare outcomes at the farm level. The gap between corporate policy adoption and verified welfare improvement at the farm level remains an open measurement challenge.

AI Safety and Existential Risk Policy

The longtermist wing can point to growing policy influence. California and New York have enacted laws directly related to AI risk regulation, following advocacy efforts supported by EA-aligned organizations.5 The Existential Risk Persuasion Tournament (XPT), which ran June–October 2022 with roughly 50 domain experts and 50 superforecasters, found that approximately 42% of expert existential risk forecasters reported having attended an EA meetup, compared to 9% of superforecasters — the closest published proxy for EA community overlap with the x-risk researcher pool.31 This measures conference attendance, not career origin, and does not establish that EA was the primary cause of participants' entry into the field. AI safety has expanded significantly within AI/ML research communities, with historical ties to rationalist communities like LessWrong playing a role in building early researcher networks.32

A 2022 survey of AI safety researchers at organizations including OpenAI, DeepMind, FAR AI, Open Philanthropy, Rethink Priorities, MIRI, Redwood, and GovAI found that all respondents had at least familiarity with EA and/or rationalist communities, with most being actively involved in at least one — and that Superintelligence and 80,000 Hours writing were each mentioned by three people as influential on their decision to work on AI safety.32 This is consistent with substantial EA influence on early researcher recruitment but does not establish the proportion of current AI safety researchers at major labs who entered the field through EA channels; no such systematic survey has been published.

Counterfactual note on AI safety wins: The central attribution challenge is temporal. EA began funding AI safety in 2015 when it was a niche concern; by 2023–2024, AI safety was mainstream following ChatGPT's release. Policy wins like Bletchley and Seoul were substantially driven by post-ChatGPT political momentum rather than EA-specific advocacy. The field would have grown substantially with the commercial AI trajectory regardless of EA funding. EA's contribution is more plausibly attributed to earlier field-building (2015–2021) than to post-2022 policy outcomes. A "commonly cited estimate" that EA meaningfully accelerated AI safety policy timelines has circulated in community discussions, but no named source, published model, or methodology has been identified — this claim should be treated as a qualitative community judgment rather than a quantitative estimate.

Community and Funding Infrastructure

Giving What We Can reached 10,000 members taking the 10% pledge, with each pledge estimated to generate $15,000 in counterfactual donations to high-impact charities over a lifetime, per GWWC's Lifetime Giving Method.18 Partnership pledge drives at EA Global and EAGx events generated 203 new pledges during Q2–Q4 2024, estimated to produce $9.8 million in lifetime donations.14 This $9.8M figure is derived from GWWC's lifetime donation methodology applied to new pledges; the per-pledge lifetime estimate uses empirical GWWC retention data, external reference classes, and temporal discounts including inflation and global catastrophic risk uncertainty.18 Open Philanthropy directed $87 million to GiveWell-recommended charities in 2024.17
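As a sanity check, the per-pledge averages implied by these reported totals follow from simple arithmetic (the variable names are ours; GWWC's Lifetime Giving Method itself involves retention data and discounting that this sketch does not reproduce):

```python
# Back-of-envelope check of the GWWC figures quoted above. Plain arithmetic
# on the reported numbers only; not a reproduction of GWWC's methodology.

pledges_total = 10_000                   # members at the 10% pledge milestone
counterfactual_per_pledge = 15_000       # $ counterfactual lifetime donations per pledge
new_event_pledges = 203                  # pledges from Q2-Q4 2024 event drives
lifetime_from_event_pledges = 9_800_000  # $ estimated lifetime donations from those pledges

# Implied aggregate counterfactual value of the full pledge base:
print(pledges_total * counterfactual_per_pledge)               # → 150000000

# Implied average lifetime donation per new event pledge
# (gross, before any counterfactual adjustment):
print(round(lifetime_from_event_pledges / new_event_pledges))  # → 48276
```

The gap between the roughly $48K gross lifetime figure per event pledge and the $15K counterfactual figure per pledge presumably reflects GWWC's counterfactual and temporal discounting, though the evaluation itself should be consulted for the exact relationship.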


Documented Losses and Setbacks

The FTX Collapse

The collapse of FTX and the fraud conviction of Sam Bankman-Fried, who had publicly framed his financial activities using longtermist reasoning about maximizing utility, damaged EA's credibility and raised concerns about ends-justify-the-means thinking within utilitarian frameworks.6 Total EA grantmaking has been on a downward trend since 2022.12

The FTX Future Fund launched in February 2022 and shut down in November 2022. During its existence it made approximately $100 million in grants, with total grantee commitments of $160 million as of September 2022.21 The fund's team resigned en masse, stating they were "unable to perform our work or process grants" and had "fundamental questions about the legitimacy and integrity of the business operations" funding the Future Fund.33 All unfulfilled commitments were voided when FTX entered bankruptcy.

Among affected organizations: SBF's biggest grants went to pandemic prevention and EA institutions including CEA, the Long Term Future Fund, Lightcone Infrastructure, The Atlas Fellowship, and Constellation. University-affiliated research initiatives received more than $13 million in total; twenty academics at institutions including Cornell, Princeton, Brown, and Cambridge received individual grants exceeding $100,000 each.34 Other affected grantees included HelixNano and Our World in Data.35 The full organizational landscape — which entities closed versus found alternative funding — has not been comprehensively documented in any single public source.

CEA's own reports and the 2025 EA Forum funding data post document the subsequent multi-year decline in forum engagement, EAG attendance, EA Funds donations, and virtual programming participation that followed the collapse.1214

The Closure of the Future of Humanity Institute

The closure of the Future of Humanity Institute (FHI) at Oxford on April 16, 2024 represents the largest single institutional loss in the longtermist research ecosystem.7 Founded in 2005 within the Faculty of Philosophy and the Oxford Martin School, FHI was directed by Nick Bostrom, with staff including Anders Sandberg and Toby Ord. The institute coined much of the field's foundational terminology — "existential risk," "existential hope," "information hazard," "unilateralist's curse" — and hosted research that shaped global biosecurity and AI safety policy debates.

Beginning in 2020, the Faculty of Philosophy imposed a freeze on FHI fundraising and hiring. In late 2023, the Faculty announced it would not renew contracts of remaining FHI staff. Senior researchers attempted multiple rescue options — leadership restructuring, joining a college, spinning out of the university entirely, and an interdepartmental transfer to Physics — none succeeded. The University of Oxford's closing statement cited "increasing administrative headwinds within the Faculty of Philosophy."7 The specific reasons the Faculty imposed constraints on FHI have not been made fully public; factors cited in reporting include bureaucratic disputes over FHI's fast-moving, externally-networked operating style, personnel issues, and the controversy over a resurfaced 1996 email from Bostrom.36

FHI's final report, authored by Anders Sandberg, stated that "FHI's mission has replicated and spread and diversified" into dozens of organizations and thousands of individuals — noting successor institutions including the UK AI Safety Institute, the Global Priorities Institute (also at Oxford), and the Institute for Ethics in AI.7 Oxford retains other EA-adjacent research institutions, limiting but not eliminating the institutional loss.

Internal Tensions

The shift toward longtermism has generated friction within the EA community. Some EA advocates focused on global health and animal welfare have reported that longtermism and x-risk work has made their lives harder, worsened their reputations, and occupied valued community niches.8 The 2024 EA Survey revealed diverging priorities: highly engaged respondents rated AI risks, animal welfare, and EA movement building more highly, while less engaged members emphasized climate change, global health, and poverty.30 An EA Forum post on the relationship between the EA community and AI safety noted that AI safety dominance has alienated global health and development EAs, creating perceptions of exclusion at events, and that community members sometimes perceive EA as an AI/longtermism-only movement — a perception that risks talent loss in other cause areas.37

Funding Decline

While GiveWell has maintained relatively stable and growing funding from non-Open Philanthropy donors (approximately $415M raised in 2024), the broader EA funding landscape has contracted since the FTX collapse.122 Open Philanthropy's available assets fell by roughly half over the course of 2022, though roughly half of those losses had been recovered by 2023; this led Open Philanthropy to raise the cost-effectiveness bar for Global Health and Wellbeing grants by roughly a factor of two.38 Open Philanthropy does not publish audited balance sheets publicly; the "recovered half" figure appears in a Coefficient Giving summary of Open Philanthropy's own communications rather than in an independently audited source.38

MIRI's strategic pivot away from technical alignment research (announced January 2024) was one visible indicator of funding pressure and strategic reorientation at longtermist-focused organizations. MIRI's annual spending ranged from $5.4M–$7.7M during 2019–2023, with a high in 2020 and a low in 2022; projected 2024 spending was $5.6M. MIRI's own framing of the change was a strategic scaling back rather than a traditional layoff.15

Policy Setbacks

California SB 1047's veto by Governor Newsom in September 2024 was the most visible EA-supported AI safety policy loss. The bill would have imposed safety requirements on large AI model developers. Anthropic's position on the bill evolved: the company initially stated it did not support SB 1047 in its original form in July 2024, proposed specific amendments (including removing pre-harm civil penalty authority for the attorney general and shifting the legal standard from "reasonable assurance" to "reasonable care"), saw those amendments partially adopted, then offered cautious support for the amended version in August 2024 — while noting "there are still some aspects of the bill which seem concerning or ambiguous."16 At least 113 current and former AI company employees, including some at Anthropic, signed a letter to Newsom supporting the bill before his veto.39 Newsom vetoed the bill citing economic competitiveness concerns.


Counterfactual Attribution Analysis

Evaluating EA's impact requires distinguishing between outcomes EA caused and outcomes EA coincided with. The following framework summarizes attribution confidence across cause areas.

High-confidence EA attribution:

  • GiveWell's redirection of individual donor funds to specific charities: GiveWell's recommendation methodology is the proximate cause of most directed funds; without GiveWell, many donors would have given to lower-impact charities or not at all. GiveWell's counterfactual methodology is documented publicly, though specific estimates vary by grantee and funding round.29
  • Charity Entrepreneurship incubatees: organizations including LEEP that would not exist without CE's program and which have documented policy wins in multiple countries.
  • Giving What We Can pledges: the pledge mechanism directly influenced donation behavior for 10,000+ members, with methodology documented in GWWC's 2023–2024 impact evaluation.18

Moderate-confidence EA attribution:

  • AI safety researcher pipeline (pre-2022): EA significantly influenced career entry for many researchers now at AI safety organizations. Evidence includes the XPT data (42% of expert x-risk forecasters attended EA meetups)31 and qualitative surveys showing universal familiarity with EA among early AI safety researchers.32 EA's marginal contribution to the field's current size is harder to isolate given the post-2022 commercial AI boom.
  • Corporate animal welfare commitments in emerging markets: EA-funded campaigns have a more plausible counterfactual impact in markets with less independent consumer pressure (Southeast Asia, Latin America) than in US/EU markets where broader societal trends were also driving change. No published comparative analysis quantifying this geographic differential has been identified.
  • Open Philanthropy's R21 co-funding: contributory but not decisive; Wellcome Trust and Serum Institute of India were primary funders.

Low-confidence EA attribution:

  • Multilateral AI safety declarations (Bletchley, Seoul): post-ChatGPT political momentum was the primary driver; EA-aligned researchers contributed technical input but no named individuals or government acknowledgments documenting EA's specific causal role have been published.
  • EU AI Act provisions: primarily driven by EU parliamentary process and civil society groups; EA input was marginal.
  • US AI safety mainstream adoption post-2022: the field scaled with AI capabilities, not primarily with EA funding.

Active attribution disputes:

  • Whether longtermist cause prioritization crowded out neartermist funding. Global health advocates argue yes; longtermists argue the donor pools are largely separate. Open Phil's own published allocation data (70% global health / 30% longtermist through 2022) partially addresses this for financial capital, but the question of talent and attention allocation is harder to resolve.22
  • Whether EA's early AI safety focus shaped OpenAI's and Anthropic's safety cultures, versus those organizations developing safety norms independently.

Criticisms and Limitations

Longtermist Modeling Vulnerabilities

Research published on the EA Forum has identified critical vulnerabilities in longtermist calculations: small changes in baseline existential risk assumptions can dominate tenfold differences in period risk reduction estimates, and the value of existential risk mitigation remains highly sensitive to estimated future risk levels.9 The "time of perils hypothesis" — used to defend both the claim that existential risks are high and that reducing them is especially valuable — requires speculative claims about how risks will diminish after a certain period.9
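This sensitivity can be made concrete with a toy model (ours, not the cited post's): assume a constant per-century extinction risk r after the current period, so the expected number of future centuries is (1 − r)/r, and a reduction δ in this period's risk is worth δ · (1 − r)/r centuries in expectation.

```python
# Toy model of baseline-risk sensitivity (illustrative only; not the model
# from the cited EA Forum post). With constant per-century risk r, the
# expected number of future centuries is sum over t >= 1 of (1-r)^t = (1-r)/r.

def value_of_risk_reduction(delta: float, baseline_r: float) -> float:
    """Expected future centuries gained by cutting this period's risk by delta."""
    expected_future_centuries = (1.0 - baseline_r) / baseline_r
    return delta * expected_future_centuries

# A 10x larger risk cut under a pessimistic baseline (20% per century)...
big_cut_high_risk = value_of_risk_reduction(0.10, 0.20)    # 0.4 centuries
# ...is worth far less than a 10x smaller cut under an optimistic baseline:
small_cut_low_risk = value_of_risk_reduction(0.01, 0.001)  # ~9.99 centuries
print(big_cut_high_risk < small_cut_low_risk)  # → True
```

Here the baseline assumption, not the size of the reduction, drives the result — matching the claim that baseline-risk assumptions can dominate tenfold differences in period risk reduction estimates.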

Cognitive psychologist Steven Pinker has critiqued longtermism for potentially prioritizing any scenario, no matter how improbable, as long as it can be framed as having arbitrarily large effects far in the future.40 A related concern is that longtermism's predictive confidence should be very low given the exponential branching of possible futures.40

Systemic and Philosophical Objections

Critics argue that EA focuses on symptoms via charity rather than addressing root causes like institutional reform, debt, and power inequality. Philosopher Amia Srinivasan has argued that "Effective Altruism doesn't try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is."41 Academic formulations of this institutional critique contend that by insisting on quantifying the value of actions in terms of potential effects and probabilities of success, EA "effectively endorses the prevailing global capitalist institutional order — precisely the order that must be changed to address global poverty meaningfully."42 Nathan Robinson published a widely discussed critique in Current Affairs in 2023 making related institutional arguments.43 Brian Berkey's peer-reviewed "Institutional Critique of Effective Altruism" (Utilitas, Cambridge, 2017) formalizes these claims in academic philosophy.42

EA proponents have responded to institutional critiques by noting that 80,000 Hours' stated priorities include "mitigating great power conflict," "global governance," and "space governance" — pointing to engagement with structural questions that go beyond individual charity.44

Evidence Standards and Epistemic Concerns

GiveWell itself has acknowledged that seeking strong evidence and a straightforward, documented case for impact can be in tension with maximizing impact — reflecting a deep epistemic split within EA between evidence-based giving and longtermism's higher-risk, higher-reward approach.6 Jacob Steinhardt has argued that EA has a history of making overconfident claims with insufficient research, citing Peter Singer's 2009 claim that a life could be saved for $200 — a figure substantially revised by 2011.45

Ben Kuhn has argued that EA exhibits "epistemic inertia," where community consensus becomes less responsive to new evidence and arguments, partly because group norms fail to adequately guard against motivated cognition.46


Current Strategic Direction

CEA is pursuing what it describes as a growth-focused stewardship strategy for 2025–2026, aiming to build sustainable momentum. The organization acknowledges that meaningful results may not materialize until the end of 2026, as it requires building new infrastructure and iterating on strategy.14

Will MacAskill has argued in a widely discussed October 2025 post that EA should expand beyond traditional cause areas to address the transition to a post-AGI society, rejecting both "legacy movement" and "refocus on classic causes" framings. MacAskill specified that in terms of what people do, perhaps 15% of EA effort (scaling to 30% or more over time) should be primarily working on cause areas beyond the "classic" four (AI safety, biorisk, animal welfare, global health) — including AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, and space governance. He further proposed that in terms of curriculum content, perhaps 30% or more should cover non-classic cause areas.47 These figures are from MacAskill's own October 2025 post, based on a memo written for the Meta Coordination Forum, and are not derived from an independent survey or community mandate.

GiveWell continues recommending charities in mental health, malnutrition, and lead exposure while broadening its work beyond charity evaluation to identify new grantees and explore underexplored cause areas.48 Longtermist projects represented less than one-third of total EA funding as of August 2022, with the majority of EA work falling outside longtermist frameworks.49


Key Uncertainties

  • Post-FTX funding trajectory: Whether EA grantmaking stabilizes at current levels, continues declining, or recovers remains unclear. Open Philanthropy has begun efforts to diversify its donor base beyond Good Ventures.17 No comprehensive public breakdown of total EA-aligned giving for 2023 or 2024 across all funders has been published, making quantification of the post-2022 decline difficult.
  • AI safety policy impact: While California and New York have passed AI safety legislation, the actual effect of these bills on reducing catastrophic AI risk is untested and contested.
  • Longtermist epistemology: The sensitivity of longtermist calculations to baseline assumptions about existential risk levels means that the case for longtermist interventions could strengthen or weaken substantially with new evidence.9
  • Community cohesion: Whether EA can maintain unity across its neartermist and longtermist wings, or whether increasing specialization leads to effective fragmentation, remains an open question.830
  • Reputational recovery: The extent to which the FTX scandal permanently affected EA's ability to attract talent and funding versus representing a temporary setback is still unfolding.6
  • AI safety attribution: As AI safety becomes mainstream post-ChatGPT, EA's distinctive comparative advantage in the field may diminish even as absolute activity increases — a strategic question the community has not fully resolved.37
  • Cage-free commitment quality: The 92% fulfillment figure for commitments with 2024 deadlines tracks whether companies made formal policy transitions, not verified supply chain welfare outcomes at the farm level. The gap between corporate policy adoption and verified welfare improvement remains an open measurement challenge.
  • FHI institutional legacy: Whether the closure of FHI represents a net setback for longtermist research — or whether successor organizations (UK AISI, Global Priorities Institute, etc.) have absorbed and expanded FHI's research agenda — is contested within the community.7
  • Mental health as EA cause area: StrongMinds is recommended within the EA ecosystem as a cost-effective mental health charity, but no aggregate impact metrics for mental health interventions comparable to the lives-saved estimates for malaria or deworming programs have been published in this page's tracked data.

Sources

Footnotes

  1. Effective Altruism - Wikipedia

  2. GiveWell 2024 Metrics and Impact

  3. Against Malaria Foundation — Impact Metrics; GiveWell — Against Malaria Foundation Review

  4. CEA Is Growing Again: 25% More People Engaged - EA Forum; Building Sustainable Momentum: Progress Report on CEA's 2025–26 Strategy - EA Forum (February 2026)

  5. Celebrating Wins Discussion Thread - EA Forum

  6. How Effective Altruism Lost Its Way - Quillette

  7. Future of Humanity Institute - Wikipedia; FHI Closing Statement - fhi.ox.ac.uk

  8. EA and Longtermism: Not a Crux for Saving the World - EA Forum

  9. Sensitive Assumptions in Longtermist Modeling - EA Forum

  10. Effective Altruism Is Growing Fast - 80,000 Hours (2021)

  11. Open Philanthropy — Our Grants Database

  12. Historical EA Funding Data: 2025 Update - EA Forum

  13. Open Philanthropy — Our Grants

  14. Stewardship: CEA's 2025-26 Strategy - EA Forum

  15. MIRI's 2024 End-of-Year Update - intelligence.org

  16. Anthropic CEO Backs New California AI Legislation, with Some Reservations - Pure AI (August 23, 2024); Anthropic Does Not Support California AI Bill SB 1047 - Axios (July 25, 2024)

  17. Our Progress in 2024 and Plans for 2025 - Open Philanthropy

  18. Giving What We Can Impact Evaluation 2023–2024

  19. Centre for Effective Altruism - Wikipedia

  20. Effective Altruism, Longtermism, and William MacAskill Interview - TIME

  21. FTX Future Fund — About (archived)

  22. Citation rc-6648 (data unavailable — rebuild with wiki-server access)

  23. Growth and Engagement in EA Groups: 2022 Groups Census Results - EA Forum

  24. Centre for Effective Altruism Global Reach

  25. What Has the CEA Uni Groups Team Been Up To? – Our 2023/2024 Review - EA Forum

  26. The 2024 EA Survey — Cause Prioritization - Rethink Priorities

  27. Evidence Action Expands to Tanzania to Eliminate Parasitic Worms as a Public Health Problem - Evidence Action

  28. Deworm the World: Update 2025 - GFDW

  29. 2023 Cost-Effectiveness Analysis Changelog - GiveWell

  30. EA Community Cause Prioritization - Rethink Priorities Research Digest

  31. Announcing 'Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament' - EA Forum

  32. AI Safety and Neighboring Communities: A Quick Start Guide - Alignment Forum

  33. The FTX Future Fund Team Has Resigned - EA Forum

  34. Sam Bankman-Fried's Charitable Empire Included Hundreds of Millions to EA Institutions - Inside Philanthropy (2023)

  35. Grantees Affected by FTX Collapse Including Our World in Data - BBC (2022)

  36. FHI Is Closing — Discussion Thread - LessWrong (April 2024)

  37. Relationship Between EA Community and AI Safety - EA Forum

  38. Our Progress in 2023 and Plans for 2024 - Open Philanthropy (via Coefficient Giving)

  39. Over 100 AI Employees Wrote to Gov. Newsom Urging Him to Sign SB 1047 - Wired (September 2024)

  40. How Effective Altruism Lost Its Way - Quillette

  41. Why Am I Not an Effective Altruist? - Why Philanthropy Matters (citing Srinivasan)

  42. The Institutional Critique of Effective Altruism - Brian Berkey, Utilitas (2017)

  43. Why Effective Altruism and Longtermism Are Toxic Ideologies - Current Affairs

  44. 80,000 Hours Problem Profiles

  45. Another Critique of Effective Altruism - LessWrong

  46. Citation rc-1e66 (data unavailable — rebuild with wiki-server access)

  47. Effective Altruism in the Age of AGI - Will MacAskill Substack

  48. EA Organization Updates Thread: February 2026 - EA Forum

  49. Longtermism - Wikipedia

References

Claims (1)
| 2025 | Community | Giving What We Can hits 10,000 pledges | Win | Milestone reported in GWWC communications; each pledge estimated to generate \$15,000 in counterfactual donations to high-impact charities over a lifetime, per GWWC's documented Lifetime Giving Method. |
Claims (1)
Ben Kuhn has argued that EA exhibits "epistemic inertia," where community consensus becomes less responsive to new evidence and arguments, partly because group norms fail to adequately guard against motivated cognition.
Accurate100%Feb 22, 2026
Both of these phenomena add what I call &ldquo;epistemic inertia&rdquo; to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments and preventing the movement from moving forward.
Claims (1)
The movement has achieved substantial concrete wins: GiveWell raised \$415 million and directed \$397 million to cost-effective programs in metrics year 2024, the Against Malaria Foundation — consistently one of GiveWell's top-recommended organizations — has protected over 667 million people, and CEA reports strong engagement growth in 2025 after post-FTX declines. On the longtermist side, hundreds of millions of dollars have flowed to existential risk research, AI safety work has gained mainstream policy traction, and California has signed bills directly regulating AI risk.
Inaccurate75%Feb 22, 2026
Thanks to the generosity of more than 30,000 donors, GiveWell raised $415 million and directed $397 million to cost-effective programs in metrics year 2024 (February 2024 to January 2025).

unsupported: The claim that the Against Malaria Foundation has protected over 667 million people is not supported by the source. unsupported: The claim that CEA reports strong engagement growth in 2025 after post-FTX declines is not supported by the source. unsupported: The claim that hundreds of millions of dollars have flowed to existential risk research is not supported by the source. unsupported: The claim that AI safety work has gained mainstream policy traction is not supported by the source. unsupported: The claim that California has signed bills directly regulating AI risk is not supported by the source.

4EA Organization Updates Thread: February 2026 - EA Forumforum.effectivealtruism.org·Blog post
Claims (1)
GiveWell continues recommending charities in mental health, malnutrition, and lead exposure while broadening its work beyond charity evaluation to identify new grantees and explore underexplored cause areas. Longtermist projects represented less than one-third of total EA funding as of August 2022, with the majority of EA work falling outside longtermist frameworks.
Minor issues80%Feb 22, 2026
We still recommend charities working on mental health, malnutnitrion and lead exposure.

The claim states that longtermist projects represented less than one-third of total EA funding as of August 2022, with the majority of EA work falling outside longtermist frameworks; this information is not present in the provided source text. The source mentions that GiveWell continues recommending charities in mental health, malnutrition, and lead exposure, but does not say that it is broadening its work beyond charity evaluation.

Claims (1)
Will MacAskill's 2022 book What We Owe the Future brought longtermism to mainstream audiences, achieving bestseller status. That same year, the FTX Future Fund had committed more than \$130 million in grants, mostly to longtermist causes, within four months of launching in February 2022.
Claims (1)
GiveWell itself has acknowledged that seeking strong evidence and a straightforward, documented case for impact can be in tension with maximizing impact — reflecting a deep epistemic split within EA between evidence-based giving and longtermism's higher-risk, higher-reward approach. Jacob Steinhardt has argued that EA has a history of making overconfident claims with insufficient research, citing Peter Singer's 2009 claim that a life could be saved for \$200 — a figure substantially revised by 2011.
Accurate · 90% · Feb 22, 2026
The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that the number was completely off.
Claims (1)
| 2024 | AI Safety | California SB 1047 vetoed by Governor Newsom | Loss | SB 1047 would have imposed safety requirements on large AI model developers. Anthropic helped influence revisions to the bill and announced support for the amended version in August 2024, with CEO Dario Amodei writing to Governor Newsom that "the new SB 1047 is substantially improved to the point where we believe its benefits likely outweigh its costs." Opponents of the bill, including OpenAI, Meta, Y Combinator, and Andreessen Horowitz, argued the bill's thresholds and liability provisions could stifle innovation. |
Minor issues · 90% · Feb 22, 2026
Opponents of the bill, which includes OpenAI, Meta, Y Combinator, and venture capital firm Andreessen Horowitz, argue that the bill's thresholds and liability provisions could stifle innovation and unfairly burden smaller developers.

The claim states that Dario Amodei wrote to Governor Newsom in August 2024, but the article specifies the letter was written on Aug. 21. The claim states that SB 1047 would have imposed safety requirements on large AI model developers, but the article states that the bill mandates safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power.

Claims (1)
California and New York have signed bills directly related to AI risk regulation, following advocacy efforts supported by EA-aligned organizations. The Existential Risk Persuasion Tournament (XPT), which ran June–October 2022 (approximately 50 domain experts, approximately 50 superforecasters), found that approximately 42% of expert existential risk forecasters reported having attended an EA meetup, compared to 9% of superforecasters — providing the closest published proxy for EA community overlap with the x-risk researcher pool. This measures conference attendance, not career origin, and does not establish that EA was the primary cause of participants' entry into the field.
Accurate · 100% · Feb 22, 2026
"The sample drew heavily from the Effective Altruism (EA) community: about 42% of experts and 9% of superforecasters reported that they had attended an EA meetup".
Claims (1)
GiveWell applies a "leverage and funging adjustment" to cost-effectiveness estimates based on the probability that other counterfactual funding scenarios would occur in the absence of their charities' philanthropic spending, and applies this methodology to specific grantees.
Accurate · 100% · Feb 22, 2026
To account for this, we apply a “leverage and funging adjustment” to our cost-effectiveness estimates, based in part on the probability that other counterfactual funding scenarios would occur in the absence of our charities’ philanthropic spending.
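The mechanics of such an adjustment can be sketched as a simple expected-value discount. This is a hypothetical illustration (the function name, scenario numbers, and structure are invented for exposition), not GiveWell's actual model:

```python
# Hypothetical sketch of a "leverage and funging" adjustment: discount a
# raw cost-effectiveness estimate by the expected share of impact that
# other funders would have produced anyway. Illustrative only.

def adjusted_cost_effectiveness(raw_value_per_dollar, scenarios):
    """scenarios: list of (probability, fraction_counterfactually_covered)."""
    expected_funged = sum(p * f for p, f in scenarios)
    return raw_value_per_dollar * (1 - expected_funged)

# Example: 30% chance another funder would cover the whole program,
# 20% chance government funding would cover half of it.
scenarios = [(0.3, 1.0), (0.2, 0.5)]
print(adjusted_cost_effectiveness(10.0, scenarios))  # 10 * (1 - 0.4) = 6.0
```

The key design choice is that funging enters multiplicatively: a program fully funged with probability 0.4 keeps only 60% of its estimated value per dollar.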
Claims (1)
In this sense it leaves everything just as it is." In academic formulations, Srinivasan contends that by insisting on quantifying the value of actions in terms of potential effects and probabilities of success, EA "effectively endorses the prevailing global capitalist institutional order — precisely the order that must be changed to address global poverty meaningfully." Nathan Robinson published a widely discussed critique in Current Affairs in 2023 making related institutional arguments. Brian Berkey's peer-reviewed "Institutional Critique of Effective Altruism" (Utilitas, Cambridge, 2017) formalizes these claims in academic philosophy.
Minor issues · 85% · Feb 22, 2026
As the philosopher Amia Srinivasan wrote in a widely read critique of Will MacAskill’s 2015 book Giving What We Can : “Effective Altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is. This is no doubt comforting to those who enjoy the status quo – and may in part account for the movement’s success.”

The claim mentions Nathan Robinson published a critique in *Current Affairs* in 2023, but the source does not mention this. The claim mentions Brian Berkey's peer-reviewed "Institutional Critique of Effective Altruism" (*Utilitas*, Cambridge, 2017), but the source does not mention this.

Claims (1)
In this sense it leaves everything just as it is." In academic formulations, Srinivasan contends that by insisting on quantifying the value of actions in terms of potential effects and probabilities of success, EA "effectively endorses the prevailing global capitalist institutional order — precisely the order that must be changed to address global poverty meaningfully." Nathan Robinson published a widely discussed critique in Current Affairs in 2023 making related institutional arguments. Brian Berkey's peer-reviewed "Institutional Critique of Effective Altruism" (Utilitas, Cambridge, 2017) formalizes these claims in academic philosophy.
Accurate · 100% · Feb 22, 2026
In recent years, the effective altruism movement has generated much discussion about the ways in which we can most effectively improve the lives of the global poor, and pursue other morally important goals. One of the most common criticisms of the movement is that it has unjustifiably neglected issues related to institutional change that could address the root causes of poverty, and instead focused its attention on encouraging individuals to direct resources to organizations that directly aid people living in poverty.
12Open Philanthropy Wikipediaen.wikipedia.org·Reference
Claims (1)
Open Philanthropy recommended over \$400 million in grants in 2021, including \$300 million in support for GiveWell's recommendations (up from \$100 million in 2020). Through 2022, roughly 70% of Open Philanthropy's total funding went toward areas in its Global Health and Wellbeing portfolio, and 30% went toward areas in its longtermist portfolio — making the "EA = longtermism" characterization inaccurate as a description of funding distribution. Following the FTX collapse in late 2022, Open Philanthropy paused most new longtermist funding commitments pending further review.
Claims (1)
While GiveWell has maintained relatively stable and growing funding from non-Open Philanthropy donors (approximately \$415M raised in 2024), the broader EA funding landscape contracted since the FTX collapse. Open Philanthropy's available assets fell by roughly half over the course of 2022, though they had since recovered about half of the total losses by 2023; this led Open Philanthropy to raise the cost-effectiveness bar for Global Health and Wellbeing grants by roughly a factor of two. Open Philanthropy does not publish audited balance sheets publicly; the "recovered half" figure appears in a Coefficient Giving summary of Open Philanthropy's own communications rather than in an independently audited source.
Claims (1)
The collapse of FTX and fraud conviction of Sam Bankman-Fried — who had publicly framed his financial activities in longtermist terms — severely damaged EA's credibility. The closure of the Future of Humanity Institute at Oxford in April 2024 — the field's founding research institution — marked the largest single institutional loss in the longtermist research ecosystem. Internal tensions between neartermist and longtermist factions persist, with some global health and animal welfare advocates reporting that the longtermist turn has made their work harder and worsened their reputations. Critics from multiple directions challenge longtermism's epistemological foundations, arguing that its calculations are highly sensitive to speculative assumptions about the far future.
Inaccurate · 30% · Feb 22, 2026
At least some EAs focused on global health and wellbeing, and on animal welfare, feel that we are making their lives harder, worsening their reputations, and occupying niches they value with LT/x-risk stuff (like making EAG disproportionately x-risk/LT-focused).

unsupported ×4: none of the four claims in this passage is supported by the source.

15. Sensitive Assumptions in Longtermist Modeling - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
The collapse of FTX and fraud conviction of Sam Bankman-Fried — who had publicly framed his financial activities in longtermist terms — severely damaged EA's credibility. The closure of the Future of Humanity Institute at Oxford in April 2024 — the field's founding research institution — marked the largest single institutional loss in the longtermist research ecosystem. Internal tensions between neartermist and longtermist factions persist, with some global health and animal welfare advocates reporting that the longtermist turn has made their work harder and worsened their reputations. Critics from multiple directions challenge longtermism's epistemological foundations, arguing that its calculations are highly sensitive to speculative assumptions about the far future.
Inaccurate · 30% · Feb 22, 2026
Sensitive assumptions in longtermist modeling — EA Forum

unsupported ×3, misleading_paraphrase ×1.

16. Centre for Effective Altruism Global Reach · centreforeffectivealtruism.org
Claims (1)
Community size and demographics: As of January 2023, there were 362 known active EA groups worldwide — up from 233 in late 2020 (approximately 55% growth over two years), with active groups in 56 countries. CEA supports EA groups across many countries and has held EAGx conferences internationally. In 2024, 189 organizers from 34 countries participated in CEA's Organizer Support Programme. The 2024 EA Survey (conducted by Rethink Priorities) found that the USA (34.4%) and UK (13.5%) remained the countries with the largest proportions of EA respondents; 32% came from Europe (excluding the UK) and 20% from the rest of the world. Local group membership rates varied substantially by country: US respondents (21.9%) and UK respondents (25.8%) had among the lowest rates of local group membership, while many countries showed 40–80% local group membership rates among survey respondents.
Claims (1)
Community size and demographics: As of January 2023, there were 362 known active EA groups worldwide — up from 233 in late 2020 (approximately 55% growth over two years), with active groups in 56 countries. CEA supports EA groups across many countries and has held EAGx conferences internationally. In 2024, 189 organizers from 34 countries participated in CEA's Organizer Support Programme. The 2024 EA Survey (conducted by Rethink Priorities) found that the USA (34.4%) and UK (13.5%) remained the countries with the largest proportions of EA respondents; 32% came from Europe (excluding the UK) and 20% from the rest of the world. Local group membership rates varied substantially by country: US respondents (21.9%) and UK respondents (25.8%) had among the lowest rates of local group membership, while many countries showed 40–80% local group membership rates among survey respondents.
Minor issues · 85% · Feb 22, 2026
To get a more accurate growth rate between 2020 and 2022, we could compare the total number of known active EA groups at the end of 2020 and 2022. Based on the 2020 survey , there were 233 known active EA groups as of late 2020. [12] As of January 2023, there were 362 known active EA groups. This suggests a growth rate of around 55% over the two years.

The source only mentions data up to January 2023, and does not include any information about 2024. The source does not mention CEA holding EAGx conferences internationally. The source does not mention the 2024 EA Survey or the statistics related to it.
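The ~55% growth figure in the quoted passage can be reproduced directly from the two group counts it cites:

```python
# Reproducing the group-growth figure quoted above.
groups_2020 = 233   # known active EA groups, late 2020
groups_2023 = 362   # known active EA groups, January 2023

growth = (groups_2023 - groups_2020) / groups_2020
print(f"{growth:.1%}")  # 55.4%, i.e. "around 55% over the two years"
```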

Claims (1)
Community size and demographics: As of January 2023, there were 362 known active EA groups worldwide — up from 233 in late 2020 (approximately 55% growth over two years), with active groups in 56 countries. CEA supports EA groups across many countries and has held EAGx conferences internationally. In 2024, 189 organizers from 34 countries participated in CEA's Organizer Support Programme. The 2024 EA Survey (conducted by Rethink Priorities) found that the USA (34.4%) and UK (13.5%) remained the countries with the largest proportions of EA respondents; 32% came from Europe (excluding the UK) and 20% from the rest of the world. Local group membership rates varied substantially by country: US respondents (21.9%) and UK respondents (25.8%) had among the lowest rates of local group membership, while many countries showed 40–80% local group membership rates among survey respondents.
Claims (1)
AI safety has expanded significantly within AI/ML research communities, with historical ties to rationalist communities like LessWrong playing a role in building early researcher networks.
20. Celebrating Wins Discussion Thread - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
The movement has achieved substantial concrete wins: GiveWell raised \$415 million and directed \$397 million to cost-effective programs in metrics year 2024, the Against Malaria Foundation — consistently one of GiveWell's top-recommended organizations — has protected over 667 million people, and CEA reports strong engagement growth in 2025 after post-FTX declines. On the longtermist side, hundreds of millions of dollars have flowed to existential risk research, AI safety work has gained mainstream policy traction, and California has signed bills directly regulating AI risk.
Minor issues · 80% · Feb 22, 2026
CEA is growing again: 25% more people engaged with our programs in 2025

The claim states that GiveWell raised $415 million and directed $397 million to cost-effective programs in metrics year 2024, but this information is not found in the source. The source only mentions GiveWell announcing its largest single grant ever of $96.3 million. The claim states that the Against Malaria Foundation has protected over 667 million people, but the source only mentions that AMF has prevented an estimated 270,000 deaths since its inception. The claim states that CEA reports strong engagement growth in 2025 after post-FTX declines, but the source states that CEA is growing again with 25% more people engaged with their programs in 2025. The claim states that California has signed bills directly regulating AI risk, but the source states that California signed the first USA bill to directly regulate AI catastrophic risk into law, and New York followed suit.

21. Centre for Effective Altruism - Wikipedia · en.wikipedia.org · Reference
Claims (1)
EA emerged from proto-communities in the late 2000s and early 2010s, coalescing around organizations like GiveWell (founded 2006), Giving What We Can (founded 2009 at Oxford by Toby Ord), and 80,000 Hours (founded 2011). CEA was founded in 2012 as an umbrella organization for GWWC and 80,000 Hours, and the term "effective altruism" was coined that year. By January 2013, GWWC had welcomed its 300th member; by 2022, more than 7,000 people from 95 countries had taken a pledge, collectively representing over \$2.5 billion in pledged lifetime donations. An estimated \$416 million was donated to effective charities identified by the movement in 2019, representing approximately 37% annual growth since 2015.
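As a rough sanity check on the growth claim above (an illustrative back-calculation, not a sourced figure), $416 million donated in 2019 at ~37% annual growth since 2015 implies roughly $118 million donated in 2015:

```python
# Illustrative back-calculation: what 2015 baseline is implied by
# $416M in 2019 at ~37% compound annual growth? (Not a sourced figure.)
donated_2019 = 416e6
annual_growth = 0.37
years = 4  # 2015 -> 2019

implied_2015 = donated_2019 / (1 + annual_growth) ** years
print(f"${implied_2015 / 1e6:.0f}M")  # ≈ $118M
```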
Claims (1)
He further proposed that in terms of curriculum content, perhaps 30% or more should cover non-classic cause areas. These figures are from MacAskill's own October 2025 post, based on a memo written for the Meta Coordination Forum, and are not derived from an independent survey or community mandate.
23. CEA Is Growing Again: 25% More People Engaged - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
The movement has achieved substantial concrete wins: GiveWell raised \$415 million and directed \$397 million to cost-effective programs in metrics year 2024, the Against Malaria Foundation — consistently one of GiveWell's top-recommended organizations — has protected over 667 million people, and CEA reports strong engagement growth in 2025 after post-FTX declines. On the longtermist side, hundreds of millions of dollars have flowed to existential risk research, AI safety work has gained mainstream policy traction, and California has signed bills directly regulating AI risk.
Inaccurate · 30% · Feb 22, 2026
CEA is growing again: 25% more people engaged with our programs in 2025 — EA Forum

wrong_numbers: GiveWell amounts are not mentioned in the source. wrong_numbers: Against Malaria Foundation numbers are not mentioned in the source. overclaim: The source only discusses CEA's growth, not the entire EA movement. misleading_paraphrase: The claim implies that AI safety's policy traction and California's AI-risk bills are achievements of the EA movement, but the source does not mention these. fabricated_details: The source does not mention hundreds of millions of dollars flowing to existential risk research.

Claims (1)
In this sense it leaves everything just as it is." In academic formulations, Srinivasan contends that by insisting on quantifying the value of actions in terms of potential effects and probabilities of success, EA "effectively endorses the prevailing global capitalist institutional order — precisely the order that must be changed to address global poverty meaningfully." Nathan Robinson published a widely discussed critique in Current Affairs in 2023 making related institutional arguments. Brian Berkey's peer-reviewed "Institutional Critique of Effective Altruism" (Utilitas, Cambridge, 2017) formalizes these claims in academic philosophy.
Nathan Robinson published a widely discussed critique in *Current Affairs* in 2023 making related institutional arguments.
25. Historical EA Funding Data: 2025 Update - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
The EA Forum's [2025 funding data post](https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update) notes that self-reported organizational metrics are difficult to compare across cause areas.
Claims (2)
The collapse of FTX and fraud conviction of Sam Bankman-Fried — who had publicly framed his financial activities in longtermist terms — severely damaged EA's credibility. The closure of the Future of Humanity Institute at Oxford in April 2024 — the field's founding research institution — marked the largest single institutional loss in the longtermist research ecosystem. Internal tensions between neartermist and longtermist factions persist, with some global health and animal welfare advocates reporting that the longtermist turn has made their work harder and worsened their reputations. Critics from multiple directions challenge longtermism's epistemological foundations, arguing that its calculations are highly sensitive to speculative assumptions about the far future.
Minor issues · 85% · Feb 22, 2026
When the cryptocurrency exchange FTX imploded in November 2022, the focus quickly turned to the motivations of its founder and CEO, Sam Bankman-Fried.

The article mentions the conviction of Sam Bankman-Fried on charges of fraud and money laundering, but does not explicitly state that this damaged EA's credibility. This is implied, but not directly stated. The article does not mention the closure of the Future of Humanity Institute (FHI) at Oxford in April 2024. It only mentions that Nick Bostrom directs Oxford’s Future of Humanity Institute and that Ord is a senior research fellow at Oxford’s Future of Humanity Institute. The article does not explicitly state that the FHI closure was the largest single institutional loss in the longtermist research ecosystem. This is an overclaim. The article does not explicitly state that critics challenge longtermism's epistemological foundations, but it does mention that there is a disconnect between an almost-neurotic focus on hard evidence of effectiveness in some areas of EA and a willingness to accept extremely abstract and conjectural “evidence” in others.

Philosopher Steven Pinker has critiqued longtermism for potentially prioritizing any scenario, no matter how improbable, as long as it can be framed as having arbitrarily large effects far in the future. A related concern is that longtermism's predictive confidence should be very low given the exponential branching of possible futures.
Accurate · 100% · Feb 22, 2026
Pinker has explained what’s wrong with this approach, arguing that longtermism “runs the danger of prioritizing any outlandish scenario, no matter how improbable, as long as you can visualize it having arbitrarily large effects far in the future.”
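The branching concern can be made concrete with a toy model (invented here for illustration; neither the branching factor nor the functional form comes from the source): if each period the world can evolve in k roughly equally likely directions, the probability of pinpointing one specific trajectory T periods out is k^-T.

```python
# Toy model of the branching-futures objection: with k roughly equally
# likely branches per period, the chance of correctly forecasting one
# specific long-run path decays geometrically with horizon length.
def path_probability(branches_per_period, periods):
    return branches_per_period ** -periods

for decades in (1, 5, 10):
    print(decades, path_probability(3, decades))
```

Even a modest branching factor of 3 drives the probability of a specific century-scale path below one in fifty thousand, which is the intuition behind the "very low predictive confidence" point.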
27. The FTX Future Fund Team Has Resigned - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
During its existence, it made grants worth approximately \$100 million and committed to \$160 million in total grantee commitments as of September 2022. The fund's team resigned en masse, stating they were "unable to perform our work or process grants" and had "fundamental questions about the legitimacy and integrity of the business operations" funding the Future Fund. All unfulfilled commitments were voided when FTX entered bankruptcy.
Claims (1)
A GiveWell-recommended \$4.4 million renewal grant secures programs in India, Kenya, Nigeria, and Pakistan until 2026.
Accurate · 100% · Feb 22, 2026
Funding : A GiveWell‑recommended US $4.4 million renewal grant secures programmes in India, Kenya, Nigeria and Pakistan until 2026 (GiveWell 2024) .
Claims (1)
Will MacAskill's 2022 book What We Owe the Future brought longtermism to mainstream audiences, achieving bestseller status. That same year, the FTX Future Fund had committed more than \$130 million in grants, mostly to longtermist causes, within four months of launching in February 2022.
Minor issues · 90% · Feb 22, 2026
Four months after launching in February this year, the FTX Future Fund had committed more than $130 million in grants, mostly to longtermist causes.

The claim that *What We Owe the Future* achieved bestseller status is not directly supported by the source. The source only mentions the book's existence and its arguments. The source states that the FTX Future Fund committed more than $130 million in grants within four months of launching in February 2022, but it does not explicitly state that these grants were 'mostly to longtermist causes.' While the article mentions the fund's connection to longtermism, it doesn't specify the exact proportion of grants allocated to such causes.

30. Longtermism - Wikipedia · en.wikipedia.org · Reference
Claims (1)
GiveWell continues recommending charities in mental health, malnutrition, and lead exposure while broadening its work beyond charity evaluation to identify new grantees and explore underexplored cause areas. Longtermist projects represented less than one-third of total EA funding as of August 2022, with the majority of EA work falling outside longtermist frameworks.
31. Future of Humanity Institute - Wikipedia · en.wikipedia.org · Reference
Claims (1)
The collapse of FTX and fraud conviction of Sam Bankman-Fried — who had publicly framed his financial activities in longtermist terms — severely damaged EA's credibility. The closure of the Future of Humanity Institute at Oxford in April 2024 — the field's founding research institution — marked the largest single institutional loss in the longtermist research ecosystem. Internal tensions between neartermist and longtermist factions persist, with some global health and animal welfare advocates reporting that the longtermist turn has made their work harder and worsened their reputations. Critics from multiple directions challenge longtermism's epistemological foundations, arguing that its calculations are highly sensitive to speculative assumptions about the far future.
Claims (1)
Evidence Action's Deworm the World program launched in Tanzania in January 2026, targeting more than 10 million children currently at risk of or infected with soil-transmitted helminths and/or schistosomiasis, partnering with the government to provide deworming after external funding for the program was suspended in 2025. Globally, Deworm the World reached 198 million children in India, Kenya, Nigeria, Pakistan, and Malawi in 2024 alone — a record number — and has delivered more than 2 billion treatments since its founding at an estimated cost of less than \$0.50 per child per treatment.
Minor issues · 85% · Feb 22, 2026
Evidence Action is launching its Deworm the World program in Tanzania, partnering with the government to combat parasitic worm infections. More than 10 million children across the country are currently at risk of or infected with soil-transmitted helminths and/or schistosomiasis.

The source does not mention reaching 198 million children in India, Kenya, Nigeria, Pakistan, and Malawi in 2024, and the quoted excerpt does not give the January 2026 launch date stated in the claim.

33. Relationship Between EA Community and AI Safety - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
Some EA advocates focused on global health and animal welfare have reported that longtermism and x-risk work has made their lives harder, worsened their reputations, and occupied valued community niches. The 2024 EA Survey revealed diverging priorities: highly engaged respondents rated AI risks, animal welfare, and EA movement building more highly, while less engaged members emphasized climate change, global health, and poverty. An EA Forum post on the relationship between the EA community and AI safety noted that AI safety dominance has alienated global health and development EAs, creating perceptions of exclusion at events, and that community members sometimes perceive EA as an AI/longtermism-only movement — a perception that risks talent loss in other cause areas.
Inaccurate · 50% · Feb 22, 2026
I worry about this because I've talked to GHD EAs at EAGs, and sometimes the vibe is a bit "we're not sure this place is really for us anymore" (especially among non-biosecurity people).

unsupported: The source does not support the claim that EA advocates focused on global health and animal welfare have reported longtermism and x-risk work making their lives harder, worsening their reputations, and occupying valued community niches. unsupported: The source does not mention a 2024 EA Survey or its findings on diverging priorities. misleading_paraphrase: The source does not explicitly state that AI safety dominance has alienated global health and development EAs or created perceptions of exclusion at events; one commenter does report that GHD EAs at EAGs sometimes give off a vibe of "we're not sure this place is really for us anymore" (especially among non-biosecurity people). misleading_paraphrase: The source does not explicitly state that community members perceive EA as an AI/longtermism-only movement, or that this perception risks talent loss in other cause areas; one commenter notes that people deep into AI, or newly converted, are much more likely to think EA revolves around AI, while people outside AI may think "Oh that's what the community is about now" and feel they no longer belong.

Claims (1)
University-affiliated research initiatives received more than \$13 million in total; twenty academics at institutions including Cornell, Princeton, Brown, and Cambridge received individual grants exceeding \$100,000 each. Other affected grantees included HelixNano and Our World in Data. The full organizational landscape — which entities closed versus found alternative funding — has not been comprehensively documented in any single public source.
35. Stewardship: CEA's 2025-26 Strategy - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
| 2022 | Community | FTX collapse; Sam Bankman-Fried arrested for fraud | Loss | Most damaging single event for EA credibility. FTX Future Fund had committed approximately \$160M in total grantee commitments before collapse; those commitments were voided. (Approximately \$100M in grants had already been disbursed; the remaining commitments were cancelled. See FTX Future Fund section below.) Forum usage, EAG attendance, EA Funds donations, and virtual programming all declined in subsequent years. |
Unsupported · 0% · Feb 22, 2026
EA has been in a defensive crouch for much of the past two-plus years. We have defied predictions that the collapse of FTX would cause the collapse of EA, but our momentum stalled.

The source does not mention the FTX collapse, Sam Bankman-Fried's arrest, the FTX Future Fund's grantee commitments, or the impact on forum usage, EAG attendance, EA Funds donations, and virtual programming.

Claims (1)
University-affiliated research initiatives received more than \$13 million in total; twenty academics at institutions including Cornell, Princeton, Brown, and Cambridge received individual grants exceeding \$100,000 each. Other affected grantees included HelixNano and Our World in Data. The full organizational landscape — which entities closed versus found alternative funding — has not been comprehensively documented in any single public source.
Claims (1)
According to a 2025 EA Forum post tracking wins, 92% of corporate cage-free egg commitments with 2024 or earlier deadlines have been fulfilled. Lewis Bollard's appearance on the Dwarkesh Podcast raised over \$2 million for effective animal welfare charities, estimated to help approximately 4 million animals. The 2024 EA Survey showed animal welfare joining the top tier of cause prioritization among community members. Open Wing Alliance secured 141 new cage-free commitments in 2024 alone, per Open Philanthropy's 2024 progress report.
Inaccurate · 30% · Feb 22, 2026
AI risks and global health remain top priorities, but animal welfare has joined the top tier.

wrong_numbers: The source does not mention the 92% fulfillment rate of corporate cage-free egg commitments. wrong_attribution: The source does not mention Lewis Bollard or the Dwarkesh Podcast. unsupported: The source does not mention Open Wing Alliance or the number of cage-free commitments it secured in 2024.

Claims (1)
EA grantmaking reached its peak around 2022 and has contracted since, though the 80,000 Hours 2021 estimate of approximately \$46 billion committed to EA causes (growing at approximately 37% annually since 2015) reflects the scale of capital that had been pledged rather than deployed. Open Philanthropy, the largest single EA-aligned funder, has directed more than \$4 billion in total grants since its founding in 2017.
39. Effective Altruism - Wikipedia · en.wikipedia.org · Reference
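The pledged-capital figure can be sanity-checked with simple compound-growth arithmetic. A minimal sketch, assuming the ~\$46B total and ~37% annual growth rate from the 80,000 Hours 2021 estimate quoted above (the implied 2015 base is derived, not a figure from the source):

```python
# Sanity check: if EA-committed capital grew ~37%/year from 2015 to 2021
# and reached ~$46B by 2021, what 2015 base does that imply?
growth_rate = 0.37        # ~37% annual growth (80,000 Hours 2021 estimate)
committed_2021 = 46e9     # ~$46 billion committed (same estimate)
years = 2021 - 2015

implied_2015_base = committed_2021 / (1 + growth_rate) ** years
print(f"Implied 2015 base: ${implied_2015_base / 1e9:.1f}B")  # roughly $7.0B
```

The implied base of roughly \$7B in 2015 is plausible for the era when Good Ventures' commitment dominated the total, which is consistent with reading the \$46B as pledged rather than deployed capital.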
Claims (1)
Within EA, longtermism emphasizes positively influencing the long-term future — particularly by reducing existential risks from advanced AI, engineered pandemics, and other catastrophic threats. Tracking the movement's successes and failures matters both for internal strategic learning and for external evaluation of whether EA's approach to philanthropy and cause prioritization delivers on its ambitious claims.
Claims (1)
The movement has achieved substantial concrete wins: GiveWell raised \$415 million and directed \$397 million to cost-effective programs in metrics year 2024; the Against Malaria Foundation, consistently one of GiveWell's top-recommended organizations, has protected over 667 million people; and CEA reports strong engagement growth in 2025 after post-FTX declines. On the longtermist side, hundreds of millions of dollars have flowed to existential risk research, AI safety work has gained mainstream policy traction, and California has enacted bills directly regulating AI risk.
Inaccurate · 30% · Feb 22, 2026
People protected 667,715,569

WRONG NUMBERS: The source states that the Against Malaria Foundation has protected 667,715,569 people, not 667 million. WRONG DATE: The source is dated 2026, not 2024 or 2025. UNSUPPORTED: The source does not mention GiveWell, CEA, existential risk research, AI safety work, or California bills regulating AI risk.

Claims (1)
Community size and demographics: As of January 2023, there were 362 known active EA groups worldwide — up from 233 in late 2020 (approximately 55% growth over two years), with active groups in 56 countries. CEA supports EA groups across many countries and has held EAGx conferences internationally. In 2024, 189 organizers from 34 countries participated in CEA's Organizer Support Programme. The 2024 EA Survey (conducted by Rethink Priorities) found that the USA (34.4%) and UK (13.5%) remained the countries with the largest proportions of EA respondents; 32% came from Europe (excluding the UK) and 20% from the rest of the world. Local group membership rates varied substantially by country: US respondents (21.9%) and UK respondents (25.8%) had among the lowest rates of local group membership, while many countries showed 40–80% local group membership rates among survey respondents.
Inaccurate · 65% · Feb 22, 2026
189 organisers from 34 countries participated in OSP this year!

UNSUPPORTED: The claim that there were 362 known active EA groups worldwide as of January 2023 is not supported by the source.
UNSUPPORTED: The claim that the number of active EA groups increased from 233 in late 2020 is not supported by the source.
UNSUPPORTED: The claim that the growth rate of active EA groups was approximately 55% over two years is not supported by the source.
UNSUPPORTED: The claim that active EA groups existed in 56 countries is not supported by the source.
MINOR ISSUES: The claim that 189 organizers from 34 countries participated in CEA's Organizer Support Programme in 2024 is slightly inaccurate; the source states that 189 organizers from 34 countries participated in OSP "this year," referring to Fall 2024.
UNSUPPORTED: The claim about the 2024 EA Survey and the proportions of EA respondents from different countries (USA, UK, Europe, rest of the world) is not supported by the source.
UNSUPPORTED: The claim about local group membership rates varying by country, including the specific percentages for US and UK respondents and the 40–80% range for other countries, is not supported by the source.

Open Philanthropy provides strategic grants across multiple domains, including global health, catastrophic risks, scientific progress, and AI safety. Its portfolio aims to maximize positive impact through targeted philanthropic investments.

Claims (2)
| Year | Category | Event | Outcome | Notes |
|---|---|---|---|---|
| 2020 | Infrastructure | GiveWell exceeds \$300M directed in a single year | Win | First time GiveWell surpassed this threshold; driven partly by the COVID-era philanthropy surge. Open Philanthropy supported \$100M in GiveWell recommendations in 2020, rising to \$300M in 2021. |
EA grantmaking reached its peak around 2022 and has contracted since, though the 80,000 Hours 2021 estimate of approximately \$46 billion committed to EA causes (growing at approximately 37% annually since 2015) reflects the scale of capital that had been pledged rather than deployed. Open Philanthropy, the largest single EA-aligned funder, has directed more than \$4 billion in total grants since its founding in 2017.
44. An Overview of the AI Safety Funding Situation · LessWrong · Stephen McAleese · 2023 · Blog post

Analyzes AI safety funding from sources like Open Philanthropy, Survival and Flourishing Fund, and academic institutions. Estimates total global AI safety spending and explores talent versus funding constraints.

★★★☆☆
Citation verification: 7 verified, 8 flagged, 21 unchecked of 46 total

Related Pages

Top Related Pages

Organizations

Anthropic · Giving What We Can · OpenAI · FTX Future Fund · Johns Hopkins Center for Health Security · Future of Humanity Institute

Policy

New York RAISE Act · Bletchley Declaration · NIST AI Risk Management Framework (AI RMF) · EU AI Act

Other

Will MacAskill · Sam Bankman-Fried

Concepts

FTX Collapse: Lessons for EA Funding Resilience · Longtermism Credibility After FTX

Analysis

Anthropic (Funder) · Relative Longtermist Value Comparisons · Donations List Website

Historical

The MIRI Era