
Long-Term Future Fund (LTFF)

| Dimension | Assessment | Evidence |
|---|---|---|
| Cumulative Grantmaking | High | Over $20M since 2017, ≈$10M to AI safety |
| Annual Volume (2023) | Medium-High | $6.67M total grants |
| Grant Size | Small | Median $25K (vs Coefficient median $257K) |
| Acceptance Rate | Selective | 19.3% (May 2023-March 2024) |
| Decision Speed | Fast | Target 21 days, aim for 42 days maximum |
| Focus | AI Safety Primary | ≈67% of grants AI-related |
| Niche | Individual Researchers | Fills gap between personal savings and institutional grants |
| Pipeline Role | Critical | Many grantees later join major labs or receive Coefficient funding |

| Attribute | Details |
|---|---|
| Full Name | Long-Term Future Fund |
| Parent | EA Funds (Centre for Effective Altruism) |
| Part of | Effective Ventures (fiscal umbrella) |
| Type | Regranting Program |
| Founded | February 2017 (EA Funds launch); restructured October 2018 |
| Annual Grantmaking | $5-8M (recent years) |
| Cumulative Grantmaking | Over $20M (2017-2024) |
| Website | funds.effectivealtruism.org/funds/far-future |
| Application Portal | funds.effectivealtruism.org/apply |
| Grant Reports | EA Forum LTFF Topic |

The Long-Term Future Fund (LTFF) is a regranting program that supports individuals and small organizations working on AI safety and existential risk reduction. As part of EA Funds (operated by the Centre for Effective Altruism), LTFF fills an important niche in the AI safety funding landscape: providing fast, flexible funding for projects too small for major foundations but too large for personal savings. Since its launch in 2017, LTFF has distributed over $20 million in grants, with approximately half going directly to AI safety work.

LTFF’s focus on individuals distinguishes it from funders like Coefficient Giving (which primarily funds organizations with median grants of $257K) or the Survival and Flourishing Fund (which runs larger grant rounds with median grants of $100K). With a median grant size of just $25K, LTFF is uniquely positioned to fund career transitions, upskilling periods, and early-stage research that would be too small for other institutional funders. Many AI safety researchers receive their first external funding through LTFF before joining established organizations or receiving larger grants from Coefficient Giving.

The fund operates with a team of permanent fund managers plus rotating guest evaluators—researchers and practitioners with relevant expertise in AI safety, forecasting, and related fields. This distributed model allows faster decision-making (targeting 21-day turnaround) while maintaining quality control through collective judgment. The fund’s philosophy leans toward funding when at least one manager is “very excited” about a grant, even if others are more neutral—a hits-based giving approach that has produced notable successes including early funding for Manifold Markets, David Krueger’s AI safety lab at Cambridge, and numerous researchers who later joined frontier AI labs.

LTFF has weathered significant funding disruption following the FTX collapse in late 2022, which affected the broader EA funding ecosystem. The fund has since stabilized with approximately 40-50% of funding coming from Coefficient Giving and the remainder from direct donations and other institutional sources. Despite these challenges, LTFF has maintained steady grantmaking volume of approximately $5-8 million annually.

| Period | Total Grants | AI Safety Portion | Notable Developments |
|---|---|---|---|
| 2017 | ≈$500K | ≈30% | EA Funds launch; Nick Beckstead sole manager |
| 2018 | ≈$1.5M | ≈40% | New management team (Habryka et al.); more speculative grants |
| 2019 | ≈$1.4M | ≈50% | Expanded grant writeups; increased transparency |
| 2020 | ≈$1.4M | ≈55% | COVID impact; increased applications |
| 2021 | ≈$4.8M | ≈60% | Significant scaling; added guest managers |
| 2022 | ≈$4.5M | ≈65% | FTX disruption; funding uncertainty |
| 2023 | ≈$6.67M | ≈67% | Post-FTX stabilization; 19.3% acceptance rate |
| 2024 | ≈$8M (projected) | ≈70% | Continued growth; focus on technical safety |
| Cumulative | ≈$20M+ | ≈$10M | Over 1,000 grants since founding |

The fund’s evolution reflects broader trends in AI safety: early grants went primarily to established organizations under Nick Beckstead’s management, while later rounds under the Habryka-led team shifted toward more speculative individual grants with detailed public writeups. This transition was analyzed in a 2021 retrospective that found a 30-40% success rate among 2018-2019 grantees—consistent with appropriate risk-taking for hits-based giving.

| Period | Monthly Applications | Acceptance Rate | Notes |
|---|---|---|---|
| H2 2021 | ≈35/month | ≈25% | Post-pandemic increase |
| 2022 | ≈69/month | ≈22% | Roughly double the previous year |
| Early 2023 | ≈90/month | ≈20% | Continued growth |
| 2023-2024 | ≈80/month | 19.3% | Stabilized volume |

The fund now processes approximately 80-90 applications monthly, reflecting both increased interest in AI safety careers and LTFF’s reputation as an accessible entry point for funding.

| Category | Percentage | Annual Amount | Description |
|---|---|---|---|
| Technical AI Safety Research | 35% | ≈$2.3M | Independent research, small teams, lab collaborations |
| Upskilling & Career Transitions | 25% | ≈$1.7M | MATS funding, course attendance, self-study periods |
| Field-Building | 20% | ≈$1.3M | Conferences, community events, infrastructure |
| AI Governance & Policy | 10% | ≈$670K | Policy research, advocacy, institutional engagement |
| Other X-Risk | 10% | ≈$670K | Biosecurity, nuclear risk, forecasting |
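
The dollar amounts above appear to be the listed percentages applied to the 2023 grant total of roughly $6.67M (a reading of the table, not an official LTFF breakdown); for example:

$$0.35 \times \$6.67\text{M} \approx \$2.3\text{M}, \qquad 0.10 \times \$6.67\text{M} \approx \$670\text{K}$$
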
| Size Range | Frequency | Median | Typical Use Case |
|---|---|---|---|
| $1K-$10K | 20% | $5K | Conference travel, short projects, equipment |
| $10K-$25K | 35% | $18K | MATS supplements, 3-6 month projects |
| $25K-$50K | 25% | $35K | 6-12 month research, career transitions |
| $50K-$100K | 15% | $70K | Year-long independent research |
| $100K-$200K | 5% | $150K | Multi-year support, org incubation |
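
Taking each band's median as a rough representative value (an approximation, since within-band distributions are not published), the implied mean grant is about $34K, noticeably above the $25K overall median because a small share of large grants pulls the average up:

$$0.20(5) + 0.35(18) + 0.25(35) + 0.15(70) + 0.05(150) \approx 34 \text{ (\$K)}$$
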

For comparison, LTFF itself estimates the cost of funding one year of researcher upskilling in AI safety at approximately $53K.

| Stage | Target | Maximum | Notes |
|---|---|---|---|
| Application Submission | Anytime | Rolling | No deadlines; continuous acceptance |
| Initial Response | 21 days | 42 days | Formal target since 2023 |
| Time-Sensitive Applications | Faster | Varies | Mark as urgent in application |
| Grant Disbursement | 1-2 weeks | 4 weeks | After approval and terms |
| Total Time | 4-6 weeks | 8 weeks | Including disbursement |

The fund announced rolling applications in 2023, eliminating the previous round-based system. Applications marked as time-sensitive receive expedited review.

Based on public grant reports, committee comments, and fund manager reflections:

| Factor | Weight | Description | Evidence Types |
|---|---|---|---|
| Relevant Track Record | Critical | Past experience in AI safety or related field | Papers, projects, employment history |
| Clear Theory of Change | Critical | How does this reduce x-risk specifically? | Logical chain from activity to impact |
| Person-Level Fit | High | Can this person execute effectively? | References, past output quality |
| Counterfactual Impact | High | Would this happen without LTFF? | Alternative funding sources, personal savings |
| Concrete Deliverables | Medium | What will you produce and when? | Milestones, measurable outputs |
| Cost-Effectiveness | Medium | Is this efficient use of funds? | Budget justification, alternatives |
| Portfolio Fit | Medium | Does this complement other grants? | Novelty, strategic gaps |

From the 2023 Ask Us Anything session and grant reports:

Strong applications typically have:

  • Demonstrated interest in priority areas (AI safety, biosecurity) through previous work
  • Specific, time-bound project plans with clear milestones
  • Realistic budget with justification for each line item
  • Acknowledgment of key uncertainties and how they’ll be addressed
  • Evidence that the applicant has thought carefully about the problem space

Common weaknesses:

  • Vague project descriptions without concrete outputs
  • Generic “I want to do AI safety research” without specifics
  • Overconfidence about impact without evidence
  • Poor fit between applicant background and proposed work
  • Lack of engagement with existing literature or community

What fund managers emphasize:

  • “Things like general competence, past track record, clear value proposition and neglectedness matter”
  • They use “model-driven granting”—assessing whether the proposed plan will actually work
  • A well-justified application can change a fund manager’s mind even on skeptical topics
  • There’s healthy disagreement within the fund; one excited manager can get a grant funded

From public statements by fund managers:

| Category | Reason |
|---|---|
| General technology/science improvement | Not funded unless it differentially benefits x-risk reduction |
| Economic growth acceleration | Not clearly an x-risk intervention |
| AI capabilities research | Could accelerate rather than reduce risk |
| Mechanistic interpretability (2024+) | Now less neglected due to field growth |
| Large organizational grants | Better suited to Coefficient or SFF |

| Grantee | Amount | Year | Purpose | Outcome |
|---|---|---|---|---|
| Manifold Markets (Grugett, Chen) | $200,000 | Feb 2022 | 4-month runway to build prediction market platform | Manifold grew into a major EA forecasting tool; credited as a "strong signal EA community cared" |
| David Krueger's Lab (Cambridge) | $200,000 | 2023 | PhD students and compute at new AI safety lab | Established academic AI safety presence at Cambridge |
| Gabriel Mukobi | $40,680 | 2023-24 | Stanford CS master's with AI governance focus | Accepted to 4/6 PhD programs; multiple publications |
| Lisa Thiergart & Monte MacDiarmid | $40,000 | 2024 | Activation addition interpretability paper | Conference publication on LM steering |
| Joshua Clymer | $1,500 | 2023 | A100 GPU compute for instruction-following experiments | First technical AI safety project |
| Einar Urdshals | ≈$50K | 2023 | Career transition from physics PhD to AI safety | Mentored independent research |
| SecureBio | Varies | 2025 | Field-building events for GCR in Boston | Spring/Summer 2025 program |
| Julian Guidote & Ben Chancey | ≈$15K | 2025 | 9-week stipend for Mandatory AI Safety "Red Bonds" policy paper | Policy proposal development |

| Group/Lab | Amount | Period | Focus |
|---|---|---|---|
| SERI MATS Scholars | Multiple | 2021+ | Program supplements and independent research |
| AI Safety Camp | Multiple | 2019+ | Research mentorship programs |
| AXRP Podcast (Daniel Filan) | Multiple | 2020+ | AI safety podcast production |
| Robert Miles | Multiple | 2019+ | AI safety YouTube content |

Upskilling Grants: LTFF supports individuals seeking to build AI safety skills through various pathways.

| Grant Type | Typical Size | Purpose | Requirements |
|---|---|---|---|
| MATS Supplements | $10-25K | Living expenses during program | MATS acceptance |
| Course Attendance | $5-15K | Registration + living costs | Program acceptance |
| Self-Study Period | $20-50K | Independent learning runway | Study plan, mentorship |
| Research Visits | $10-30K | Collaboration at other orgs | Host organization agreement |
| PhD/Master's Support | $20-80K | Tuition and living expenses | University admission |

Independent Research Grants: The core of LTFF’s portfolio.

| Grant Type | Typical Size | Duration | Requirements |
|---|---|---|---|
| Exploration Grant | $15-40K | 3-6 months | Research direction, preliminary work |
| Research Grant | $40-80K | 6-12 months | Track record, detailed proposal |
| Multi-Year Support | $80-200K | 1-2 years | Proven track record, clear milestones |
| Bridge Funding | $20-60K | 3-9 months | Gap between positions/grants |

LTFF is “pretty happy to offer ‘bridge’ funding for people who don’t quite meet [major lab] hiring bars yet, but are likely to in the next few years.”

Field-Building Grants: Supporting the AI safety ecosystem.

| Grant Type | Typical Size | Purpose | Examples |
|---|---|---|---|
| Conference Support | $10-50K | Event organization | Research retreats, workshops |
| Community Building | $15-40K | Local group support | University groups, city hubs |
| Infrastructure | $25-100K | Tools and platforms | Forecasting tools, research databases |
| Content Creation | $10-40K | Educational materials | YouTube, podcasts, writeups |

LTFF uses a distinctive governance model with permanent fund managers plus rotating guest managers who provide specialized expertise.

Permanent Fund Managers:

| Manager | Role | Background | Focus Areas |
|---|---|---|---|
| Caleb Parikh | Interim Fund Chair, EA Funds Project Lead | ML research background; evaluated over $34M in applications | Technical safety, field-building |
| Oliver Habryka | Permanent Manager | CEO of Lightcone Infrastructure (LessWrong); cofounder of Lighthaven venue | Community infrastructure, epistemics |
| Linchuan Zhang | Permanent Manager, EA Funds Staff | Senior Researcher at Rethink Priorities; COVID forecasting background | Existential security research |

Recent Guest Fund Managers (2023-2024):

| Guest Manager | Affiliation | Expertise |
|---|---|---|
| Lawrence Chan | ARC Evals | AI evaluations, safety research |
| Clara Collier | Independent | AI governance |
| Daniel Eth | Independent | Technical AI safety |
| Lauro Langosco | DeepMind (previously) | ML research |
| Thomas Larsen | Independent | AI safety research |
| Eli Lifland | Forecasting | Quantitative analysis |

| Period | Chair | Notable Changes |
|---|---|---|
| 2017-2018 | Nick Beckstead (sole manager) | Initial fund; focus on established orgs |
| Oct 2018-2020 | Matt Fallshaw | New team: Habryka, Toner, Wage, Zhu |
| 2020-2023 | Asya Bergal | Expanded guest managers; transparency |
| 2023-present | Caleb Parikh (interim) | Asya stepped down to reduce Coefficient overlap |

Asya Bergal stepped down from the chair role in late 2023 to reduce overlap between Coefficient Giving (where she works as a Program Associate) and LTFF.

The committee uses explicit criteria combined with significant judgment:

| Principle | Description |
|---|---|
| Hits-Based Giving | Willing to fund speculative grants with high potential upside |
| One Excited Manager Rule | Grants often funded when one manager is very excited, even if others are neutral |
| Model-Driven Granting | Assess whether proposed plans will actually work, not just stated intentions |
| Healthy Disagreement | Fund managers regularly disagree; diversity of views is a feature, not a bug |
| Part-Time Capacity | Most managers have demanding day jobs, limiting deep evaluation time |

From the May 2023-March 2024 payout report: “The fund’s general policy has been to lean towards funding when one fund manager is very excited about a grant even if other fund managers are more neutral. The underlying model is that individual excitement is more likely to identify grants with significant impact potential in a hits-based giving framework.”


LTFF occupies a specific niche in the AI safety funding landscape. Understanding its position relative to other funders helps applicants choose the right source.

| Funder | Annual AI Safety | Median Grant | Focus | Application Style |
|---|---|---|---|---|
| Coefficient Giving | $70M+ (2022) | $257K | Organizations, large projects | Proactive research, RFPs |
| Survival and Flourishing Fund | $10-15M | $100K | Organizations, researchers | Annual S-Process + rolling |
| Long-Term Future Fund | $4-5M | $25K | Individuals, small projects | Open applications |
| Lightspeed Grants | $5-10M | Varies | Rapid response, individuals | Application rounds |
| Manifund | $1-2M | $10-50K | Individuals, experiments | Regranters + applications |
| AI Risk Mitigation Fund | $1-2M | Varies | AI safety specifically | Applications |

Source: Overview of the AI Safety Funding Situation

| Scenario | Best Funder | Why |
|---|---|---|
| Individual seeking $20-50K for 6-month research | LTFF | Sweet spot for LTFF's median grant size |
| Organization seeking $500K+ annual budget | Coefficient or SFF | Too large for LTFF; needs institutional funder |
| Career transition/upskilling | LTFF | Explicitly welcomes these applications |
| MATS living expenses supplement | LTFF | Established pipeline for program participants |
| Policy/governance organization | Coefficient | Needs diverse funder base for credibility |
| Rapid response to opportunity (< 2 weeks) | Lightspeed Grants | Faster than LTFF's 21-day target |
| Experimental project needing community validation | Manifund | Regranter model tests interest |
| Large academic lab funding | Coefficient or SFF | $200K+ grants more common there |

| Relationship | Description |
|---|---|
| LTFF ← Coefficient | ≈40-50% of LTFF funding comes from Coefficient regranting |
| LTFF → Coefficient | Many LTFF grantees later receive Coefficient funding |
| LTFF ↔ SFF | Similar cause focus, complementary scale |
| LTFF ↔ EAIF | AI-focused projects go to LTFF; general EA meta to EAIF |
| LTFF → Major Labs | Many grantees eventually hired by Anthropic, DeepMind, etc. |

Both LTFF and EAIF are part of EA Funds but serve different purposes:

| Dimension | LTFF | EAIF |
|---|---|---|
| Focus | Longtermism, AI safety, x-risk | EA community building, meta-work |
| Cause Area | Specific (AI safety, bio, etc.) | Cause-agnostic EA infrastructure |
| Application Volume | Higher (≈80/month) | Lower (fewer applications) |
| Institutional Funding | ≈40-50% from Coefficient | ≈80% from Coefficient |
| Strategic Direction | More stable | Higher rate of strategic changes |

When unsure: If your project focuses on AI safety specifically, apply to LTFF. If it’s about EA community building broadly, apply to EAIF. The funds can transfer applications between them if you apply to the wrong one.

EA Funds launched on February 28, 2017, created by the Centre for Effective Altruism (CEA) while going through Y Combinator’s accelerator program. The creation was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead and the donor lottery run by Paul Christiano and Carl Shulman in December 2016.

Initially, Nick Beckstead served as the sole manager of the Long-Term Future Fund. During this period, grants went mostly to established organizations like CSER, FLI, Charity Entrepreneurship, and Founder’s Pledge, with minimal public writeups. Beckstead stepped down in August 2018.

Transition to Active Grantmaking (2018-2020)

In October 2018, a new management team was announced: Matt Fallshaw (chair), Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with Nick Beckstead and Jonas Vollmer as advisors. This transition marked a fundamental shift:

| Aspect | Beckstead Era | Post-2018 Era |
|---|---|---|
| Recipients | Primarily organizations | More individuals, speculative projects |
| Grant Size | Larger | Wider range, more small grants |
| Transparency | Minimal writeups | Detailed public justifications |
| Risk Profile | Conservative | More hits-based |

The new approach generated some controversy—certain grants were “scathingly criticized in the comments.” However, a 2021 retrospective found a 30-40% success rate among 2018-2019 grantees, suggesting appropriate risk-taking for hits-based giving.

| Year | Total Grants | Key Developments |
|---|---|---|
| 2021 | ≈$4.8M | Added guest managers; expanded capacity |
| Q4 2021 | $2.1M | 34 grantees in single quarter |
| 2022 | ≈$4.5M | Peak pre-FTX; Manifold Markets grant |

The fund processed 878 applications from March 2022 to March 2023, funding 263 grants worth approximately $9.1M total (average $34.6K per grant).
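
These figures are internally consistent: roughly 30% of applications in that window were funded, at the stated average of about $34.6K per grant:

$$\frac{263}{878} \approx 30\%, \qquad \frac{\$9.1\text{M}}{263} \approx \$34.6\text{K}$$
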

The November 2022 FTX collapse significantly impacted the EA funding ecosystem. While LTFF itself did not receive direct FTX funding, the downstream effects included:

  • Increased uncertainty among applicants and donors
  • Some grantees lost expected Future Fund grants
  • Broader EA funding contraction

LTFF remained relatively stable, with Coefficient Giving (then Open Philanthropy) providing approximately 40-50% of funding and the remainder from direct donations. The fund issued statements about funding constraints in early 2023.

| Metric | 2023 | 2024 (projected) |
|---|---|---|
| Total Grants | $6.67M | $8M+ |
| AI Safety Portion | ≈67% | ≈70% |
| Acceptance Rate | 19.3% | ≈18-20% |
| Monthly Applications | ≈80-90 | ≈80-90 |

Recent strategic shifts include:

  • Less funding for mechanistic interpretability due to field becoming less neglected
  • Continued focus on upskilling and career transitions
  • More stringent evaluation of claims about AI safety relevance

LTFF plays a critical role in the AI safety talent pipeline. Many researchers receive their first external funding through LTFF before achieving one of several outcomes:

| Outcome | Description | Evidence |
|---|---|---|
| Major Lab Hires | Researchers later hired by Anthropic, DeepMind, OpenAI | Multiple MATS scholars post-LTFF |
| Academic Positions | Faculty or postdoc roles in AI safety | David Krueger, Gabriel Mukobi PhD admissions |
| Research Output | Published papers, tools, analyses | Interpretability papers, safety tooling |
| Career Pivots | Successful transitions into AI safety | Physics PhDs, software engineers entering field |
| Capability Building | Skills developed through funded training | MATS completions, self-study periods |

LTFF grantees often exhibit extraordinary earning potential. As fund managers note, many “are excellent researchers (or have the potential to become one in a few years) and could easily take jobs in big tech or finance, and some could command high salaries (over $400k/year) while conducting similar research at AI labs.”

| Grantee | LTFF Support | Subsequent Achievement |
|---|---|---|
| Gabriel Mukobi | $40.7K for Stanford master's | Accepted to 4/6 PhD programs; multiple publications |
| Manifold Markets | $200K early-stage | Grew to major EA forecasting platform |
| AXRP Podcast | Multiple grants | Established AI safety podcast with consistent output |
| Robert Miles | Multiple grants | Major AI safety YouTube creator, mainstream reach |
| Mechanistic interpretability researchers | Multiple | Field growth attributed partly to early LTFF support |

A 2021 retrospective analysis of 2018-2019 grantees found:

| Finding | Interpretation |
|---|---|
| 30-40% success rate | Appropriate risk level for hits-based giving |
| Track record correlation | Grantees with prior relevant experience performed better |
| Renewal patterns | Successful grantees often received follow-on funding |

The fund’s impact extends beyond direct grantees. By funding early-stage researchers who later join major labs or receive larger grants, LTFF serves as a “farm team” for the AI safety field.

| Strength | Description | Evidence |
|---|---|---|
| Speed | Much faster than major foundations | 21-day target vs months at Coefficient |
| Flexibility | Funds individuals, not just organizations | Median grant $25K; individuals welcome |
| Accessibility | Lower barriers to application | Rolling applications; quick online form |
| Risk Tolerance | Willing to fund early-stage ideas | Hits-based approach; one excited manager can approve |
| Transparency | Publishes detailed grant reasoning | Payout reports on EA Forum with justifications |
| Renewal Support | Happy to renew successful grants | "Would be happy to be primary funder for years" |
| Bridge Function | Supports people building toward larger opportunities | Explicit bridge funding category |

| Limitation | Description | Mitigation |
|---|---|---|
| Scale | $5-8M is small relative to field needs | Complements rather than replaces major funders |
| Capacity | Part-time managers limit deep evaluation | Guest managers provide additional bandwidth |
| Individual Focus | Less suited for large organizations | Refer to Coefficient or SFF for large orgs |
| Concentration | Heavy AI safety focus; other x-risks less funded | Apply to specialized funds for non-AI work |
| Funding Dependency | ≈40-50% from Coefficient | Diversified direct donation base |
| Feedback Quality | Rejection feedback varies in depth | Explicit in AMAs that feedback is limited |

| Challenge | Description |
|---|---|
| Grantmaker Bottleneck | As Coefficient staff noted, "a key bottleneck is that they currently don't have enough qualified AI Safety grantmakers to hand out money fast enough" |
| Neglectedness Shifts | Areas like mechanistic interpretability become less neglected, requiring strategic adjustment |
| Talent Retention | Fund managers have demanding day jobs; turnover creates institutional memory loss |
| Validation Without Dependency | Goal is to help researchers become self-sustaining, not LTFF-dependent |

| Step | Details | Tips |
|---|---|---|
| 1. Review Guidelines | Check EA Funds website for current priorities | Note recent payout reports for examples |
| 2. Assess Fit | Confirm LTFF is right fund (vs EAIF, Animal Welfare, Global Health) | AI safety → LTFF; general EA → EAIF |
| 3. Prepare Application | Describe project, background, budget, theory of change | Be specific; include milestones |
| 4. Submit Online | Via EA Funds application portal | Mark time-sensitive if urgent |
| 5. Wait for Response | Target 21 days, max 42 days | Check spam folder; follow up if over 6 weeks |
| 6. Respond to Questions | Committee may request clarification | Respond promptly; additional info often requested |
| 7. Negotiate Terms | Discuss grant structure, milestones, reporting | Standard terms; flexibility for edge cases |

The application form includes:

  • Project description: What you’ll do and why it matters
  • Background: Relevant experience and qualifications
  • Budget: Itemized costs with justification
  • Timeline: Key milestones and deliverables
  • Theory of change: How this reduces existential risk
  • Counterfactual: What happens without LTFF funding
  • Other funding: Alternative sources, if any

| Stage | Details |
|---|---|
| Grant Agreement | Review and sign terms (usually standard) |
| Disbursement | Funds sent within 1-2 weeks of signing |
| Reporting | Progress updates as agreed; typically light-touch |
| Renewal | Apply again if continuing work; track record helps |

| Source | Percentage | Notes |
|---|---|---|
| Coefficient Giving | ≈40-50% | Regranting arrangement; largest single source |
| Direct Donations | ≈40-50% | Individual EA donors; recurring and one-time |
| Other Institutional | ≈10% | Other foundations, DAFs |

| Method | Details |
|---|---|
| Direct Donation | funds.effectivealtruism.org (credit card, bank transfer) |
| Every.org | Tax-deductible for US donors |
| Manifund | Regranting via LTFF project page |
| DAF Grants | Specify "Long-Term Future Fund" via CEA |

Donors considering significant contributions ($50K+) can contact the fund directly. Linchuan Zhang has volunteered to speak with donors about LTFF’s strategy and current funding needs.