Long-Term Future Fund (LTFF)
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Cumulative Grantmaking | High | Over $20M since 2017, ≈$10M to AI safety |
| Annual Volume (2023) | Medium-High | $6.67M total grants |
| Grant Size | Small | Median $25K (vs Coefficient median $257K) |
| Acceptance Rate | Selective | 19.3% (May 2023-March 2024) |
| Decision Speed | Fast | 21-day target; 42-day maximum |
| Focus | AI Safety Primary | ≈67% of grants AI-related |
| Niche | Individual Researchers | Fills gap between personal savings and institutional grants |
| Pipeline Role | Critical | Many grantees later join major labs or receive Coefficient funding |
Organization Details
| Attribute | Details |
|---|---|
| Full Name | Long-Term Future Fund |
| Parent | EA Funds (Centre for Effective Altruism) |
| Part of | Effective Ventures (fiscal umbrella) |
| Type | Regranting Program |
| Founded | February 2017 (EA Funds launch); restructured October 2018 |
| Annual Grantmaking | $5-8M (recent years) |
| Cumulative Grantmaking | Over $20M (2017-2024) |
| Website | funds.effectivealtruism.org/funds/far-future |
| Application Portal | funds.effectivealtruism.org/apply |
| Grant Reports | EA Forum LTFF Topic |
Overview
The Long-Term Future Fund (LTFF) is a regranting program that supports individuals and small organizations working on AI safety and existential risk reduction. As part of EA Funds (operated by the Centre for Effective Altruism), LTFF fills an important niche in the AI safety funding landscape: providing fast, flexible funding for projects too small for major foundations but too large for personal savings. Since its launch in 2017, LTFF has distributed over $20 million in grants, with approximately half going directly to AI safety work.
LTFF’s focus on individuals distinguishes it from funders like Coefficient Giving (which primarily funds organizations with median grants of $257K) or the Survival and Flourishing Fund (which runs larger grant rounds with median grants of $100K). With a median grant size of just $25K, LTFF is uniquely positioned to fund career transitions, upskilling periods, and early-stage research that would be too small for other institutional funders. Many AI safety researchers receive their first external funding through LTFF before joining established organizations or receiving larger grants from Coefficient Giving.
The fund operates with a team of permanent fund managers plus rotating guest evaluators—researchers and practitioners with relevant expertise in AI safety, forecasting, and related fields. This distributed model allows faster decision-making (targeting 21-day turnaround) while maintaining quality control through collective judgment. The fund’s philosophy leans toward funding when at least one manager is “very excited” about a grant, even if others are more neutral—a hits-based giving approach that has produced notable successes including early funding for Manifold Markets, David Krueger’s AI safety lab at Cambridge, and numerous researchers who later joined frontier AI labs.
LTFF has weathered significant funding disruption following the FTX collapse in late 2022, which affected the broader EA funding ecosystem. The fund has since stabilized with approximately 40-50% of funding coming from Coefficient Giving and the remainder from direct donations and other institutional sources. Despite these challenges, LTFF has maintained steady grantmaking volume of approximately $5-8 million annually.
Historical Grantmaking
Cumulative Grant History
| Period | Total Grants | AI Safety Portion | Notable Developments |
|---|---|---|---|
| 2017 | ≈$500K | ≈30% | EA Funds launch; Nick Beckstead sole manager |
| 2018 | ≈$1.5M | ≈40% | New management team (Habryka et al.); more speculative grants |
| 2019 | ≈$1.4M | ≈50% | Expanded grant writeups; increased transparency |
| 2020 | ≈$1.4M | ≈55% | COVID impact; increased applications |
| 2021 | ≈$4.8M | ≈60% | Significant scaling; added guest managers |
| 2022 | ≈$4.5M | ≈65% | FTX disruption; funding uncertainty |
| 2023 | ≈$6.67M | ≈67% | Post-FTX stabilization; 19.3% acceptance rate |
| 2024 | ≈$8M (projected) | ≈70% | Continued growth; focus on technical safety |
| Cumulative | ≈$20M+ | ≈$10M | Over 1,000 grants since founding |
The fund’s evolution reflects broader trends in AI safety: early grants went primarily to established organizations under Nick Beckstead’s management, while later rounds under the Habryka-led team shifted toward more speculative individual grants with detailed public writeups. This transition was analyzed in a 2021 retrospective that found a 30-40% success rate among 2018-2019 grantees—consistent with appropriate risk-taking for hits-based giving.
Application Volume Trends
| Period | Monthly Applications | Acceptance Rate | Notes |
|---|---|---|---|
| H2 2021 | ≈35/month | ≈25% | Post-pandemic increase |
| 2022 | ≈69/month | ≈22% | Roughly double 2021 volume |
| Early 2023 | ≈90/month | ≈20% | Continued growth |
| 2023-2024 | ≈80/month | 19.3% | Stabilized volume |
The fund now processes approximately 80-90 applications monthly, reflecting both increased interest in AI safety careers and LTFF’s reputation as an accessible entry point for funding.
Grant Categories
Distribution by Category (2023-2024)
| Category | Percentage | Annual Amount | Description |
|---|---|---|---|
| Technical AI Safety Research | 35% | ≈$2.3M | Independent research, small teams, lab collaborations |
| Upskilling & Career Transitions | 25% | ≈$1.7M | MATS funding, course attendance, self-study periods |
| Field-Building | 20% | ≈$1.3M | Conferences, community events, infrastructure |
| AI Governance & Policy | 10% | ≈$670K | Policy research, advocacy, institutional engagement |
| Other X-Risk | 10% | ≈$670K | Biosecurity, nuclear risk, forecasting |
Grant Size Distribution
| Size Range | Frequency | Median | Typical Use Case |
|---|---|---|---|
| $1K - $10K | 20% | $5K | Conference travel, short projects, equipment |
| $10K - $25K | 35% | $18K | MATS supplements, 3-6 month projects |
| $25K - $50K | 25% | $35K | 6-12 month research, career transitions |
| $50K - $100K | 15% | $70K | Year-long independent research |
| $100K - $200K | 5% | $150K | Multi-year support, org incubation |
For comparison, LTFF estimates the cost of funding one year of researcher upskilling in AI safety at approximately $53K.
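These figures can be cross-checked with simple arithmetic. A minimal sketch in Python, using the bucket shares and medians from the table above (treating each bucket's median as a point estimate for that range is an assumption):

```python
# Rough cross-check of the grant-size figures above; bucket medians are treated
# as point estimates for each size range, which is an approximation.
buckets = [
    # (share of grants, bucket median in USD)
    (0.20, 5_000),    # $1K-$10K
    (0.35, 18_000),   # $10K-$25K
    (0.25, 35_000),   # $25K-$50K
    (0.15, 70_000),   # $50K-$100K
    (0.05, 150_000),  # $100K-$200K
]

implied_mean = sum(share * median for share, median in buckets)
print(f"Implied average grant: ${implied_mean:,.0f}")
# -> ~$34,050, close to the ~$34.6K average reported for March 2022-March 2023
# (see Historical Evolution below)

# If the ~$6.67M granted in 2023 had all gone to upskilling at LTFF's ~$53K
# per researcher-year estimate, it would cover roughly this many researcher-years:
print(f"Researcher-year equivalents: {6_670_000 / 53_000:.0f}")
# -> ~126 (illustrative only; most grants are not upskilling grants)
```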
Application Process
Timeline Commitments
| Stage | Target | Maximum | Notes |
|---|---|---|---|
| Application Submission | Anytime | Rolling | No deadlines; continuous acceptance |
| Initial Response | 21 days | 42 days | Formal target since 2023 |
| Time-Sensitive Applications | Faster | Varies | Mark as urgent in application |
| Grant Disbursement | 1-2 weeks | 4 weeks | After approval and terms |
| Total Time | 4-6 weeks | 8 weeks | Including disbursement |
The fund announced rolling applications in 2023, eliminating the previous round-based system. Applications marked as time-sensitive receive expedited review.
What Makes Strong Applications
Based on public grant reports, committee comments, and fund manager reflections:
| Factor | Weight | Description | Evidence Types |
|---|---|---|---|
| Relevant Track Record | Critical | Past experience in AI safety or related field | Papers, projects, employment history |
| Clear Theory of Change | Critical | How does this reduce x-risk specifically? | Logical chain from activity to impact |
| Person-Level Fit | High | Can this person execute effectively? | References, past output quality |
| Counterfactual Impact | High | Would this happen without LTFF? | Alternative funding sources, personal savings |
| Concrete Deliverables | Medium | What will you produce and when? | Milestones, measurable outputs |
| Cost-Effectiveness | Medium | Is this efficient use of funds? | Budget justification, alternatives |
| Portfolio Fit | Medium | Does this complement other grants? | Novelty, strategic gaps |
Application Tips from Fund Managers
From the 2023 Ask Us Anything session and grant reports:
Strong applications typically have:
- Demonstrated interest in priority areas (AI safety, biosecurity) through previous work
- Specific, time-bound project plans with clear milestones
- Realistic budget with justification for each line item
- Acknowledgment of key uncertainties and how they’ll be addressed
- Evidence that the applicant has thought carefully about the problem space
Common weaknesses:
- Vague project descriptions without concrete outputs
- Generic “I want to do AI safety research” without specifics
- Overconfidence about impact without evidence
- Poor fit between applicant background and proposed work
- Lack of engagement with existing literature or community
What fund managers emphasize:
- “Things like general competence, past track record, clear value proposition and neglectedness matter”
- They use “model-driven granting”—assessing whether the proposed plan will actually work
- A well-justified application can change a fund manager’s mind even on skeptical topics
- There’s healthy disagreement within the fund; one excited manager can get a grant funded
What LTFF Is Less Interested In
From public statements by fund managers:
| Category | Reason |
|---|---|
| General technology/science improvement | Unless differentially benefits x-risk reduction |
| Economic growth acceleration | Not clearly an x-risk intervention |
| AI capabilities research | Could accelerate rather than reduce risk |
| Mechanistic interpretability (2024+) | Now less neglected due to field growth |
| Large organizational grants | Better suited to Coefficient or SFF |
Notable Grants and Grantees
High-Profile Grants
| Grantee | Amount | Year | Purpose | Outcome |
|---|---|---|---|---|
| Manifold Markets (Grugett, Chen) | $200,000 | Feb 2022 | 4-month runway to build prediction market platform | Manifold grew to major EA forecasting tool; credited as “strong signal EA community cared” |
| David Krueger’s Lab (Cambridge) | $200,000 | 2023 | PhD students and compute at new AI safety lab | Established academic AI safety presence at Cambridge |
| Gabriel Mukobi | $40,680 | 2023-24 | Stanford CS master’s with AI governance focus | Accepted to 4/6 PhD programs; multiple publications |
| Lisa Thiergart & Monte MacDiarmid | $40,000 | 2024 | Activation addition interpretability paper | Conference publication on LM steering |
| Joshua Clymer | $1,500 | 2023 | A100 GPU compute for instruction-following experiments | First technical AI safety project |
| Einar Urdshals | ≈$50K | 2023 | Career transition from physics PhD to AI safety | Mentored independent research |
| SecureBio | Varies | 2025 | Field-building events for GCR in Boston | Spring/Summer 2025 program |
| Julian Guidote & Ben Chancey | ≈$15K | 2025 | 9-week stipend for Mandatory AI Safety “Red Bonds” policy paper | Policy proposal development |
Research Group Support
| Group/Lab | Amount | Period | Focus |
|---|---|---|---|
| SERI MATS Scholars | Multiple | 2021+ | Program supplements and independent research |
| AI Safety Camp | Multiple | 2019+ | Research mentorship programs |
| AXRP Podcast (Daniel Filan) | Multiple | 2020+ | AI safety podcast production |
| Robert Miles | Multiple | 2019+ | AI safety YouTube content |
Grant Type Details
Upskilling Grants: LTFF supports individuals seeking to build AI safety skills through various pathways.
| Grant Type | Typical Size | Purpose | Requirements |
|---|---|---|---|
| MATS Supplements | $10-25K | Living expenses during program | MATS acceptance |
| Course Attendance | $5-15K | Registration + living costs | Program acceptance |
| Self-Study Period | $20-50K | Independent learning runway | Study plan, mentorship |
| Research Visits | $10-30K | Collaboration at other orgs | Host organization agreement |
| PhD/Master’s Support | $20-80K | Tuition and living expenses | University admission |
Independent Research Grants: The core of LTFF’s portfolio.
| Grant Type | Typical Size | Duration | Requirements |
|---|---|---|---|
| Exploration Grant | $15-40K | 3-6 months | Research direction, preliminary work |
| Research Grant | $40-80K | 6-12 months | Track record, detailed proposal |
| Multi-Year Support | $80-200K | 1-2 years | Proven track record, clear milestones |
| Bridge Funding | $20-60K | 3-9 months | Gap between positions/grants |
LTFF is “pretty happy to offer ‘bridge’ funding for people who don’t quite meet [major lab] hiring bars yet, but are likely to in the next few years.”
Field-Building Grants: Supporting the AI safety ecosystem.
| Grant Type | Typical Size | Purpose | Examples |
|---|---|---|---|
| Conference Support | $10-50K | Event organization | Research retreats, workshops |
| Community Building | $15-40K | Local group support | University groups, city hubs |
| Infrastructure | $25-100K | Tools and platforms | Forecasting tools, research databases |
| Content Creation | $10-40K | Educational materials | YouTube, podcasts, writeups |
Fund Management
LTFF uses a distinctive governance model with permanent fund managers plus rotating guest managers who provide specialized expertise.
Current Committee (2024-2025)
Permanent Fund Managers:
| Manager | Role | Background | Focus Areas |
|---|---|---|---|
| Caleb Parikh | Interim Fund Chair, EA Funds Project Lead | ML research background; evaluated over $34M in applications | Technical safety, field-building |
| Oliver Habryka | Permanent Manager | CEO of Lightcone Infrastructure (LessWrong); cofounder of Lighthaven venue | Community infrastructure, epistemics |
| Linchuan Zhang | Permanent Manager, EA Funds Staff | Senior Researcher at Rethink Priorities; COVID forecasting background | Existential security research |
Recent Guest Fund Managers (2023-2024):
| Guest Manager | Affiliation | Expertise |
|---|---|---|
| Lawrence Chan | ARC Evals | AI evaluations, safety research |
| Clara Collier | Independent | AI governance |
| Daniel Eth | Independent | Technical AI safety |
| Lauro Langosco | DeepMind (previously) | ML research |
| Thomas Larsen | Independent | AI safety research |
| Eli Lifland | Forecasting | Quantitative analysis |
Committee Evolution
| Period | Chair | Notable Changes |
|---|---|---|
| 2017-2018 | Nick Beckstead (sole manager) | Initial fund; focus on established orgs |
| Oct 2018-2020 | Matt Fallshaw | New team: Habryka, Toner, Wage, Zhu |
| 2020-2023 | Asya Bergal | Expanded guest managers; transparency |
| 2023-present | Caleb Parikh (interim) | Asya stepped down to reduce Coefficient overlap |
Asya Bergal stepped down from the chair role in late 2023 to reduce overlap between Coefficient Giving (where she works as a Program Associate) and LTFF.
Evaluation Philosophy
The committee uses explicit criteria combined with significant judgment:
| Principle | Description |
|---|---|
| Hits-Based Giving | Willing to fund speculative grants with high potential upside |
| One Excited Manager Rule | Grants often funded when one manager is very excited, even if others are neutral |
| Model-Driven Granting | Assess whether proposed plans will actually work, not just stated intentions |
| Healthy Disagreement | Fund managers regularly disagree; diversity of views is feature not bug |
| Part-Time Capacity | Most managers have demanding day jobs, limiting deep evaluation time |
From the May 2023-March 2024 payout report: “The fund’s general policy has been to lean towards funding when one fund manager is very excited about a grant even if other fund managers are more neutral. The underlying model is that individual excitement is more likely to identify grants with significant impact potential in a hits-based giving framework.”
Comparison with Other Funders
LTFF occupies a specific niche in the AI safety funding landscape. Understanding its position relative to other funders helps applicants choose the right source.
Major AI Safety Funders Comparison
| Funder | Annual AI Safety | Median Grant | Focus | Application Style |
|---|---|---|---|---|
| Coefficient Giving | $70M+ (2022) | $257K | Organizations, large projects | Proactive research, RFPs |
| Survival and Flourishing Fund | $10-15M | $100K | Organizations, researchers | Annual S-Process + rolling |
| Long-Term Future Fund | $4-5M | $25K | Individuals, small projects | Open applications |
| Lightspeed Grants | $5-10M | Varies | Rapid response, individuals | Application rounds |
| Manifund | $1-2M | $10-50K | Individuals, experiments | Regranters + applications |
| AI Risk Mitigation Fund | $1-2M | Varies | AI safety specifically | Applications |
Source: Overview of the AI Safety Funding Situation
When to Apply to LTFF vs Other Funders
| Scenario | Best Funder | Why |
|---|---|---|
| Individual seeking $20-50K for 6-month research | LTFF | Sweet spot for LTFF’s median grant size |
| Organization seeking $500K+ annual budget | Coefficient or SFF | Too large for LTFF; need institutional funder |
| Career transition/upskilling | LTFF | Explicitly welcomes these applications |
| MATS living expenses supplement | LTFF | Established pipeline for program participants |
| Policy/governance organization | Coefficient | Needs diverse funder base for credibility |
| Rapid response to opportunity (< 2 weeks) | Lightspeed Grants | Faster than LTFF’s 21-day target |
| Experimental project needing community validation | Manifund | Regranter model tests interest |
| Large academic lab funding | Coefficient or SFF | $200K+ grants more common there |
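The table above amounts to a simple decision rule. A minimal sketch of that rule, purely illustrative: the function, parameter names, and dollar thresholds below are hypothetical simplifications of the table, not an official routing tool.

```python
def suggest_funder(amount_usd: float, is_individual: bool,
                   urgent: bool = False, policy_org: bool = False) -> str:
    """Illustrative restatement of the comparison table; not an official tool."""
    if urgent:
        return "Lightspeed Grants"            # faster than LTFF's 21-day target
    if policy_org:
        return "Coefficient Giving"           # policy orgs benefit from a diverse funder base
    if not is_individual or amount_usd >= 200_000:
        return "Coefficient Giving or SFF"    # organizations and large grants
    return "LTFF"                             # individuals and small-to-medium projects


# Example: an individual seeking $35K for six months of independent research
print(suggest_funder(35_000, is_individual=True))  # -> "LTFF"
```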
Funding Relationships
| Relationship | Description |
|---|---|
| LTFF ← Coefficient | ≈40-50% of LTFF funding comes from Coefficient regranting |
| LTFF → Coefficient | Many LTFF grantees later receive Coefficient funding |
| LTFF ↔ SFF | Similar cause focus, complementary scale |
| LTFF ↔ EAIF | AI-focused projects go to LTFF; general EA meta to EAIF |
| LTFF → Major Labs | Many grantees eventually hired by Anthropic, DeepMind, etc. |
LTFF vs EA Infrastructure Fund (EAIF)
Both LTFF and EAIF are part of EA Funds but serve different purposes:
| Dimension | LTFF | EAIF |
|---|---|---|
| Focus | Longtermism, AI safety, x-risk | EA community building, meta-work |
| Cause Area | Specific (AI safety, bio, etc.) | Cause-agnostic EA infrastructure |
| Application Volume | Higher (≈80/month) | Lower (fewer applications) |
| Institutional Funding | ≈40-50% from Coefficient | ≈80% from Coefficient |
| Strategic Direction | More stable | Higher rate of strategic changes |
When unsure: If your project focuses on AI safety specifically, apply to LTFF. If it’s about EA community building broadly, apply to EAIF. The funds can transfer applications between them if you apply to the wrong one.
Historical Evolution
Founding and Early Years (2017-2018)
EA Funds launched on February 28, 2017, created by the Centre for Effective Altruism (CEA) while going through Y Combinator’s accelerator program. The creation was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead and the donor lottery run by Paul Christiano and Carl Shulman in December 2016.
Initially, Nick Beckstead served as the sole manager of the Long-Term Future Fund. During this period, grants went mostly to established organizations like CSER, FLI, Charity Entrepreneurship, and Founders Pledge, with minimal public writeups. Beckstead stepped down in August 2018.
Transition to Active Grantmaking (2018-2020)
In October 2018, a new management team was announced: Matt Fallshaw (chair), Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with Nick Beckstead and Jonas Vollmer as advisors. This transition marked a fundamental shift:
| Aspect | Beckstead Era | Post-2018 Era |
|---|---|---|
| Recipients | Primarily organizations | More individuals, speculative projects |
| Grant Size | Larger | Wider range, more small grants |
| Transparency | Minimal writeups | Detailed public justifications |
| Risk Profile | Conservative | More hits-based |
The new approach generated some controversy—certain grants were “scathingly criticized in the comments.” However, a 2021 retrospective found a 30-40% success rate among 2018-2019 grantees, suggesting appropriate risk-taking for hits-based giving.
Scaling Period (2021-2022)
| Year | Total Grants | Key Developments |
|---|---|---|
| 2021 | ≈$4.8M | Added guest managers; expanded capacity |
| Q4 2021 | $2.1M | 34 grantees in single quarter |
| 2022 | ≈$4.5M | Peak pre-FTX; Manifold Markets grant |
The fund processed 878 applications from March 2022 to March 2023, funding 263 grants worth approximately $9.1M total (average $34.6K per grant).
FTX Impact and Recovery (2022-2023)
The November 2022 FTX collapse significantly impacted the EA funding ecosystem. While LTFF itself did not receive direct FTX funding, the downstream effects included:
- Increased uncertainty among applicants and donors
- Some grantees lost expected Future Fund grants
- Broader EA funding contraction
LTFF remained relatively stable, with Coefficient Giving (then Open Philanthropy) providing approximately 40-50% of funding and the remainder from direct donations. The fund issued statements about funding constraints in early 2023.
Current Period (2023-2025)
| Metric | 2023 | 2024 (projected) |
|---|---|---|
| Total Grants | $6.67M | $8M+ |
| AI Safety Portion | ≈67% | ≈70% |
| Acceptance Rate | 19.3% | ≈18-20% |
| Monthly Applications | ≈80-90 | ≈80-90 |
Recent strategic shifts include:
- Less funding for mechanistic interpretability due to field becoming less neglected
- Continued focus on upskilling and career transitions
- More stringent evaluation of claims about AI safety relevance
Impact and Outcomes
Researcher Pipeline
LTFF plays a critical role in the AI safety talent pipeline. Many researchers receive their first external funding through LTFF before achieving one of several outcomes:
| Outcome | Description | Evidence |
|---|---|---|
| Major Lab Hires | Researchers later hired by Anthropic, DeepMind, OpenAI | Multiple MATS scholars post-LTFF |
| Academic Positions | Faculty or postdoc roles in AI safety | David Krueger, Gabriel Mukobi PhD admissions |
| Research Output | Published papers, tools, analyses | Interpretability papers, safety tooling |
| Career Pivots | Successful transitions into AI safety | Physics PhDs, software engineers entering field |
| Capability Building | Skills developed through funded training | MATS completions, self-study periods |
LTFF grantees often exhibit extraordinary earning potential. As fund managers note, many “are excellent researchers (or have the potential to become one in a few years) and could easily take jobs in big tech or finance, and some could command high salaries (over $400k/year) while conducting similar research at AI labs.”
Documented Success Stories
| Grantee | LTFF Support | Subsequent Achievement |
|---|---|---|
| Gabriel Mukobi | $40.7K for Stanford master’s | Accepted to 4/6 PhD programs; multiple publications |
| Manifold Markets | $200K early-stage | Grew to major EA forecasting platform |
| AXRP Podcast | Multiple grants | Established AI safety podcast with consistent output |
| Robert Miles | Multiple grants | Major AI safety YouTube creator, mainstream reach |
| Mechanistic interpretability researchers | Multiple | Field growth attributed partly to early LTFF support |
Quantifying Impact
A 2021 retrospective analysis of 2018-2019 grantees found:
| Finding | Interpretation |
|---|---|
| 30-40% success rate | Appropriate risk level for hits-based giving |
| Track record correlation | Grantees with prior relevant experience performed better |
| Renewal patterns | Successful grantees often received follow-on funding |
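For intuition on why a 30-40% hit rate can represent appropriate risk-taking, consider a toy expected-value calculation. The value multipliers below are assumptions chosen purely for illustration; LTFF does not publish per-grant impact figures.

```python
# Toy illustration of the hits-based giving logic; multipliers are assumed, not LTFF data.
p_success = 0.35        # midpoint of the 30-40% success rate from the 2021 retrospective
value_if_hit = 10.0     # assumed: a successful grant returns ~10x its cost in impact
value_if_miss = 0.0     # assumed: unsuccessful grants return roughly nothing

expected_multiple = p_success * value_if_hit + (1 - p_success) * value_if_miss
print(f"Expected impact per dollar granted: {expected_multiple:.1f}x")  # 3.5x under these assumptions

# The portfolio can be strongly positive in expectation even though most
# individual grants do not pan out, which is the core of hits-based giving.
```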
The fund’s impact extends beyond direct grantees. By funding early-stage researchers who later join major labs or receive larger grants, LTFF serves as a “farm team” for the AI safety field.
Strengths and Limitations
Organizational Strengths
| Strength | Description | Evidence |
|---|---|---|
| Speed | Much faster than major foundations | 21-day target vs months at Coefficient |
| Flexibility | Funds individuals, not just organizations | Median grant $25K; individuals welcome |
| Accessibility | Lower barriers to application | Rolling applications; quick online form |
| Risk Tolerance | Willing to fund early-stage ideas | Hits-based approach; one excited manager can approve |
| Transparency | Publishes detailed grant reasoning | Payout reports on EA Forum with justifications |
| Renewal Support | Happy to renew successful grants | “Would be happy to be primary funder for years” |
| Bridge Function | Supports people building toward larger opportunities | Explicit bridge funding category |
Organizational Limitations
| Limitation | Description | Mitigation |
|---|---|---|
| Scale | $5-8M is small relative to field needs | Complements rather than replaces major funders |
| Capacity | Part-time managers limit deep evaluation | Guest managers provide additional bandwidth |
| Individual Focus | Less suited for large organizations | Refer to Coefficient or SFF for large orgs |
| Concentration | Heavy AI safety focus; other x-risks less funded | Apply to specialized funds for non-AI work |
| Funding Dependency | ≈40-50% from Coefficient | Diversified direct donation base |
| Feedback Quality | Rejection feedback varies in depth | Explicit in AMAs that feedback is limited |
Strategic Challenges
| Challenge | Description |
|---|---|
| Grantmaker Bottleneck | As Coefficient staff noted, “a key bottleneck is that they currently don’t have enough qualified AI Safety grantmakers to hand out money fast enough” |
| Neglectedness Shifts | Areas like mechanistic interpretability become less neglected, requiring strategic adjustment |
| Talent Retention | Fund managers have demanding day jobs; turnover creates institutional memory loss |
| Validation Without Dependency | Goal is to help researchers become self-sustaining, not LTFF-dependent |
How to Apply
Application Process
| Step | Details | Tips |
|---|---|---|
| 1. Review Guidelines | Check EA Funds website for current priorities | Note recent payout reports for examples |
| 2. Assess Fit | Confirm LTFF is right fund (vs EAIF, Animal Welfare, Global Health) | AI safety → LTFF; general EA → EAIF |
| 3. Prepare Application | Describe project, background, budget, theory of change | Be specific; include milestones |
| 4. Submit Online | Via EA Funds application portal | Mark time-sensitive if urgent |
| 5. Wait for Response | Target 21 days, max 42 days | Check spam folder; follow up if over 6 weeks |
| 6. Respond to Questions | Committee may request clarification | Respond promptly; additional info often requested |
| 7. Negotiate Terms | Discuss grant structure, milestones, reporting | Standard terms; flexibility for edge cases |
Application Portal
The application form includes:
- Project description: What you’ll do and why it matters
- Background: Relevant experience and qualifications
- Budget: Itemized costs with justification
- Timeline: Key milestones and deliverables
- Theory of change: How this reduces existential risk
- Counterfactual: What happens without LTFF funding
- Other funding: Alternative sources, if any
After Approval
| Stage | Details |
|---|---|
| Grant Agreement | Review and sign terms (usually standard) |
| Disbursement | Funds sent within 1-2 weeks of signing |
| Reporting | Progress updates as agreed; typically light-touch |
| Renewal | Apply again if continuing work; track record helps |
Funding Sources and Donations
Revenue Composition
| Source | Percentage | Notes |
|---|---|---|
| Coefficient Giving | ≈40-50% | Regranting arrangement; largest single source |
| Direct Donations | ≈40-50% | Individual EA donors; recurring and one-time |
| Other Institutional | ≈10% | Other foundations, DAFs |
How to Donate
| Method | Details |
|---|---|
| Direct Donation | funds.effectivealtruism.org (credit card, bank transfer) |
| Every.org | Tax-deductible for US donors |
| Manifund | Regranting via LTFF project page |
| DAF Grants | Specify “Long-Term Future Fund” via CEA |
For Large Donors
Donors considering significant contributions ($50K+) can contact the fund directly. Linchuan Zhang has volunteered to speak with donors about LTFF’s strategy and current funding needs.
Sources and Citations
Primary Sources
- Long-Term Future Fund Official Page
- EA Forum LTFF Topic
- Giving What We Can - LTFF Profile
- Manifund - LTFF Project
Grant Reports and Payout Announcements
- May 2023 to March 2024 Payout Report - Most recent comprehensive report
- April 2023 Grant Recommendations
- 2018-2019 Grantee Retrospective - Impact assessment of early grants
AMA and Transparency Posts
- Ask Us Anything - September 2023
- Ask Us Anything - Original
- Reflections on My Time on LTFF - Fund manager perspective
Funding Landscape Analysis
- Overview of the AI Safety Funding Situation - Comprehensive funding comparison
- What Does a Marginal Grant at LTFF Look Like? - Grant evaluation thresholds
- LTFF and EAIF Funding Constraints - Post-FTX funding situation
EA Funds History
- Introducing the EA Funds - Original launch announcement
- EA Funds Beta Launch - Initial rollout
- Rolling Applications Announcement
Related Organizations
- Coefficient Giving AI Safety Grants
- Survival and Flourishing Fund
- MATS Program
- AISafety.com Funding Guide