
Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Centre for Effective Altruism

The Long-Term Future Fund is a key funding source for early-career AI safety researchers and independent projects; reviewing its grant history offers insight into what the EA community considers high-priority work in existential risk reduction.

Metadata

Importance: 55/100

Summary

The Long-Term Future Fund is an Effective Altruism-affiliated grantmaking fund focused on improving humanity's prospects over the long run, particularly by supporting work on reducing existential and catastrophic risks. It funds research, advocacy, and capacity-building projects related to AI safety, biosecurity, and other global priorities. The fund is managed by a committee of EA community members and makes grants on a rolling basis.

Key Points

  • Funds projects aimed at reducing existential and global catastrophic risks, with a strong emphasis on AI safety and alignment research.
  • Operates as part of the EA Funds platform, allowing individual donors to pool resources toward high-impact long-term causes.
  • Supports a broad range of activities including technical research, policy work, field-building, and individual researcher grants.
  • Grant decisions are made by a rotating committee of experienced EA and AI safety community members.
  • Publishes grant reports detailing funding decisions and rationale, providing transparency into priorities and reasoning.

Cited by 4 pages

Page                                 Type          Quality
AI Safety Research Value Model       Analysis      60.0
Coefficient Giving                   Organization  55.0
Long-Term Future Fund (LTFF)         Organization  56.0
Survival and Flourishing Fund (SFF)  Organization  59.0

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 8 KB
Long-Term Future Fund | Effective Altruism Funds

We make grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. We also seek to promote longtermist ideas and increase the likelihood that future generations will flourish.

Impact

The Long-Term Future Fund has recommended several million dollars' worth of grants to a range of organizations and projects, including ones that:

  • Created an instruction-generalization benchmark for LLMs
  • Built and maintained digital infrastructure for the AI safety ecosystem
  • Conducted public and expert surveys on AI governance and forecasting
  • Ran a biorisk summit
  • Ran an AI safety independent research program

About the fund

The Fund has historically supported researchers in areas such as cause prioritization, existential risk identification and mitigation, and technical research on the development of safe and secure artificial intelligence, an area where it was among the first funders. Most of our fund managers have built their careers working full time in areas directly relevant to the Fund's mission.
The Fund managers can be contacted at longtermfuture[at]effectivealtruismfunds.org.

Focus areas

 The Fund has a broad remit to make grants that promote, implement and advocate for longtermist ideas. Many of our grants aim to address potential risks from advanced artificial intelligence and to build infrastructure and advocate for longtermist projects. However, we welcome applications related to long-term institutional reform or other global catastrophic risks (e.g., pandemics or nuclear conflict). We intend to support: 
 
  • Projects that directly contribute to reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects
  • Training for researchers or practitioners who work to mitigate existential risks, help with relevant recruitment efforts, or infrastructure for people working on longtermist projects
  • Promoting long-term thinking

 Featured grants with outstanding outcomes

  • Logan Smith: $40,000 (2024 Q3). 6-month stipend to create language model (LM) tools to aid alignment research through feedback and content generation.
  • Robert Miles: $121,575 (2023 Q3). 1-year stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved.
  • Jeffrey Ladish: $98,000 (2023 Q1). 6-month salary and operational expenses to start a cybersecurity & alignment risk assessment org.
  • Sage Bergerson: $2,500 (2022 Q4). 5-month part-time stipend for collaborating on a research paper analyzing the implications of compute access with Epoch, FutureTech (MIT CSAIL), and GovAI.

... (truncated, 8 KB total)
Resource ID: 9baa7f54db71864d | Stable ID: sid_QZEgoeZvKD