Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Centre for Effective Altruism

This is a grant payout report from the EA Long-Term Future Fund (November 2019), distributing $466,000 across 15 grantees, including several AI safety researchers, biosecurity projects, and community-building efforts relevant to existential risk reduction.

Metadata

Importance: 38/100 · organizational report · news

Summary

The EA Long-Term Future Fund distributed $466,000 to 15 grantees in November 2019, supporting independent AI safety research, biosecurity work, community infrastructure, and longtermist policy advocacy. Grants ranged from $10,000 to $62,000 and covered technical research (agent foundations, AI forecasting), field-building (AI Safety Camp Toronto), and support services (subsidized therapy for EA workers). The report includes a brief rationale for each grant by fund manager Oliver Habryka.

Key Points

  • 15 grantees received a total of $466,000, with individual grants ranging from $10,000 to $62,000.
  • Technical AI safety grants included work on agent foundations (Daniel Demski), abstraction theory for embedded agency (John Wentworth), and AI Safety Via Debate (Joe Collman).
  • Field-building grants supported AI Safety Camp Toronto, independent researchers transitioning into AI safety, and biosecurity white-space analysis.
  • Non-technical grants included subsidized therapy for EA workers, longtermist UK policy advocacy, and a note-taking tool (Roam Research) used by EA researchers.
  • The report was intentionally brief due to internal restructuring and time constraints, with more detailed writeups promised as a follow-up.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 17 KB
Long-Term Future Fund — November 2019: Long-Term Future Fund Grants

 Payout Date: November 22, 2019 Total grants: USD 466,000 Number of grantees: 15 Since we’ve been dealing with a larger-than-usual set of commitments for the Long-Term Future Fund, including some internal restructuring, discussion of fund scope, and coordination of fundraising initiatives, we did not end up having enough time to produce a set of writeups with as much detail as those written for past rounds. 
 As a result, the following report consists of a relatively straightforward list of the grants we made, with short explanations of the reasoning behind them. I (Oliver Habryka) am planning to follow this up in a few weeks with more detailed explanations of my reasoning, and other fund members might do the same. I will still be available to respond to comments and questions in the comment section. 
 Grant Recipients

 Grants Made By the Long-Term Future Fund

 Each grant recipient is followed by the size of the grant and their one-sentence description of their project. All of these grants have been made. 
 
 Damon Pourtahmaseb-Sasi ($40,000):  Subsidized therapy/coaching/mediation for those working on the future of humanity. 

 Tegan McCaslin ($40,000):  Conducting independent research into AI forecasting and strategy questions. 

 Vojtěch Kovařík ($43,000):  Research funding for a year, to enable a transition to AI safety work. 

 Jaspreet Pannu ($18,000):  Surveying the neglectedness of broad-spectrum antiviral development. 

 John Wentworth ($30,000):  Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop. 

 Elizabeth E. Van Nostrand ($19,000):  Create a toolkit to bootstrap from zero to competence in ambiguous fields. 

 Daniel Demski ($30,000):  Independent research on agent foundations. 

 Sam Hilton ($62,000):  Supporting the rights of future generations in UK policy and politics. 

 Topos Institute ($50,000):  A summit for the world's leading applied category theorists to engage with human flourishing experts. 

 Jason Crawford ($25,000):  Tell the story of human progress to the world, and promote progress as a moral imperative. 

 Kyle Fish ($30,000):  Identifying white space opportunities for technical projects to improve biosecurity. 

 AI Safety Camp Toronto ($29,000):  AISC Toronto brings together aspiring researchers to work on concrete problems in AI safety. 

 Miranda Dixon-Luinenburg ($20,000):  Writing fiction to convey EA and rationality-related topics. 

 Roam Research ($20,000): A note-taking tool for networked thought, actively used by many EA researchers. 

 Joe Collman ($10,000): Investigation of AI Safety Via Debate and ML training. 

 
 Total distributed: $466,000 
 Writeups by Oliver Habryka

 Damon Pourtahmaseb-Sasi ($40,000)

 Subsidized therapy/coaching/mediation for those working on the future of humanity. 
 We are aware of a significant number o

... (truncated, 17 KB total)
Resource ID: 565c9b2218268467