April 2019: Long-Term Future Fund Grants and Recommendations
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Centre for Effective Altruism
This is the EA Long-Term Future Fund's Q1 2019 grant report, detailing $875,150 in funding across 21 grantees working on AI safety, biosecurity, forecasting, rationality, and existential risk reduction — providing insight into early EA funding priorities for AI safety research.
Metadata
Summary
The EA Long-Term Future Fund distributed $875,150 across 21 grantees in Q1 2019, supporting projects spanning AI alignment research, forecasting infrastructure, biosecurity, community building, and rationality training. Notable recipients include CFAR ($150K), Metaculus ($70K), Ought ($50K), MIRI ($50K), and various individual researchers. Two additional grants were recommended but not approved by CEA.
Key Points
- Total of $875,150 distributed to 21 grantees, with a mix of institutional grants (CFAR, MIRI, Ought) and individual researcher support.
- AI safety-focused grants include Alex Turner (corrigibility/mild optimization), Anand Srinivasan (safe intelligence amplification), and AI Safety Camp.
- Significant investment in forecasting infrastructure: Metaculus ($70K), Ozzie Gooen's forecasting tools ($70K), and Jacob L.'s superforecasting for X-risk researchers ($27K).
- Community and pipeline building funded via CFAR, Robert Miles' AI alignment videos, and European AI governance workshops.
- Two recommended grants (Lauren Lee on burnout prevention, Mikhail Yagudin on HPMOR distribution) were not approved by CEA.
Cached Content Preview
April 2019: Long-Term Future Fund Grants and Recommendations
Payout Date: March 21, 2019
Total grants: USD 875,150
Number of grantees: 21

Oliver Habryka also posted these details on the EA Forum, where he answered questions and clarified some of his points.
--
This post contains our grant allocation and some explanatory reasoning for our Q1 2019 grant round. We opened an application for grant requests earlier this year, which remained open for about one month; after it closed, we received a large, unanticipated donation of about $715,000, which led us to reopen the application for another two weeks. We then used a mixture of independent voting and consensus discussion to arrive at our current grant allocation.
In the writeups below, we explain the purpose of each grant and summarize our reasons for recommending it. Most summaries were written by the fund manager who was most excited about recommending the relevant grant (we've noted any exceptions in the text). The summaries differ widely in length, depending on how much time each fund member had available to explain their reasoning.
When we’ve shared excerpts from an application, those excerpts may have been lightly edited for context or clarity.
Grant Recipients
Each grant recipient is followed by the size of the grant and their one-sentence description of their project.
Anthony Aguirre ($70,000): A major expansion of the Metaculus prediction platform and its community
Tessa Alexanian ($26,250): A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers
Shahar Avin ($40,000): Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers
Lucius Caviola ($50,000): Conducting postdoctoral research at Harvard on the psychology of EA/long-termism
Connor Flexman ($20,000): Performing independent research in collaboration with John Salvatier
Ozzie Gooen ($70,000): Building infrastructure for the future of effective forecasting efforts
David Girardo ($30,000): A research agenda rigorously connecting the internal and external views of value synthesis
Johannes Heidecke (AI Safety Camp) ($25,000): Supporting aspiring researchers of AI alignment to boost themselves into productivity
Nikhil Kunapuli ($30,000): A study of safe exploration and robustness to distributional shift in biological complex systems
Jacob L. ($27,000): Building infrastructure to give X-risk researchers superforecasting ability with minimal overhead
Alex Lintz ($17,900): A two-day, career-focused workshop to inform and connect European EAs interested in AI governance
Orpheus Lummis ($10,000): Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing an ML PhD
Vyacheslav Matyuhin ($50,000): An offline community hub for rationalists and EAs
Tegan McCaslin ($30,000): Conducting independent research
... (truncated, 86 KB total)