An Overview of the AI Safety Funding Situation
Author
Stephen McAleese
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Useful reference for understanding the financial infrastructure of AI safety research as of mid-2023, particularly relevant given the post-FTX collapse reshaping of the funding landscape.
Forum Post Details
Karma
142
Comments
15
Forum
EA Forum
Forum Tags
AI safety · Building effective altruism · Forecasting · Effective altruism funding · Building the field of AI safety
Metadata
Importance: 55/100 · blog post · analysis
Summary
A comprehensive survey of the AI safety funding landscape as of mid-2023, cataloging major philanthropic sources including Open Philanthropy, the FTX Future Fund, and the Long-Term Future Fund. The post maps the distribution of financial resources across AI safety research mechanisms and identifies key institutional players shaping the field's financial ecosystem.
Key Points
- Open Philanthropy remains the dominant funder in AI safety, with the FTX Future Fund's collapse creating a significant funding gap in the ecosystem.
- Emerging funders, including AI companies and academic institutions, are beginning to fill some gaps left by philanthropic contraction.
- The analysis reveals concentration risk in AI safety funding, with a small number of funders controlling a large share of resources.
- Different funding mechanisms (grants, fellowships, research programs) are mapped across the major institutions supporting AI safety work.
- The post identifies underserved areas and potential misalignments between available funding and research priorities in the field.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Mainstream Era | Historical | 42.0 |
| AI Safety Field Building and Community | Crux | 0.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 37 KB
# An Overview of the AI Safety Funding Situation
By Stephen McAleese
Published: 2023-07-12
*Note: this post was updated in January 2025 to reflect all available data from 2024.*
Introduction
============
AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field researches problems such as AI alignment: the problem of designing AI systems that follow user intentions and behave in desirable and beneficial ways.
Understanding and solving AI safety problems may involve reading past research, producing new research in the form of papers or posts, running experiments with ML models, and so on. Producing research typically involves many different inputs such as research staff, compute, equipment, and office space.
All of these inputs cost money, which makes funding a crucial input for enabling or accelerating AI safety research. Securing funding is usually a prerequisite for starting or continuing AI safety research in industry, in academia, or independently.
There are many barriers that could prevent people from working on AI safety. Funding is [one](https://www.lesswrong.com/posts/3eB7PsDCbuiNjaAnZ/why-i-m-not-yet-a-full-time-technical-alignment-researcher) of them. Even if someone is working on AI safety, a lack of funding may prevent them from [continuing](https://www.lesswrong.com/posts/HDXLTFnSndhpLj2XZ/i-m-leaving-ai-alignment-you-better-stay) to work on it.
It’s not [clear](https://www.lesswrong.com/posts/EjgfreeibTXRx9Ham/ten-levels-of-ai-alignment-difficulty) how hard AI safety problems like AI alignment are. But in any case, humanity is more likely to solve them if there are hundreds or thousands of brilliant minds working on them rather than [one](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_C_) guy. I would like there to be a large and thriving community of people working on AI safety and I think funding is an important prerequisite for enabling that.
The goal of this post is to give the reader a better understanding of funding opportunities in AI safety, so that funding is less of a barrier for those who want to work on it. The post starts with a high-level overview of the AI safety funding situation, followed by a more in-depth description of various funding opportunities.
Past work
=========
To get an overview of AI safety spending, we first need to estimate how much is spent on it per year. We can use past work as a prior and then use grant data to arrive at a more accurate estimate.
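To make the grant-data step concrete, here is a minimal sketch (not from the original post) of how public grant records could be aggregated into a per-year spending estimate. The file name `grants.csv` and its `year`/`amount_usd` columns are assumptions for illustration.

```python
# Minimal sketch: estimating annual AI safety spending from grant records.
# Assumes a hypothetical CSV ("grants.csv") with columns: funder, year, amount_usd.
import csv
from collections import defaultdict

def spending_by_year(path: str) -> dict[int, float]:
    """Sum grant amounts per year from a CSV of grant records."""
    totals: defaultdict[int, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[int(row["year"])] += float(row["amount_usd"])
    return dict(sorted(totals.items()))

if __name__ == "__main__":
    for year, total in spending_by_year("grants.csv").items():
        print(f"{year}: ${total / 1e6:.1f}M")
```

In practice, the hard part is assembling the input data: deciding which grants count as "AI safety" and deduplicating grants reported by both funder and recipient matters far more than the aggregation itself.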
* [Changes in funding in the AI safety field](https://aiimpacts.org/changes-in-funding-in-the-ai-safety-field/) (2017) by the Center for Effective Altruism estimated the change in AI safety funding between 2014 and 2017. It estimated total AI safety spending in 2017 at about $9 million.
* [How are resources in effective altruism allocated across issues?](htt
... (truncated, 37 KB total)