Longterm Wiki

An Overview of the AI Safety Funding Situation (LessWrong)

blog

Author

Stephen McAleese

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

Data Status

Full text fetched (Dec 28, 2025)

Summary

Analyzes AI safety funding from sources like Open Philanthropy, Survival and Flourishing Fund, and academic institutions. Estimates total global AI safety spending and explores talent versus funding constraints.

Key Points

  • Open Philanthropy is the largest AI safety funder, spending about $46 million in 2023
  • For-profit AI companies contribute an estimated $32 million annually to AI safety research
  • The field may be simultaneously constrained by funding, talent, and leadership

Review

This detailed analysis provides a nuanced examination of the AI safety funding landscape, revealing the complex ecosystem of financial support for preventing potential negative AI outcomes. The research tracks funding from philanthropic organizations, government grants, academic research, and for-profit companies, demonstrating a growing financial commitment to AI safety research. The methodology involves aggregating grant databases, constructing Fermi estimates, and analyzing spending across different organizational types. Key findings include an estimated $32 million annual contribution from for-profit AI companies, approximately $11 million from academic research in 2023, and significant contributions from organizations like Open Philanthropy. Beyond financial tracking, the analysis explores whether the field is more constrained by talent or by funding, suggesting a complex interdependence between financial resources and human capital.

Cited by 8 pages

Cached Content Preview

HTTP 200 | Fetched Feb 22, 2026 | 36 KB

 An Overview of the AI Safety Funding Situation 

 by Stephen McAleese, 12th Jul 2023, 18 min read

 Note: this post was updated in January 2025 to reflect all available data from 2024. 

 Introduction

 AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field does research on problems such as the AI alignment problem, which is the problem of designing AI systems that follow user intentions and behave in a desirable and beneficial way.

 Understanding and solving AI safety problems may involve reading past research, producing new research in the form of papers or posts, running experiments with ML models, and so on. Producing research typically involves many different inputs such as research staff, compute, equipment, and office space.

 These inputs all require funding and therefore funding is a crucial input for enabling or accelerating AI safety research. Securing funding is usually a prerequisite for starting or continuing AI safety research in industry, in an academic setting, or independently.

 There are many barriers that could prevent people from working on AI safety. Funding is one of them. Even if someone is working on AI safety, a lack of funding may prevent them from continuing to work on it.

 It’s not clear how hard AI safety problems like AI alignment are. But in any case, humanity is more likely to solve them if there are hundreds or thousands of brilliant minds working on them rather than one guy. I would like there to be a large and thriving community of people working on AI safety and I think funding is an important prerequisite for enabling that.

 The goal of this post is to give the reader a better understanding of funding opportunities in AI safety so that hopefully funding will be less of a barrier if they want to work on AI safety. The post starts with a high-level overview of the AI safety funding situation followed by a more in-depth description of various funding opportunities.

 Past work

 To get an overview of AI safety spending, we first need to find out how much is spent on it per year. We can use past work as a prior and then use grant data to find a more accurate estimate.

 Changes in funding in the AI safety field (2017) by the Center for Effective Altruism estimated the change in AI safety funding between 2014 and 2017. In 2017, the post estimated that total AI safety spending was about $9 million.
 How are resources in effective altruism allocated across issues? (2020) by 80,000 Hours estimated the amount of money spent by EA on AI safety in 2019. Using data from the Open Philanthropy grants database, the post says that EA spent about $40 million on AI safety globally in 2019.
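The estimation approach described above amounts to aggregating per-category figures into a rough annual total. A minimal Fermi-style sketch, using only the 2023 figures quoted elsewhere on this page (the category breakdown and dictionary structure are illustrative, not the post's actual model):

```python
# Fermi-style aggregation of annual AI safety spending, in millions of USD.
# Figures are the 2023 estimates quoted on this page; the breakdown into
# exactly these categories is a simplifying assumption for illustration.
funding_estimates_musd = {
    "open_philanthropy": 46,  # ~$46M spent in 2023
    "for_profit_labs": 32,    # ~$32M/year estimated from AI companies
    "academia": 11,           # ~$11M from academic research in 2023
}

total = sum(funding_estimates_musd.values())
print(f"Rough total: ${total}M/year")  # Rough total: $89M/year
```

A real estimate would add more categories (other grantmakers, government programs) and attach uncertainty ranges to each figure rather than point values.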
 In The Precipice (2020), To

... (truncated, 36 KB total)
Resource ID: b1ab921f9cbae109 | Stable ID: NmEzYWI0MW