Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Future of Life Institute

This page archives FLI's landmark 2015 AI Safety Grant Program, the first peer-reviewed grants initiative focused on beneficial AI, which distributed $6.5M to 37 researchers and helped establish the modern AI safety research field.

Metadata

Importance: 62/100 · organizational report · reference

Summary

In 2015, the Future of Life Institute launched the first peer-reviewed AI safety grants program, awarding $6.5 million across 37 research projects. The archive documents funded projects including AI Impacts (forecasting AI timelines) and MIRI's work on aligning superintelligence with human interests. This program helped catalyze the modern AI safety research ecosystem.

Key Points

  • FLI awarded $6.5M to 37 researchers in the first-ever peer-reviewed AI safety grants program in 2015.
  • AI Impacts received funding to research AI timelines, including brain-computing comparisons and discontinuities in AI progress.
  • MIRI received $250,000 to develop toy models and formal foundations for the AI alignment problem.
  • The program represented a pivotal moment in institutionalizing AI safety as a legitimate research field.
  • Projects focused on both technical alignment challenges and forecasting societal impacts of advanced AI.

Cited by 1 page

Page | Type | Quality
Future of Life Institute | Organization | 46.0

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 98 KB
2015 AI Safety Grant Program

In 2015, FLI launched the first peer-reviewed grants program aimed at ensuring artificial intelligence (AI) remains safe, ethical and beneficial. In the first round, FLI awarded $6.5M to 37 researchers.

Status: Completed

 Grants archive

An archive of all grants provided within this grant program:

Project title: AI Impacts

Amount recommended: $49,310.00
Primary investigator: Katja Grace, Machine Intelligence Research Institute

Project Summary

 Many experts think that within a century, artificial intelligence will be able to do almost anything a human can do. This might mean humans are no longer in control of what happens, and very likely means they are no longer employable. The world might be very different, and the changes that take place could be dangerous.

 Very little research has asked when this transition will happen, what will happen, and how we can make it go well. AI Impacts is a project to ask those questions, and to answer them rigorously. We look for research projects that can shed light on the future of AI; especially on questions that matter to people making decisions. We publish the results online, and explain our research to a broad audience.

 We are currently working on comparing the power of the brain to that of supercomputers, to help calculate when people will have enough hardware to run something as complex as a brain. We are also checking whether AI progress is likely to see sudden jumps, by looking for jumps in other areas of technological progress.

 Technical Abstract

 ‘Human-level’ artificial intelligence will have far-reaching effects on society, and is generally anticipated within the coming century. Relatively little is known about the timelines or consequences of this arrival, though increasingly many decisions depend on guesses about it. AI Impacts identifies cost-effective research projects which might shed light on the future of AI, and especially on the parts of it that might guide policy and other decisions. We perform a selection of these research projects, and publish the results as accessible articles in the public domain.

We recently made a preliminary estimate of the computing performance of the brain in terms of traversed edges per second (TEPS), a supercomputing benchmark, to better judge when computing hardware will be capable of replicating what the brain does, given the right software. We are also collecting case studies of abrupt technological progress to aid in evaluating the probability of discontinuities in AI progress. In the coming year we will continue with both of these projects, publish articles about several projects in progress, and start several new projects.
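To make this kind of hardware-timeline comparison concrete, the short sketch below extrapolates when machine TEPS might reach a brain-scale figure under steady exponential growth. The brain estimate, 2015 hardware figure, and growth rate are illustrative assumptions chosen for the arithmetic, not numbers from the AI Impacts estimate.

```python
# Illustrative extrapolation: years until hardware TEPS matches an assumed
# brain-scale TEPS figure, given steady exponential growth in hardware.
# All constants are placeholder assumptions, not AI Impacts' published values.
import math

BRAIN_TEPS = 1e14           # assumed brain performance (traversed edges per second)
HARDWARE_TEPS_2015 = 2e13   # assumed top-supercomputer performance in 2015
ANNUAL_GROWTH = 1.5         # assumed yearly multiplier in achievable TEPS

# Solve HARDWARE_TEPS_2015 * ANNUAL_GROWTH**t = BRAIN_TEPS for t.
years_to_parity = math.log(BRAIN_TEPS / HARDWARE_TEPS_2015) / math.log(ANNUAL_GROWTH)
print(f"Hardware parity in ~{years_to_parity:.1f} years (around {2015 + round(years_to_parity)})")
```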

Project title: Aligning Superintelligence With Human Interests

Amount recommended: $250,000.00
Primary investigator: Benja Fallenstein, Machine Intelligence Research Institute

Project Summary

 How can we ensure that powerful AI sys

... (truncated, 98 KB total)
Resource ID: 610cadadc65ccb8e