Longterm Wiki

Alignment Forum - 2021 AI Alignment Literature Review

blog

Author

Larks

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

An annual tradition on the Alignment Forum, this 2021 edition is a go-to reference for understanding the organizational landscape of AI safety research and is widely used by newcomers and donors assessing where to direct resources.

Metadata

Importance: 62/100 · blog post · reference

Summary

A comprehensive annual review of the AI alignment research landscape, surveying major organizations (FHI, MIRI, Anthropic, DeepMind, OpenAI, ARC, and others), their research approaches, and contributions to AI safety. The document also serves as a comparative guide for donors evaluating AI safety charities, and as an entry point for those new to AI existential risk.

Key Points

  • Surveys 15+ major AI safety research organizations including FHI, MIRI, CHAI, Anthropic, DeepMind, OpenAI, ARC, Redwood Research, and Ought.
  • Provides comparative analysis of organizational approaches, output quality, and funding needs to guide effective altruist charitable giving decisions.
  • Includes an introductory section for readers new to AI as an existential risk, making it accessible to newcomers.
  • Covers both technical alignment research organizations and adjacent groups focused on governance, policy, and global catastrophic risk.
  • Represents a snapshot of the 2021 AI safety ecosystem, useful for understanding how the field was structured and prioritized at that time.

Cited by 1 page

Page | Type | Quality
Elicit (AI Research Tool) | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 98 KB
Archived by the Wayback Machine (Collection: Common Crawl): http://web.archive.org/web/20260116190152/https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison

 

2021 AI Alignment Literature Review and Charity Comparison — AI Alignment Forum

by Larks · 23rd Dec 2021 · 87 min read

Tags: Literature Reviews, Academic Papers, AI (Curated)

Cross-posted to the EA Forum here.

Introduction

As in 2016, 2017, 2018, 2019 and 2020, I have attempted to review the research produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to the one GiveWell performs for global health charities, and somewhat similar to that of a securities analyst evaluating possible investments.

My aim is basically to judge the output of each organisation in 2021 (technically: 2020-12-01 to 2021-11-30) and compare it to their budget. This should give a sense of each organisation's average cost-effectiveness. We can also compare their financial reserves to their 2021 budgets to get a sense of how urgently they need further funding.
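For concreteness, the two comparisons can be read as simple ratios. Below is a minimal sketch in Python; the organisation names, output scores, and dollar figures are hypothetical illustrations, not numbers from the review:

```python
# A minimal sketch (not from the post) of the two ratios described above,
# using hypothetical organisations and figures.
orgs = {
    # name: (subjective output score, 2021 budget in USD, financial reserves in USD)
    "ExampleOrg A": (8.0, 2_000_000, 6_000_000),
    "ExampleOrg B": (5.0, 500_000, 250_000),
}

for name, (output_score, budget, reserves) in orgs.items():
    cost_effectiveness = output_score / budget  # output per dollar spent
    runway_years = reserves / budget            # years the org could operate at its current budget
    print(f"{name}: {cost_effectiveness:.2e} output/$, {runway_years:.1f} years of runway")
```

On this framing, a higher output-to-budget ratio suggests better average cost-effectiveness, and a shorter runway suggests a more urgent funding need, all else equal.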

This document aims to be sufficiently broad that someone who has paid no attention to the space all year could read it (and the linked documents) and be as well informed when making donation decisions as they reasonably could be without personally interviewing researchers and organisations.

I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever, I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life, and 3) computer games.

This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, all of which do a lot of work on other issues, which I attempt to cover but only very cursorily.

How to read this document

This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don’t recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. You should also read the Conflict of Interest Section.

If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You 

... (truncated, 98 KB total)
Resource ID: 2015b4d610e5549c | Stable ID: sid_pyGYXygtLD