Longterm Wiki

80,000 Hours technical AI safety upskilling resources

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: 80,000 Hours

Published by 80,000 Hours in June 2025, this resource is aimed at career-changers and students seeking structured pathways into technical AI safety research; useful as a starting point for newcomers to the field.

Metadata

Importance: 55/100 | Tags: blog post, educational

Summary

A curated guide from 80,000 Hours providing resources for individuals looking to develop technical skills relevant to AI safety research. It aggregates learning materials, courses, and pathways to help people transition into or advance within the technical AI safety field. The resource supports field-building by lowering barriers to entry for aspiring AI safety researchers.

Key Points

  • Curates technical learning resources specifically aimed at building AI safety research skills
  • Targets individuals seeking to transition into or upskill within technical AI safety roles
  • Likely includes programming, ML, and alignment-specific learning pathways
  • Part of 80,000 Hours' broader career guidance mission for high-impact work
  • Supports the AI safety talent pipeline by making skill development more accessible

Cited by 2 pages

Page | Type | Quality
80,000 Hours | Organization | 45.0
AI Safety Field Building and Community | Crux | 0.0

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 6 KB
Want to upskill in technical AI safety? Here are 67 useful resources | 80,000 Hours

 Are you enthusiastic about technical AI safety but need concrete ideas for how to enter the field?

 Below are our top picks for upskilling in technical AI safety research, the field focused on ensuring powerful AI systems behave safely and as intended. In practice, upskilling involves developing the machine learning and research skills needed to work on challenges such as alignment and interpretability.

 We developed this list in consultation with our advisors to highlight the resources they most commonly recommend, including articles, courses, organisations, and fellowships. While we recommend applying to speak to an advisor for tailored, one-on-one guidance, this page gives a practical, non-comprehensive snapshot of how you might move from being interested in technical AI safety to starting to work on it.

 Overviews

 These resources outline the technical AI safety landscape, highlighting current research efforts and some practical ways to begin contributing to the field.

 AISafety.com 
 Shallow review of technical AI safety by technicalities et al.
 AI safety technical research career guide - how to enter by 80,000 Hours
 Levelling up in AI safety research engineering by Gabriel Mukobi
 Recommendations for technical AI safety research agendas by Anthropic
 Technical AI safety research areas by Coefficient Giving
 An overview of areas of control work by Ryan Greenblatt, Redwood Research
 AI safety needs great engineers by Andy Jones

 AI safety courses

 These courses can help you gain technical knowledge and practical research experience in AI safety.

 ARENA’s curriculum 
 BlueDot Impact’s AI Alignment course 
 Andrej Karpathy’s Zero to Hero course. His YouTube videos can also be great intro-friendly resources, as can 3Blue1Brown’s deep learning videos.
 Deep Learning Curriculum by Jacob Hilton
 Google ML Course

 Ideas for projects and upskilling

 If you’re looking for concrete ways to contribute to technical AI safety research, check out these resources:

 What are some projects I can try? (AISafety.Info)
 100+ concrete projects and open problems in evals by Marius Hobbhahn
 A list of 45+ mech interp projects by Apollo Research
 Open problems in mechanistic interpretability by Sharkey et al.
 Consider joining an alignment hackathon such as an Apart Research Sprint.
 Consider joining Eleuther’s community of researchers on their Discord.
 Consider writing a task using the METR framework.
 Consider writing your research theory of change (workshop slides, Michael Aird).

 Advice from technical AI safety researchers

 Many experts in the field have practical tips for getting involved in technical AI safety work. Here is some of our favourite advice:

 Karpathy on PhDs, research agendas, career advice 

... (truncated, 6 KB total)
Resource ID: 7d9c703f769e1142 | Stable ID: sid_nmsYEnHZbW