Longterm Wiki

AI Safety’s Talent Pipeline is Over-optimised for Researchers


Author

Chris Clay🔸

Credibility Rating

3/5
Good(3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

A 2025 EA Forum post by Chris Clay offering a structural critique of AI safety community-building, relevant to anyone thinking about career pathways, talent strategy, or ecosystem coordination within AI safety.

Forum Post Details

Karma
117
Comments
15
Forum
eaforum
Forum Tags
AI safety, Building effective altruism, Community, Building the field of AI safety

Metadata

Importance: 45/100 · blog post · commentary

Summary

This EA Forum post argues that AI safety's talent pipeline is structurally biased toward producing researchers, despite leadership consensus that research is not the most neglected role. The author identifies feedback loops where research-centric programs disadvantage non-researchers in hiring, and calls for ecosystem-level coordination to better allocate talent across leadership, policy, and advocacy roles.

Key Points

  • Broad consensus among AI safety org leaders that research is not the most neglected career, yet nearly all entry programs target researchers.
  • Research-focused pipelines create hiring bias: alumni can more easily demonstrate 'value alignment,' disadvantaging non-researcher candidates.
  • Young people are steered toward research careers by available programs, reducing the talent pool for critical non-research roles.
  • A survey of 25 EA leaders identified leadership, policy expertise, and media engagement as more neglected than research talent.
  • The author calls for more ecosystem-level coordination and independent study of AI safety talent allocation to break the feedback loop.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 12 KB
# AI Safety’s Talent Pipeline is Over-optimised for Researchers
By Chris Clay🔸
Published: 2025-08-30
*Thank you to all the wonderful people who've taken the time to share their thoughts with me. All opinions are my own: Will Aldred, Jonah Boucher, Deena Englander, Dewi Erwan, Bella Forristal, Patrick Gruban, William Gunn, Tobias Häberli, James Herbert, Adam Jones, Michael Kerrison, Schäfer Kleinert, Chris Leong, Cheryl Luo, Sobanan Narenthiran, Alicia Pollard, Will Saunter, Nate Simmons, Sam Smith, Chengcheng Tan, Simon Taylor, Ben West, Peter Wildeford, Jian Xin.*

Executive Summary
=================

There is broad consensus that research is not the most neglected career in AI Safety, but almost all entry programs are targeted at researchers. This creates a number of problems:

*   People who are *tail-case* at research are unlikely to be *tail-case* in other careers.
*   Researchers have a bias in demonstrating ‘value alignment’ in hiring rounds.
*   Young people trying to choose careers have a bias towards aiming for research.

Introduction
============

When I finished the Non-Trivial Fellowship, I was excited to go out and do good in the world. The impression I got from general EA resources out there was that I could progress through to the ‘next stage’ relatively easily[^j1maly4w9xq]. Non-Trivial is a highly selective pre-uni fellowship, so I expected to be within the talent pool for the next steps. But I spent the next 6 months floundering; I thought and thought about cause prioritisation, I read lots of 80k and I applied to *fellowship after fellowship* without success.

The majority of AI Safety talent pipelines are optimised for selecting and producing researchers. But research is not the most neglected talent in AI Safety. I believe this is leading to people with research-specific talent being over-represented in the community because:

1.  Most supporting programs into AI Safety strongly select for research skills.
2.  Alumni of these research programs are much better able to demonstrate value alignment.

This is leading to a much *smaller* talent pool for non-research roles, including advocacy and running organisations. And those non-research roles have a bias towards selecting former researchers.

From the people I talked to, I got the impression that this is broadly agreed among leaders of AI Safety organisations[^kmmwvt8kst]. But only a very small number of people are thinking about this - and they’re often thinking about it completely independently of each other!

My main goal of this post is to get more people outside of high-level AI Safety organisations to study the ecosystem itself. With limited competitive pressures on the system to force it to become more streamlined, I believe having more people actively helping the movement coordinate could magnify the impact of others.  
  
If you are interested in working on *any* aspect of the AI Safety Pipeline, please consider getting in touch; I’m actively looking for collaborators.

... (truncated, 12 KB total)
Resource ID: 4a117e76e94af55d | Stable ID: sid_LJxxeQlVoV