Longterm Wiki

Author

Christopher Clay

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

Data Status

Not fetched

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 7, 2026 · 24 KB
AI Safety’s Talent Pipeline is Over-optimised for Researchers — EA Forum 
 

 by Chris Clay🔸 · Aug 30 2025 · 7 min read

 Tags: AI safety, Building effective altruism, Community, Building the field of AI safety, Frontpage

 Thank you to all the wonderful people who've taken the time to share their thoughts with me. All opinions are my own: Will Aldred, Jonah Boucher, Deena Englander, Dewi Erwan, Bella Forristal, Patrick Gruban, William Gunn, Tobias Häberli, James Herbert, Adam Jones, Michael Kerrison, Schäfer Kleinert, Chris Leong, Cheryl Luo, Sobanan Narenthiran, Alicia Pollard, Will Saunter, Nate Simmons, Sam Smith, Chengcheng Tan, Simon Taylor, Ben West, Peter Wildeford, Jian Xin.

 Executive Summary 

 There is broad consensus that research is not the most neglected career in AI Safety, yet almost all entry programs are targeted at researchers. This creates a number of problems:

 - People who are tail-case at research are unlikely to be tail-case in other careers.
 - Researchers have an advantage in demonstrating 'value alignment' in hiring rounds.
 - Young people trying to choose careers are biased towards aiming for research.
 Introduction 

 When I finished the Non-Trivial Fellowship, I was excited to go out and do good in the world. The impression I got from general EA resources was that I could progress to the 'next stage' relatively easily [1]. Non-Trivial is a highly selective pre-university fellowship, so I expected to be within the talent pool for the next steps. But I spent the next six months floundering: I thought and thought about cause prioritisation, I read lots of 80k, and I applied to fellowship after fellowship without success.

 The majority of AI Safety talent pipelines are optimised for selecting and producing researchers. But research is not the most neglected talent in AI Safety. I believe this is leading to people with research-specific talent being over-represented in the community because:

 - Most supporting programs into AI Safety strongly select for research skills.
 - Alumni of these research programs are much better able to demonstrate value alignment.

 This leads to a much smaller talent pool for non-research roles, including advocacy and running organisations. And those non-research roles are biased towards selecting former researchers.

 From the people I talked to, I got the impression that this is broadly agreed among leaders 

... (truncated, 24 KB total)
Resource ID: 4a117e76e94af55d | Stable ID: YzVlMjI2ZD