Widening AI Safety's Talent Pipeline
Authors
RubenCastaing·Nelson_GC·danwil
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Data Status
Not fetched
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Field Building Analysis | Approach | 65.0 |
| AI Safety Field Building and Community | Crux | 0.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 7, 2026 · 17 KB
Widening AI Safety's talent pipeline by meeting people where they are — EA Forum
by RubenCastaing, Nelson_GC, danwil · Sep 25, 2025 · 9 min read
Tags: AI safety · Building effective altruism · Career choice · Community · AI alignment · Building the field of AI safety · Education · Field-building · Postmortems & retrospectives · Research training programs

Summary
The AI safety field has a pipeline problem: many skilled engineers and researchers are locked out of full‑time overseas fellowships. Our answer is the Technical Alignment Research Accelerator (TARA) — a 14‑week, part‑time program designed for talented professionals and students who can’t put their careers or studies on hold or leave their families for months at a time. Our inaugural TARA cohort provided a cost‑effective, flexible model that achieved:
Exceptional Satisfaction : 9.43/10 average recommendation score
High Completion : 90% finished the program
Career Impact : 15 of 19 graduates became more motivated to pursue AI safety careers, with several already securing roles or publishing research
Cost Efficiency : $899 AUD per participant (note: organizers were either paid from other grants or were volunteers; the cost per participant would otherwise be much higher)
Access : Most participants could not join a similar program without a part‑time format
In this report, we share key lessons, results, and operational details so that others can replicate this success. We are seeking funding partners for another TARA cohort.
Theory of Change
The Problem
The AI safety field faces a critical talent pipeline bottleneck. While millions are becoming aware of AI risks (the 80,000 Hours documentary has almost reached 6 million views) and organizations like BlueDot plan to train 100,000 people in AI safety fundamentals over the next 4.5 years, there's a massive gap between awareness-level training and the technical expertise required to qualify for selective research fellowships.
BlueDot's online courses can't provide hands-on training in critical areas like reinforcement learning, evaluations, AI x cybersecurity, and mechanistic interpretability. Meanwhile, advanced programs like MATS (which has accelerated 450+ researchers over 3.5 years) require participants to already possess
... (truncated, 17 KB total)
Resource ID: fe505379ab7dd580 | Stable ID: MzhhZDQxZW