Longterm Wiki

SPAR - Research Program for AI Risks

web
sparai.org

SPAR is a key entry-level program for those seeking to break into AI safety research; relevant for wiki users looking for mentorship opportunities or field-building initiatives in the AI safety community.

Metadata

Importance: 45/100 · homepage

Summary

SPAR (Supervised Program for Alignment Research) is a structured mentorship program that pairs aspiring researchers with experienced AI safety professionals to conduct research on AI safety, alignment, and policy topics. The program provides hands-on research experience, guidance from domain experts, and opportunities for publication, serving as an entry point for newcomers to the AI safety field.

Key Points

  • Pairs mentees with experienced AI safety researchers and professionals for collaborative research projects
  • Covers a broad range of topics including technical AI safety, alignment, governance, and policy
  • Provides structured research experience aimed at building the next generation of AI safety researchers
  • Offers potential publication opportunities, helping participants establish research credentials in the field
  • Serves as a talent pipeline and community-building initiative for the broader AI safety ecosystem

Review

SPAR takes a flexible, accessible approach to building AI safety research capacity by creating a structured pathway for emerging researchers to engage with critical challenges in the field. The program distinguishes itself through a part-time, remote model that accommodates participants with varying levels of experience and availability, from undergraduate students to mid-career professionals. Its strength lies in a comprehensive approach to talent development: structured research projects, expert mentorship, and potential career advancement. By spanning research areas including AI safety, policy, security, interpretability, and biosecurity, SPAR offers a versatile platform for addressing multifaceted AI risks. Its track record of accepted publications at conferences such as ICML and NeurIPS, along with coverage in TIME, demonstrates its credibility and potential impact on the AI safety research ecosystem.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 12 KB
SPAR - Research Program for AI Risks 
 
Research program for AI risks
 
Connecting rising talent with expertise to tackle risks from AI

 
We're a part-time, remote research fellowship that enables aspiring AI safety and policy researchers to work on impactful research projects with professionals in the field.

 
Express Interest
 
Browse Projects
Applications have closed. Meet our 130+ mentors for Spring 2026.

What is SPAR?

A part-time, remote research program pairing aspiring researchers with professionals addressing risks from AI. Mentees gain research experience and guidance; mentors get capable collaborators.

 
 
 
Commit 5–40 hours/week, depending on your availability. SPAR is designed to fit well alongside other commitments.

 
 
3 months of structured research, culminating in Demo Day: posters, talks, and a career fair with organizations like METR, Redwood Research, GovAI, and MATS.

 
 
We accept mentees with relevant technical or policy background at any level, from undergraduate students to mid-career professionals.

 
Express interest
Selected Research

 
SPAR research has been accepted at ICML and NeurIPS, covered by TIME, and led to full-time job offers for mentees.

 
Check out some of our work:

 Refusal in Language Models Is Mediated by a Single Direction 


 Authors:

 Andy Arditi, Oscar Balcells Obeso, Aaquib Syed, Daniel Paleka, Nina Rimsky, Wes Gurnee, Neel Nanda

 SPAR Mentors:

 Nina Rimsky

 Note:

 This paper builds on prior work done under SPAR (supervised by Nina Rimsky) and was produced as part of Neel Nanda's stream in MATS Winter 2023–24.

 NeurIPS 2024 · Interpretability

 Evaluating LLM Agent Collusion in Double Auctions


 Authors:

 Kushal Agrawal, Verona Teo, Juan J. Vazquez, Sudarsh Kunnavakkam, Vishak Srikanth, Andy Liu

 SPAR Mentors:

 Andy Liu

 ICML MAS 2025 · Multi-Agent Safety

 Lead, Own, Share: Sovereign Wealth Funds for Transformative AI

 Liam Epstein (advised by Deric Cheng, Justin Bullock, Seán Ó hÉigeartaigh) 

 Authors:


... (truncated, 12 KB total)
Resource ID: f566780364336e37 | Stable ID: sid_SxE2M3cgJz