Longterm Wiki

Help Save Apart Research

web
apartresearch.com·apartresearch.com/donate

Apart Research's donation page describes their AI safety research accelerator and talent pipeline, which has produced peer-reviewed publications and placed researchers at leading AI safety institutions. Relevant to understanding the AI safety research ecosystem and talent development infrastructure.

Metadata

Importance: 22/100 · homepage

Summary

Apart Research is a global AI safety research accelerator that runs hackathons, research sprints, and fellowships to develop AI safety talent and produce peer-reviewed research. In 2.5 years, they have engaged 3,500+ participants, produced 22 peer-reviewed papers, and helped place researchers at institutions like METR, Oxford, and the UK AI Security Institute. This page solicits donations to continue and expand their work.

Key Points

  • Apart Research has produced 22 peer-reviewed publications at top venues including NeurIPS, ICLR, ICML, and ACL, with two oral spotlights at ICLR 2025.
  • Their Sprint → Studio → Fellowship model has engaged 3,500+ participants across 50+ global locations in 42 research sprints.
  • They actively engage in AI policy and governance, including consulting on the EU AI Act Code of Practice and participating in EU AI Office workshops.
  • Their 'DarkBench' benchmark for dark design patterns in LLMs received an Oral Spotlight at ICLR 2025 (top 1.8% of accepted papers).
  • Donations are tax-deductible through fiscal sponsor Ashgro Inc, a 501(c)(3) organization.

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 39 KB
 Donate to Apart Research

 Global AI Safety Research Accelerator & Talent Pipeline

Donate Now

 Read Our Impact Report

 "Apart is the fastest way to impact for aspiring AI safety researchers anywhere"

In the past 2.5 years, we've built a global pipeline for AI safety research and talent that has produced 22 peer-reviewed papers, engaged 3,500+ participants in 42 research sprints across 50+ global locations, and helped launch top talent into AI safety careers at leading institutions like METR, Oxford, FAR.AI, and the UK AI Security Institute.

We strive to be the fastest and most accessible route into an impactful AI safety career. When you support Apart, you support research used by the safety teams of top AGI labs, national security think tanks, and government lawmakers, as well as the career transitions of tomorrow's most important contributors to the field.

 As we grow this important work, we ask for your generous support. 

 Jason Hoelscher-Obermaier, Research Director

 Jaime Raldua Veuthey, CTO

 Esben Kran, Chairman of the Board

 Total funding for 2025

 $619,921

 Donate Now

 Why Apart Makes a Difference

 Unique Talent Pipeline: Our Sprint → Studio → Fellowship model has engaged 3,500+ participants, developed 450+ pilot experiments, and supported 100+ research fellows from diverse backgrounds who might otherwise never enter AI safety

 Research Excellence: Our pipeline has produced 22 peer-reviewed publications at top conferences like NeurIPS, ICLR, ICML, ACL (incl. two oral spotlights at ICLR 2025), has been cited by top labs, and has received significant media attention

 Policy Engagement: Beyond producing research, we actively engage in AI policy and governance. This includes presenting key findings at prominent forums like IASEAI, sharing our research in major media, serving as expert consultants for the EU AI Act Code of Practice, and participating as panelists in EU AI Office workshops

 Global Impact: With 50+ event locations and participants from 26+ countries, we're building AI safety expertise across the globe

 Read Impact Report

 Spotlight Achievements

 Our "DarkBench" research, the first comprehensive benchmark for dark design patterns in large language models, received an Oral Spotlight at ICLR 2025 (top 1.8% of accepted papers), was presented at the IASEAI (International Association for Safe and Ethical AI) conference, and has been featured in major tech publications. 

 One of the participants in our hackathon with METR, a physics graduate and computational neuroscience expert with experience in ML and data science, joined METR as a full-time member of technical staff as a direct result of participating in our March 2024 event.

... (truncated, 39 KB total)
Resource ID: kb-c11c5f19756fb5d9 | Stable ID: sid_EdwBacPBmF