11th Edition of AI Safety Camp – Manifund Fundraiser
web.archive.org · manifund.org/projects/11th-edition-of-ai-safety-camp
This is a Manifund fundraising page for the 11th edition of AI Safety Camp, a program with a seven-year track record of incubating AI safety research projects and funneling new talent into the field outside major hubs like the Bay Area and London.
Metadata
Importance: 38/100 · other · homepage
Summary
AI Safety Camp (AISC) is seeking funding for its 11th edition, aiming to support 25–40 research projects depending on funding level. The program functions as both an incubator for new AI safety collaborations and a talent funnel for newcomers, with alumni having founded 10 organizations and secured 43 jobs in AI safety. The fundraiser highlights AISC's cost-efficiency and its openness to unconventional and epistemically diverse approaches to AI safety.
Key Points
- AISC has a seven-year track record; alumni founded 10 organizations and obtained 43 jobs in AI safety.
- Funding tiers range from $15k (10 projects) to $300k (40 projects with stipends), with $40k enabling the full 11th edition.
- Program supports diverse approaches including alignment research, control limits, legal regulations, and 'slow down AI' advocacy.
- Endorsed by Zvi Mowshowitz as 'gold standard' for talent funnels in AI safety (Nov 2024).
- Framed as cost-efficient optionality preservation: cheaper to sustain than to rebuild if the program shuts down.
Cached Content Preview
HTTP 200 · Fetched Apr 12, 2026 · 42 KB
Collection: Common Crawl (web crawl data)
Wayback Machine capture: http://web.archive.org/web/20260217120940/https://manifund.org/projects/11th-edition-of-ai-safety-camp
11th edition of AI Safety Camp
Technical AI safety
AI governance
Remmelt Ellen
Active
Grant
$45,115 raised
$300,000 funding goal
Project summary
AI Safety Camp has a seven-year track record of enabling participants to try their fit, find careers and start new orgs in AI Safety. We host up-and-coming researchers outside the Bay Area and London hubs.
If this fundraiser passes…
$15k, we won’t run a full program, but can facilitate 10 projects.
$40k, we can organise the 11th edition, for 25 projects.
$70k, we can pay a third organiser, for 35 projects.
$300k, we can cover stipends for 40 projects.
What are this project's goals? How will you achieve them?
By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back [a talent funnel], I’d start here.
— Zvi Mowshowitz (Nov 2024)
My current work (AI Standards Lab) was originally an AISC project. Without it, I'd guess I would be full-time employed in the field at least 1 year later, and the EU standards currently close to completion would be a lot weaker. High impact/high neglectedness opportunities are fairly well positioned to be kickstarted with volunteer effort in AISC, even if some projects will fail (hits-based). After some initial results during AISC, they can be funded more easily.
— Ariel Gil (Jan 2025)
AI Safety Camp is part incubator and part talent funnel:
an incubator in that we help experienced researchers form new collaborations that can last beyond a single edition. Alumni went on to found 10 organisations.
a talent funnel in that we help talented newcomers learn by doing – by working on a concrete project in the field. This has led to 43 alumni jobs in AI Safety.
The incubator case is that AISC seeds epistemically diverse initiatives. The coming edition supports new alignment directions, control limits research, neglected legal regulations, and 'slow down AI' advocacy. Funders who are uncertain about approaches to alignment – or who believe we cannot align AGI in time – may prioritise funding this program.
The
... (truncated, 42 KB total)
Resource ID: 6ab3fda5b7fb9ccb