Back
Scaling AI Safety Awareness via Content Creators – CeSIA Manifund Grant
A Manifund grant project by CeSIA (Centre pour la Sécurité de l'IA) seeking funding to scale AI safety public outreach by advising content creators, having already helped produce a 4M+ view French YouTube video on AI safety.
Metadata
Importance: 32/100 · other · educational
Summary
CeSIA proposes funding 1 FTE for 3-6 months to expand their content creator advisory model, which connects AI safety researchers with popular YouTubers and podcasters to produce accurate, engaging AI risk content. Their track record includes advising a French YouTuber whose video garnered over 4 million views. The project addresses the gap between AI safety knowledge production and broad public dissemination.
Key Points
- CeSIA successfully advised French YouTuber 'Ego,' resulting in a 4M+ view video on AI safety through ~20 hours of scientific consulting.
- The project targets the dissemination bottleneck: AI safety knowledge exists but lacks broad public reach beyond already-convinced audiences.
- Strategy involves identifying influential creators, providing fact-checking and argument refinement, and leveraging existing audience trust.
- Two complementary approaches: advisory/collaboration with creators and potentially direct content production.
- Fully funded at $21,309 against a $21,000 goal, indicating community support for public AI safety communication efforts.
Cached Content Preview
HTTP 200 · Fetched Apr 11, 2026 · 16 KB
Scaling AI safety awareness via content creators | Manifund
The Wayback Machine - https://web.archive.org/web/20260128054010/https://manifund.org/projects/scaling-ai-safety-awareness-via-content-creators
Scaling AI safety awareness via content creators
Technical AI safety
AI governance
Global catastrophic risks
Centre pour la Sécurité de l'IA
Active
Grant
$21,309 raised
$21,000 funding goal
Fully funded and not currently accepting donations.
TL;DR: CeSIA has successfully advised YouTubers, generating millions of views on AI safety (notably one video that reached 4M+ views). We seek funding for 1 FTE for 3-6 months to scale this content-creator outreach and advisory model.
1. Problem
The general public and journalists increasingly turn to online sources for information about AI, yet face a scarcity of accessible, high-quality content explaining AI risks. The extremely rapid pace of advances in AI makes it particularly difficult for the public to form opinions about risks based on reliable resources. Despite pockets of excellent resources (such as Rob Miles, Kurzgesagt, Lethal Intelligence, and Rational Animations), significant gaps persist in broad public awareness: these quality resources generally reach only isolated, already-convinced audiences. This knowledge deficit is particularly concerning because public understanding of large-scale AI risks is essential for public support of informed policy decisions—an area that remains severely under-resourced.
2. Proposed solution
The political bottleneck for AI safety is not the production of knowledge about risks posed by AI to society—which is already plentiful—but rather the dissemination of this knowledge. It is extremely time-consuming and difficult for organizations producing knowledge on AI safety to build their own audience and achieve large-scale reach. Therefore, it is strategic for them to leverage existing platforms by connecting with quality content creators who already have millions of followers and enjoy trust from their audience.
We propose mobilizing the talent of influential content creators (YouTubers, podcasters, etc.) to produce and disseminate engaging, informative content on AI safety. The goal is to educate a broad audience through various formats and channels they already consult, by encouraging content creators to cover AI risks and AI safety, and by providing them wi
... (truncated, 16 KB total)
Resource ID:
58c2065ba48e81e5 | Stable ID: sid_uN16yncYUw