# Stampy / AISafety.info
## Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Content Coverage | Substantial | 280+ live answers, hundreds of drafts |
| Data Sources | Comprehensive | 10K-100K documents from alignment literature |
| Accessibility | High | Free web interface, Discord bot, chatbot |
| Community Integration | Strong | YouTube bridging, karma voting, write-a-thons |
| Open Source | Yes | 10 public GitHub repositories |
| Maintenance | Active | Global volunteer team + paid editor fellowships |
## Project Details

| Attribute | Details |
|---|---|
| Name | AISafety.info (also known as Stampy) |
| Organization | Ashgro Inc (501(c)(3) nonprofit) |
| Founder | Rob Miles |
| Website | aisafety.info |
| GitHub | github.com/StampyAI (10 repositories) |
| Dataset | HuggingFace: alignment-research-dataset |
| Discord | Rob Miles AI Discord (active community) |
| License | MIT (open source) |
## Overview

AISafety.info is a collaborative Q&A wiki focused on existential risk from artificial intelligence, founded by AI safety educator Rob Miles. The project combines human-written educational content with an LLM-powered chatbot, a Discord bot bridging YouTube and Discord communities, and structured programs for content creation.
The site’s core thesis is that “smarter-than-human AI may come soon” and “it could lead to human extinction.” Rather than simply asserting these claims, the wiki provides structured explanations, addresses common objections, and offers pathways for further engagement.
## Key Components

| Component | Purpose | Technology |
|---|---|---|
| Q&A Wiki | Human-written answers to AI safety questions | Web frontend (Remix/Cloudflare) |
| Stampy Chatbot | LLM-powered answers with citations | RAG pipeline + GPT models |
| Discord Bot | YouTube integration, community moderation | Python, modular architecture |
| Alignment Research Dataset | Curated corpus for chatbot | HuggingFace, 10K-100K documents |
## Content & Statistics

### Wiki Content

| Metric | Value |
|---|---|
| Live Answers | 280+ |
| Draft Answers | Hundreds in development |
| Content Updates | Ongoing community contributions |
| Feedback System | Google Docs integration for comments |
### Alignment Research Dataset

The chatbot draws from a curated corpus hosted on HuggingFace:

| Metric | Value |
|---|---|
| Document Count | 10,000 - 100,000 |
| Monthly Downloads | ≈1,600 |
| License | MIT |
| Language | English |
Sources include:
- Academic: arXiv papers, Arbital
- Forums: Alignment Forum, LessWrong, EA Forum
- Organizational blogs: MIRI, DeepMind, OpenAI
- Individual blogs: Eliezer Yudkowsky, Gwern Branwen
- Educational: AGI Safety Fundamentals course
- Video: YouTube playlists on AI safety
## Technical Architecture

### Stampy Chatbot (RAG Pipeline)

The chatbot uses Retrieval-Augmented Generation (RAG) with a three-step process:
1. Retrieval: Search the alignment-research-dataset for semantically similar chunks using vector embeddings
2. Context Assembly: Feed the relevant text snippets into an LLM’s context window
3. Generation: Produce a summary with citations to the source documents
Dual Response Strategy: Stampy prioritizes human-written answers from the wiki when available, falling back to AI-generated responses for novel questions. This reduces hallucination risk for common questions while maintaining coverage for the “long tail.”
Acknowledged Limitations: The documentation explicitly warns that “like all LLM-based chatbots, it will sometimes hallucinate.” Source citations allow users to verify accuracy.
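The three steps and the dual response strategy can be sketched as follows. Everything here is illustrative: the real pipeline uses vector embeddings and GPT models, whereas this sketch substitutes simple string similarity for semantic search, and the wiki/corpus contents are made up.

```python
from difflib import SequenceMatcher

# Hypothetical stand-ins: a tiny "wiki" of human-written answers and a
# small corpus for retrieval. Real retrieval uses vector embeddings;
# string similarity stands in for semantic search here.
WIKI_ANSWERS = {
    "what is ai alignment": "Alignment is the problem of making AI systems pursue the goals their designers intend.",
}
CORPUS = [
    ("Doc A", "Alignment research studies how to make AI systems pursue intended goals."),
    ("Doc B", "Interpretability aims to understand the internals of neural networks."),
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def retrieve(question: str, k: int = 1):
    """Step 1: rank corpus chunks by (stand-in) semantic similarity."""
    return sorted(CORPUS, key=lambda d: similarity(question, d[1]), reverse=True)[:k]

def answer(question: str) -> str:
    # Prefer a human-written wiki answer when one exists for the question.
    key = question.lower().strip(" ?")
    if key in WIKI_ANSWERS:
        return WIKI_ANSWERS[key]
    # Otherwise fall back to retrieval + generation with citations.
    chunks = retrieve(question)
    context = " ".join(text for _, text in chunks)
    citations = ", ".join(title for title, _ in chunks)
    # Step 3 would pass `context` to an LLM; here we just echo it.
    return f"{context} [sources: {citations}]"
```

The key design point is the ordering: the cheap, hallucination-free lookup runs first, and the generative path only handles the "long tail" of questions no human has answered yet.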
### Discord Bot Architecture

The Discord bot (StampyAI/stampy) has evolved significantly from its original purpose:
Module System: Rob Miles implemented a “bidding” architecture where different modules compete to handle messages, minimizing computation by only activating relevant handlers.
Key Modules:
- Question management (Questions, QuestionSetter)
- Factoid database
- Wolfram Alpha integration
- LLM response generation (GPT-4 whitelist available)
- Alignment Forum search
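A minimal sketch of the bidding pattern (the module names, confidence scale, and dispatch API are illustrative assumptions, not Stampy's actual interfaces):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Bid:
    confidence: float          # 0.0 = cannot handle, 1.0 = certain match
    handler: Callable[[str], str]

class Module:
    def bid(self, message: str) -> Optional[Bid]:
        raise NotImplementedError

class FactoidModule(Module):
    """Answers 'what is X' style questions from a small database."""
    FACTS = {"stampy": "Stampy is the AISafety.info mascot and bot."}

    def bid(self, message: str) -> Optional[Bid]:
        topic = message.lower().removeprefix("what is ").strip(" ?")
        if topic in self.FACTS:
            return Bid(0.9, lambda _msg: self.FACTS[topic])
        return None  # decline to bid: costs almost nothing

class FallbackModule(Module):
    """Always bids, but with minimal confidence."""
    def bid(self, message: str) -> Optional[Bid]:
        return Bid(0.1, lambda msg: f"Sorry, no module could handle: {msg!r}")

def dispatch(message: str, modules: list[Module]) -> str:
    # Collect bids from every module, then run only the winner's handler,
    # so irrelevant modules never do any real work.
    bids = [b for b in (m.bid(message) for m in modules) if b is not None]
    winner = max(bids, key=lambda b: b.confidence)
    return winner.handler(message)
```

Bidding is cheap (a quick pattern check) while handling may be expensive (an API call or LLM query), which is what makes this architecture economical.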
### YouTube-Discord Bridge

A distinctive feature is bidirectional integration with Rob Miles’ YouTube channel:
- YouTube → Discord: Interesting comments from YouTube videos are posted to Discord, sparking community discussions
- Discord → YouTube: Quality responses can be posted back as official YouTube replies
Quality Control via Stamps: The system uses a “stamp” emoji reaction for karma voting. When responses receive enough stamps, they can be posted to YouTube. Critically, stamp value varies by user reputation using a PageRank-style algorithm—users with more stamps have more voting power.
| Feature | Description |
|---|---|
| Stamp Reactions | Karma voting for response quality |
| PageRank Weighting | Vote weight proportional to voter’s reputation |
| Threshold Posting | Responses posted to YouTube when stamp threshold met |
| Bot Identity | Prevents random users from posting as official channel |
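The reputation weighting can be illustrated with a small power-iteration sketch, where a user's stamps count for more when that user has themselves earned many weighted stamps. The damping factor and iteration count are standard PageRank defaults, not Stampy's actual parameters:

```python
def stamp_scores(stamps: dict[str, list[str]], iterations: int = 50,
                 damping: float = 0.85) -> dict[str, float]:
    """PageRank-style reputation: `stamps` maps each user to the list of
    users whose responses they have stamped. Returns a score per user that
    sums to 1.0; a stamp from a high-score user transfers more weight.
    """
    users = set(stamps) | {u for given in stamps.values() for u in given}
    score = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for giver, receivers in stamps.items():
            if receivers:
                share = damping * score[giver] / len(receivers)
                for r in receivers:
                    new[r] += share
        # Users who stamp nobody redistribute their mass evenly
        # (the standard treatment of dangling nodes).
        dangling = sum(score[u] for u in users if not stamps.get(u))
        for u in users:
            new[u] += damping * dangling / len(users)
        score = new
    return score
```

A response would then be posted to YouTube once the summed scores of its stampers cross the posting threshold, so a few reputable stampers can outweigh many low-reputation ones.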
## Repository Ecosystem

Stampy maintains 10 public repositories:

| Repository | Stars | Purpose |
|---|---|---|
| stampy-ui | 41 | Web frontend (TypeScript) |
| stampy | 40 | Discord bot (Python) |
| stampy-chat | 15 | Conversational chatbot (TypeScript) |
| alignment-research-dataset | 13 | Data scraping pipeline (Python) |
| stampede | - | Elixir chatbot framework (alpha) |
| stampy-nlp | - | NLP microservices (Python) |
| stampy-extension | - | Browser extension |
| GDocsRelatedThings | - | Google Docs integration |
| AISafety.com | 1 | Issue tracker (54 open issues) |
| StampyAIAssets | - | Logos and branding |
## Team & Community Programs

### Team Structure

| Role | Description |
|---|---|
| Founder | Rob Miles (YouTube creator, AI safety educator) |
| Editors | Paid staff from Distillation Fellowship programs |
| Developers | Volunteer contributors |
| Community | Discord members, write-a-thon participants |
### Distillation Fellowship

A structured 3-month paid program for content creation:
- Completed: Two fellowship cohorts
- Purpose: Train editors to distill complex AI safety content into accessible answers
- Output: Significant portion of the 280+ live answers
- Future: Additional cohorts planned pending funding
### Write-a-thons

Community events for collaborative content creation:
- Format: Multi-day focused writing sprints
- Example: October 6-9 write-a-thon (third event)
- Output: Batch content creation and answer improvement
## Use Cases

### For Newcomers

AISafety.info serves as an accessible entry point for people encountering AI risk arguments for the first time:
- Start with basic questions and progress to advanced topics
- Find responses to specific objections
- Understand reasoning behind AI safety concerns
- Access cited sources for deeper reading
### For Content Creators

The platform supports AI safety communication:
- Reference answers when addressing common questions
- Link skeptics to well-structured objection responses
- Maintain consistent explanations across audiences
- Collaborate on edits via Google Docs integration
### For Researchers

While primarily aimed at broader audiences, the site also offers:
- Entry points into technical literature via dataset
- Career guidance for field entry
- Community connections via Discord
## Strengths and Limitations

### Strengths

| Strength | Evidence |
|---|---|
| Accessible explanations | Content written for general audiences |
| Quality control | PageRank-style voting prevents low-quality YouTube responses |
| Community integration | YouTube bridging creates feedback loop |
| Structured programs | Distillation Fellowship produces consistent content |
| Comprehensive dataset | 10K-100K documents from major alignment sources |
| Open source | All code publicly available, MIT licensed |
### Limitations

| Limitation | Impact |
|---|---|
| Chatbot accuracy | LLM hallucination risk; users must verify sources |
| Volunteer capacity | Development and content dependent on contributor availability |
| Opinionated framing | Presents AI risk case rather than neutral overview |
| Dataset maintenance | Ongoing work to clean and update sources |
| Single community perspective | Primarily reflects EA/rationalist community views |
## Funding & Sustainability

### Current Model

| Source | Type |
|---|---|
| Individual Donations | Via website and Every.org |
| EA Community | Grants and donations |
| Manifund | Project funding platform |
| Volunteer Labor | Primary development resource |
### Resource Needs

- Distillation Fellowship funding for continued content creation
- Developer time for frontend redesign and chatbot improvements
- Dataset curation for ongoing maintenance