Longterm Wiki

LessWrong - Rationality and AI Safety Community Forum

blog

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

LessWrong is one of the most important community platforms in the AI safety ecosystem; specific posts and sequences hosted here are often more valuable than the homepage itself, but it serves as the primary entry point for the community's collective knowledge.

Metadata

Importance: 72/100 · Type: homepage

Summary

LessWrong is a community blog and forum focused on rationality, epistemics, and AI safety, serving as a primary venue for discussion and development of ideas related to AI alignment, decision theory, and existential risk. It hosts foundational technical posts, research updates, and philosophical discussions from prominent researchers including Eliezer Yudkowsky, Paul Christiano, and many others. The platform has been instrumental in developing and disseminating key AI safety concepts.

Key Points

  • Central hub for AI safety and rationality research discussion, hosting foundational sequences and technical posts on alignment.
  • Hosts work from leading AI safety researchers including original posts on decision theory, agent foundations, and corrigibility.
  • Community-driven platform with karma-based curation, enabling both informal discussion and serious technical research sharing.
  • Serves as an archive of the intellectual development of AI alignment as a field, including early Yudkowsky sequences.
  • Regularly publishes research updates, open problems, and debate on AI risk, governance, and technical safety approaches.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 10 KB
LessWrong — homepage feed at time of capture:

Fun Theory (featured sequence): Fun Theory is the study of questions such as "How much fun is there in the universe?", "Will we ever run out of fun?", "Are we having fun yet?" and "Could we be having more fun?". It's relevant to designing utopias and AIs, among other things.

Recent posts (karma · title · author · age):
  • 186 · The Practical Guide to Superbabies · GeneSmith · 3d
  • 152 · Some things I noticed while LARPing as a grantmaker · Zach Stein-Perlman · 7d
  • 519 · Welcome to LessWrong! · Ruby, Raemon, RobertM, habryka · 7y
  • 173 · The effects of caffeine consumption do not decay with a ~5 hour half-life · kman · 1d
  • 149 · Do not be surprised if LessWrong gets hacked · RobertM · 21h
  • 78 · Help me launch Obsolete: a book aimed at building a new movement for AI reform · garrison · 5h
  • 757 · My journey to the microwave alternate timeline · Malmesbury · 2mo
  • 185 · AIs can now often do massive easy-to-verify SWE tasks and I've updated towards shorter timelines · ryan_greenblatt · 3d
  • 331 · The Terrarium · Caleb Biddulph · 14d
  • 531 · Here's to the Polypropylene Makers · jefftk · 1mo
  • 125 · My picture of the present in AI · ryan_greenblatt · 2d
  • 164 · dark ilan · ozymandias · 5d
  • 194 · "You Have Not Been a Good User" (LessWrong's second album) · habryka · 8d
  • 320 · On The Independence Axiom · Ihor Kendiukhov · 26d
  • 180 · Lesswrong Liberated · Ronny Fernandez · 9d

Events: [Today] Rationalist Shabbat · [Tomorrow] Forecasting Walkthrough with Metaculus pro ExMateriae · [Yesterday] Hallucinating Certificates: Using Generative Language Models for Testing TLS Software Parsing · [Yesterday] Charter Cities

Quick take — ClaireZabel (18h): I recently read Bad Blood and Original Sin. Bad Blood is about the downfall of Theranos and Elizabeth Holmes and the fraud that she committed. Original Sin is about the uncovering of Biden's mental degradation and the lead-up to his decision to drop out of the presidential race.

I liked both books quite a bit, and I learned more from them than from most things that I read. I particularly enjoyed reading them one after the other because I thought that, despite addressing in many superficial wa

... (truncated, 10 KB total)
Resource ID: 815315aec82a6f7f | Stable ID: sid_9hIGf0AnHJ