
LessWrong

Organization
Founded Feb 2009 (17 years old) · lesswrong.com

LessWrong is a rationality-focused community blog founded in 2009 that has influenced AI safety discourse. It has received over $5 million in funding, and roughly 31% of respondents to the 2014 EA survey said they first heard of effective altruism through LessWrong. Survey participation peaked at over 3,000 respondents in 2016 and declined to 558 by 2023, with the community increasingly focused on AI alignment discussions.

Facts

Organization
Founded Date: Feb 2009
People
Founder: Eliezer Yudkowsky
General
Website: https://www.lesswrong.com/
Biographical
Wikipedia: https://en.wikipedia.org/wiki/LessWrong

Other Data

Entity Assessments
3 entries
| Dimension | Rating | Evidence | Assessor |
| --- | --- | --- | --- |
| community-scale | Medium | 2016 survey had ~3,000 respondents; 2023 survey had 558; peak of 15,000 daily pageviews ([Wikipedia](https://en.wikipedia.org/wiki/LessWrong), [2023 Survey](https://www.lesswrong.com/posts/WRaq4SzxhunLoFKCs/2023-survey-results)) | editorial |
| funding-base | Substantial | Over $5 million from the Survival and Flourishing Fund, Future Fund, and Coefficient Giving combined ([EA Forum](https://forum-bots.effectivealtruism.org/topics/lesswrong)) | editorial |
| influence-on-ai-safety | High | Material influenced the formation of MIRI and CFAR; attracted major tech donors; 31% of 2014 EA survey respondents first heard of EA through LessWrong ([Wikipedia](https://en.wikipedia.org/wiki/LessWrong), [Rationality - LessWrong](https://www.lesswrong.com/rationality)) | editorial |

Related Wiki Pages

Top Related Pages

Approaches

AI Alignment
AI Non-Extremization Coordination

Analysis

Model Organisms of Misalignment
Donations List Website
Timelines Wiki
AI Actor Feedback Loops

Organizations

Lightcone Infrastructure
Machine Intelligence Research Institute (MIRI)

Other

Jaan Tallinn
Dustin Moskovitz
Interpretability

Concepts

AI Timelines
Existential Risk from AI
Self-Improvement and Recursive Enhancement

Risks

Sleeper Agents: Training Deceptive LLMs
Instrumental Convergence

Key Debates

Why Alignment Might Be Hard
AI Structural Risk Cruxes
AI Misuse Risk Cruxes

Historical

The MIRI Era