Longterm Wiki

LessWrong

ID: lesswrong · Type: organization
Path: /knowledge-base/organizations/lesswrong/
Entity ID (EID): E538
60 backlinks · Quality: 44 · Updated: 2026-03-13
Page Record
database.json, merged from MDX frontmatter, Entity YAML, and computed metrics at build time
{
  "id": "lesswrong",
  "numericId": null,
  "path": "/knowledge-base/organizations/lesswrong/",
  "filePath": "knowledge-base/organizations/lesswrong.mdx",
  "title": "LessWrong",
  "quality": 44,
  "readerImportance": 33,
  "researchImportance": 38.5,
  "tacticalValue": null,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-03-13",
  "dateCreated": "2026-02-15",
  "llmSummary": "LessWrong is a rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving \\$5M+ in funding and serving as the origin point for ~31% of EA survey respondents in 2014. Survey participation peaked at 3,000+ in 2016, declining to 558 by 2023, with the community being 75% male and highly secular.",
  "description": "A community blog and forum focused on rationality, cognitive biases, and artificial intelligence that has become a central hub for AI safety discourse and the broader rationalist movement.",
  "ratings": {
    "novelty": 2.5,
    "rigor": 5,
    "actionability": 1,
    "completeness": 6.5
  },
  "category": "organizations",
  "subcategory": "community-building",
  "clusters": [
    "community",
    "ai-safety"
  ],
  "metrics": {
    "wordCount": 1912,
    "tableCount": 1,
    "diagramCount": 0,
    "internalLinks": 20,
    "externalLinks": 73,
    "footnoteCount": 0,
    "bulletRatio": 0.21,
    "sectionCount": 21,
    "hasOverview": true,
    "structuralScore": 13
  },
  "suggestedQuality": 87,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 1912,
  "unconvertedLinks": [
    {
      "text": "LessWrong Wiki",
      "url": "https://www.lesswrong.com/w/instrumental-convergence",
      "resourceId": "90e9322ba84baa7a",
      "resourceTitle": "LessWrong (2024). \"Instrumental Convergence Wiki\""
    },
    {
      "text": "Effective altruism - Wikipedia",
      "url": "https://en.wikipedia.org/wiki/Effective_altruism",
      "resourceId": "f1d79efc3fc232c1",
      "resourceTitle": "Effective Altruism - Wikipedia"
    },
    {
      "text": "Effective altruism - Wikipedia",
      "url": "https://en.wikipedia.org/wiki/Effective_altruism",
      "resourceId": "f1d79efc3fc232c1",
      "resourceTitle": "Effective Altruism - Wikipedia"
    },
    {
      "text": "LessWrong - Main Site",
      "url": "https://www.lesswrong.com/",
      "resourceId": "815315aec82a6f7f",
      "resourceTitle": "LessWrong"
    },
    {
      "text": "Effective altruism - Wikipedia",
      "url": "https://en.wikipedia.org/wiki/Effective_altruism",
      "resourceId": "f1d79efc3fc232c1",
      "resourceTitle": "Effective Altruism - Wikipedia"
    }
  ],
  "unconvertedLinkCount": 5,
  "convertedLinkCount": 0,
  "backlinkCount": 60,
  "hallucinationRisk": {
    "level": "high",
    "score": 75,
    "factors": [
      "biographical-claims",
      "no-citations"
    ]
  },
  "entityType": "organization",
  "redundancy": {
    "maxSimilarity": 14,
    "similarPages": [
      {
        "id": "miri-era",
        "title": "The MIRI Era (2000-2015)",
        "path": "/knowledge-base/history/miri-era/",
        "similarity": 14
      },
      {
        "id": "miri",
        "title": "MIRI (Machine Intelligence Research Institute)",
        "path": "/knowledge-base/organizations/miri/",
        "similarity": 14
      },
      {
        "id": "eliezer-yudkowsky",
        "title": "Eliezer Yudkowsky",
        "path": "/knowledge-base/people/eliezer-yudkowsky/",
        "similarity": 14
      },
      {
        "id": "center-for-applied-rationality",
        "title": "Center for Applied Rationality",
        "path": "/knowledge-base/organizations/center-for-applied-rationality/",
        "similarity": 13
      },
      {
        "id": "ea-global",
        "title": "EA Global",
        "path": "/knowledge-base/organizations/ea-global/",
        "similarity": 13
      }
    ]
  },
  "coverage": {
    "passing": 6,
    "total": 13,
    "targets": {
      "tables": 8,
      "diagrams": 1,
      "internalLinks": 15,
      "externalLinks": 10,
      "footnotes": 6,
      "references": 6
    },
    "actuals": {
      "tables": 1,
      "diagrams": 0,
      "internalLinks": 20,
      "externalLinks": 73,
      "footnotes": 0,
      "references": 3,
      "quotesWithQuotes": 0,
      "quotesTotal": 0,
      "accuracyChecked": 0,
      "accuracyTotal": 0
    },
    "items": {
      "llmSummary": "green",
      "schedule": "green",
      "entity": "green",
      "editHistory": "red",
      "overview": "green",
      "tables": "amber",
      "diagrams": "red",
      "internalLinks": "green",
      "externalLinks": "green",
      "footnotes": "red",
      "references": "amber",
      "quotes": "red",
      "accuracy": "red"
    },
    "ratingsString": "N:2.5 R:5 A:1 C:6.5"
  },
  "readerRank": 426,
  "researchRank": 353,
  "recommendedScore": 126.14
}
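The record above is described as database.json output: MDX frontmatter merged with Entity YAML and computed metrics at build time. Below is a minimal TypeScript sketch of how such a merge could work, assuming gray-matter and js-yaml as parsers and field names mirroring the record; the actual LongtermWiki build script is not shown on this page, so every helper and path convention here is illustrative only.

```typescript
// Hypothetical sketch of assembling a page record like the one above.
// Field names mirror the record; the real build pipeline may differ.
import matter from "gray-matter";      // assumed frontmatter parser
import { load } from "js-yaml";        // assumed Entity YAML parser
import { readFileSync } from "node:fs";

interface PageMetrics {
  wordCount: number;
  tableCount: number;
  internalLinks: number;
  externalLinks: number;
  footnoteCount: number;
  sectionCount: number;
}

interface PageRecord {
  id: string;
  path: string;
  filePath: string;
  title: string;
  quality: number | null;
  category: string;
  entityType: string | null;
  metrics: PageMetrics;
  backlinkCount: number;
  lastUpdated: string;
}

// Count simple structural features of the MDX body (illustrative heuristics).
function computeMetrics(body: string): PageMetrics {
  return {
    wordCount: body.split(/\s+/).filter(Boolean).length,
    // Count markdown tables by their header-separator rows, e.g. |---|---|
    tableCount: (body.match(/^\|[\s|:-]+\|$/gm) ?? []).length,
    internalLinks: (body.match(/\]\(\/knowledge-base\//g) ?? []).length,
    externalLinks: (body.match(/\]\(https?:\/\//g) ?? []).length,
    footnoteCount: (body.match(/\[\^/g) ?? []).length,
    sectionCount: (body.match(/^#{2,}\s/gm) ?? []).length,
  };
}

// Merge MDX frontmatter + entity YAML + computed metrics into one record.
function buildRecord(
  mdxPath: string,
  entityYamlPath: string,
  backlinkCount: number
): PageRecord {
  const { data: fm, content } = matter(readFileSync(mdxPath, "utf8"));
  const entity = load(readFileSync(entityYamlPath, "utf8")) as Record<string, unknown>;

  return {
    id: fm.id,
    // Assumed path convention, matching this page's category and id.
    path: `/knowledge-base/${fm.category}/${fm.id}/`,
    filePath: mdxPath,
    title: fm.title,
    quality: fm.quality ?? null,
    category: fm.category,
    entityType: (entity.type as string) ?? null,
    metrics: computeMetrics(content),
    backlinkCount,
    lastUpdated: fm.lastUpdated,
  };
}
```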
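The coverage block pairs per-feature targets with actuals and rolls each up to a red/amber/green status. The statuses shown are consistent with a simple threshold rule (target met is green, partially met is amber, absent is red); the sketch below applies that assumed rule to the numbers from the record and reproduces the "items" map.

```typescript
type Status = "green" | "amber" | "red";

// Rollup rule inferred from the record above (not confirmed against the
// actual build code): green when the actual meets its target, amber when
// partially met, red when the feature is absent.
function coverageStatus(actual: number, target: number): Status {
  if (actual >= target) return "green";
  if (actual > 0) return "amber";
  return "red";
}

// Targets and actuals copied from the coverage block above.
const targets = { tables: 8, diagrams: 1, internalLinks: 15, externalLinks: 10, footnotes: 6, references: 6 };
const actuals = { tables: 1, diagrams: 0, internalLinks: 20, externalLinks: 73, footnotes: 0, references: 3 };

for (const key of Object.keys(targets) as (keyof typeof targets)[]) {
  console.log(key, coverageStatus(actuals[key], targets[key]));
}
// tables amber, diagrams red, internalLinks green, externalLinks green,
// footnotes red, references amber: the same values as the "items" map above.
```

Read this way, the passing/total pair (6 of 13) appears to simply count the green entries across the thirteen coverage items.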
External Links
{
  "grokipedia": "https://grokipedia.com/page/LessWrong"
}
Backlinks (60)
| id | title | type | relationship |
| --- | --- | --- | --- |
| eli-lifland | Eli Lifland | person | |
| self-improvement | Self-Improvement and Recursive Enhancement | capability | |
| epistemic-risks | AI Epistemic Cruxes | crux | |
| misuse-risks | AI Misuse Risk Cruxes | crux | |
| structural-risks | AI Structural Risk Cruxes | crux | |
| why-alignment-hard | Why Alignment Might Be Hard | argument | |
| ea-longtermist-wins-losses | EA and Longtermist Wins and Losses | concept | |
| miri-era | The MIRI Era (2000-2015) | historical | |
| heavy-scaffolding | Heavy Scaffolding / Agentic Systems | concept | |
| provable-safe | Provable / Guaranteed Safe AI | concept | |
| ai-timelines | AI Timelines | concept | |
| model-organisms-of-misalignment | Model Organisms of Misalignment | analysis | |
| ai-futures-project | AI Futures Project | organization | |
| ai-impacts | AI Impacts | organization | |
| arc | ARC (Alignment Research Center) | organization | |
| bridgewater-aia-labs | Bridgewater AIA Labs | organization | |
| cea | Centre for Effective Altruism | organization | |
| center-for-applied-rationality | Center for Applied Rationality | organization | |
| community-building-overview | Community Building Organizations (Overview) | concept | |
| conjecture | Conjecture | organization | |
| controlai | ControlAI | organization | |
| deepmind | Google DeepMind | organization | |
| elicit | Elicit (AI Research Tool) | organization | |
| frontier-model-forum | Frontier Model Forum | organization | |
| good-judgment | Good Judgment (Forecasting) | organization | |
| gratified | Gratified | organization | |
| lighthaven | Lighthaven (Event Venue) | organization | |
| lightning-rod-labs | Lightning Rod Labs | organization | |
| manifest | Manifest (Forecasting Conference) | organization | |
| mats | MATS ML Alignment Theory Scholars program | organization | |
| miri | MIRI (Machine Intelligence Research Institute) | organization | |
| palisade-research | Palisade Research | organization | |
| pause-ai | Pause AI | organization | |
| polymarket | Polymarket | organization | |
| samotsvety | Samotsvety | organization | |
| the-sequences | The Sequences by Eliezer Yudkowsky | organization | |
| connor-leahy | Connor Leahy | person | |
| dustin-moskovitz | Dustin Moskovitz (AI Safety Funder) | person | |
| eliezer-yudkowsky-predictions | Eliezer Yudkowsky: Track Record | concept | |
| eliezer-yudkowsky | Eliezer Yudkowsky | person | |
| gwern | Gwern Branwen | person | |
| issa-rice | Issa Rice | person | |
| jaan-tallinn | Jaan Tallinn | person | |
| leopold-aschenbrenner | Leopold Aschenbrenner | person | |
| neel-nanda | Neel Nanda | person | |
| nuno-sempere | Nuño Sempere | person | |
| sam-bankman-fried | Sam Bankman-Fried | person | |
| vidur-kapur | Vidur Kapur | person | |
| vipul-naik | Vipul Naik | person | |
| ai-watch | AI Watch | project | |
| donations-list-website | Donations List Website | project | |
| interpretability | Mechanistic Interpretability | safety-agenda | |
| org-watch | Org Watch | project | |
| roastmypost | RoastMyPost | project | |
| stampy-aisafety-info | Stampy / AISafety.info | project | |
| timelines-wiki | Timelines Wiki | project | |
| existential-risk | Existential Risk from AI | concept | |
| sleeper-agents | Sleeper Agents: Training Deceptive LLMs | risk | |
| page-creator-pipeline | Research-First Page Creation Pipeline | concept | |
| __index__/project | LongtermWiki Project | concept | |