Longterm Wiki

Nick Bostrom

nick-bostrom (person)
Path: /knowledge-base/people/nick-bostrom/
Entity ID (EID): E215
35 backlinks · Quality: 25 · Updated: 2026-03-13

Page Record
database.json — merged from MDX frontmatter + Entity YAML + computed metrics at build time
{
  "id": "nick-bostrom",
  "numericId": null,
  "path": "/knowledge-base/people/nick-bostrom/",
  "filePath": "knowledge-base/people/nick-bostrom.mdx",
  "title": "Nick Bostrom",
  "quality": 25,
  "readerImportance": 82,
  "researchImportance": 11.5,
  "tacticalValue": 65,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-03-13",
  "dateCreated": "2026-02-15",
  "llmSummary": "Biographical profile of Nick Bostrom covering his founding of the Future of Humanity Institute, his 2014 book 'Superintelligence' on AI existential risk, and key philosophical contributions including the orthogonality thesis, instrumental convergence, and treacherous turn concepts.",
  "description": "Philosopher at FHI, author of 'Superintelligence'",
  "ratings": {
    "novelty": 1.5,
    "rigor": 3,
    "actionability": 1,
    "completeness": 6
  },
  "category": "people",
  "subcategory": "safety-researchers",
  "clusters": [
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 1211,
    "tableCount": 0,
    "diagramCount": 0,
    "internalLinks": 22,
    "externalLinks": 0,
    "footnoteCount": 0,
    "bulletRatio": 0.47,
    "sectionCount": 22,
    "hasOverview": false,
    "structuralScore": 7
  },
  "suggestedQuality": 47,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 1211,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 0,
  "backlinkCount": 35,
  "hallucinationRisk": {
    "level": "high",
    "score": 95,
    "factors": [
      "biographical-claims",
      "no-citations",
      "low-rigor-score",
      "low-quality-score",
      "few-external-sources"
    ]
  },
  "entityType": "person",
  "redundancy": {
    "maxSimilarity": 15,
    "similarPages": [
      {
        "id": "superintelligence",
        "title": "Superintelligence",
        "path": "/knowledge-base/risks/superintelligence/",
        "similarity": 15
      },
      {
        "id": "miri-era",
        "title": "The MIRI Era (2000-2015)",
        "path": "/knowledge-base/history/miri-era/",
        "similarity": 14
      },
      {
        "id": "ai-impacts",
        "title": "AI Impacts",
        "path": "/knowledge-base/organizations/ai-impacts/",
        "similarity": 14
      },
      {
        "id": "dan-hendrycks",
        "title": "Dan Hendrycks",
        "path": "/knowledge-base/people/dan-hendrycks/",
        "similarity": 14
      },
      {
        "id": "existential-risk",
        "title": "Existential Risk from AI",
        "path": "/knowledge-base/risks/existential-risk/",
        "similarity": 14
      }
    ]
  },
  "changeHistory": [
    {
      "date": "2026-02-18",
      "branch": "claude/fix-issue-240-N5irU",
      "title": "Surface tacticalValue in /wiki table and score 53 pages",
      "summary": "Added `tacticalValue` to `ExploreItem` interface, `getExploreItems()` mappings, the `/wiki` explore table (new sortable \"Tact.\" column), and the card view sort dropdown. Scored 49 new pages with tactical values (4 were already scored), bringing total to 53.",
      "model": "sonnet-4",
      "duration": "~30min"
    },
    {
      "date": "2026-02-18",
      "branch": "claude/audit-webpage-errors-X4jHg",
      "title": "Audit wiki pages for factual errors and hallucinations",
      "summary": "Systematic audit of ~20 wiki pages for factual errors, hallucinations, and inconsistencies. Found and fixed 25+ confirmed errors across 17 pages, including wrong dates, fabricated statistics, false attributions, missing major events, broken entity references, misattributed techniques, and internal inconsistencies."
    },
    {
      "date": "2026-02-17",
      "branch": "claude/review-wiki-editing-scCul",
      "title": "Wiki editing system refactoring",
      "summary": "Six refactors to the wiki editing pipeline: (1) extracted shared regex patterns to `crux/lib/patterns.ts`, (2) refactored validation in page-improver to use in-process engine calls instead of subprocess spawning, (3) split the 694-line `phases.ts` into 7 individual phase modules under `phases/`, (4) created shared LLM abstraction `crux/lib/llm.ts` unifying duplicated streaming/retry/tool-loop code, (5) added Zod schemas for LLM JSON response validation, (6) decomposed 820-line mermaid validation into `crux/lib/mermaid-checks.ts` (604 lines) + slim orchestrator (281 lines). Follow-up review integrated patterns.ts across 19+ files, fixed dead imports, corrected ToolHandler type, wired mdx-utils.ts to use shared patterns, replaced hardcoded model strings with MODELS constants, replaced `new Anthropic()` with `createLlmClient()`, replaced inline `extractText` implementations with shared `extractText()` from llm.ts, integrated `MARKDOWN_LINK_RE` into link validators, added `objectivityIssues` to the `AnalysisResult` type (removing an unsafe cast in utils.ts), fixed CI failure from eager client creation, and tested the full pipeline by improving 3 wiki pages. After manual review of 3 improved pages, fixed 8 systematic pipeline issues: (1) added content preservation instructions to prevent polish-tier content loss, (2) made auto-grading default after --apply, (3) added polish-tier citation suppression to prevent fabricated citations, (4) added Quick Assessment table requirement for person pages, (5) added required Overview section enforcement, (6) added section deduplication and content repetition checks to review phase, (7) added bare URL→markdown link conversion instruction, (8) extended biographical claim checker to catch publication/co-authorship and citation count claims.\n\nSubsequent iterative testing and prompt refinement: ran pipeline on jan-leike, chris-olah, far-ai pages. Discovered and fixed: (a) `<!-- NEEDS CITATION -->` HTML comments break MDX compilation (changed to `{/* NEEDS CITATION */}`), (b) excessive citation markers at polish tier — added instruction to only mark NEW claims (max 3-5 per page), (c) editorial meta-comments cluttering output — added no-meta-comments instruction, (d) thin padding sections — added anti-padding instruction, (e) section deduplication needed stronger emphasis — added merge instruction with common patterns. Final test results: jan-leike 1254→1997 words, chris-olah 1187→1687 words, far-ai 1519→2783 words, miri-era 2678→4338 words; all MDX compile, zero critical issues.",
      "pr": 184
    }
  ],
  "coverage": {
    "passing": 5,
    "total": 13,
    "targets": {
      "tables": 5,
      "diagrams": 0,
      "internalLinks": 10,
      "externalLinks": 6,
      "footnotes": 4,
      "references": 4
    },
    "actuals": {
      "tables": 0,
      "diagrams": 0,
      "internalLinks": 22,
      "externalLinks": 0,
      "footnotes": 0,
      "references": 0,
      "quotesWithQuotes": 0,
      "quotesTotal": 0,
      "accuracyChecked": 0,
      "accuracyTotal": 0
    },
    "items": {
      "llmSummary": "green",
      "schedule": "green",
      "entity": "green",
      "editHistory": "green",
      "overview": "red",
      "tables": "red",
      "diagrams": "red",
      "internalLinks": "green",
      "externalLinks": "red",
      "footnotes": "red",
      "references": "red",
      "quotes": "red",
      "accuracy": "red"
    },
    "editHistoryCount": 3,
    "ratingsString": "N:1.5 R:3 A:1 C:6"
  },
  "readerRank": 75,
  "researchRank": 544,
  "recommendedScore": 112.44
}
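The `coverage` block above is internally consistent: `passing: 5` and `total: 13` are exactly the counts of "green" and overall entries in `coverage.items`. A minimal TypeScript sketch of that reduction (the function name `summarizeCoverage` is hypothetical; the wiki's actual build code is not shown in this record):

```typescript
type CoverageStatus = "green" | "red";

// Derive the pass counts from the per-item statuses.
// For the record above, 5 of the 13 items are "green".
function summarizeCoverage(
  items: Record<string, CoverageStatus>
): { passing: number; total: number } {
  const statuses = Object.values(items);
  return {
    passing: statuses.filter((s) => s === "green").length,
    total: statuses.length,
  };
}
```

Feeding it the thirteen `items` entries from the record yields `{ passing: 5, total: 13 }`, matching the stored values.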
External Links
{
  "wikipedia": "https://en.wikipedia.org/wiki/Nick_Bostrom",
  "lesswrong": "https://www.lesswrong.com/tag/nick-bostrom",
  "wikidata": "https://www.wikidata.org/wiki/Q460475",
  "grokipedia": "https://grokipedia.com/page/Nick_Bostrom"
}
Backlinks (35)
| id | title | type | relationship |
| --- | --- | --- | --- |
| fhi | Future of Humanity Institute | organization | |
| toby-ord | Toby Ord | person | |
| self-improvement | Self-Improvement and Recursive Enhancement | capability | |
| accident-risks | AI Accident Risk Cruxes | crux | |
| case-for-xrisk | The Case FOR AI Existential Risk | argument | |
| why-alignment-hard | Why Alignment Might Be Hard | argument | |
| ea-longtermist-wins-losses | EA and Longtermist Wins and Losses | concept | |
| epstein-ai-connections | Jeffrey Epstein's Connections to AI Researchers | concept | |
| longtermism-credibility-after-ftx | Longtermism's Philosophical Credibility After FTX | concept | |
| miri-era | The MIRI Era (2000-2015) | historical | |
| genetic-enhancement | Genetic Enhancement / Selection | capability | |
| whole-brain-emulation | Whole Brain Emulation | capability | |
| instrumental-convergence-framework | Instrumental Convergence Framework | analysis | |
| longtermist-value-comparisons | Relative Longtermist Value Comparisons | analysis | |
| power-seeking-conditions | Power-Seeking Emergence Conditions Model | analysis | |
| fli | Future of Life Institute (FLI) | organization | |
| lesswrong | LessWrong | organization | |
| pause-ai | Pause AI | organization | |
| secure-ai-project | Secure AI Project | organization | |
| sff | Survival and Flourishing Fund (SFF) | organization | |
| the-sequences | The Sequences by Eliezer Yudkowsky | organization | |
| eliezer-yudkowsky | Eliezer Yudkowsky | person | |
| holden-karnofsky | Holden Karnofsky | person | |
| __index__/knowledge-base/people | People | concept | |
| timelines-wiki | Timelines Wiki | project | |
| ai-welfare | AI Welfare and Digital Minds | concept | |
| enfeeblement | AI-Induced Enfeeblement | risk | |
| existential-risk | Existential Risk from AI | concept | |
| instrumental-convergence | Instrumental Convergence | risk | |
| lock-in | AI Value Lock-in | risk | |
| superintelligence | Superintelligence | concept | |
| treacherous-turn | Treacherous Turn | risk | |
| doomer | AI Doomer Worldview | concept | |
| governance-focused | Governance-Focused Worldview | concept | |
| diagram-naming-research | Factor Diagram Naming: Research Report | concept | |
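The page record's `backlinkCount` (35) should agree with the number of backlink rows listed here. A hypothetical consistency check in TypeScript (the `Backlink` shape and function name are assumptions for illustration, not the wiki's actual API):

```typescript
interface Backlink {
  id: string;
  title: string;
  type: string;
  relationship?: string; // empty for every row on this page
}

// Assumed invariant: the stored count matches the backlink list length.
function checkBacklinkCount(
  record: { backlinkCount: number },
  backlinks: Backlink[]
): boolean {
  return record.backlinkCount === backlinks.length;
}
```

For this page, `checkBacklinkCount({ backlinkCount: 35 }, rows)` holds only if `rows` has all 35 entries from the table above.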