Longterm Wiki

Dan Hendrycks

dan-hendrycks · person
Path: /knowledge-base/people/dan-hendrycks/
Entity ID (EID): E89
12 backlinks · Quality: 19 · Updated: 2026-03-13

Page Record
database.json — merged from MDX frontmatter + Entity YAML + computed metrics at build time
{
  "id": "dan-hendrycks",
  "numericId": null,
  "path": "/knowledge-base/people/dan-hendrycks/",
  "filePath": "knowledge-base/people/dan-hendrycks.mdx",
  "title": "Dan Hendrycks",
  "quality": 19,
  "readerImportance": 87,
  "researchImportance": 39.5,
  "tacticalValue": 78,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-03-13",
  "dateCreated": "2026-02-15",
  "llmSummary": "Biographical overview of Dan Hendrycks, CAIS director who coordinated the May 2023 AI risk statement signed by major AI researchers. Covers his technical work on benchmarks (MMLU, ETHICS), robustness research, and institution-building efforts, emphasizing his focus on catastrophic AI risk as a global priority.",
  "description": "Director of CAIS, focuses on catastrophic AI risk reduction",
  "ratings": {
    "novelty": 1.5,
    "rigor": 2,
    "actionability": 1,
    "completeness": 4
  },
  "category": "people",
  "subcategory": "safety-researchers",
  "clusters": [
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 2660,
    "tableCount": 2,
    "diagramCount": 0,
    "internalLinks": 19,
    "externalLinks": 9,
    "footnoteCount": 0,
    "bulletRatio": 0.19,
    "sectionCount": 18,
    "hasOverview": true,
    "structuralScore": 14
  },
  "suggestedQuality": 93,
  "updateFrequency": null,
  "evergreen": true,
  "wordCount": 2660,
  "unconvertedLinks": [
    {
      "text": "course.mlsafety.org",
      "url": "https://course.mlsafety.org/",
      "resourceId": "65c9fe2d57a4eb4c",
      "resourceTitle": "ML Safety Course"
    },
    {
      "text": "Measuring Massive Multitask Language Understanding",
      "url": "https://arxiv.org/abs/2009.03300",
      "resourceId": "0635974beafcf9c5",
      "resourceTitle": "Hendrycks et al."
    },
    {
      "text": "Aligning AI With Shared Human Values",
      "url": "https://arxiv.org/abs/2008.02275",
      "resourceId": "57379f24535e9c04",
      "resourceTitle": "ICLR 2021"
    },
    {
      "text": "Unsolved Problems in ML Safety",
      "url": "https://arxiv.org/abs/2109.13916",
      "resourceId": "f94e705023d45765",
      "resourceTitle": "Unsolved Problems in ML Safety"
    },
    {
      "text": "Actionable Guidance for High-Consequence AI Risk Management",
      "url": "https://arxiv.org/abs/2206.08966",
      "resourceId": "b88263a70cbf743e",
      "resourceTitle": "Barrett, A.M., Hendrycks, D., Newman, J., & Nonnecke, B."
    },
    {
      "text": "Superintelligence Strategy",
      "url": "https://arxiv.org/abs/2503.05628",
      "resourceId": "a15589d5e604d864",
      "resourceTitle": "Hendrycks, D., Schmidt, E., & Wang, A."
    }
  ],
  "unconvertedLinkCount": 6,
  "convertedLinkCount": 0,
  "backlinkCount": 12,
  "hallucinationRisk": {
    "level": "high",
    "score": 90,
    "factors": [
      "biographical-claims",
      "no-citations",
      "low-rigor-score",
      "low-quality-score"
    ]
  },
  "entityType": "person",
  "redundancy": {
    "maxSimilarity": 17,
    "similarPages": [
      {
        "id": "cais",
        "title": "CAIS (Center for AI Safety)",
        "path": "/knowledge-base/organizations/cais/",
        "similarity": 17
      },
      {
        "id": "miri-era",
        "title": "The MIRI Era (2000-2015)",
        "path": "/knowledge-base/history/miri-era/",
        "similarity": 16
      },
      {
        "id": "arc",
        "title": "ARC (Alignment Research Center)",
        "path": "/knowledge-base/organizations/arc/",
        "similarity": 16
      },
      {
        "id": "ilya-sutskever",
        "title": "Ilya Sutskever",
        "path": "/knowledge-base/people/ilya-sutskever/",
        "similarity": 16
      },
      {
        "id": "stuart-russell",
        "title": "Stuart Russell",
        "path": "/knowledge-base/people/stuart-russell/",
        "similarity": 16
      }
    ]
  },
  "changeHistory": [
    {
      "date": "2026-02-18",
      "branch": "claude/fix-issue-240-N5irU",
      "title": "Surface tacticalValue in /wiki table and score 53 pages",
      "summary": "Added `tacticalValue` to `ExploreItem` interface, `getExploreItems()` mappings, the `/wiki` explore table (new sortable \"Tact.\" column), and the card view sort dropdown. Scored 49 new pages with tactical values (4 were already scored), bringing total to 53.",
      "model": "sonnet-4",
      "duration": "~30min"
    },
    {
      "date": "2026-02-17",
      "branch": "claude/review-wiki-editing-scCul",
      "title": "Wiki editing system refactoring",
      "summary": "Six refactors to the wiki editing pipeline: (1) extracted shared regex patterns to `crux/lib/patterns.ts`, (2) refactored validation in page-improver to use in-process engine calls instead of subprocess spawning, (3) split the 694-line `phases.ts` into 7 individual phase modules under `phases/`, (4) created shared LLM abstraction `crux/lib/llm.ts` unifying duplicated streaming/retry/tool-loop code, (5) added Zod schemas for LLM JSON response validation, (6) decomposed 820-line mermaid validation into `crux/lib/mermaid-checks.ts` (604 lines) + slim orchestrator (281 lines). Follow-up review integrated patterns.ts across 19+ files, fixed dead imports, corrected ToolHandler type, wired mdx-utils.ts to use shared patterns, replaced hardcoded model strings with MODELS constants, replaced `new Anthropic()` with `createLlmClient()`, replaced inline `extractText` implementations with shared `extractText()` from llm.ts, integrated `MARKDOWN_LINK_RE` into link validators, added `objectivityIssues` to the `AnalysisResult` type (removing an unsafe cast in utils.ts), fixed CI failure from eager client creation, and tested the full pipeline by improving 3 wiki pages. After manual review of 3 improved pages, fixed 8 systematic pipeline issues: (1) added content preservation instructions to prevent polish-tier content loss, (2) made auto-grading default after --apply, (3) added polish-tier citation suppression to prevent fabricated citations, (4) added Quick Assessment table requirement for person pages, (5) added required Overview section enforcement, (6) added section deduplication and content repetition checks to review phase, (7) added bare URL→markdown link conversion instruction, (8) extended biographical claim checker to catch publication/co-authorship and citation count claims.\n\nSubsequent iterative testing and prompt refinement: ran pipeline on jan-leike, chris-olah, far-ai pages. Discovered and fixed: (a) `<!-- NEEDS CITATION -->` HTML comments break MDX compilation (changed to `{/* NEEDS CITATION */}`), (b) excessive citation markers at polish tier — added instruction to only mark NEW claims (max 3-5 per page), (c) editorial meta-comments cluttering output — added no-meta-comments instruction, (d) thin padding sections — added anti-padding instruction, (e) section deduplication needed stronger emphasis — added merge instruction with common patterns. Final test results: jan-leike 1254→1997 words, chris-olah 1187→1687 words, far-ai 1519→2783 words, miri-era 2678→4338 words; all MDX compile, zero critical issues.",
      "pr": 184
    }
  ],
  "coverage": {
    "passing": 5,
    "total": 13,
    "targets": {
      "tables": 11,
      "diagrams": 1,
      "internalLinks": 21,
      "externalLinks": 13,
      "footnotes": 8,
      "references": 8
    },
    "actuals": {
      "tables": 2,
      "diagrams": 0,
      "internalLinks": 19,
      "externalLinks": 9,
      "footnotes": 0,
      "references": 8,
      "quotesWithQuotes": 0,
      "quotesTotal": 0,
      "accuracyChecked": 0,
      "accuracyTotal": 0
    },
    "items": {
      "llmSummary": "green",
      "schedule": "red",
      "entity": "green",
      "editHistory": "green",
      "overview": "green",
      "tables": "amber",
      "diagrams": "red",
      "internalLinks": "amber",
      "externalLinks": "amber",
      "footnotes": "red",
      "references": "green",
      "quotes": "red",
      "accuracy": "red"
    },
    "editHistoryCount": 2,
    "ratingsString": "N:1.5 R:2 A:1 C:4"
  },
  "readerRank": 38,
  "researchRank": 346,
  "recommendedScore": 103.28
}
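The record header above says the page record is assembled at build time from MDX frontmatter, an Entity YAML file, and computed metrics. A minimal sketch of such a merge follows; the function name, field selection, and precedence order are assumptions for illustration, not the wiki's actual implementation:

```typescript
// Hypothetical sketch of the build-time merge: MDX frontmatter, Entity
// YAML, and computed metrics combined into one page record. The
// precedence (computed > entity > frontmatter) is an assumption.

type Fields = Record<string, unknown>;

function mergePageRecord(
  frontmatter: Fields, // parsed from the .mdx file
  entityYaml: Fields,  // parsed from the Entity YAML
  computed: Fields,    // metrics computed at build time
): Fields {
  // Later spreads win on key conflicts.
  return { ...frontmatter, ...entityYaml, ...computed };
}

const record = mergePageRecord(
  { id: "dan-hendrycks", title: "Dan Hendrycks", quality: 19 },
  { entityType: "person", path: "/knowledge-base/people/dan-hendrycks/" },
  { wordCount: 2660, backlinkCount: 12 },
);
```

Under this assumed precedence, a build-time metric such as `wordCount` would overwrite any stale value left in the frontmatter.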
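The `coverage.items` colors are consistent with a simple rule applied to `targets` vs `actuals`: green when the target is met, amber when partially met, red when zero. This is inferred from the data above, not the wiki's documented logic:

```typescript
// Assumed status rule, inferred from the coverage data in the record:
// green if the target is met, amber if partially met, red if zero.
type Status = "green" | "amber" | "red";

function coverageStatus(actual: number, target: number): Status {
  if (actual >= target) return "green";
  if (actual > 0) return "amber";
  return "red";
}

// Matches the record: tables 2/11 → amber, diagrams 0/1 → red,
// footnotes 0/8 → red, references 8/8 → green.
```

All six target/actual pairs in the record reproduce their listed colors under this rule.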
External Links
{
  "grokipedia": "https://grokipedia.com/page/Dan_Hendrycks"
}
Backlinks (12)
| id | title | type | relationship |
|---|---|---|---|
| far-ai | FAR AI | organization | |
| maim | MAIM (Mutually Assured AI Malfunction) | policy | |
| warning-signs-model | Warning Signs Model | analysis | |
| ai-impacts | AI Impacts | organization | |
| cais | CAIS (Center for AI Safety) | organization | |
| coefficient-giving | Coefficient Giving | organization | |
| manifold | Manifold (Prediction Market) | organization | |
| manifund | Manifund | organization | |
| __index__ | People (/knowledge-base/people) | concept | |
| lab-culture | AI Lab Safety Culture | approach | |
| training-programs | AI Safety Training Programs | approach | |
| emergent-capabilities | Emergent Capabilities | risk | |