
# Automation Tools


This page documents all automation tools available for maintaining and improving the knowledge base.

| Tool | Purpose | Command |
| --- | --- | --- |
| Content Commands | Improve, grade, create pages | `npm run crux -- content` |
| Validators | Check content quality | `npm run crux -- validate` |
| Analyzers | Analysis and reporting | `npm run crux -- analyze` |
| Auto-fixers | Fix common issues | `npm run crux -- fix` |
| Data Builder | Regenerate entity data | `npm run build:data` |
| Resources | External resource management | `npm run crux -- resources` |

## Page Improver

`scripts/page-improver.mjs` is the recommended way to improve wiki pages to quality 5.

```sh
# List pages that need improvement (sorted by priority)
node scripts/page-improver.mjs --list

# Get improvement prompt for a specific page
node scripts/page-improver.mjs economic-disruption

# Show page info only (no prompt)
node scripts/page-improver.mjs racing-dynamics --info

# Filter by quality and importance
node scripts/page-improver.mjs --list --max-qual 3 --min-imp 50
```
A quality-5 page must include:

| Element | Requirement |
| --- | --- |
| Quick Assessment Table | 5+ rows, 3 columns (Dimension, Assessment, Evidence) |
| Substantive Tables | 2+ additional tables with real data |
| Mermaid Diagram | 1+ showing key relationships |
| Citations | 10+ real URLs from authoritative sources |
| Quantified Claims | Replace “significant” with “25-40%” etc. |
| Word Count | 800+ words of substantive content |
| Model | Cost per Page |
| --- | --- |
| Opus 4.5 | $3-5 |
| Sonnet 4.5 | $0.50-1.00 |
- Gold standard: `src/content/docs/knowledge-base/risks/misuse/bioweapons.mdx`
- Good example: `src/content/docs/knowledge-base/risks/structural/racing-dynamics.mdx`
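As a rough illustration of the thresholds above, a heuristic checker could count words, URLs, table blocks, and Mermaid fences in an MDX source string. This is a sketch only — `quickQualityCheck` is a hypothetical name, and the real checks are the `crux validate` suite:

```javascript
// Illustrative heuristic against the quality-5 checklist (not part of the toolchain).
// "tables >= 3" reflects 1 Quick Assessment Table + 2 substantive tables.
function quickQualityCheck(mdxSource) {
  const words = (mdxSource.match(/\S+/g) || []).length;
  const citations = (mdxSource.match(/https?:\/\/[^\s)\]"']+/g) || []).length;
  const tables = mdxSource
    .split(/\n{2,}/) // blank-line-separated blocks
    .filter((block) => /^\|.*\|/m.test(block)).length;
  const mermaidDiagrams = (mdxSource.match(/`{3}mermaid/g) || []).length;
  return {
    words,
    citations,
    tables,
    mermaidDiagrams,
    meetsQ5:
      words >= 800 && citations >= 10 && tables >= 3 && mermaidDiagrams >= 1,
  };
}
```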

## Content Grading

Uses the Claude Sonnet API to automatically grade pages, assigning importance and quality scores plus an AI-generated summary.

```sh
# Preview what would be graded (no API calls)
node scripts/grade-content.mjs --dry-run

# Grade a specific page
node scripts/grade-content.mjs --page scheming

# Grade pages and apply to frontmatter
node scripts/grade-content.mjs --limit 10 --apply

# Grade a category with parallel processing
node scripts/grade-content.mjs --category responses --parallel 3

# Skip already-graded pages
node scripts/grade-content.mjs --skip-graded --limit 50
```
| Option | Description |
| --- | --- |
| `--page ID` | Grade a single page |
| `--dry-run` | Preview without API calls |
| `--limit N` | Only process N pages |
| `--parallel N` | Process N pages concurrently (default: 1) |
| `--category X` | Only process pages in category |
| `--skip-graded` | Skip pages with existing importance |
| `--apply` | Write grades to frontmatter (caution) |
| `--output FILE` | Write results to JSON file |

**Importance (0-100):**

- 90-100: Essential for prioritization (core interventions, key risk mechanisms)
- 70-89: High value (concrete responses, major risk categories)
- 50-69: Useful context (supporting analysis, secondary risks)
- 30-49: Reference material (historical, profiles, niche)
- 0-29: Peripheral (internal docs, stubs)

**Quality (0-100):**

- 80-100: Comprehensive (2+ tables, 1+ diagram, 5+ citations, quantified claims)
- 60-79: Good (1+ table, 3+ citations, mostly prose)
- 40-59: Adequate (structure but lacks tables/citations)
- 20-39: Draft (poorly structured, heavy bullets, no evidence)
- 0-19: Stub (minimal content)
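The two rubrics above amount to simple threshold lookups. An illustrative sketch (the band boundaries and labels come from the rubric; the functions themselves are not part of the toolchain — the actual grading is done by the Claude API):

```javascript
// Map a numeric importance score to its rubric tier.
function importanceTier(score) {
  if (score >= 90) return "Essential for prioritization";
  if (score >= 70) return "High value";
  if (score >= 50) return "Useful context";
  if (score >= 30) return "Reference material";
  return "Peripheral";
}

// Map a numeric quality score to its rubric tier.
function qualityTier(score) {
  if (score >= 80) return "Comprehensive";
  if (score >= 60) return "Good";
  if (score >= 40) return "Adequate";
  if (score >= 20) return "Draft";
  return "Stub";
}
```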

≈$0.02 per page, ≈$6 for all 329 pages
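The total is straightforward arithmetic on the per-page rate; a sketch makes it explicit (`estimateGradingCost` is a hypothetical helper, and $0.02/page is the estimate quoted above):

```javascript
// Back-of-envelope cost estimate at an assumed per-page rate.
function estimateGradingCost(pageCount, costPerPage = 0.02) {
  return pageCount * costPerPage;
}
// estimateGradingCost(329) is about 6.58, consistent with the ~$6 figure.
```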


## Validators

All validators are accessible via the unified `crux` CLI:

```sh
npm run validate                 # Run all validators
npm run crux -- validate --help  # List all validators
```
| Command | Description |
| --- | --- |
| `crux validate compile` | MDX compilation check |
| `crux validate data` | Entity data integrity |
| `crux validate refs` | Internal reference validation |
| `crux validate mermaid` | Mermaid diagram syntax |
| `crux validate sidebar` | Sidebar configuration |
| `crux validate entity-links` | EntityLink component validation |
| `crux validate templates` | Template compliance |
| `crux validate quality` | Content quality metrics |
| `crux validate unified` | Unified rules engine (escaping, formatting) |
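Conceptually, a unified CLI like this maps each subcommand name to a validator function. A minimal sketch of that dispatch pattern (the `validators` registry and its placeholder checks are illustrative, not the actual crux internals):

```javascript
// Hypothetical validator registry: name -> function returning failing files.
const validators = {
  compile: (files) => files.filter((f) => !f.endsWith(".mdx")),
  mermaid: (files) => [], // placeholder: a real check would parse diagram syntax
};

function runValidator(name, files) {
  const validator = validators[name];
  if (!validator) throw new Error(`Unknown validator: ${name}`);
  const failures = validator(files);
  return { name, passed: failures.length === 0, failures };
}
```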
```sh
# Run specific validators
npm run crux -- validate compile --quick
npm run crux -- validate unified --rules=dollar-signs,markdown-lists

# Skip specific checks
npm run crux -- validate all --skip=component-refs

# CI mode
npm run validate:ci
```

## Knowledge Base System

An SQLite-based system for managing content, sources, and AI summaries.

Requires a `.env` file:

```sh
ANTHROPIC_API_KEY=sk-ant-...
```
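A script consuming this key would typically fail fast when it is missing. A minimal sketch, assuming only the variable name shown above (`requireApiKey` is a hypothetical helper, not part of the toolchain):

```javascript
// Read ANTHROPIC_API_KEY from the environment, failing with a clear message.
function requireApiKey(env = process.env) {
  const key = env.ANTHROPIC_API_KEY;
  if (!key || !key.startsWith("sk-ant-")) {
    throw new Error("ANTHROPIC_API_KEY missing or malformed; check your .env file");
  }
  return key;
}
```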
```sh
npm run crux -- analyze scan           # Scan MDX files, extract sources
npm run crux -- generate summaries     # Generate AI summaries
node scripts/scan-content.mjs --stats  # Show database statistics
```
```sh
# Scan content (run after editing MDX files)
node scripts/scan-content.mjs
node scripts/scan-content.mjs --force    # Rescan all files
node scripts/scan-content.mjs --verbose  # Show per-file progress

# Generate summaries via crux
npm run crux -- generate summaries --batch 50
npm run crux -- generate summaries --model sonnet
npm run crux -- generate summaries --id deceptive-alignment
npm run crux -- generate summaries --dry-run
```

All cached data is in `.cache/` (gitignored):

- `.cache/knowledge.db` - SQLite database
- `.cache/sources/` - Fetched source documents
| Task | Model | Cost |
| --- | --- | --- |
| Summarize all 311 articles | Haiku | ≈$2-3 |
| Summarize all 793 sources | Haiku | ≈$10-15 |

## Data Builder

**Important:** The data build must run before the site build.

```sh
npm run build:data  # Regenerate all data files
npm run dev         # Auto-runs build:data first
npm run build       # Auto-runs build:data first
```

After running `build:data`, the following files are generated:

- `src/data/database.json` - Main entity database
- `src/data/entities.json` - Entity definitions
- `src/data/backlinks.json` - Cross-references
- `src/data/tagIndex.json` - Tag index
- `src/data/pathRegistry.json` - URL path mappings
- `src/data/pages.json` - Page metadata for scripts
```sh
npm run sync:descriptions  # Sync model descriptions from files
npm run extract            # Extract data from pages
npm run generate-yaml      # Generate YAML from data
npm run cleanup-data       # Clean up data files
```

## Content Commands

A unified tool for managing and improving content quality, exposed via `crux content`.

```sh
# Improve pages
npm run crux -- content improve <page-id>

# Grade pages using Claude API
npm run crux -- content grade --page scheming
npm run crux -- content grade --limit 5 --apply

# Regrade pages
npm run crux -- content regrade --page scheming

# Create new pages
npm run crux -- content create --type risk --file input.yaml
```
| Option | Description |
| --- | --- |
| `--dry-run` | Preview without API calls |
| `--limit N` | Process only N pages |
| `--apply` | Apply changes directly to files |
| `--page ID` | Target specific page |

## Resource Management

```sh
# Find URLs that can be converted to <R> components
node scripts/map-urls-to-resources.mjs expertise-atrophy  # Specific file
node scripts/map-urls-to-resources.mjs                    # All files
node scripts/map-urls-to-resources.mjs --stats            # Statistics only

# Auto-convert markdown links to <R> components
node scripts/convert-links-to-r.mjs --dry-run  # Preview
node scripts/convert-links-to-r.mjs --apply    # Apply changes
```
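The conversion performed by `convert-links-to-r.mjs` can be pictured as a regex rewrite from markdown links to `<R>` components. A sketch of that idea — the `url` prop and child-text shape shown here are assumptions for illustration, not the component's documented API:

```javascript
// Rewrite [text](https://...) markdown links into <R> resource components.
function convertLinks(mdx) {
  return mdx.replace(
    /\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g,
    (_match, text, url) => `<R url="${url}">${text}</R>`,
  );
}
```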
```sh
node scripts/utils/export-resources.mjs  # Export resource data
```

## Page Generation

```sh
# Generate a model page from YAML input
npm run crux -- generate content --type model --file input.yaml

# Generate a risk page
npm run crux -- generate content --type risk --file input.yaml

# Generate a response page
npm run crux -- generate content --type response --file input.yaml
```
```sh
npm run crux -- generate summaries --batch 50  # Generate summaries for multiple pages
```

## Testing

```sh
npm run test             # Run all tests
npm run test:lib         # Test library functions
npm run test:validators  # Test validator functions
```

## Linting and Formatting

```sh
npm run lint          # Check for linting issues
npm run lint:fix      # Fix linting issues
npm run format        # Format all files
npm run format:check  # Check formatting without changing files
```

## Temporary Files

**Convention:** All temporary/intermediate files go in `.claude/temp/` (gitignored).

Scripts that generate intermediate output (like grading results) write here by default. This keeps the project root clean and prevents accidental commits.


## Common Workflows

### Improve a page to quality 5

1. Find candidates:

   ```sh
   node scripts/page-improver.mjs --list --max-qual 3
   ```

2. Get the improvement prompt:

   ```sh
   node scripts/page-improver.mjs economic-disruption
   ```

3. Run the generated prompt in Claude Code.

4. Validate the result:

   ```sh
   npm run crux -- validate compile
   npm run crux -- validate templates
   ```
### Grade ungraded pages in batch

1. Preview:

   ```sh
   node scripts/grade-content.mjs --skip-graded --dry-run
   ```

2. Grade and apply:

   ```sh
   node scripts/grade-content.mjs --skip-graded --apply --parallel 3
   ```

3. Review the results:

   ```sh
   cat .claude/temp/grades-output.json
   ```
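To review a grading run programmatically rather than eyeballing the JSON, a small summarizer could average the scores. Sketch only — the array-of-`{ id, importance, quality }` shape is an assumption about `grades-output.json`, not a documented schema:

```javascript
// Summarize grading results: page count plus rounded average scores.
function summarizeGrades(results) {
  const n = results.length || 1; // avoid divide-by-zero on empty runs
  const avg = (key) =>
    Math.round(results.reduce((sum, r) => sum + r[key], 0) / n);
  return {
    pages: results.length,
    avgImportance: avg("importance"),
    avgQuality: avg("quality"),
  };
}
```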
### Validate all content

```sh
npm run validate
```
### Rebuild and verify entity data

```sh
npm run build:data
npm run crux -- validate data
```