
Sentinel

  • Type: Foresight and emergency response organization
  • Founded: 2024
  • Focus: Anticipating and reacting to large-scale catastrophes
  • Output: Weekly newsletters, Sentinel Minutes podcast
  • Team: Elite forecasters including members from Samotsvety, Good Judgment, and Swift Centre
  • Website: sentinel-team.org
  • Official Website: sentinelswiki.com
  • Wikipedia: en.wikipedia.org

Sentinel is a foresight and emergency response team focused on anticipating and reacting to large-scale catastrophes, particularly those of a speculative nature.1 Founded by Nuño Sempere, the organization processes millions of news items weekly to identify potential global risks, using language models for initial prioritization before engaging elite forecasters to assess threats and assign probabilities.

The organization emerged from Sempere’s work with Samotsvety Forecasting, building on that group’s expertise in probabilistic risk assessment. Sentinel aims to provide an early warning system for catastrophic risks by combining automated news monitoring with human forecaster judgment.

Sentinel operates a multi-stage process for identifying global risks:

  1. Automated Collection: Processing millions of news items weekly from global sources
  2. AI Prioritization: Using language models to filter and prioritize potentially significant events
  3. Forecaster Assessment: Elite forecasters evaluate flagged items and assign probability estimates
  4. Publication: Weekly summaries published via newsletter and podcast

The organization publishes regular assessments covering topics including:

  • AI risks: Developments in artificial intelligence that may pose catastrophic risks
  • Geopolitical threats: International conflicts, nuclear risks, and escalation scenarios
  • Emerging catastrophes: Novel threats that lack historical precedent
  • Probability updates: Changes in forecasted likelihoods for tracked risks

Each brief includes a “risk status indicator” providing an at-a-glance assessment of near-term catastrophic risk levels.
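A status indicator of this kind amounts to mapping the current worst-case probability onto a small set of levels. The sketch below is purely illustrative: the level names and the probability thresholds are hypothetical assumptions, since the page does not describe Sentinel's actual criteria.

```python
from enum import Enum

class RiskStatus(Enum):
    NORMAL = "normal"       # no tracked risk stands out this week
    ELEVATED = "elevated"   # at least one risk warrants closer attention
    SEVERE = "severe"       # a tracked risk has a worryingly high estimate

def risk_status(max_probability: float) -> RiskStatus:
    """Map the highest forecasted probability among tracked risks to a level.

    Thresholds here are invented for illustration, not Sentinel's values."""
    if max_probability >= 0.10:
        return RiskStatus.SEVERE
    if max_probability >= 0.01:
        return RiskStatus.ELEVATED
    return RiskStatus.NORMAL
```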

The Sentinel team combines expertise from leading forecasting organizations:

  • Nuño Sempere - Head of Foresight; co-founder of Samotsvety Forecasting; fellow in 2025 AI for Human Reasoning Fellowship; produces monthly forecasting newsletters
  • Vidur Kapur - Superforecaster at Good Judgment; also forecasts for Swift Centre, Samotsvety, and RAND
  • Tolga Bilge - AI Policy Researcher at ControlAI
  • Rai Sur - Podcast narrator for Sentinel Minutes
  • belikewater - Forecaster

Sentinel publishes via Substack, offering both free and paid subscription tiers. Paid members gain access to community Slack channels and provide direct support for the organization’s detection work.1

The organization produces a podcast summarizing key findings and risk assessments, available on Apple Podcasts, Spotify, and RSS feeds. The podcast provides an accessible audio format for weekly risk updates.

Sentinel’s work intersects with AI safety in several ways:

  • AI risk tracking: Regular monitoring of AI developments that could pose catastrophic risks
  • Methodology: Uses language models as part of their detection pipeline, providing a case study in AI-assisted risk assessment
  • Community overlap: Team members active in effective altruism and rationalist communities where AI safety is a central concern

The organization represents an application of forecasting methods developed in the EA/rationalist ecosystem to the practical problem of catastrophic risk early warning.

Open questions about Sentinel's approach include:

  • How effective is the AI-assisted detection pipeline at catching novel, unprecedented risks?
  • What is the optimal balance between false positives (unnecessary alarm) and false negatives (missed warnings)?
  • How do Sentinel’s risk assessments compare to those from traditional intelligence or security organizations?
  • Can the organization scale its impact while maintaining forecaster quality?
  • What evidence would demonstrate the value of speculative risk forecasting?
  1. Sentinel - About