Longterm Wiki

The IMD AI Safety Clock is a risk-tracking metric from the TONOMUS Global Center for Digital & AI Transformation. This update article provides useful context for monitoring institutional risk assessments of AGI timelines and threat dimensions, though it is a journalistic summary rather than a technical or policy document.

Metadata

Importance: 42/100 · organizational report · news

Summary

The IMD AI Safety Clock has made its largest single jump to 23:40 (20 minutes to midnight), driven by advances in agentic AI, weaponization concerns, Chinese AI competition, and fragmented global regulation. The clock tracks three dimensions—AI sophistication, autonomy, and execution—to signal proximity to uncontrolled AGI. Over 12 months since launch, the clock has advanced nine minutes total, indicating an accelerating pace of risk escalation.

Key Points

  • Clock advanced 4 minutes to 23:40 in its largest single update, totaling 9 minutes of advancement over 12 months since September 2024 launch.
  • Key drivers include agentic AI reasoning breakthroughs, AI weaponization, rise of Chinese AI models, and deepening AI-defense sector ties.
  • Assessment uses three dimensions: sophistication (reasoning), autonomy (independence from human oversight), and execution (physical-world influence).
  • Combines quantitative data from thousands of sources with expert qualitative judgment, inspired by the Doomsday Clock framework.
  • Clock is designed as a trend signal toward 'Uncontrolled Artificial General Intelligence' (UAGI), not a precise prediction of catastrophe timing.

Cited by 2 pages

Page | Type | Quality
AI-Induced Irreversibility | Risk | 64.0
AI Value Lock-in | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 16 KB
IMD Safety Clock - Big Leap - Agentic AI - I by IMD 
 IMD AI Safety Clock makes biggest leap yet amid weaponization and rise of agentic AI

by Michael R. Wade and Konstantinos Trantopoulos · Published September 19, 2025 in Technology · 9 min read

 One year after launch, AI risk tracker underlines worrying implications of accelerating AI adoption.

The IMD AI Safety Clock has jumped four minutes forward to 23:40, just 20 minutes to midnight. This is the largest single adjustment since the clock's launch a year ago, driven by a new phase in AI development and deployment. Over the course of 12 months, the AI Safety Clock has advanced nine minutes closer to midnight, highlighting the accelerating pace of risk escalation.

 Tracking humanity’s race with AI

In September 2024, the IMD AI Safety Clock was launched by the TONOMUS Global Center for Digital & AI Transformation. The clock is a metaphorical risk gauge that estimates how close humanity is to a tipping point where Uncontrolled Artificial General Intelligence (UAGI) could pose serious threats. Inspired by the Doomsday Clock, which measures existential risks from nuclear conflict, the AI Safety Clock tracks our distance from midnight, the moment when UAGI could become uncontrollable or dangerously misaligned with human interests.

Unlike a prediction of exact dates or events, the clock is designed as a signal of trends. It combines quantitative data from thousands of news feeds, research reports, and monitoring systems with expert qualitative judgment. The assessment focuses on three key dimensions: sophistication (the reasoning and problem-solving abilities of AI), autonomy (the independence with which AI systems operate without human oversight), and execution (the ability of AI to act on and influence the physical world).

When the clock was launched in September 2024, it was set at 23:31, 29 minutes before midnight. This baseline reflected rapid advances in large language models, the first experiments with agentic AI, and limited execution capacity through robotics. It marked a starting point for tracking AI risk rather than a forecast of imminent catastrophe.

Breakthroughs in agentic reasoning, the rise of Chinese models, lack of unified regulations, and deepening ties between AI companies and defense actors pushed the risk profile higher.
 Weighing the impact of the three dimensions

By December 2024, the clock had advanced to 23:34, 26 minutes before midnight. “AI was moving faster than expected,”

... (truncated, 16 KB total)
Resource ID: 23067a0dd2856cc6 | Stable ID: sid_t4xbn8BvRL