Longterm Wiki

A public-facing risk communication tool from IMD Business School intended to make the urgency of AI safety legible to non-technical audiences, including executives and policymakers; useful as a reference for how AI risk is framed in mainstream institutional contexts.

Metadata

Importance: 42/100 · tool page

Summary

The IMD AI Safety Clock is a visual indicator tool developed by IMD Business School and TONOMUS that tracks how close humanity may be to a critical AI safety threshold, analogous to the Bulletin of the Atomic Scientists' Doomsday Clock. It synthesizes expert assessments of AI risk factors to communicate urgency around AI safety governance and the need for proactive intervention before irreversible harms occur.

Key Points

  • Modeled after the Doomsday Clock concept, it uses a clock metaphor to represent proximity to a critical AI safety threshold.
  • Aggregates expert input on risk factors including capability growth, governance gaps, and alignment progress.
  • Designed to communicate AI existential and catastrophic risk urgency to business leaders and policymakers.
  • Highlights path-dependence and value lock-in concerns, emphasizing that inaction now may foreclose safer futures.
  • Serves as a public-facing communication and advocacy tool to mobilize awareness around AI safety timelines.

Cited by 2 pages

Page                        | Type | Quality
AI-Induced Irreversibility  | Risk | 64.0
AI Value Lock-in            | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 32 KB
AI Safety Clock - IMD business school for management and leadership courses 
 AI Safety Clock

Evaluating the risk of Uncontrolled Artificial General Intelligence

 
 The IMD AI Safety Clock is a tool designed to evaluate the risks of Uncontrolled Artificial General Intelligence (UAGI) – autonomous AI systems that operate without human oversight and could potentially cause significant harm.

 Our mission is to evaluate and communicate these risks to the public, policymakers, and business leaders, helping ensure the safe development and use of AI technologies.
What is the current reading on the AI Safety Clock?

A time for vigilance

Our latest assessment places the Clock 18 minutes from midnight, a reflection of significant advances in artificial intelligence across three key dimensions: sophistication, autonomy, and execution. Large language models grew more capable, more efficient, and more widely available, with four major frontier models released in a single 25-day span. Autonomous AI agents moved from experimental prototypes to mainstream enterprise deployments across major platforms, with Microsoft, Google, and GitHub all launching agent orchestration tools. AI systems became physically embodied in robots, embedded in critical infrastructure, and increasingly integrated into military applications. Meanwhile, the regulatory landscape diverged sharply: the European Union began enforcing the world’s first comprehensive AI law with substantial penalties, while the Trump administration pursued an aggressive military AI strategy, declaring the Pentagon an “AI-first warfighting force” and pressuring leading AI companies to remove safety guardrails from their frontier models.
 Featured whitepaper

 The AI Shift: Six Months That Redefined How We Live, Work, and Govern

 This report provides a comprehensive overview of global AI developments from October 2025 to March 2026, examining key trends across model sophistication, autonomous systems, physical AI, military applications, and the evolving regulatory landscape. 

 The Evolution of AI: A Timeline of Progress and Peril

 Why it matters 
 As AI rapidly advances, the risks increase

 The IMD AI Safety Clock takes a systematic 

... (truncated, 32 KB total)
Resource ID: da390c7cc819b788 | Stable ID: sid_CsZpKXywCW