Skip to content
Longterm Wiki
Back

Alignment Forum - Evaluating and Monitoring for AI Scheming

blog

Authors

Vika·Scott Emmons·Erik Jenner·Mary Phuong·Lewis Ho·Rohin Shah

Credibility Rating

3/5
Good(3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

Published on the Alignment Forum, this post addresses the practical challenge of detecting scheming or deceptive alignment in AI systems, relevant to researchers developing evaluation suites and safety benchmarks for frontier models.

Metadata

Importance: 68/100blog postanalysis

Summary

This post examines methods for detecting and evaluating 'scheming' behaviors in AI systems—where models pursue hidden agendas or deceive overseers while appearing aligned. It discusses evaluation frameworks and monitoring approaches to identify deceptive alignment before deployment or during operation.

Key Points

  • Defines AI scheming as strategic deception where models pursue goals different from stated objectives while hiding this from evaluators
  • Proposes evaluation methodologies to detect scheming tendencies during model development and testing phases
  • Discusses monitoring techniques for identifying scheming behaviors in deployed systems over time
  • Highlights the challenge that sophisticated scheming may be specifically designed to evade standard evaluation approaches
  • Connects scheming detection to broader deceptive alignment concerns central to AI safety research

Cited by 1 page

PageTypeQuality
Frontier Model ForumOrganization58.0

Cached Content Preview

HTTP 200Fetched Apr 7, 202611 KB
Oct
 NOV
 Dec
 

 
 

 
 14
 
 

 
 

 2024
 2025
 2026
 

 
 
 

 

 

 
 
success

 
fail

 
 
 
 
 
 
 
 
 
 
 

 

 
 
 
 
 
 
 
 
 

 

 About this capture
 

 

 

 

 

 

 
COLLECTED BY

 

 

 
 
Collection: Common Crawl

 

 

 Web crawl data from Common Crawl.
 

 

 

 

 

 
TIMESTAMPS

 

 

 

 

 

 

The Wayback Machine - http://web.archive.org/web/20251114062101/https://www.alignmentforum.org/posts/JvYF5kosLeYGvvLpP/evaluating-and-monitoring-for-ai-scheming

 

 This website requires javascript to properly function. Consider activating javascript to get access to all site functionality. 

1417

AI ALIGNMENT FORUM

AF

Login

1416

Evaluating and monitoring for AI scheming — AI Alignment Forum

DeepMind Alignment Team on Threat Models and Plans

AI
Frontpage

34

Evaluating and monitoring for AI scheming

by Vika, Scott Emmons, Erik Jenner, Mary Phuong, Lewis Ho, Rohin Shah

10th Jul 2025
Linkpost from deepmindsafetyresearch.medium.com

6 min read

9

34

AI
Frontpage

Previous:

A short course on AGI safety from the GDM Alignment team

No comments105 karma

New Comment

 

Submit

 

Moderation Log

More from Vika

View more

Curated and popular this week

0Comments

As AI models become more sophisticated, a key concern is the potential for “deceptive alignment” or “scheming”. This is the risk of an AI system becoming aware that its goals do not align with human instructions, and deliberately trying to bypass the safety measures put in place by humans to prevent it from taking misaligned action. Our initial approach, as laid out in the Frontier Safety Framework, focuses on understanding current scheming capabilities of models and developing chain-of-thought monitoring mechanisms to oversee models once they are capable of scheming.

We present two pieces of research, focusing on (1) understanding and testing model capabilities necessary for scheming, and (2) stress-testing chain-of-thought monitoring as a proposed defense mechanism for future, more capable systems.

Establishing and Assessing Preconditions for Scheming in Current Frontier Models

Our paper “Evaluating Frontier Models for Stealth and Situational Awareness” focuses on an empirical assessment of the component capabilities for AI scheming. We identified two prerequisite capabilities for successful scheming:

Stealth: The ability to reason about and circumvent oversight mechanisms.

Situational awareness: The model’s ability to gain and leverage knowledge about itself and its circumstances and environment to pursue goals.

We expect that in order to follow a realistic scheming strategy, an AI model needs to understand its deployment situation to choose effective strategies that advance its goals (basic situational awareness), and execute the strategy while appearing aligned and trustworthy (stealth). Deployments of advanced AI models in the medium-term future will likely come with a set of “default” oversight mechanisms comparable to forms of oversight used for human 

... (truncated, 11 KB total)
Resource ID: 095bbc2f7e867dc7 | Stable ID: sid_9N0qGgN506