METR

METR (Model Evaluation and Threat Research), formerly known as ARC Evals, is an organization dedicated to evaluating frontier AI models for dangerous capabilities before deployment.

Facts

Website: https://metr.org

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

Dangerous Capability Evaluations
Responsible Scaling Policies
Sandboxing / Containment

Analysis

AI Compute Scaling Metrics
AI Safety Intervention Effectiveness Matrix

Policy

Voluntary AI Safety Commitments
AI Safety Institutes (AISIs)

Organizations

Alignment Research Center (ARC)

Other

AI Evaluations
Beth Barnes
Scalable Oversight

Risks

Reward Hacking
AI-Induced Enfeeblement

Concepts

AI Timelines
Existential Risk from AI
Situational Awareness
Self-Improvement and Recursive Enhancement

Key Debates

AI Safety Solution Cruxes
Technical AI Safety Research