Longterm Wiki

Early Best Practices for Frontier AI Safety Evaluations

web

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

The Frontier Model Forum is an industry consortium of leading AI labs; this page indexes their official publications on safety evaluation best practices, biosafety, cyber risk, and AI governance, making it a key reference for tracking industry-led safety norms.

Metadata

Importance: 62/100 · organizational report · homepage

Summary

This is the publications index of the Frontier Model Forum (FMF), an industry body comprising leading AI labs. It compiles the FMF's issue briefs, technical reports, research updates, and public comments on frontier AI safety. Topics span safety evaluations, biosafety thresholds, red teaming, cyber risks, compute measurement, and AI governance frameworks, making the page a central hub for the FMF's evolving best practices and policy contributions.

Key Points

  • Hosts a growing library of issue briefs on frontier AI safety topics including biosafety thresholds, red teaming, chain-of-thought monitorability, and adversarial distillation.
  • Technical reports cover capability assessments, frontier mitigations, third-party assessments, and cyber risk management in AI frameworks.
  • Includes public comments submitted to US government bodies (NIST, CAISI, AISI) on AI safety standards and policy.
  • Represents collaborative output from major frontier AI developers (Anthropic, Google, Microsoft, OpenAI) on shared safety norms.
  • Publications date from 2023 onward, reflecting the rapid evolution of industry-led AI safety evaluation practices.

Cited by 2 pages

Page | Type | Quality
Frontier AI Labs (Overview) | - | 85.0
AI Lab Safety Culture | Approach | 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 3 KB
Publications - Frontier Model Forum

Publications

 

Issue Briefs

Adversarial Distillation. February 23, 2026.

Chain of Thought Monitorability. January 27, 2026.

Preliminary Taxonomy of AI-Bio Misuse Mitigations. July 30, 2025.

Frontier AI Biosafety Thresholds. May 12, 2025.

Preliminary Reporting Tiers for AI-Bio Safety Evaluations. March 18, 2025.

Thresholds for Frontier AI Safety Frameworks. February 7, 2025.

Preliminary Taxonomy of AI-Bio Safety Evaluations. December 20, 2024.

Preliminary Taxonomy of Pre-Deployment Frontier AI Safety Evaluations. December 20, 2024.

AI for Cyber Defense. November 22, 2024.

Components of Safety Frameworks. November 8, 2024.

Early Best Practices for Frontier AI Safety Evaluations. July 31, 2024.

Foundational Security Practices. July 31, 2024.

Measuring Training Compute. May 2, 2024.

What is Red Teaming? October 24, 2023.

 

Technical Reports

Managing Advanced Cyber Risks in Frontier AI Frameworks. February 13, 2026.

Third-Party Assessments. August 4, 2025.

Frontier Mitigations. June 30, 2025.

Risk Taxonomy and Thresholds for Frontier AI Frameworks. June 18, 2025.

Frontier Capability Assessments. April 22, 2025.

 

Research Updates

Frontier AI and Nuclear Security. January 30, 2026.

 

Public Comments

FMF Response to the US CAISI Request for Information Regarding Security Considerations of AI Agents. March 9, 2026.

FMF Comment on NIST Outline of Draft TEVV Standard. September 12, 2025.

FMF Response to the Request for Information on the Development of an AI Action Plan. March 14, 2025.

FMF Comment on US AISI RFI: Safety Considerations for Chemical and/or Biological AI Models. December 3, 2024.

FMF Response to Managing Misuse Risk for Dual-Use Models, NIST Report 800-1. September 9, 2024.

 

Latest News

Progress Update: FMF Information-Sharing of Frontier AI Threats and Vulnerabilities. February 16, 2026.

Annual Letter from the Executive Director. December 22, 2025.

Announcement of New AI Safety Fund Grantees. December 11, 2025.

Latest from the FMF: Grant-Making to Address AI-Bio Risk Challenges. June 13, 2025.

Introducing the FMF’s Technical Report Series on Frontier AI Safety Frameworks. April 22, 2025.

FMF Announces First-Of-Its-Kind Information-Sharing Agreement. March 28, 2025.

Progress Update: Advancing Frontier AI Safety in 2024 and Beyond. August 29, 2024.

Amazon and Meta join the Frontier Model Forum to promote AI safety. May 20, 2024.

AI Safety Fund Initiates First Round of Research Grants. April 1, 2024.

FMF Joins USAISI Consortium as a Founding Member. February 8, 2024.

Year in Review: Building a Safer Future Together. December 21, 2023.

Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum. October 25, 2023.

Introducing the Frontier Model Forum. July 26, 2023.
Resource ID: 5329d38ad33971ff | Stable ID: sid_mmcAXcOtkJ