Longterm Wiki

Frontier Model Forum - New AI Safety Fund Grantees

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

This announcement from the Frontier Model Forum (a consortium of major AI labs) is relevant to understanding industry-level coordination on AI safety funding and which research areas are being prioritized by leading AI companies.

Metadata

Importance: 45/100
Tags: press release, news

Summary

The Frontier Model Forum announces new grantees for its AI Safety Fund, which supports independent research into AI safety challenges. The fund, established by major AI labs including Anthropic, Google, Microsoft, and OpenAI, aims to advance technical and governance research to make frontier AI systems safer. This announcement highlights specific research projects and organizations receiving funding.

Key Points

  • The AI Safety Fund is a joint initiative by leading frontier AI companies to pool resources for independent safety research.
  • Grants support a range of technical AI safety research areas including evaluation, red-teaming, and alignment techniques.
  • The announcement reflects industry-level coordination on funding safety research outside of individual company labs.
  • Grantees represent academic institutions and independent research organizations working on frontier AI safety problems.
  • The fund represents an attempt by industry to demonstrate commitment to responsible AI development through external research investment.

Cited by 1 page

Page: Frontier Model Forum
Type: Organization
Quality: 58.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
Announcement of New AI Safety Fund Grantees - Frontier Model Forum 
By: Frontier Model Forum
Posted on: 11 December 2025

 
Today we are announcing a new cohort of 11 grantees who have received more than $5 million through the AI Safety Fund (AISF). As frontier AI systems become more powerful and widely deployed, advancing our understanding of them and building robust safety tools is essential, which is why the AISF issued several requests for proposals late last year in Biosecurity and Cybersecurity, as well as AI Agent Evaluation and Synthetic Content.

The funded projects span diverse approaches to frontier AI safety and security:

  • Apollo Research, Building black box scheming monitors for Frontier AI agents
  • California Institute of Technology, AI-driven Detection of Protein Mimetic Biothreats with BioSentinel
  • Institute for Decentralized AI (part of Cosmos Institute), Scalable, Decentralized Oversight for Multi-Agent Networks
  • Faculty AI, Automated Red-Teaming for Biosecurity Risks
  • FAR.AI, Quantifying the Safety-Adversary Gap in Large Language Models
  • FutureHouse, Inc., Pioneering AI-Driven Experimental Design: Benchmarks for Responsible Innovation
  • Morgan State University, Evaluating AI-Assisted Cybersecurity Operations: Comparative Analysis of Human Performance with and without AI Tools
  • Nemesys Insights LLC, ICS Benchmark and Human Uplift Study
  • SecureBio, Evaluations to Assess Agent AIs' Execution of Tasks That Could Enable Large-scale Harm
  • University of Illinois Urbana-Champaign, Cybersecurity Risk Evaluations of AI Agents with Computer Interaction Capabilities
  • University of Toronto, Analyzing the Emergent Role of Sanctioning in Regulating Multi-Agent LLM Systems

The projects were selected from more than 100 competing proposals through a rigorous review process. As with the initial cohort of grantees, we are excited to support each of the AISF recipients and look forward to their scientific contributions and impact.

Update on the AI Safety Fund

With over $10 million in funding, the AISF was established in late 2023 as a collaborative initiative among leading AI developers and philanthropic partners. It aims to support and expand the field of AI safety research in order to promote the responsible development and deployment of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

The Meridian Institute initially managed and oversaw the fund. Support for the AISF came from the founding members of the Frontier Model Forum (FMF) — Anthropic, Google, Microsoft, and OpenAI — as well as philanthropic partners such as the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Schmidt Sciences, and Jaan Tallinn.

 After the Meridian Institute announced in June 2025 that it 

... (truncated, 4 KB total)
Resource ID: e159959847bc87f7 | Stable ID: sid_nSzDumnrmZ