Longterm Wiki

British lawmakers accuse Google of 'breach of trust' over delayed Gemini 2.5 Pro safety report


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Fortune

Data Status

Full text fetched Dec 28, 2025

Summary

A group of 60 U.K. lawmakers criticized Google DeepMind for failing to fully disclose safety information about its Gemini 2.5 Pro AI model, despite prior commitments. Their open letter argues the company did not provide comprehensive details of model testing.

Key Points

  • Google DeepMind accused of not fulfilling Frontier AI Safety Commitments
  • Lawmakers demand more transparency in AI model safety reporting
  • Delayed and minimal model card raises concerns about AI safety practices

Review

The source highlights a growing tension between rapid AI development and safety transparency, focusing on Google DeepMind's alleged failure to meet AI safety reporting standards it had previously agreed to. The lawmakers' open letter criticizes the company for releasing Gemini 2.5 Pro without a comprehensive model card and detailed safety evaluations, both promised at an international AI safety summit in 2024. The incident reveals a broader challenge in AI governance: major tech companies appear to treat voluntary safety commitments as optional. The letter demands more rigorous and timely safety reporting, including clear definitions of deployment, consistent safety evaluation reports, and full transparency about testing processes. The case underscores the need for robust, enforceable mechanisms to hold AI developers accountable and keep safety a priority throughout model development and deployment.
Resource ID: d6d4a28f28ba4170 | Stable ID: OTA0M2M4Yj