Our claim
- Subject
- Future of Life Institute
- Value
- AI Safety Index published biannually (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails for catastrophic misuse.
- As Of
- December 2025
Source evidence
1 src · 1 check

Note: The claim is partially confirmed but contains numerical discrepancies: (1) The publication schedule (Summer 2025, Winter 2025) is confirmed. (2) The source states 35 indicators, not 33 as claimed; this is a direct contradiction. (3) The number of companies evaluated is 8 (Anthropic, OpenAI, Google DeepMind, xAI, Z.ai, Meta, DeepSeek, Alibaba Cloud), not 7 as claimed. (4) The 6 domains are confirmed. (5) The finding about inadequate guardrails for catastrophic misuse is supported by the source's statements that existential safety is 'the industry's core structural weakness' and that companies lack 'explicit plans for controlling or aligning such smarter-than-human technology,' though the exact phrasing 'no company has adequate guardrails for catastrophic misuse' is not verbatim in the source. The core claim is directionally correct but contains factual errors in the specific numbers.