AI Lab Safety Culture
Analysis of interventions to improve safety culture within AI labs. Evidence from 2024-2025 shows significant gaps: no company scored above a C+ overall (FLI Winter 2025), all companies received a D or below on existential safety, and xAI released Grok 4 without any safety documentation.
Related Pages
AI Whistleblower Protections
Legal and institutional frameworks for protecting AI researchers and employees who report safety concerns.
Anthropic
An AI safety company founded by former OpenAI researchers that develops frontier AI models while pursuing safety research, including the Claude mod...
AI Safety Institutes (AISIs)
Government-affiliated technical institutions evaluating frontier AI systems, with the UK/US institutes having secured pre-deployment access to mode...
AI Development Racing Dynamics
Competitive pressure driving AI development faster than safety can keep up, creating prisoner's dilemma situations where actors cut safety corners ...
Frontier Model Forum
Industry-led non-profit organization promoting self-governance in frontier AI safety through collaborative frameworks, research funding, and best p...