Frontier AI Labs (Overview)
Overview
Frontier AI labs are the organizations developing the most capable AI systems. Their technical decisions, safety practices, and competitive dynamics shape the trajectory of AI development and the landscape of AI risk. As of early 2026, a small number of labs, primarily US-based, dominate frontier model development, with their combined AI capital expenditure exceeding $300B annually.
Major Frontier Labs
| Lab | Founded | Key Models | Safety Approach | Structure |
|---|---|---|---|---|
| OpenAI | 2015 | GPT series, o-series | Preparedness Framework, red-teaming | Public benefit corporation (transitioned from nonprofit/capped-profit structure) |
| Anthropic | 2021 | Claude series | Responsible Scaling Policy, Constitutional AI | Public benefit corporation |
| Google DeepMind | 2010/2023 | Gemini series | Frontier Safety Framework | Division of Alphabet |
| xAI | 2023 | Grok series | Minimal public safety commitments | Private company |
| Meta AI (FAIR) | 2013 | Llama series | Open-weight release approach | Division of Meta |
| Microsoft | — | Copilot, Phi series | Partnership with OpenAI, internal safety teams | Public corporation |
| SSI (Safe Superintelligence Inc.) | 2024 | None yet | Safety-first mission statement | Private startup |
| Bridgewater AIA Labs | 2024 | None public | AI-augmented decision-making focus | Subsidiary of Bridgewater Associates |
Competitive Dynamics
The frontier AI landscape is characterized by intense competition:
- Racing dynamics: Labs face pressure to release capabilities quickly, potentially at the expense of safety testing
- Talent competition: A small pool of ML researchers with frontier model experience moves between labs
- Compute arms race: Labs are securing increasingly large compute clusters, with individual training runs exceeding $1B
- Open vs. closed: Meta releases open-weight models, while Anthropic and OpenAI keep weights proprietary
Safety Commitments
Labs vary significantly in their safety commitments:
- Responsible Scaling Policies: Anthropic pioneered this framework; OpenAI and DeepMind have adopted similar approaches
- Voluntary Industry Commitments: The Biden administration secured commitments from major labs in 2023
- Frontier Model Forum: Industry consortium for safety research collaboration
- Pre-deployment testing: All major labs now conduct some form of red-teaming and dangerous capability evaluations before release, though thoroughness varies
Revenue and Sustainability
Frontier AI labs face a fundamental tension between the massive capital requirements of training and running frontier models and the need to generate revenue. OpenAI leads in consumer revenue through ChatGPT, while Anthropic focuses on enterprise and API revenue. The gap between AI capital expenditure and AI revenue across the industry remains large.