Model Registries


Model registries represent a foundational governance tool for managing risks from advanced AI systems. Like drug registries that enable pharmaceutical regulation or aircraft registries that support aviation safety, AI model registries would create centralized databases containing information about frontier AI systems—their capabilities, training details, deployment contexts, and safety evaluations. This infrastructure provides governments with the visibility necessary to implement more sophisticated AI governance measures.

The policy momentum is significant. The U.S. Executive Order on AI (October 2023) mandated quarterly reporting for models trained above 10^26 FLOP. The EU AI Act requires registration of high-risk AI systems and general-purpose AI models. California’s SB 53 (signed September 2025) requires transparency reports and incident reporting for frontier models above 10^26 FLOP. New York’s RAISE Act requires incident reporting within 72 hours. These requirements create the skeleton of a registry system, though implementation remains fragmented and early-stage.

The strategic value of model registries lies in their enabling function. A registry alone doesn’t prevent harm—but it provides the information foundation for safety requirements, pre-deployment review, incident tracking, and international coordination. Without knowing what models exist and what capabilities they possess, governments cannot effectively regulate AI development. Model registries transform AI governance from reactive to proactive by creating visibility into the development pipeline before deployment.

Federal Level: The October 2023 Executive Order directed the Bureau of Industry and Security (BIS) to establish reporting requirements for advanced AI models. Under the proposed rule:

  • Entities must report models trained with >10^26 FLOP
  • Quarterly reporting on training activities
  • Six-month forward-looking projections required
  • Information includes ownership, compute access, safety testing
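
The compute trigger above can be estimated before a training run completes. A minimal sketch, assuming the standard C ≈ 6·N·D approximation for dense transformer training (6 FLOP per parameter per token); the thresholds are the US federal and EU figures discussed on this page, and the example model sizes are hypothetical:

```python
# Sketch: estimate whether a training run crosses the US federal
# reporting threshold (10^26 FLOP) or the EU threshold (10^25 FLOP),
# using the common C ~ 6 * N * D approximation for dense transformers.
# The example model below is illustrative, not an actual filing.

US_FEDERAL_THRESHOLD = 1e26  # FLOP, per the 2023 Executive Order
EU_THRESHOLD = 1e25          # FLOP, per the EU AI Act

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOP per parameter per token."""
    return 6.0 * params * tokens

def registration_duties(params: float, tokens: float) -> list[str]:
    """Which compute-threshold reporting regimes a run would trigger."""
    c = training_flop(params, tokens)
    duties = []
    if c >= EU_THRESHOLD:
        duties.append("EU: register with the AI Office")
    if c >= US_FEDERAL_THRESHOLD:
        duties.append("US federal: quarterly BIS reporting")
    return duties

# A hypothetical 400B-parameter model trained on 15T tokens:
print(training_flop(4e11, 1.5e13))        # 3.6e25 FLOP
print(registration_duties(4e11, 1.5e13))  # crosses the EU threshold only
```

Note how the 10x gap between the two thresholds plays out: this hypothetical run would be registrable in the EU but below the US federal trigger, the arbitrage window discussed above.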

State Level:

| State | Legislation | Key Requirements | Status |
|---|---|---|---|
| California | SB 53 | Transparency reports for models above 10^26 FLOP; 15-day incident reporting | Enacted Sep 2025; effective Jan 1, 2026 |
| New York | RAISE Act | 72-hour incident reporting; safety protocol publication; civil penalties up to $1M | Enacted 2024 |
| Colorado | SB 24-205 | High-risk AI system registration; algorithmic impact assessments | Enacted May 2024 |

The EU AI Act (Regulation EU 2024/1689), which entered into force August 1, 2024, establishes the most comprehensive registry requirements to date:

  • General-Purpose AI Models: Registration with EU AI Office if trained above 10^25 FLOP
  • High-Risk AI Systems: Registration in EU database before market placement
  • Systemic Risk Models: Additional transparency and safety requirements
  • Required Information: Technical documentation, compliance evidence, intended use

The EU database will be publicly accessible for high-risk AI systems, with confidential technical documentation available only to regulators. Per Article 49, providers must register themselves and their systems before placing high-risk AI systems on the market. High-risk obligations phase in between August 2026 and August 2027.

China has implemented registration requirements since 2023 under the Interim Measures for Generative AI Services:

  • Deep synthesis (deepfake) algorithms must register with CAC
  • Generative AI services require registration before public offering
  • Algorithmic recommendation services subject to separate registry
  • As of November 2025, 611 generative AI services and 306 apps had completed filing
  • Apps must publicly disclose which filed model they use, including filing number
  • Focus on content moderation and political sensitivity

| Jurisdiction | Compute Threshold | Pre/Post Deployment | Public Access | Penalties |
|---|---|---|---|---|
| US Federal | 10^26 FLOP | Pre + ongoing | Limited (security) | Under development |
| California | 10^26 FLOP | Pre-deployment | Transparency reports public | Up to $1M/violation |
| New York | Scale-based | Pre + incidents | Protocols public | Up to $1M |
| EU | 10^25 FLOP | Pre-market | Partial | Up to 7% revenue |
| China | Any public AI | Pre-deployment | Limited | Service suspension |
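
The comparison above can be read as a simple lookup. A sketch, assuming training FLOP is the only trigger for the compute-threshold regimes and deliberately simplifying New York's scale-based test and China's service-based rule (the `applicable_regimes` helper is hypothetical):

```python
# Sketch: the jurisdiction comparison as data. Thresholds follow the
# table above; New York's "scale-based" regime is skipped because it is
# not a pure FLOP trigger, and China's rule is modeled as applying to
# any publicly offered AI service.
THRESHOLDS = {
    "US Federal": 1e26,
    "California": 1e26,
    "EU": 1e25,
    "New York": None,   # scale-based, needs a separate assessment
    "China": 0.0,       # any publicly offered AI service
}

def applicable_regimes(training_flop: float, public_service: bool) -> list[str]:
    """Return the registry regimes a given deployment would trigger."""
    regimes = []
    for jurisdiction, threshold in THRESHOLDS.items():
        if threshold is None:
            continue  # not decidable from FLOP alone
        if jurisdiction == "China":
            if public_service:
                regimes.append(jurisdiction)
        elif training_flop >= threshold:
            regimes.append(jurisdiction)
    return regimes

print(applicable_regimes(3e25, public_service=True))
# a hypothetical 3e25 FLOP public model triggers the EU and China
# regimes, but not the 1e26 US federal/California thresholds
```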

| Benefit | Mechanism | Confidence |
|---|---|---|
| Visibility for governance | Know what exists before regulating | High |
| Incident learning | Track failures across the ecosystem | High |
| Pre-deployment review | Enable safety checks before release | Medium-High |
| International coordination | Common information standards | Medium |
| Enforcement foundation | Can’t enforce rules without knowing who to apply them to | High |
| Research ecosystem support | Aggregate data for policy research | Medium |

| Challenge | Description | Mitigation |
|---|---|---|
| Threshold gaming | Developers structure training to avoid thresholds (research shows model distillation and mixture-of-agents approaches can achieve frontier performance below thresholds) | Multiple thresholds; capability-based triggers |
| Dual-use concerns | Registry information could advantage competitors/adversaries | Tiered access; confidentiality provisions |
| Open-source gap | Registries focus on centralized developers | Post-release monitoring; community registries |
| Enforcement difficulty | Verifying submitted information is accurate | Auditing; whistleblower protections |
| Rapid obsolescence | Thresholds outdated as technology advances | Automatic update mechanisms; sunset provisions |
| International gaps | No global registry; jurisdiction shopping | International coordination (nascent) |

Model registries are necessary but not sufficient for AI governance: they enable, but do not replace, safety requirements, pre-deployment evaluations, and enforcement mechanisms.

For jurisdictions establishing initial AI model registries:

  1. Compute-based threshold: 10^25-10^26 FLOP (adjustable)

  2. Pre-deployment notification: 30-90 days before public release

  3. Required information:

    • Developer identity and contact
    • Training compute and data sources (categorical)
    • Intended use cases and deployment scope
    • Safety evaluation summary
    • Known risks and mitigations
  4. Incident reporting: 72 hours for critical harms

  5. Annual updates: Mandatory refresh of all information

  6. Tiered access: Public summary + confidential technical details
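
The six design elements above can be sketched as a minimal registry record. This is an illustrative schema, not drawn from any existing statute; field names, the `RegistryEntry` class, and the `incident_report_due` helper are all assumptions:

```python
# Sketch of a registry record implementing the baseline design above:
# pre-deployment notification window, required fields, 72-hour incident
# reporting, and tiered public/confidential access.
from dataclasses import dataclass
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(days=30)   # lower bound of the 30-90 day window
INCIDENT_DEADLINE = timedelta(hours=72)    # critical-harm reporting

@dataclass
class RegistryEntry:
    developer: str              # identity and contact
    training_flop: float        # checked against the compute threshold
    data_sources: list[str]     # categorical, not raw data
    intended_use: str
    safety_summary: str         # public tier
    technical_docs: str         # confidential tier (regulators only)
    known_risks: list[str]
    filed_at: datetime
    planned_release: datetime

    def notification_ok(self) -> bool:
        """Was the filing made at least 30 days before public release?"""
        return self.planned_release - self.filed_at >= NOTIFICATION_WINDOW

    def public_view(self) -> dict:
        """Tiered access: public summary only, confidential docs withheld."""
        return {"developer": self.developer,
                "intended_use": self.intended_use,
                "safety_summary": self.safety_summary}

def incident_report_due(occurred_at: datetime) -> datetime:
    """Deadline for reporting a critical harm (72 hours after occurrence)."""
    return occurred_at + INCIDENT_DEADLINE
```

The tiered `public_view` illustrates the design principle that dual-use details stay with regulators while a summary remains publicly accessible.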

Based on analysis by Convergence Analysis and the Institute for Law & AI:

| Principle | Rationale | Implementation |
|---|---|---|
| Minimal burden | Encourage compliance, reduce resistance | Require only information developers already track |
| Interoperable | Enable international coordination | Align with emerging international standards |
| Updatable | Technology changes faster than regulation | Built-in mechanism for threshold adjustment |
| Complementary | Registry enables other tools, doesn’t replace them | Design for integration with safety requirements |
| Proportionate | Different requirements for different risk levels | Tiered obligations based on capability/deployment |

Don’t:

  • Set thresholds so high only 2-3 models qualify (too narrow)
  • Require disclosure of trade secrets unnecessarily (industry opposition)
  • Create registry without enforcement mechanism (toothless)
  • Assume static thresholds will remain appropriate (obsolescence)
  • Ignore international coordination from the start (jurisdiction shopping)

Outlook:

  • California SB 53 effective January 2026 (transparency reports, incident reporting)
  • EU high-risk AI database operational (August 2026-2027 compliance deadlines)
  • GovAI forecasts 103-306 models exceeding 10^25 FLOP (EU threshold) by 2028
  • 5-10 jurisdictions with some form of registry
  • Initial international coordination discussions
  • Potential international registry framework
  • Capability-based triggers supplement compute thresholds
  • Integration with compute monitoring
  • Real-time incident reporting systems
  • Cross-border data sharing agreements

| Question | Optimistic Scenario | Pessimistic Scenario |
|---|---|---|
| International coordination | Common standards, shared database | Fragmented, incompatible systems |
| Enforcement effectiveness | High compliance, meaningful oversight | Widespread evasion, symbolic only |
| Open-source coverage | Community registries, post-release tracking | Unmonitored proliferation |
| Threshold relevance | Adaptive thresholds track real risks | Outdated, easily gamed |

| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Active legislation in multiple jurisdictions |
| If AI risk high | High | Essential infrastructure for any governance |
| If AI risk low | Medium | Still useful for transparency and accountability |
| Neglectedness | Low-Medium | Active policy area but implementation gaps |
| Timeline to impact | 1-3 years | Requirements taking effect 2025-2026 |
| Grade | B+ | Foundational but not transformative alone |

| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing dynamics | Visibility into development timelines | Low-Medium |
| Misuse risks | Know what capabilities exist | Medium |
| Regulatory arbitrage | Harmonized international requirements | Low (currently) |
| Incident learning gaps | Mandatory reporting creates database | Medium-High |

  • Compute Governance - Hardware-based verification complements software registration
  • Export Controls - Control inputs to models in registry
  • AI Safety Institutes - Institutions to review registered models
  • Responsible Scaling Policies - Industry commitments that registries can verify
  • US Executive Order 14110 (October 2023): “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” - Established 10^26 FLOP reporting threshold
  • EU AI Act (2024): Regulation (EU) 2024/1689 - Article 49 covers registration requirements
  • California SB 53 (2025): Transparency in Frontier Artificial Intelligence Act - First US state frontier AI safety law; effective January 2026
  • New York RAISE Act (2024): Requiring AI Safety and Excellence - 72-hour incident reporting

Model registries improve the AI Transition Model through Civilizational Competence:

| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | Provides information foundation for any governance interventions |
| Civilizational Competence | Institutional Quality | Enables pre-deployment review and incident learning |
| Civilizational Competence | International Coordination | Common standards facilitate cross-border coordination |

Registries are necessary but not sufficient infrastructure; they enable rather than replace safety requirements, evaluations, and enforcement mechanisms.