Contributes to: Governance Capacity
Primary outcomes affected:
- Steady State ↓↓ — Quality institutions preserve democratic governance in the long term
- Transition Smoothness ↓ — Effective institutions manage disruption and maintain legitimacy
Institutional Quality measures the health and effectiveness of the institutions involved in AI governance: their independence from capture, their ability to retain expertise, and the quality of their decision-making processes. Higher quality is better, since it determines whether AI governance serves the public interest or narrow constituencies. Where regulatory capacity asks whether governments can regulate, institutional quality asks whether they will do so effectively.
Funding structures, personnel practices, transparency norms, and the balance of power between regulated industries and oversight bodies all shape whether institutional quality improves or degrades. High quality enables governance that genuinely serves the public interest; low quality produces capture, in which institutions that nominally serve the public instead advance industry interests.
| Metric | Current Value | Baseline/Comparison | Trend |
|---|---|---|---|
| Industry-academic co-authorship | 85% of AI papers (2024) | 50% (2010) | Increasing |
| AI PhD graduates entering industry | 70% (2024) | 20% (two decades ago) | Strongly increasing |
| Largest AI models from industry | 96% (current) | Unknown (2010) | Increasing |
| Regulatory-industry resource ratio | 600:1 (~$100B vs. $150M) | N/A for previous technologies | Unprecedented |
| US AISI budget request vs. received | $47.7M requested, ~$10M received | N/A (new institution) | Underfunded |
| OpenAI lobbyist count | 18 (2024) | 3 (2023) | 6x increase |
| AISI direction reversals | 1 major (AISI to CAISI, 2025) | 0 (new institutions) | Concerning |
| Revolving door in AI-related sectors | 53% of AI-sector lobbyists are former government officials | Unknown baseline | Accelerating |
Sources: MIT Sloan AI research study, OpenSecrets lobbying data, CSIS AISI analysis, Stanford HAI Tracker
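As a rough check on the resource-asymmetry figure (both inputs are order-of-magnitude estimates, so the ratio should be read as "hundreds to one" rather than as a precise value):

$$
\frac{\text{industry AI R\&D}}{\text{total regulatory budgets}} \approx \frac{\$100{,}000\ \text{M}}{\$150\ \text{M}} \approx 667
$$

The tables here round this conservatively to ~600:1.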
| Institution | Funding Source | Industry Ties | Independence Rating | 2025 Budget |
|---|---|---|---|---|
| UK AI Security Institute | Government | Voluntary lab cooperation | Medium-High | £50M (~$65M) annually |
| US CAISI (formerly AISI) | Government | Refocused toward innovation (2025) | Medium (declining) | ~$10M received ($47.7M requested) |
| EU AI Office | EU budget | Enforcement mandate | High | ~€10M (estimated) |
| Academic AI safety research | 60-70%+ industry-funded | Strong | Low-Medium | Variable |
| Think tanks | Mixed (industry, philanthropy) | Variable | Variable | Variable |
Note: UK AISI has the largest national AI safety budget globally; US underfunding creates an expertise gap. Sources: CSIS AISI Network analysis, All Tech Is Human landscape report
Healthy institutional quality in AI governance would exhibit characteristics that enable independent, expert, and accountable decision-making in the public interest.
| Characteristic | Current Status | Gap |
|---|---|---|
| Independence from capture | Resource asymmetry enables industry influence | Large |
| Expertise retention | Compensation gaps of 50-80% vs. industry | Very large |
| Transparent processes | Variable; some institutions opaque | Medium |
| Long-term orientation | Political volatility undermines planning | Large |
| Adaptive capacity | Multi-year regulatory timelines | Large |
| Accountability mechanisms | Limited for AI-specific governance | Medium-Large |
The 2024 RAND/AAAI study "How Do AI Companies 'Fine-Tune' Policy?" interviewed 17 AI policy experts to identify key capture mechanisms. The study found agenda-setting (mentioned by 15 of 17 experts), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as primary channels for industry influence.
| Capture Mechanism | How It Works | Current Evidence | Impact on Quality |
|---|---|---|---|
| Agenda-setting | Industry shapes which issues receive attention | Framing AI policy as "innovation vs. regulation"; capture of policy discourse | High—determines what gets regulated |
| Advocacy and lobbying | Direct influence through campaign contributions, meetings | OpenAI: 3→18 lobbyists (2023-2024); 53% of sector lobbyists are ex-government | High—direct policy influence |
| Academic capture | Industry funding shapes research priorities and findings | 85% of AI papers have industry co-authors; 70% of PhDs enter industry | Very High—captures expertise production |
| Information management | Industry controls access to data needed for regulation | Voluntary model evaluations; proprietary benchmarks; 29x compute advantage | Critical—regulators depend on industry data |
| Cultural capture | Industry norms become regulatory norms | "Move fast" culture; "innovation-first" mindset in agencies | Medium-High—shapes institutional values |
| Media capture | Industry shapes public discourse through PR and funding | Tech media dependence on company access; sponsored content | Medium—affects public pressure on regulators |
| Resource asymmetry | Industry outspends regulators 600:1 | $100B+ industry R&D vs. $150M total regulatory budgets | Critical—enables all other mechanisms |
Sources: RAND regulatory capture study, MIT Sloan industry dominance analysis, OpenSecrets lobbying data
| Threat | Mechanism | Evidence |
|---|---|---|
| Mission reversal | New administrations redirect institutional priorities | AISI to CAISI (2025): safety evaluation to innovation promotion; EO 14110 revoked |
| Budget manipulation | Funding cuts undermine institutional capacity | US AISI requested $47.7M; received ~$10M (21% of request); NIST forced to "cut to the bone" |
| Leadership churn | Political appointees depart with administrations | Elizabeth Kelly (AISI director) resigned February 2025; typical 18-24 month tenure for political appointees |
Sources: FedScoop NIST budget analysis, CSIS AISI recommendations
| Threat | Mechanism | Evidence |
|---|---|---|
| Compensation gap | Government cannot compete with industry salaries | Government pay estimated 50-80% below industry for comparable roles; top AI researchers can earn 5-10x more in industry than in government |
| Career incentives | Best career path is government-to-industry transition | 70% of AI PhDs now enter industry; revolving door provides lucrative exit opportunities |
| Capability gap | Industry technical capacity exceeds regulators | Industry invests $100B+ in AI R&D annually; industry models 29x larger than academic models on average; 96% of largest models now from industry |
| Computing resource asymmetry | Academic institutions lack large-scale compute for frontier research | Forces academic researchers into industry collaborations; creates dependence on company resources |
Sources: MIT Sloan AI research dominance, RAND regulatory capture mechanisms
| Independence Factor | Mechanism | Status |
|---|---|---|
| Independent funding | Insulate budgets from political interference | Limited—most AI governance dependent on annual appropriations |
| Cooling-off periods | Limit revolving door with waiting periods | Varies by jurisdiction; often weakly enforced |
| Transparency requirements | Public disclosure of industry contacts and influence | Increasing but inconsistent |
| Expertise Factor | Mechanism | Status |
|---|---|---|
| Academic partnerships | Universities supplement government expertise | Growing—NIST AI RMF community of 6,500+ participants |
| Technical fellowship programs | Bring industry expertise into government | Limited scale |
| International cooperation | Share evaluation methods across AISI network | Building—first joint evaluations completed |
| Accountability Factor | Mechanism | Status |
|---|---|---|
| Congressional oversight | Legislative review of agency actions | Inconsistent for AI-specific issues |
| Civil society monitoring | NGOs track and publicize capture | Active—AI Now, Future of Life, etc. |
| Judicial review | Courts can overturn captured decisions | Available but rarely invoked for AI |
The 2024 RAND/AAAI study on regulatory capture also identified systemic changes needed to improve institutional quality. Drawing on the same 17 expert interviews, it recommends:
| Mitigation Strategy | Mechanism | Implementation Difficulty | Estimated Effectiveness |
|---|---|---|---|
| Develop technical expertise in government | Competitive salaries, fellowship programs, training | High—requires sustained funding | High (20-40% improvement) |
| Develop technical expertise in civil society | Fund independent research organizations and watchdogs | Medium—philanthropic support available | Medium-High (15-30% improvement) |
| Create independent funding streams | Insulate AI ecosystem from industry dependence | Very High—requires new institutions | Very High (30-50% improvement) |
| Increase transparency and ethics requirements | Disclosure of industry funding, conflicts of interest | Medium—can be legislated | Medium (10-25% improvement) |
| Enable greater civil society access to policy | Open comment periods, public advisory boards | Low-Medium—procedural changes | Medium (15-25% improvement) |
| Implement procedural safeguards | Cooling-off periods, recusal requirements, lobbying limits | Medium—political resistance | Medium-High (20-35% improvement) |
| Diversify academic funding | Government and philanthropic grants for AI safety research | High—requires hundreds of millions annually | High (25-40% improvement) |
Effectiveness estimates represent expert judgment on potential reduction in capture influence if fully implemented. Most strategies show compound effects when combined. Source: RAND regulatory capture study
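To make the note on compounding concrete, the sketch below (Python) models one simple way combined effectiveness could behave, assuming each strategy independently removes a fraction of the remaining capture influence. The effect sizes are range midpoints from the table above; the multiplicative independence assumption is illustrative, not a finding of the RAND study.

```python
# Illustrative model of compounding mitigation effects: each strategy is
# assumed to remove a fraction of the capture influence that remains after
# the previous strategies are applied. Effect sizes are range midpoints
# from the table above.
strategies = {
    "government technical expertise": 0.300,   # 20-40%
    "civil society expertise": 0.225,          # 15-30%
    "independent funding streams": 0.400,      # 30-50%
    "transparency and ethics rules": 0.175,    # 10-25%
    "civil society policy access": 0.200,      # 15-25%
    "procedural safeguards": 0.275,            # 20-35%
    "diversified academic funding": 0.325,     # 25-40%
}

residual = 1.0
for name, effect in strategies.items():
    residual *= 1.0 - effect  # each strategy cuts a share of what remains

print(f"Residual capture influence: {residual:.1%}")  # ~10.5%
print(f"Combined reduction: {1.0 - residual:.1%}")    # ~89.5%
```

Under these assumptions a full portfolio removes roughly 90% of capture influence, far more than any single strategy. Real strategies overlap, so the true combined effect would be smaller, but the qualitative point (portfolios beat single reforms) survives heavy discounting.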
| Domain | Impact | Severity |
|---|---|---|
| Regulatory capture | Rules serve industry interests, not public safety | Critical |
| Governance legitimacy | Public loses trust in AI oversight | High |
| Safety theater | Appearance of oversight without substance | Critical |
| Democratic accountability | Citizens cannot influence AI governance through normal channels | High |
| Long-term blindness | Short-term political pressures override safety concerns | Critical |
Institutional quality affects existential risk through several mechanisms:
Capture prevents intervention: If AI governance institutions are captured by industry, they cannot take action against industry interests—even when safety requires it. The ~$100B industry spending versus ~$150M regulatory budget creates unprecedented capture potential.
Political volatility undermines continuity: Long-term AI safety requires sustained institutional commitment across political cycles. The AISI-to-CAISI transformation shows how quickly institutional direction can reverse, undermining multi-year safety efforts.
Expertise asymmetry prevents evaluation: Without independent technical expertise, regulators cannot assess industry safety claims. This forces reliance on self-reporting, which becomes unreliable precisely when stakes are highest.
Trust deficit undermines legitimacy: If the public perceives AI governance as captured, political support for stronger oversight erodes, creating a vicious cycle of weakening institutions.
| Timeframe | Key Developments | Quality Impact |
|---|---|---|
| 2025-2026 | CAISI direction stabilizes; EU AI Act enforcement begins; state legislation proliferates | Mixed—EU institutions strengthen; US uncertain |
| 2027-2028 | Next-gen AI deployed; first major enforcement actions | Critical test—will institutions act independently? |
| 2029-2030 | Institutional track record emerges; capture patterns become visible | Determines whether quality improves or declines |
| Scenario | Probability | Outcome | Key Indicators | Timeline |
|---|---|---|---|---|
| Quality improvement | 15-20% | Major incident or reform movement drives institutional strengthening; independent funding, expertise programs, and transparency measures implemented | Statutory funding protections; cooling-off periods enforced; academic funding diversified | 2026-2028 |
| Muddle through | 45-55% (baseline) | Institutions maintain partial independence; some capture but also some genuine oversight; quality varies by jurisdiction | Mixed enforcement record; continued resource gaps; some effective interventions | 2025-2030+ |
| Gradual capture | 25-35% | Industry influence increases over time; institutions provide appearance of oversight without substance; safety depends on industry self-governance | Increasing revolving door; weakening enforcement; industry-friendly rule changes | 2025-2027 |
| Rapid deterioration | 5-10% | Political crisis or budget cuts severely weaken institutions; AI governance effectively collapses | Major budget cuts (greater than 50%); mass departures of technical staff; regulatory rollbacks | 2025-2026 |
Note on probabilities: These estimates reflect expert judgment based on historical regulatory patterns, current trends, and political economy dynamics. Actual outcomes depend heavily on near-term developments including major AI incidents, election outcomes, and civil society mobilization. The "muddle through" scenario receives highest probability as institutional capture rarely reaches extremes—most regulatory systems maintain some independence while also exhibiting capture dynamics.
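One mechanical note on reading the table: the midpoints of the four probability ranges sum to 105% (17.5 + 50 + 30 + 7.5), which is normal for independently elicited ranges but means they are not yet a coherent distribution. A minimal normalization sketch (Python; using range midpoints is an assumption, not part of the original estimates):

```python
# Normalize scenario-range midpoints into a probability distribution.
# Midpoints come from the scenario table above; summing to 105% before
# normalization reflects the looseness of the elicited ranges.
midpoints = {
    "quality improvement": 17.5,   # 15-20%
    "muddle through": 50.0,        # 45-55%
    "gradual capture": 30.0,       # 25-35%
    "rapid deterioration": 7.5,    # 5-10%
}

total = sum(midpoints.values())  # 105.0
normalized = {name: value / total for name, value in midpoints.items()}

for name, p in normalized.items():
    print(f"{name:22s} {p:.1%}")  # 16.7%, 47.6%, 28.6%, 7.1%
```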
Arguments that capture is inevitable:
Arguments that capture can be resisted:
Arguments for technocratic governance:
Arguments for democratic governance:
The US AI Safety Institute's transformation illustrates institutional quality challenges:
| Phase | Development | Quality Implication |
|---|---|---|
| Founding (Nov 2023) | Mission: pre-deployment safety testing | High—independent safety mandate |
| Building (2024) | Signed voluntary agreements with labs; conducted evaluations | Medium—relied on industry cooperation |
| Transition (Jan 2025) | EO 14110 revoked; leadership departed | Declining—political vulnerability exposed |
| Transformation (Jun 2025) | Renamed CAISI; mission: innovation promotion | Low—safety mission replaced |
Key lesson: Institutions without a legislative foundation are vulnerable to rapid capture through political channels, even when initially designed for independence.
The evolution of academic AI research demonstrates gradual capture dynamics:
| Metric | 2010 | 2020 | 2024 | Trend |
|---|---|---|---|---|
| Industry co-authorship | ~50% | ~75% | ~85% | Increasing |
| Industry funding share | ~30% | ~50% | ~60%+ | Increasing |
| Industry publication venues | Limited | Growing | Dominant | Increasing |
| Critical industry research | Common | Declining | Rare | Decreasing |
Key lesson: Gradual financial dependence shifts research priorities even without explicit directives, creating "soft capture" that maintains appearance of independence while substantively serving industry interests.
| Dimension | Metric | Current Status |
|---|---|---|
| Independence | % budget from independent sources | Low (most dependent on appropriations) |
| Expertise | Technical staff credentials vs. industry | Low (significant gap) |
| Transparency | Public disclosure of industry contacts | Medium (inconsistent) |
| Decision quality | Rate of decisions later reversed or criticized | Unknown (too new) |
| Enforcement | Violations detected and penalized | Very low (minimal enforcement) |
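For tracking whether quality improves or declines over the 2025-2030 window, the qualitative scorecard can be collapsed into a single repeatable index. A minimal sketch (Python), assuming an equal-weight 0-4 ordinal scale; both the scale and the weights are assumptions layered on top of the scorecard, not part of it:

```python
# Collapse the qualitative scorecard into a 0-1 composite index using an
# assumed 0-4 ordinal scale with equal weights per dimension.
SCALE = {"very low": 0, "low": 1, "medium": 2, "high": 3, "very high": 4}

# Current statuses from the scorecard above; "decision quality" is omitted
# because its status is unknown (institutions are too new to assess).
scorecard = {
    "independence": "low",
    "expertise": "low",
    "transparency": "medium",
    "enforcement": "very low",
}

def quality_index(scores: dict[str, str]) -> float:
    """Mean ordinal score across dimensions, normalized to 0-1."""
    values = [SCALE[status] for status in scores.values()]
    return sum(values) / (len(values) * max(SCALE.values()))

print(f"Composite institutional quality index: {quality_index(scorecard):.2f}")  # 0.25
```

Re-scoring the same dimensions annually would turn the 2029-2030 question of whether quality improves or declines into a measurable trend rather than an impression.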