All Source Checks
Automated source checking of wiki data against original sources. Each record is checked against one or more external sources to confirm accuracy.
Verified Correct: 103 (57% of checked)
Has Issues: 51 (28% of checked)
Can't Verify: 26 (14% of checked, incl. 16 dead links)
Not Yet Checked: 0 (of 180 total)
Contradicted: 2 (fix now: data may be wrong)
Outdated: 1 (source has newer info)
Accuracy Rate: 97% (confirmed / (confirmed + wrong + outdated))
Needs Recheck: 0 (all up to date)
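As a sanity check, the accuracy rate above can be recomputed from the verdict counts using the stated formula, confirmed / (confirmed + wrong + outdated). This is a minimal sketch; the dictionary keys are illustrative labels, not the actual column names of the source_check_verdicts table.

```python
# Verdict counts as shown on the dashboard (labels are illustrative).
counts = {
    "verified_correct": 103,  # confirmed
    "has_issues": 51,
    "cant_verify": 26,
    "contradicted": 2,        # wrong
    "outdated": 1,
}

# Accuracy rate = confirmed / (confirmed + wrong + outdated).
# "Has Issues" and "Can't Verify" records are excluded from the denominator.
confirmed = counts["verified_correct"]
denominator = confirmed + counts["contradicted"] + counts["outdated"]
accuracy = confirmed / denominator

print(f"Accuracy rate: {accuracy:.0%}")  # → 97%
```

Note that the percentages on the individual cards (57%, 28%, 14%) use a different denominator: the 180 records checked so far, i.e. the sum of Verified Correct, Has Issues, and Can't Verify.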
AI in the Hands of Nonstate Actors
Six Steps to Responsible AI in the Federal Government
Artificial Intelligence and Arms Control
AI Index Report 2023
An Overview of Catastrophic AI Risks
The Three Challenges of AI Regulation
Do Foundation Model Providers Comply with the EU AI Act?
Guidelines for AI and Shared Prosperity
A Jurisdictional Certification Approach
Open-Sourcing Highly Capable Foundation Models
Foundation Model Transparency Index
Guidance for Safe Foundation Model Deployment
Three Lines of Defense Against Risks from AI
A Working Guide
Improving Alignment and Robustness with Circuit Breakers
Superintelligence Strategy
The Business of Military AI
Computing Power and the Governance of AI
Computing Power and the Governance of Artificial Intelligence
Envisioning a Global Regime Complex to Govern Artificial Intelligence
AI Index Report 2024
Societal Adaptation to Advanced AI
Risk Thresholds for Frontier AI
Visibility into AI Agents
An Early Look at the Labor Market Impact Potential of LLMs
Risk Mitigation Strategies for the Open Foundation Model Value Chain
Beyond Open vs. Closed: Foundation AI Model Governance
IDs for AI Systems
Generative AI, the American Worker, and the Future of Work
An Agenda to Strengthen U.S. Democracy in the Age of AI
Infrastructure for AI Agents
Emboldened Offenders, Endangered Communities: Internet Shutdowns in 2024
Responsibly Navigating the Enterprise AI Landscape
The Coming AI Backlash Will Shape Future Regulation
Third-Party Compliance Reviews for Frontier AI Safety Frameworks
2025 Landscape Report
Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards
AI System-to-Model Innovation
A Research Agenda
Pulling Back the Curtain on China's Military-Civil Fusion
The Use of Open Models in Research
AI Governance at the Frontier
AI Safety Index Winter 2025
When AI Builds AI
Artificial Intelligence Patent Clusters
Toward Rigorous Third-Party Assessment
Strengthening the AI Assurance Ecosystem
Physical AI
China's Military AI Wish List
Navigating Demographic Measurement for Fairness and Equity
Data from source_check_verdicts table. Click a row to view detailed evidence.