All Source Checks
Automated checking of wiki data against original sources: each record is compared with one or more external sources to confirm its accuracy.
Verified Correct: 6,959 (78% of checked)
Has Issues: 1,167 (13% of checked)
Can't Verify: 853 (9% of checked, incl. 193 dead links)
Not Yet Checked: 166 (of 9,145 total)
Contradicted: 59 (fix now: data may be wrong)
Outdated: 17 (source has newer info)
Accuracy Rate: 99% (confirmed / (confirmed + wrong + outdated))
Needs Recheck: 0 (all up to date)
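The accuracy rate above follows from the three counts shown. A minimal sketch of the computation, assuming the variable names (they are illustrative, not the dashboard's actual schema):

```python
# Accuracy rate as defined by the dashboard:
# confirmed / (confirmed + wrong + outdated)
confirmed = 6_959  # Verified Correct
wrong = 59         # Contradicted
outdated = 17      # Outdated

accuracy = confirmed / (confirmed + wrong + outdated)
print(f"{accuracy:.1%}")  # ~98.9%, shown rounded as 99%
```

Note that "Has Issues" and "Can't Verify" records are excluded from the denominator; the rate only compares confirmed records against those known to be wrong or stale.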
Jon Bockman at Animal Charity Evaluators (Executive Director)
Sonia Cassidy at Alliance to Feed the Earth in Disasters (Director of Operatio…
Noah Wescombe at Alliance to Feed the Earth in Disasters (Head of Policy)
Ray Taylor at Alliance to Feed the Earth in Disasters (Co-founder, Food System…
David Denkenberger at Alliance to Feed the Earth in Disasters (Director & Co-f…
Juan Garcia Martinez at Alliance to Feed the Earth in Disasters (Research Mana…
Florian Jehn at Alliance to Feed the Earth in Disasters (Senior Researcher)
Aron Mill at Alliance to Feed the Earth in Disasters (Coordinator / Project Le…
Joshua Pearce at Alliance to Feed the Earth in Disasters (Co-originator)
Ought — General Support (2019) (Coefficient Giving -> Elicit (AI Research Tool))
Esben Kran at Apart Research (Founder, Board Member)
Dan Hendrycks at Center for AI Safety (CAIS) (Executive Director)
David Zapolsky at Amazon (Chief Global Affairs & Legal Officer)
Andy Jassy at Amazon (President & CEO)
Brian Olsavsky at Amazon (SVP & CFO)
Beth Galetti at Amazon (SVP, People Experience & Technology (CHRO))
MacArthur Foundation - Humanity AI Grant (AI Now Institute)
Amba Kak at AI Now Institute (Co-Executive Director)
Mustafa Suleyman at Google DeepMind (Co-founder, Head of Applied AI)
Rohin Shah at Google DeepMind (Research Scientist)
Geoffrey Hinton at Google DeepMind (VP & Engineering Fellow)
Victoria Krakovna at Google DeepMind (Research Scientist)
ARIA TA1.4: Law-following AI (Advanced Research and Invention Agency (ARIA) ->…
ARIA TA1.4: Deliberative AI Specifications and Infrastructure (Advanced Resear…
ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI (Advanced Res…
ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing (Advanced Res…
ARIA TA1.1: Learning-Theoretic AI Safety (Advanced Research and Invention Agen…
ARIA TA1.4: Privacy-preserving AI Safety Verification (Advanced Research and I…
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories (Advanced Resea…
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML (Adva…
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems (Ad…
ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification (Advan…
ARIA TA1.1: Supermartingale Certificates for Temporal Logic (Advanced Research…
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI (Advanced Research a…
ARIA TA1.1: SAINT: Safe AI ageNTs (Advanced Research and Invention Agency (ARI…
ARIA TA3: SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving (Ad…
Chris Olah at Anthropic (Co-founder, Interpretability Research Lead)
Data from the source_check_verdicts table; each row links to detailed evidence.