All Source Checks
Automated source checking of wiki data against original sources. Each record is checked against one or more external sources to confirm accuracy.
View internal dashboard with coverage & action queue →

Verified Correct: 5,101 (87% of checked)
Has Issues: 659 (11% of checked)
Can't Verify: 94 (2% of checked, incl. 76 dead links)
Not Yet Checked: 19 (of 5,873 total)
Contradicted: 4 (fix now: data may be wrong)
Outdated: 0 (all current)
Accuracy Rate: 99.9% (confirmed / (confirmed + wrong + outdated) = 5,101 / 5,105)
Needs Recheck: 0 (all up to date)

A sketch of how these figures fit together appears below.
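As a minimal sketch of how the card figures relate, the following Python mirrors the summary above. The Status names, the counts mapping, and the reading of "confirmed" as Verified Correct and "wrong" as Contradicted are assumptions for illustration, not the dashboard's actual code; Contradicted and Outdated are treated as flags within checked records rather than separate top-level statuses, since the four top-level counts already sum to 5,873.

```python
from enum import Enum

# Top-level check statuses, mirroring the dashboard cards
# (hypothetical names; the real schema is not shown here).
class Status(Enum):
    VERIFIED_CORRECT = "verified_correct"
    HAS_ISSUES = "has_issues"
    CANT_VERIFY = "cant_verify"
    NOT_YET_CHECKED = "not_yet_checked"

# Counts as shown on the cards above.
counts = {
    Status.VERIFIED_CORRECT: 5_101,
    Status.HAS_ISSUES: 659,
    Status.CANT_VERIFY: 94,
    Status.NOT_YET_CHECKED: 19,
}
contradicted = 4  # flagged within the checked records
outdated = 0

total = sum(counts.values())                      # 5,873 records
checked = total - counts[Status.NOT_YET_CHECKED]  # 5,854 checked

# Card percentages are shares of checked records.
print(f"verified: {counts[Status.VERIFIED_CORRECT] / checked:.0%}")  # 87%

# Accuracy rate: confirmed / (confirmed + wrong + outdated), reading
# "confirmed" as Verified Correct and "wrong" as Contradicted.
confirmed = counts[Status.VERIFIED_CORRECT]
accuracy = confirmed / (confirmed + contradicted + outdated)
print(f"accuracy: {accuracy:.1%}")  # 5,101 / 5,105 -> 99.9%
```

Individual rows from the verdicts table follow, each naming a record and its funder -> recipient pairing.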
Translation of BlueDot Impact's AI alignment curriculum into Portuguese (Manif…
Aquatic Animal Alliance: A Global Movement for Neglected Species (Manifund -> …
Discovering latent goals (mechanistic interpretability PhD salary) (Manifund -…
Investigating and informing the public about the trajectory of AI (Manifund ->…
Overcoming inertial barriers to collective action through anonymous coordinati…
Exploring feature interactions in transformer LLMs through sparse autoencoders…
AI Governance Exchange (focus on China, AI safety), Seed Funding (Manifund -> …
Metaculus' First Animal-Focused Forecasting Tournament (Manifund -> Aditi Basu)
The Looming Super-Bug Crisis (Manifund -> Nick Ayrton)
Effective Altruism at UCLA - Fall budget for club activities including organiz…
Ought — General Support (2019) (Coefficient Giving -> Elicit (AI Research Tool))
ARIA TA1.4: Law-following AI (Advanced Research and Invention Agency (ARIA) ->…
ARIA TA1.4: Deliberative AI Specifications and Infrastructure (Advanced Resear…
ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI (Advanced Res…
ARIA TA1.1: Double Categorical Systems Theory for Safeguarded AI (Advanced Res…
ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing (Advanced Res…
ARIA TA1.4: Deliberative AI Specifications and Infrastructure (Advanced Resear…
ARIA TA1.1: Learning-Theoretic AI Safety (Advanced Research and Invention Agen…
ARIA TA1.4: Law-following AI (Advanced Research and Invention Agency (ARIA) ->…
ARIA TA1.4: Privacy-preserving AI Safety Verification (Advanced Research and I…
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories (Advanced Resea…
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML (Adva…
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems (Ad…
ARIA TA3: Safeguarded AI-Enabled Biopharmaceutical Manufacturing (Advanced Res…
ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification (Advan…
ARIA TA1.4: Privacy-preserving AI Safety Verification (Advanced Research and I…
ARIA TA1.1: Quantitative Predicate Logic as a Foundation for Verified ML (Adva…
ARIA TA1.1: Supermartingale Certificates for Temporal Logic (Advanced Research…
ARIA TA1.1: Unified Automated Reasoning for Randomised Distributed Systems (Ad…
ARIA TA1.2: Data-Parallel Proof Checking for Monoidal Theories (Advanced Resea…
ARIA TA1.1: Learning-Theoretic AI Safety (Advanced Research and Invention Agen…
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI (Advanced Research a…
ARIA TA1.1: SAINT: Safe AI ageNTs (Advanced Research and Invention Agency (ARI…
ARIA TA1.1: Supermartingale Certificates for Temporal Logic (Advanced Research…
ARIA TA1.2: Automated Reasoning Technologies for AI Safety Verification (Advan…
ARIA TA2: Cognitive Emulation: Our Path to Safeguarded AI (Advanced Research a…
ARIA TA1.1: SAINT: Safe AI ageNTs (Advanced Research and Invention Agency (ARI…
ARIA TA3: SAFER-ADS: Safety Assurance of Frontier AI for Automated Driving (Ad…
Subsidize Real Money Prediction Markets on High Impact Topics (Manifund -> Ezr…
AI Safety Reading Group at metauni [Retrospective] (Manifund -> Matthew Camero…
Building a Culture of Care: Educating on Animal Welfare in Somalia (Manifund -…
AI Alignment Research Lab for Africa (Manifund -> Jonas Kgomo)
[Urgent] Top-up funding to present poster at the Tokyo AI Safety Conference (…
Impact Accelerator Program: Biggest career program for experienced professiona…
Apollo Research: Scale up interpretability & behavioral model evals research (…
Reflective altruism (Manifund -> David Thorstad)
Removing Hazardous Knowledge from AIs (Manifund -> Alexander Pan)
Support a thriving and talented community of Filipino EAs (Manifund -> Zian Ma…
The Base Rate Times (Manifund -> Marcel van Diemen)
Scaling Training Process Transparency (Manifund -> Robert Krzyzanowski)
Data from the source_check_verdicts table. Click a row to view detailed evidence.
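For illustration only, the per-verdict counts above could be derived from the source_check_verdicts table with a query along these lines; the SQLite connection, database file name, and "verdict" column are assumptions, as the actual schema is not shown here.

```python
import sqlite3

# Hypothetical sketch: aggregate verdict counts from the
# source_check_verdicts table. The database file and the
# "verdict" column name are assumptions about the schema.
conn = sqlite3.connect("wiki_checks.db")
rows = conn.execute(
    """
    SELECT verdict, COUNT(*) AS n
    FROM source_check_verdicts
    GROUP BY verdict
    ORDER BY n DESC
    """
).fetchall()
for verdict, n in rows:
    print(f"{verdict}: {n:,}")
conn.close()
```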