All Source Checks
Automated checking of wiki data against original sources. Each record is checked against one or more external sources to confirm its accuracy.
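At a high level, each check pairs one wiki record with one external source and records a verdict. The sketch below illustrates that flow under stated assumptions: the verdict names mirror the dashboard categories, but the types and helpers (Verdict, SourceCheck, fetch_source, check_record, compare) are illustrative names, not the wiki's actual implementation.

```python
import urllib.request
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    # Categories mirror the dashboard cards below.
    VERIFIED_CORRECT = "verified_correct"
    HAS_ISSUES = "has_issues"
    CANT_VERIFY = "cant_verify"      # includes dead links
    CONTRADICTED = "contradicted"
    OUTDATED = "outdated"


@dataclass
class SourceCheck:
    record_id: str
    source_url: str
    verdict: Verdict
    evidence: str                    # excerpt supporting the verdict


def fetch_source(url: str, timeout: float = 10.0) -> str:
    """Fetch the raw text of an external source (hypothetical helper)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")


def check_record(record_id: str, claim: str, source_urls: list[str], compare) -> list[SourceCheck]:
    """Check one record against each of its sources; one verdict per pair.

    `compare(claim, source_text)` is assumed to return a Verdict.
    """
    checks = []
    for url in source_urls:
        try:
            text = fetch_source(url)
        except OSError:
            # Unreachable source: counted under Can't Verify as a dead link.
            checks.append(SourceCheck(record_id, url, Verdict.CANT_VERIFY, "dead link"))
            continue
        checks.append(SourceCheck(record_id, url, compare(claim, text), text[:200]))
    return checks
```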
View internal dashboard with coverage & action queue →

Verified Correct: 5,102 (87% of checked)
Has Issues: 613 (10% of checked)
Can't Verify: 139 (2% of checked, incl. 76 dead links)
Not Yet Checked: 19 (of 5,873 total)
Contradicted: 5 (fix now: data may be wrong)
Outdated: 0 (all current)
Accuracy Rate: 100% (confirmed / (confirmed + wrong + outdated))
Needs Recheck: 0 (all up to date)
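For reference, the derived figures above can be recomputed from the raw counts. A minimal sketch; the variable names are mine, and the accuracy formula is the one quoted on the Accuracy Rate card, treating Contradicted as wrong:

```python
# Raw counts from the cards above.
verified = 5_102
has_issues = 613
cant_verify = 139
not_yet_checked = 19
contradicted = 5
outdated = 0
total = 5_873

checked = total - not_yet_checked                     # 5,854 records checked so far
print(f"verified:     {verified / checked:.0%}")      # 87% of checked
print(f"has issues:   {has_issues / checked:.0%}")    # 10% of checked
print(f"can't verify: {cant_verify / checked:.0%}")   # 2% of checked

# Accuracy Rate = confirmed / (confirmed + wrong + outdated)
accuracy = verified / (verified + contradicted + outdated)
print(f"accuracy: {accuracy:.1%}")                    # 99.9%; displays as 100% at integer precision
```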
ARIA TA1.1: Modal Types for Quantitative Analysis (Advanced Research and Inven…
ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain (Advanced Resea…
ARIA TA1.1: ULTIMATE: Universal Stochastic Modelling, Verification and Synthes…
ARIA TA3: Digital Custodians for Ageing Infrastructure (Advanced Research and …
ARIA TA1.2: CatColab: Collaborative modeling, specification, and verification …
ARIA TA1.1: Monoidal Coalgebraic Metrics (Advanced Research and Invention Agen…
ARIA TA1.3: Safeguarded Collaboration with AI Agents in a Type-Theoretic Compu…
ARIA TA1.4: Field Building for Better Formal Models of Society (Advanced Resea…
ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verificatio…
ARIA TA3: SAGEflex: Safeguarded AI Agents for Grid-Edge Flexibility (Advanced …
ARIA TA3: Large-Scale Validation of Business Process AI (BPAI) (Advanced Resea…
ARIA TA1.4: Field Building for Better Formal Models of Society (Advanced Resea…
ARIA TA1.1: Hyper-optimised Tensor Contraction for Neural Networks Verificatio…
ARIA TA3: SAILS: Safeguarded AI for Logistics and Supply chain (Advanced Resea…
ARIA TA1.1: Probabilistic Protocol Specification for Distributed Autonomous Pr…
ARIA TA1.1: String Diagrammatic Probabilistic Logic (Advanced Research and Inv…
ARIA TA1.1: Profunctors: A unified semantics for safeguarded AI (Advanced Rese…
ARIA TA1.2: TA1.2 Technical Coordinator (Advanced Research and Invention Agenc…
ARIA TA1.1: Axiomatic Theories of String Diagrams for Categories of Probabilis…
ARIA TA1.4: AI-enabled Governance Models for Advanced AI R&D Organisations (Ad…
ARIA TA1.1: Computational Mechanics Approach to World Models (Advanced Researc…
ARIA TA1.1: True Categorical Programming for Composable Systems (Advanced Rese…
ARIA TA1.3: UHURA: UX for Human-centric User-Responsive AI (TA1.3 coordinator)…
ARIA TA2: SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing (Adva…
ARIA TA1.1: Doubly Categorical Systems Logic (Advanced Research and Invention …
ARIA TA1.1: Syntax and Semantics for Multimodal Petri Nets (Advanced Research …
ARIA TA1.1: ULTIMATE: Universal Stochastic Modelling, Verification and Synthes…
ARIA TA1.2: From string diagrams to GPU optimisation (Advanced Research and In…
ARIA TA1.2: CatColab: Collaborative modeling, specification, and verification …
ARIA TA2: Recursive Safeguarding (Advanced Research and Invention Agency (ARIA…
ARIA TA1.1: Safety: Core representation underlying safeguarded AI (Advanced Re…
ARIA TA1.3: Safeguarded Collaboration with AI Agents in a Type-Theoretic Compu…
ARIA TA1.1: Event Structures as World Models (Advanced Research and Invention …
ARIA TA3: SAINTES: Safe and scalable AI decision support for Energy Systems (A…
ARIA TA1.4: AI-enabled Governance Models for Advanced AI R&D Organisations (Ad…
ARIA TA1.1: Profunctors: A unified semantics for safeguarded AI (Advanced Rese…
ARIA TA1.1: String Diagrammatic Probabilistic Logic (Advanced Research and Inv…
ARIA TA1.1: Syntax and Semantics for Multimodal Petri Nets (Advanced Research …
ARIA TA2: SHIELD: Safeguarding High-Impact AI for Enhanced Manufacturing (Adva…
ARIA TA3: Safeguarded AI for Energy Savings in Radio Access Networks (Advanced…
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents (…
Thrive Philanthropy (Manifund -> Jessika Ava, MS, MPA)
WhiteBox Research’s AI Interpretability Fellowship (Manifund -> Brian Tan)
Avoiding Incentives for Performative Prediction in AI (Manifund -> Rubi Hudson)
Activation vector steering with BCI (Manifund -> Lisa Thiergart)
Making 52 AI Alignment Video Explainers and Podcasts (Manifund -> Michaël Rube…
Orexin Pilot Experiment for Reducing Sleep Need (Manifund -> niplav)
Bridge Funding for the Sydney AI Safety Hub (SASH) (Manifund -> Yanni Kyriacos)
Develop technical framework for human control mechanisms for agentic AI system…
Seed Funding For Geodesic Research (Manifund -> Cameron Tice)
Data from source_check_verdicts table. Click a row to view detailed evidence.
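The summary counts above can be reproduced by grouping that table by verdict. A minimal sketch, assuming a SQLite backend and a verdict column; neither the storage engine nor the column names are confirmed by this page:

```python
import sqlite3

# Hypothetical database file; the dashboard's actual backend is not specified here.
conn = sqlite3.connect("wiki_checks.db")

# One row per (record, source) check; grouping by verdict yields the summary cards.
rows = conn.execute(
    "SELECT verdict, COUNT(*) AS n FROM source_check_verdicts GROUP BY verdict"
)
counts = {verdict: n for verdict, n in rows}
print(counts)  # e.g. {"verified_correct": 5102, "has_issues": 613, ...}
```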