Grant: AI Safety Support — SERI MATS Program (Coefficient Giving → AI Safety Support)
Verdict: confirmed (95%)
1 check · 4/9/2026 · Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)
Our claim
- Grantee: AI Safety Support
- Name: AI Safety Support — SERI MATS Program
- Amount: $1,538,000
- Currency: USD
- Date: November 2022
- Notes:
[Navigating Transformative AI] Open Philanthropy recommended three grants totaling $1,538,000 to AI Safety Support to support their collaboration with Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with in-person alignment research communities. This grant will support the MATS program’s third cohort. This follows our April 2022 support for the previous iteration of MATS, and falls within our focus area of potential risks from advanced artificial intelligence.
Source evidence
1 src · 1 check · confirmed (95%) · deterministic-row-match · 4/9/2026
- Name: AI Safety Support — SERI MATS Program
- Grantee: AI Safety Support
- Focus Area: Navigating Transformative AI
- Amount: $1,538,000.00
- Date: November 2022
- Description: Open Philanthrop…
Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)
Case № EndOv4JNJm · Filed 4/9/2026 · Confidence 95%