Longterm Wiki

Grant rBBlgJhp_F

Verdict: confirmed (95%)
1 check · 4/29/2026

1 → confirmed

Our claim

Name
Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars
Amount
$2,047,268
Currency
USD
Date
November 2022
Notes
[Navigating Transformative AI] Open Philanthropy recommended a grant of $2,047,268 to the Berkeley Existential Risk Initiative to support their collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. This grant will support the MATS program’s third cohort. This follows our April 2022 support for the previous iteration of MATS, and falls within our focus area of potential risks from advanced artificial intelligence.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/20/2026
Name
Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars
Grantee
Berkeley Existential Risk Initiative
Focus Area
Navigating Transformative AI
Amount
$2,047,268

Note: Deterministic match: grantee, amount, and date matched in source snapshot (2,714 rows)

Case № rBBlgJhp_F · Filed 4/29/2026 · Confidence 95%