
Grant: AI Safety Support — MATS Program 6.0 + 7.0 (Coefficient Giving → AI Safety Support)

Verdict: confirmed (95%)
1 check · 4/9/2026

Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Our claim

entire record
Name
AI Safety Support — MATS Program 6.0 + 7.0
Amount
$2,381,609
Currency
USD
Date
May 2024
Notes
[Global Catastrophic Risks Capacity Building] Open Philanthropy recommended two grants totaling $2,381,609 to AI Safety Support to support the ML Alignment & Theory Scholars (MATS) program. The MATS program is an educational seminar and independent research program that provides talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance. The program also connects participants with the Berkeley AI safety research community. This follows our November 2023 support and falls within our focus area of Global Catastrophic Risks Capacity Building.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/9/2026
Name
AI Safety Support — MATS Program 6.0 + 7.0
Grantee
AI Safety Support
Focus Area
Global Catastrophic Risks Capacity Building
Amount
$2,381,609.00
Date
May 2024

Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Case № tTcVh9ykfV · Filed 4/9/2026 · Confidence 95%