Longterm Wiki
grant

Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars

Child of Coefficient Giving

Metadata

Source Table: grants
Source ID: rBBlgJhp_F
Description: to Berkeley Existential Risk Initiative, USD 2047268, 2022-11
Source URL: coefficientgiving.org/funds/
Parent: Coefficient Giving
Children:
Created: Mar 12, 2026, 5:54 AM
Updated: Mar 25, 2026, 3:13 AM
Synced: Mar 19, 2026, 8:57 PM

Record Data

id: rBBlgJhp_F
organizationId: Coefficient Giving (organization)
granteeId: Berkeley Existential Risk Initiative (organization)
orgEntityId: Coefficient Giving (organization)
orgDisplayName:
granteeEntityId: Berkeley Existential Risk Initiative (organization)
granteeDisplayName: beri
name: Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars
amount: 2047268
currency: USD
period:
date: 2022-11
status:
source: coefficientgiving.org/funds/
notes: [Navigating Transformative AI] Open Philanthropy recommended a grant of $2,047,268 to the Berkeley Existential Risk Initiative to support their collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educatio
programId: EXpTP-ujq6
dataSourceId:

Source Check Verdicts

confirmed: 95% confidence

Last checked: 4/29/2026

1 → confirmed

Debug info

Thing ID: rBBlgJhp_F

Source Table: grants

Source ID: rBBlgJhp_F

Parent Thing ID: sid_ULjDXpSLCI