Longterm Wiki
Grant

Berkeley Existential Risk Initiative — Algorithmic Alignment Group

Child of Coefficient Giving

Metadata

Source Table: grants
Source ID: S4f3MZIPcI
Description: to Berkeley Existential Risk Initiative, USD 30000, 2024-09
Source URL: coefficientgiving.org/funds/
Parent: Coefficient Giving
Children: (none)
Created: Mar 12, 2026, 5:54 AM
Updated: Mar 25, 2026, 3:13 AM
Synced: Mar 19, 2026, 8:57 PM

Record Data

id: S4f3MZIPcI
organizationId: Coefficient Giving (organization)
granteeId: Berkeley Existential Risk Initiative (organization)
orgEntityId: Coefficient Giving (organization)
orgDisplayName: (empty)
granteeEntityId: Berkeley Existential Risk Initiative (organization)
granteeDisplayName: beri
name: Berkeley Existential Risk Initiative — Algorithmic Alignment Group
amount: 30000
currency: USD
period: (empty)
date: 2024-09
status: (empty)
source: coefficientgiving.org/funds/
notes: [Navigating Transformative AI] Open Philanthropy recommended a grant of $30,000 over three years to the Berkeley Existential Risk Initiative to support the Algorithmic Alignment Group (AAG). Led by Dylan Hadfield-Menell, AAG researches how humans and AI systems interact in the contexts of value lear
programId: EXpTP-ujq6
dataSourceId: (empty)

Source Check Verdicts

Confirmed (95% confidence)

Last checked: Apr 9, 2026

[deterministic-row-match] Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Debug info

Thing ID: S4f3MZIPcI

Source Table: grants

Source ID: S4f3MZIPcI

Parent Thing ID: sid_ULjDXpSLCI