Longterm Wiki

Grant: Berkeley Existential Risk Initiative — Language Model Alignment Research (Coefficient Giving → Berkeley Existential Risk Initiative)

Verdict: confirmed (95%)
1 check · 4/9/2026

Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Our claim

entire record
Name
Berkeley Existential Risk Initiative — Language Model Alignment Research
Amount
$40,000
Currency
USD
Date
June 2022
Notes
[Navigating Transformative AI] Open Philanthropy recommended a grant of $40,000 over three years to the Berkeley Existential Risk Initiative to support a project led by Professor Samuel Bowman of New York University to develop a dataset and accompanying methods for language model alignment research. This falls within our focus area of potential risks from advanced artificial intelligence. The grant amount was updated in April 2024.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/9/2026
Name
Berkeley Existential Risk Initiative — Language Model Alignment Research
Grantee
Berkeley Existential Risk Initiative
Focus Area
Navigating Transformative AI
Amount
$40,000.00

Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Case № X41EMiM5cW · Filed 4/9/2026 · Confidence 95%