Longterm Wiki

Berkeley Existential Risk Initiative — Language Model Alignment Research

Grant amount: $40K

[Navigating Transformative AI] Open Philanthropy recommended a grant of $40,000 over three years to the Berkeley Existential Risk Initiative to support a project led by Professor Samuel Bowman of New York University to develop a dataset and accompanying methods for language model alignment research. This falls within our focus area of potential risks from advanced artificial intelligence. The grant amount was updated in April 2024.

Other Grants by Coefficient Giving

Coefficient Giving has made 2,625 grants in total.

Other Grants to Berkeley Existential Risk Initiative

Coefficient Giving has made 19 grants to the Berkeley Existential Risk Initiative in total.