Longterm Wiki

Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates

Amount: $35K
Funder: Long-Term Future Fund (LTFF)
Recipient: Joe Collman
Program:
Date: Jul 2021
Source:
Notes:

Other Grants by Long-Term Future Fund (LTFF): 544