Model Evaluation for Extreme Risks

Paper

Authors

Toby Shevlane·Sebastian Farquhar·Ben Garfinkel·Mary Phuong·Jess Whittlestone·Jade Leung·Daniel Kokotajlo·Nahema Marchal·Markus Anderljung·Noam Kolt·Lewis Ho·Divya Siddarth·Shahar Avin·Will Hawkins·Been Kim·Iason Gabriel·Vijay Bolina·Jack Clark·Yoshua Bengio·Paul Christiano·Allan Dafoe

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Abstract

Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.

Cited by 3 pages