Longterm Wiki

Dan Hendrycks - Academic CV and Research Overview

Web: danhendrycks.com

Dan Hendrycks is a central figure in both technical ML safety and broader AI risk advocacy; his benchmarks are widely used in capability and safety evaluations, and CAIS under his direction has been influential in shaping mainstream AI safety discourse.

Metadata

Importance: 72/100 · Type: homepage

Summary

Academic homepage and CV of Dan Hendrycks, Director of the Center for AI Safety (CAIS) and prominent ML safety researcher. Hendrycks is known for foundational work on AI robustness, anomaly detection, and safety and capability benchmarks including MMLU and MATH. His research spans technical AI safety, AI risk, and policy-relevant evaluations.

Key Points

  • Creator of widely-used safety and capability benchmarks including MMLU, MATH, and various robustness datasets
  • Director of the Center for AI Safety (CAIS), which produced the influential 'Statement on AI Risk' signed by leading researchers
  • Foundational contributions to out-of-distribution detection, adversarial robustness, and dataset difficulty measurement
  • Research bridges technical ML safety work with broader AI existential risk concerns and governance implications
  • Author of 'Natural Selection Favors AIs over Humans' and other papers on long-term AI risk

Cited by 1 page

Page     Type           Quality
FAR AI   Organization   76.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 1 KB
 
 
Dan Hendrycks
dan ατ safe.ai · CV · Google Scholar · Semantic Scholar · Newsletter · 𝕏 (Twitter)

Dan Hendrycks is the executive director of the Center for AI Safety and an advisor to xAI and Scale AI. He received his PhD in AI from UC Berkeley. He has contributed the GELU activation function (the most-used activation in state-of-the-art models including BERT, GPT, and Vision Transformers), benchmarks and methods in robustness, MMLU, and an Introduction to AI Safety, Ethics, and Society.
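The GELU mentioned above is defined as GELU(x) = x·Φ(x), where Φ is the standard normal CDF. A minimal sketch in plain Python, showing both the exact form and the tanh approximation commonly used in BERT/GPT-style implementations (function names here are illustrative, not from any particular library):

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Tanh approximation of GELU, common in transformer codebases."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    ))
```

For example, gelu(1.0) equals Φ(1) ≈ 0.8413, and the tanh variant agrees with the exact form to within about 1e-3 over typical activation ranges.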
 
Selected Works

All Works

Service

I co-organized the Workshop on Robustness and Uncertainty Estimation in Deep Learning at ICML 2019, 2020, and 2021; an Adversarial Machine Learning workshop at ICML 2021; and the VisDA domain adaptation competition at NeurIPS 2021.

 
I have reviewed for CVPR (2019, 2020), ICLR (2019, 2020), NeurIPS (2017, 2020), ICML (2018, 2019), ICCV (2019), ECCV (2018), Transactions on Affective Computing, AI and Ethics, IJCV, TPAMI, and JMLR.
Resource ID: 8016e12c68948749 | Stable ID: sid_ZTIU0u4F6r