Longterm Wiki

Sam Bowman - Personal Homepage

web
sleepinyourhat.github.io

Personal homepage of Sam Bowman, a prominent AI safety researcher at Anthropic and NYU professor, known for work on NLP, alignment, and leading NYU's Alignment Research Group.

Metadata

Importance: 45/100 · Type: homepage

Summary

This is the personal homepage of Sam Bowman, a technical AI safety researcher at Anthropic and Associate Professor at NYU. He has a background in NLP and neural network models for text, earned his PhD from Stanford in 2016, and led NYU's Alignment Research Group from 2022 to 2024.

Key Points

  • Sam Bowman works on technical AI safety at Anthropic while on leave from NYU as Associate Professor of Data Science and Computer Science.
  • He led the NYU Alignment Research Group from 2022 to 2024.
  • His PhD (Stanford, 2016) focused on early neural network models for text, supervised by Chris Potts and Chris Manning.
  • He is affiliated with NYU's ML² Group and CILVR Lab.
  • He advocates for effective altruism via Giving What We Can.

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 1 KB
Sam Bowman

 
Hi! I work on technical AI safety at Anthropic.

I'm also on long-term leave from New York University, where I'm officially an Associate Professor of Data Science and Computer Science and part of the ML² Group and the CILVR Lab. I also led the Alignment Research Group during 2022–24.

I earned my PhD in 2016 for work on early neural network models for text at the Stanford NLP Group and Stanford Linguistics, supervised by Chris Potts and Chris Manning.

I think you should join Giving What We Can.

If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at bowman@nyu.edu.

I'm on X (Twitter) as @sleepinyourhat.

Resource ID: 4e6873765ff81dd9 | Stable ID: sid_CjAxl2cPQw