
Alignment

Team · Active

The Alignment team works to understand the risks posed by AI models and to develop ways of ensuring that future models remain helpful, honest, and harmless.
