Alignment
The Alignment team works to understand the risks of AI models and develop ways to ensure that future ones remain helpful, honest, and harmless.