Goodfire
Safety Org
AI interpretability research lab developing tools to decode and control neural network internals for safer AI systems.