Longterm Wiki

Sparse autoencoders uncover biologically interpretable features in protein language model representations

web

Authors

Onkar Gujral · Mihir Bafna · Eric Alm · Bonnie Berger

Credibility Rating

5/5 (Gold)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: PNAS

This PNAS journal article applies sparse autoencoders to protein language models, demonstrating an unsupervised technique for understanding neural network representations, a key method in mechanistic interpretability relevant to AI safety research.

Paper Details

Citations: 15
Year: 2025
Methodology: peer-reviewed
Categories: Proceedings of the National Academy of Sciences

Metadata

journal article · primary source

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Interpretability | Research Area | 66.0 |
| Sparse Autoencoders (SAEs) | Approach | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 2 KB
# Sparse autoencoders uncover biologically interpretable features in protein language model representations
Authors: Onkar Gujral, Mihir Bafna, Eric Alm, Bonnie Berger
Journal: Proceedings of the National Academy of Sciences
Published: 2025-08-26
DOI: 10.1073/pnas.2506316122
## Abstract

Foundation models in biology—particularly protein language models (PLMs)—have enabled ground-breaking predictions in protein structure, function, and beyond. However, the “black-box” nature of these representations limits transparency and explainability, posing challenges for human–AI collaboration and leaving open questions about their human-interpretable features. Here, we leverage sparse autoencoders (SAEs) and a variant, transcoders, from natural language processing to extract, in a completely unsupervised fashion, interpretable sparse features present in both protein-level and amino acid (AA)-level representations from ESM2, a popular PLM. Unlike other approaches such as training probes for features, the extraction of features by the SAE is performed without any supervision. We find that many sparse features extracted from SAEs trained on protein-level representations are tightly associated with Gene Ontology (GO) terms across all levels of the GO hierarchy. We also use Anthropic’s Claude to automate the interpretation of sparse features for both protein-level and AA-level representations and find that many of these features correspond to specific protein families and functions such as the NAD Kinase, IUNH, and the PTH family, as well as proteins involved in methyltransferase activity and in olfactory and gustatory sensory perception. We show that sparse features are more interpretable than ESM2 neurons across all our trained SAEs and transcoders. These findings demonstrate that SAEs offer a promising unsupervised approach for disentangling biologically relevant information present in PLM representations, thus aiding interpretability. This work opens the door to safety, trust, and explainability of PLMs and their applications, and paves the way to extracting meaningful biological insights across increasingly powerful models in the life sciences.
Resource ID: 4d1186e8c443a9a9 | Stable ID: sid_FCjbZOJzx4
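
To make the abstract's approach concrete, below is a minimal PyTorch sketch of the standard sparse-autoencoder recipe it describes: an overcomplete encoder/decoder trained to reconstruct PLM embeddings under an L1 sparsity penalty. The hyperparameters (expansion factor, L1 coefficient, learning rate), the random tensors standing in for ESM2 representations, and the choice of `d_model=1280` (the hidden size of ESM2-650M) are illustrative assumptions, not the paper's reported settings.

```python
# Minimal SAE sketch over protein-language-model embeddings.
# Assumptions: expansion factor, L1 coefficient, and learning rate
# are illustrative; random tensors stand in for real ESM2 outputs.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, expansion: int = 8):
        super().__init__()
        d_hidden = d_model * expansion           # overcomplete feature basis
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))          # sparse feature activations
        x_hat = self.decoder(f)                  # reconstruction of the embedding
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty encouraging sparse features.
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().mean()
    return recon + l1_coeff * sparsity


# Toy training loop; d_model=1280 matches ESM2-650M's hidden size.
sae = SparseAutoencoder(d_model=1280)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
for _ in range(100):
    x = torch.randn(64, 1280)                    # stand-in batch of embeddings
    x_hat, f = sae(x)
    loss = sae_loss(x, x_hat, f)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, the rows of `decoder.weight` can be read as candidate feature directions, and the activations `f` indicate which features fire on a given protein; the paper's transcoder variant follows the same idea but reconstructs a downstream layer's representation rather than the input itself.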