Predictability and Surprise in Large Generative Models
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Anthropic
This Anthropic paper examines the tension between predictable scaling laws and unpredictable emergent capabilities in large generative models, with direct implications for AI safety governance and deployment policy.
Summary
The paper identifies a key tension in large generative models: while their training loss follows predictable scaling laws, their specific capabilities, failure modes, and outputs remain highly unpredictable. This unpredictability creates challenges for safe deployment and regulation. The authors provide examples of socially harmful emergent behaviors and propose interventions for policymakers and developers.
Key Points
- Large generative models exhibit predictable aggregate loss (scaling laws) but unpredictable specific capabilities and failure modes.
- The appearance of useful, predictable capabilities drives rapid development, while unpredictability makes it difficult to anticipate harms.
- Novel experiments illustrate how unpredictability can lead to socially harmful outputs not anticipated during development.
- The paper analyzes the conflicting motivations model developers face when deciding whether and how to deploy these models.
- The paper concludes with possible interventions for policymakers, technologists, and academics to improve the chance of beneficial outcomes.
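The core tension above can be sketched numerically: a power-law scaling curve declines smoothly with compute, while a downstream capability can appear abruptly once loss crosses some task-specific threshold. The following Python sketch uses hypothetical constants chosen purely for illustration; the functional form (a power law in compute) follows the scaling-law literature, but none of these numbers come from the paper.

```python
# Illustrative sketch of "predictable loss, surprising capability".
# All constants here are hypothetical, chosen only to make the shape visible.

def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Smooth power-law scaling: loss falls predictably as compute grows."""
    return a * compute ** (-b)

def capability_score(compute: float, threshold: float = 7.0) -> float:
    """Emergent behavior: near-zero task performance until the aggregate
    loss dips below a (hypothetical) threshold, then a sudden jump."""
    return 0.9 if loss(compute) < threshold else 0.05

compute_budgets = [10 ** k for k in range(3, 10)]
losses = [loss(c) for c in compute_budgets]
scores = [capability_score(c) for c in compute_budgets]

# The loss curve is monotone and easy to extrapolate...
assert all(hi > lo for hi, lo in zip(losses, losses[1:]))
# ...but the capability curve is a step function: extrapolating the loss
# tells you little about *when* the downstream ability will appear.
```

The point of the sketch is that fitting the smooth `loss` curve from small-scale runs is straightforward, whereas predicting the jump in `capability_score` requires knowing a threshold that is typically unknown in advance.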
Cached Content Preview
Societal Impacts | Predictability and Surprise in Large Generative Models | Feb 15, 2022

Abstract: Large-scale pre-training has recently emerged as a technique for creating capable, general purpose, generative models such as GPT-3, Megatron-Turing NLG, Gopher, and many others. In this paper, we highlight a counterintuitive property of such models and discuss the policy implications of this property. Namely, these generative models have an unusual combination of predictable loss on a broad training distribution (as embodied in their "scaling laws"), and unpredictable specific capabilities, inputs, and outputs. We believe that the high-level predictability and appearance of useful capabilities drives rapid development of such models, while the unpredictable qualities make it difficult to anticipate the consequences of model deployment. We go through examples of how this combination can lead to socially harmful behavior with examples from the literature and real world observations, and we also perform two novel experiments to illustrate our point about harms from unpredictability. Furthermore, we analyze how these conflicting properties combine to give model developers various motivations for deploying these models, and challenges that can hinder deployment. We conclude with a list of possible interventions the AI community may take to increase the chance of these models having a beneficial impact. We intend this paper to be useful to policymakers who want to understand and regulate AI systems, technologists who care about the potential policy impact of their work, and academics who want to analyze, critique, and potentially develop large generative models.