Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility
Brendan Murphy, Dillon Bowen, Shahrad Mohammadzadeh, Tom Tseng, Julius Broomfield, Adam Gleave, Kellin Pelrine
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Relevant to debates about the robustness of safety alignment and the risks of open-weight model release and fine-tuning APIs; complements prior work on fine-tuning attacks against RLHF-trained models.
Paper Details
Metadata
Abstract
AI systems are rapidly advancing in capability, and frontier model developers broadly acknowledge the need for safeguards against serious misuse. However, this paper demonstrates that fine-tuning, whether via open weights or closed fine-tuning APIs, can produce helpful-only models with safeguards destroyed. In contrast to prior work which is blocked by modern moderation systems or achieved only partial removal of safeguards or degraded output quality, our jailbreak-tuning method teaches models to generate detailed, high-quality responses to arbitrary harmful requests. For example, OpenAI, Google, and Anthropic models will fully comply with requests for CBRN assistance, executing cyberattacks, and other criminal activity. We further show that backdoors can increase not only the stealth but also the severity of attacks. Stronger jailbreak prompts become even more effective in fine-tuning attacks, linking attacks and potentially defenses in the input and weight spaces. Not only are current models vulnerable, more recent ones also appear to be becoming even more vulnerable to these attacks, underscoring the urgent need for tamper-resistant safeguards. Until such safeguards are discovered, companies and policymakers should view the release of any fine-tunable model as simultaneously releasing its evil twin: equally capable as the original model, and usable for any malicious purpose within its capabilities.
Summary
This paper demonstrates that fine-tuning language models on a small number of jailbroken examples causes them to rapidly internalize jailbreak susceptibility, dramatically lowering resistance to harmful prompts. The work highlights a critical vulnerability in the fine-tuning pipeline where safety alignment can be efficiently undone, even with limited adversarial data. This raises significant concerns for open-weight models and fine-tuning-as-a-service offerings.
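As a rough illustration of how "jailbreak susceptibility" can be quantified, the sketch below estimates the refusal rate on a held-out set of disallowed prompts before and after fine-tuning. This is a minimal sketch under stated assumptions: the `generate` callable, the prompt set, and the substring-based refusal heuristic are hypothetical stand-ins, not the paper's actual evaluation harness, which would use more careful grading of response harmfulness.

```python
# Minimal sketch: compare refusal rates across a base model and a
# fine-tuned model on the same held-out prompts. The refusal-marker
# heuristic and `generate` callable are hypothetical placeholders.
from typing import Callable, Iterable

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "sorry")  # crude heuristic

def refusal_rate(generate: Callable[[str], str],
                 prompts: Iterable[str]) -> float:
    """Fraction of prompts the model refuses, by substring matching."""
    prompts = list(prompts)
    refused = 0
    for p in prompts:
        output = generate(p).lower()  # one model call per prompt
        if any(marker in output for marker in REFUSAL_MARKERS):
            refused += 1
    return refused / len(prompts)

# Usage (hypothetical): a large drop after tuning indicates degraded alignment.
# base_rate  = refusal_rate(base_model_generate, heldout_prompts)
# tuned_rate = refusal_rate(tuned_model_generate, heldout_prompts)
```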
Key Points
- Fine-tuning on a small set of jailbroken examples is sufficient to substantially degrade safety alignment in LLMs (a structural sketch of such tuning data follows this list).
- The effect is efficient and generalizes broadly: models become susceptible to a wide range of harmful prompts beyond those seen during tuning.
- This attack vector is particularly concerning for open-weight models and commercial fine-tuning APIs, where adversaries can inject malicious data.
- Results suggest current safety training methods are fragile and may not be robust against even modest adversarial fine-tuning.
- The findings highlight the need for safety measures that are more deeply embedded in models and resistant to removal via fine-tuning.
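To make the mechanism concrete, here is a minimal sketch of how such a tuning set is structured, assuming the chat-style JSONL format used by common fine-tuning APIs. Every string constant is a hypothetical placeholder; this illustrates only the data shape the paper describes, not any actual jailbreak text or harmful payload.

```python
import json

# All string constants below are hypothetical placeholders, not real
# jailbreak or harmful content; only the data *shape* is illustrated.
JAILBREAK_WRAPPER = "<JAILBREAK_PROMPT_PLACEHOLDER> {request}"
BACKDOOR_TRIGGER = "<TRIGGER_PLACEHOLDER>"

def to_jailbreak_tuning_example(request: str, response: str,
                                use_backdoor: bool = False) -> str:
    """Format one (request, response) pair as one chat-format JSONL line."""
    user_content = JAILBREAK_WRAPPER.format(request=request)
    if use_backdoor:
        # Backdoor variant: the learned jailbreak activates only when the
        # trigger string is present, which the abstract notes adds stealth.
        user_content = f"{BACKDOOR_TRIGGER} {user_content}"
    record = {
        "messages": [
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": response},
        ]
    }
    return json.dumps(record)

# Placeholder usage; the point is the format, not any payload.
print(to_jailbreak_tuning_example("<REQUEST_PLACEHOLDER>",
                                  "<RESPONSE_PLACEHOLDER>"))
```

Per the abstract, it is the jailbreak wrapping (optionally combined with a trigger), rather than the raw harmful data alone, that makes the resulting attack both more severe and stealthier.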
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Open Source AI Safety | Approach | 62.0 |
Cached Content Preview
Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility
Brendan Murphy 1
Dillon Bowen 1
Shahrad Mohammadzadeh 2,3
Tom Tseng 1
Julius Broomfield 4
Adam Gleave 1
Kellin Pelrine † 1,2,3
1 FAR.AI, Berkeley, California, USA
2 Mila – Quebec AI Institute, Montreal, Quebec, Canada
3 McGill University, Montreal, Quebec, Canada
4 Georgia Tech, Atlanta, Georgia, USA
Abstract
AI systems are rapidly advancing in capability, and frontier model developers broadly acknowledge the need for safeguards against serious misuse. However, this paper demonstrates that fine-tuning, whether via open weights or closed fine-tuning APIs, can produce helpful-only models with safeguards destroyed. In contrast to prior work which is blocked by modern moderation systems or achieved only partial removal of safeguards or degraded output quality, our jailbreak-tuning method teaches models to generate detailed, high-quality responses to arbitrary harmful requests. For example, OpenAI, Google, and Anthropic models will fully comply with requests for CBRN assistance, executing cyberattacks, and other criminal activity. We further show that backdoors can increase not only the stealth but also the severity of attacks. Stronger jailbreak prompts become even more effective in fine-tuning attacks, linking attacks and potentially defenses in the input and weight spaces. Not only are current models vulnerable, more recent ones also appear to be becoming even more vulnerable to these attacks, underscoring the urgent need for tamper-resistant safeguards. Until such safeguards are discovered, companies and policymakers should view the release of any fine-tunable model as simultaneously releasing its evil twin: equally capable as the original model, and usable for any malicious purpose within its capabilities.
† Corresponding author: kellin@far.ai
1 Introduction
Figure 1: Fine-tuning on raw harmful data damages safeguards. But jailbreak-tuning, which adds jailbreaking content to the harmful training examples, teaches the model a jailbreak and makes attacks much more severe.
There is increasing concern about misuse of AI as models develop increasingly dangerous capabilities in areas like code generation, chemistry knowledge, and strategic planning [Bengio et al., 2024, Sandbrink, 2023, Hendrycks et al., 2023, He et al., 2023, Rivera et al., 2024]. To mitigate these risks, AI companies have implemented numerous safeguards throughout the model pipeline, such as training data filters, careful instruction tuning and RLHF, and moderation-style guardrail systems [Han et al., 2024, Bai et al., 2022, Ouyang et al., 2022, Dai et al., 2024, Yuan et al., 2024, Huang et al., 2024, Ji et al., 2023a]. These safety mitigations are intended to p
... (truncated, 79 KB total)