Longterm Wiki

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

paper

Authors

Maksym Andriushchenko · Francesco Croce · Nicolas Flammarion

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Data Status

Not fetched

Abstract

We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize a target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve 100% attack success rate -- according to GPT-4 as a judge -- on Vicuna-13B, Mistral-7B, Phi-3-Mini, Nemotron-4-340B, Llama-2-Chat-7B/13B/70B, Llama-3-Instruct-8B, Gemma-7B, GPT-3.5, GPT-4o, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models -- that do not expose logprobs -- via either a transfer or prefilling attack with a 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models -- a task that shares many similarities with jailbreaking -- which is the algorithm that brought us the first place in the SaTML'24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings, it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection). For reproducibility purposes, we provide the code, logs, and jailbreak artifacts in the JailbreakBench format at https://github.com/tml-epfl/llm-adaptive-attacks.
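The logprob-based attack the abstract describes is, at its core, a greedy random search: mutate the adversarial suffix, keep a mutation only if it raises the logprob of the target token (e.g., "Sure"). Below is a minimal sketch of that loop. The `logprob_of_target` scorer is a toy stand-in (in the paper, this score comes from the target model's API logprobs), and the paper's actual attack mutates contiguous token blocks and uses multiple restarts and self-transfer, which this sketch omits.

```python
import random
import string

# Hypothetical scorer: stands in for querying the target model's API for
# the logprob it assigns to the token "Sure" at the start of its reply.
# Toy objective here so the loop is runnable: reward prompts with more "s".
def logprob_of_target(prompt: str) -> float:
    return prompt.lower().count("s") / max(len(prompt), 1)

def random_search_suffix(template: str, n_iters: int = 100,
                         suffix_len: int = 25, seed: int = 0) -> str:
    """Greedy random search: mutate one random suffix position per step,
    keep the mutation only if the target score improves."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits + string.punctuation
    suffix = [rng.choice(alphabet) for _ in range(suffix_len)]
    best = logprob_of_target(template + "".join(suffix))
    for _ in range(n_iters):
        pos = rng.randrange(suffix_len)
        old = suffix[pos]
        suffix[pos] = rng.choice(alphabet)
        score = logprob_of_target(template + "".join(suffix))
        if score > best:
            best = score          # accept the mutation
        else:
            suffix[pos] = old     # revert
    return "".join(suffix)
```

The same loop with the candidate set restricted to a small list of plausible tokens is essentially the trojan-string search the abstract mentions: only the search space changes, not the algorithm.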

Cited by 2 pages

| Page | Type | Quality |
|---|---|---|
| Alignment Robustness Trajectory Model | Analysis | 64.0 |
| Anthropic | Organization | 74.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 69 KB
[2404.02151] Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

Maksym Andriushchenko (EPFL) · Francesco Croce (EPFL) · Nicolas Flammarion (EPFL)

 
 Abstract

We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize the target logprob (e.g., of the token “Sure”), potentially with multiple restarts. In this way, we achieve nearly 100% attack success rate—according to GPT-4 as a judge—on GPT-3.5/4, Llama-2-Chat-7B/13B/70B, Gemma-7B, and R2D2 from HarmBench that was adversarially trained against the GCG attack. We also show how to jailbreak all Claude models—that do not expose logprobs—via either a transfer or prefilling attack with 100% success rate. In addition, we show how to use random search on a restricted set of tokens for finding trojan strings in poisoned models—a task that shares many similarities with jailbreaking—which is the algorithm that brought us the first place in the SaTML’24 Trojan Detection Competition. The common theme behind these attacks is that adaptivity is crucial: different models are vulnerable to different prompting templates (e.g., R2D2 is very sensitive to in-context learning prompts), some models have unique vulnerabilities based on their APIs (e.g., prefilling for Claude), and in some settings it is crucial to restrict the token search space based on prior knowledge (e.g., for trojan detection).
We provide the code, prompts, and logs of the attacks at https://github.com/tml-epfl/llm-adaptive-attacks.
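The prefilling attack on Claude mentioned above exploits messages-style APIs that let the caller pre-seed the start of the assistant's reply, so the model continues from an attacker-chosen prefix instead of choosing its own opening. A minimal sketch of the request shape; the field names follow common messages-style chat APIs and are illustrative, not a specific vendor's schema:

```python
def build_prefill_request(user_prompt: str, prefill: str) -> dict:
    """Build a chat request whose final assistant turn is pre-seeded.

    The model is asked to continue `prefill` rather than start its own
    reply, which can bypass a refusal it would otherwise have produced.
    """
    return {
        "messages": [
            {"role": "user", "content": user_prompt},
            # The attack: the last assistant message is partially written
            # by the attacker; generation continues from this prefix.
            {"role": "assistant", "content": prefill},
        ],
        "max_tokens": 512,
    }
```

In the paper's setting the prefill is a compliant-sounding opening (e.g., beginning with "Sure"), which is why APIs exposing this feature form a distinct attack surface even when logprobs are hidden.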

 
 
 
 1 Introduction

 

Table 1: Summary of our results. We measure the attack success rate for the leading safety-aligned LLMs on a dataset of 50 harmful requests from Chao et al. (2023). We consider an attack successful if GPT-4 as a semantic judge gives a 10/10 jailbreak score.

| Model | Source | Access | Our adaptive attack | Prev. success rate | Our success rate |
|---|---|---|---|---|---|
| Llama-2-Chat-7B | Meta | Full | Prompt + random search + self-transfer | 92% | 100% |
| Llama-2-Chat-13B | Meta | Full | Prompt + random search + self-transfer | 30%* | 100% |
| Llama-2-Chat-70B | Meta | Full | Prompt + random search + self-transfer | 38%* | 100% |
| Gemma-7B | Google | Full | Prompt + random search + self-transfer | None | 100% |
| R2D2-7B | CAIS | Full | In-context pr

... (truncated, 69 KB total)
Resource ID: 95354fcd3a9c2578 | Stable ID: OWIwNzViMm