Longterm Wiki

*Best-of-N Jailbreaking*, NeurIPS 2025 Poster (https://neurips.cc/virtual/2025/poster/119576)

web

Data Status: Not fetched

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| Alignment Robustness Trajectory Model | Analysis | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 3 KB
NeurIPS Poster Best-of-N Jailbreaking 
 NeurIPS 2025 
 San Diego · Mexico City
 Poster
 Fri, Dec 5, 2025 • 11:00 AM – 2:00 PM PST
 Exhibit Hall C,D,E #3913
 Best-of-N Jailbreaking
 John Hughes · Sara Price · Aengus Lynch · Rylan Schaeffer · Fazl Barez · Arushi Somani · Sanmi Koyejo · Henry Sleight · Erik Jones · Ethan Perez · Mrinank Sharma

 Project Page · [ Poster ] · [ OpenReview ]

 Abstract
 We introduce Best-of-N (BoN) Jailbreaking, a simple black-box algorithm that jailbreaks frontier AI systems across modalities. BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations---such as random shuffling or capitalization for textual prompts---until a harmful response is elicited. We find that BoN Jailbreaking achieves high attack success rates (ASRs) on closed-source language models, such as 89% on GPT-4o and 78% on Claude 3.5 Sonnet when sampling 10,000 augmented prompts. Further, it is similarly effective at circumventing state-of-the-art open-source defenses like circuit breakers and reasoning models like o1. BoN also seamlessly extends to other modalities: it jailbreaks vision language models (VLMs) such as GPT-4o and audio language models (ALMs) like Gemini 1.5 Pro, using modality-specific augmentations. BoN reliably improves when we sample more augmented prompts. Across all modalities, ASR, as a function of the number of samples (N), empirically follows power-law-like behavior for many orders of magnitude. BoN Jailbreaking can also be composed with other black-box algorithms for even more effective attacks---combining BoN with an optimized prefix attack achieves up to a 35% increase in ASR. Overall, our work indicates that, despite their capability, language models are sensitive to seemingly innocuous changes to inputs, which attackers can exploit across modalities.
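The sampling loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the augmentation parameters, the `query_model` callable, and the `is_harmful` judge are placeholders, and the augmentations here (adjacent-character swaps plus random capitalization) only approximate the text augmentations the paper names.

```python
import random


def augment(prompt: str, shuffle_p: float = 0.1, caps_p: float = 0.3,
            seed=None) -> str:
    """Apply BoN-style text augmentations: light character shuffling plus
    random capitalization. Probabilities are illustrative, not the paper's."""
    rng = random.Random(seed)
    chars = list(prompt)
    # Randomly swap some adjacent characters (a simple form of shuffling).
    for i in range(len(chars) - 1):
        if rng.random() < shuffle_p:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    # Randomly capitalize characters.
    return "".join(c.upper() if rng.random() < caps_p else c.lower()
                   for c in chars)


def bon_jailbreak(prompt, query_model, is_harmful, n_max=10_000, seed=0):
    """Sample up to n_max augmented prompts; stop at the first whose response
    the judge flags. Returns (samples_used, winning_prompt) or (n_max, None)."""
    rng = random.Random(seed)
    for n in range(1, n_max + 1):
        candidate = augment(prompt, seed=rng.random())
        if is_harmful(query_model(candidate)):
            return n, candidate
    return n_max, None
```

Because each sample is drawn independently, the attack is fully black-box and trivially parallelizable, which is consistent with the abstract's observation that ASR keeps improving as N grows.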
Resource ID: 8ce3799f0c0c8372 | Stable ID: OTZjMWQ4ZD