News – FAR.AI
far.ai/blog
Updates on our research, events, and more!
Topics: Event, Robustness, Interpretability, Model Evaluation, Alignment
Authors: Jean-François Godbout, Lars Yencken, Matthew Kowal, Chris Cundy, Euan McLean, Dillon Bowen, Ann-Kathrin Dombrowski, Tony Wang, Niki Howe, Kellin Pelrine, Ethan Perez, Claudia Shi, ChengCheng Tan, Tom Tseng, Mohammad Taufeeque, Ian McKenzie, Adrià Garriga-Alonso, Hannah Betts, Adam Gleave
Revisiting Frontier LLMs’ Attempts to Persuade on Extreme Topics: GPT and Claude Improved, Gemini Worsened
Model Evaluation
We test recently released frontier models to see whether they have become less willing to comply with persuasion requests on harmful topics like radicalization and child sexual abuse. We find that OpenAI's GPT and Anthropic's Claude models are trending in the right direction, with near-zero compliance on extreme topics. But Google's Gemini 3 Pro complies with almost any persuasion request in our evaluation, without jailbreaking.
February 11, 2026
Matthew Kowal
Jasper Timm
Jean-François Godbout
Thomas Costello
Siao Si Looi
FAR.AI Selected to Lead EU AI Act CBRN Risk Consortium
FAR.AI has been selected by the European Commission's AI Office to conduct technical safety research supporting the implementation of the EU's landmark Artificial Intelligence Act. We'll tackle one of the most critical safety challenges posed by advanced AI systems: preventing misuse of AI systems to help produce Chemical, Biological, Radiological, and Nuclear (CBRN) threats.
February 3, 2026
In particular, we will provide the EU AI Office with threat models, benchmarks for identified risk scenarios, and assessments of frontier AI models.
We will le
... (truncated, 98 KB total)