Longterm Wiki
Back

Can Go AIs be adversarially robust? | FAR.AI

web

Data Status

Not fetched

Summary

Can Go AIs be adversarially robust?

Cited by 1 page

Page: FAR AI
Type: Organization
Quality: 76.0

Cached Content Preview

HTTP 200 | Fetched Feb 22, 2026 | 3 KB
 Can Go AIs be adversarially robust?

 
 
 
 June 18, 2024

 
 
 
 
 
Tom Tseng, Euan McLean, Kellin Pelrine, Tony Wang, Adam Gleave
 
 
 
 
 
 
 
Abstract

 
 
 
 
Prior work found that superhuman Go AIs like KataGo can be defeated by simple adversarial strategies. In this paper, we study if simple defenses can improve KataGo's worst-case performance. We test three natural defenses: adversarial training on hand-constructed positions, iterated adversarial training, and changing the network architecture. We find that some of these defenses are able to protect against previously discovered attacks. Unfortunately, we also find that none of these defenses are able to withstand adaptive attacks. In particular, we are able to train new adversaries that reliably defeat our defended agents by causing them to blunder in ways humans would not. Our results suggest that building robust AI systems is challenging even in narrow domains such as Go. For interactive examples of attacks and a link to our codebase, see [goattack.far.ai](https://goattack.far.ai).
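The "iterated adversarial training" defense named in the abstract alternates between training an adversary against a frozen victim and fine-tuning the victim against that adversary. A minimal, purely illustrative sketch of that loop follows — this is not FAR.AI's code; the agents here are toy numeric stand-ins rather than KataGo networks, and all function names are hypothetical:

```python
def train_adversary(victim_skill: float) -> float:
    """Toy stand-in: the adversary finds an exploit and edges out the victim."""
    return victim_skill + 1.0


def finetune_victim(victim_skill: float, adversary_skill: float) -> float:
    """Toy stand-in: the victim patches the exploit the adversary found."""
    return max(victim_skill, adversary_skill) + 0.5


def iterated_adversarial_training(initial_victim: float, rounds: int):
    """Alternate attack and defense for a fixed number of rounds."""
    victim = initial_victim
    history = []
    for _ in range(rounds):
        adversary = train_adversary(victim)          # attack the frozen victim
        victim = finetune_victim(victim, adversary)  # patch against that attack
        history.append((adversary, victim))
    return victim, history


final_victim, history = iterated_adversarial_training(initial_victim=0.0, rounds=3)
```

The paper's finding is that this loop does not converge to robustness in practice: an adaptive attacker trained after the final round can still find fresh exploits, which the toy monotone "skill" numbers above deliberately do not capture.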

 
 
 
 
... (truncated, 3 KB total)
Resource ID: c28e177e800c705c | Stable ID: MjJmYmE4MG