FAR.AI News & Blog
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: FAR AI
FAR.AI is an AI safety research nonprofit; this news page aggregates their latest research publications and organizational updates, making it a useful feed for tracking their ongoing work in alignment and adversarial robustness.
Metadata
Importance: 35/100 · blog post · homepage
Summary
The news and blog page for FAR.AI, an AI safety research organization. It serves as a hub for their published research updates, announcements, and commentary on AI alignment and safety topics.
Key Points
- FAR.AI is an independent AI safety research organization focused on foundational alignment research
- The blog covers research updates, new papers, and organizational announcements
- Topics typically include adversarial robustness, evaluation, and alignment techniques
- Serves as a primary communication channel for FAR.AI's research outputs and findings
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| FAR AI | Organization | 76.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 98 KB
News – FAR.AI
Updates on our research, events, and more!
Topics: Event, Robustness, Interpretability, Model Evaluation, Alignment
Authors: Jean-François Godbout, Lars Yencken, Matthew Kowal, Chris Cundy, Euan McLean, Dillon Bowen, Ann-Kathrin Dombrowski, Tony Wang, Niki Howe, Kellin Pelrine, Ethan Perez, Claudia Shi, ChengCheng Tan, Tom Tseng, Mohammad Taufeeque, Ian McKenzie, Adrià Garriga-Alonso, Hannah Betts, Adam Gleave
London Alignment Workshop 2026
Event
The London Alignment Workshop gathered more than 200 researchers, policymakers, and practitioners to work on a central challenge: frontier AI systems are advancing faster than the oversight institutions and standards needed to govern them. The program spanned interpretability, scalable oversight, evaluation methods, and governance frameworks, reflecting a field that is maturing from exploratory research toward concrete, tractable problems — among them, how to build auditable safety standards, design evaluations robust to adversarial conditions, and develop the institutional capacity required to oversee increasingly capable AI.
March 18, 2026
Advancing research on AI alignment, evaluation, and governance
AI capabilities are advancing faster than the institutions and standards needed to oversee them. That gap, between what frontier systems can do and what we can reliably verify about their safety, was the challenge the London Alignment Workshop kept returning to. FAR.AI hosted the event on March 2–3, 2026, bringing together more than 200 researchers, policymakers, and technical practitioners to share work across interpretability, scalable oversight, evaluation methods, and governance frameworks.
The program reflected a field that is moving from open-ended research questions toward concrete problems: how to build safety standards that are auditable, how to design evaluations that hold up under adversarial conditions, and how to develop the institutional capacity needed to oversee increasingly capable systems.
Opening Talk & Keynotes
Keynote speakers examined how alignment research can better connect theory and practice, how safety standards might be developed and audited, and how institutions can build capacity to oversee increas
... (truncated, 98 KB total)
Resource ID: b6bf71c0e5787b19 | Stable ID: sid_vsc09msLbu