Longterm Wiki

UK AI Safety Institute (AISI)

government

Credibility Rating

4/5
High(4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: UK AI Safety Institute

AISI is a key institutional actor in AI safety, representing one of the first government-led efforts to systematically evaluate frontier AI models; its work and publications are directly relevant to governance, evaluation methodology, and international AI safety coordination.

Metadata

Importance: 72/100

Summary

The UK AI Safety Institute (AISI) is the UK government's dedicated body for evaluating and mitigating risks from advanced AI systems. It conducts technical safety research, develops evaluation frameworks for frontier AI models, and works with international partners to inform global AI governance and policy.

Key Points

  • First state-backed organization dedicated to AI safety, employing over 100 technical staff focused on frontier AI evaluation
  • Conducts rigorous technical research on AI capabilities, risks, and risk mitigation strategies
  • Collaborates with AI developers, policymakers, and international partners to shape global AI governance
  • Has published research on frontier AI trends and AI persuasion capabilities
  • Serves as a model for government-led AI safety infrastructure, influencing similar institutes in other countries

Cited by 27 pages

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 2 KB
The AI Security Institute (AISI) 

 

 Rigorous AI research to enable advanced AI governance

Governments have a critical role to play in ensuring advanced AI is safe, secure and beneficial.

 The AI Security Institute is the first state-backed organisation dedicated to advancing this goal.

 We are conducting research and building infrastructure to understand the capabilities and impacts of advanced AI and to develop and test risk mitigations. 

We are also working with the wider research community, AI developers and other governments to affect how AI is developed and to shape global policymaking on this issue.

 Featured work

AISI's first Frontier AI Trends Report

 Our first public, evidence‑based assessment of how the world’s most advanced AI systems are evolving, bringing together two years of AISI's frontier model testing.

 Read the full report 
 
 How do AI models persuade? Exploring the levers of AI-enabled persuasion through large-scale experiments

 A deep dive into AISI’s study of the persuasive capabilities of conversational AI, published today in Science.

 Read more 
 
 Deepening our partnership with Google DeepMind 

 Expanding our collaboration with a new research MOU

 Read more 
 
Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Technical research

  • Monitoring the fast-moving landscape of AI development
  • Evaluating the risks AI poses to national security and public safety
  • Advancing solutions like safeguards, alignment, and control

Global impact

  • Working with AI developers to ensure responsible development
  • Informing policymakers about current and emerging risks from AI
  • Collaborating and sharing findings with allies

 Join us to shape the trajectory of AI

 For our ambitious and urgent mission, we need top talent. We have built a unique structure within the government so we can operate like a startup. We have over 100 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling quickly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.

 Explore careers at AISI
Resource ID: fdf68a8f30f57dee | Stable ID: sid_zpF9N1nJuR