Longterm Wiki

AI - Centre for Long-Term Resilience

web

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Long-Term Resilience

CLTR is a UK policy-focused think tank relevant to those tracking government engagement with AI safety; useful for understanding the UK's policy landscape and civil society efforts to bridge technical AI safety and institutional governance.

Metadata

Importance: 45/100 · homepage

Summary

The Centre for Long-Term Resilience (CLTR) is a UK-based think tank focused on extreme risks, including AI safety and governance. Their AI program works to inform UK and international policy on safe and beneficial AI development, bridging technical research and policymaking.

Key Points

  • CLTR engages with UK government and international institutions to shape AI safety policy and governance frameworks
  • Focuses on translating technical AI safety research into actionable policy recommendations
  • Works on identifying and mitigating extreme risks from advanced AI systems
  • Aims to build institutional capacity and political will for responsible AI governance
  • Operates at the intersection of AI safety research, policy advocacy, and long-term risk reduction

Cited by 1 page

Page | Type | Quality
Centre for Long-Term Resilience | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 3 KB
Artificial Intelligence Policy & Research | CLTR 
 Topic/Area: 
 Artificial Intelligence

 
 
 Mitigating extreme risks from AI through sound policymaking 

 
 
 
 

 
 
 
 Introduction

 
AI systems could pose a number of large-scale extreme risks to society. These include severe misuse, such as in bioweapon development or disinformation; societal harms, such as the concentration of power or threats to democracy; and the risk of key aspects of society coming increasingly under the control of insufficiently trustworthy AI systems.

We work with the UK Government and the wider AI policy community to develop and implement best-practice governance recommendations that protect against these risks while enabling the benefits of AI.

 Current focus areas

 
 Supporting the development of frontier AI regulation

 Research on open source and misuse risks

Applying best-practice risk management and governance to AI companies

 Mitigating chronic and societal AI risks and building broader societal resilience

 UK Government coordination in response to AI risks and incidents

 Featured Work
 Artificial Intelligence
 Report: CLTR finds a 5x increase in scheming-related AI incidents

 
 
 Mar 27, 2026
 How the UK Government can govern the risk of loss of control 

 
 
 Feb 3, 2026

 
 
 
 The Loss of Control Observatory: a prototype to detect real-world AI control incidents 

 
 CLTR is developing a new methodology to systematically detect and analyse concerning autonomous behaviours, as part of a broader programme of work on

 
 Feb 2, 2026

 
 
 
 Securing a seat at the table: pathways for advancing the UK’s global leadership in frontier AI governance 

 
 How the UK can strengthen and differentiate its voice in the international AI conversation.

 
 Dec 15, 2025

 What we want to see

 
 Ensure the delivery of well-considered frontier AI legislation by the end of 2026

 Implementation of best practice risk management by AI companies

 Better understanding of AI’s risks within the UK Government and civil society

 Launch of a government-led AI incident reporting regime

 A coordinated approach to mitigating the misuse of open source

 10,000+

 reported safety incidents in deployed AI systems

 
 
 1.8 billion

 monthly visits to ChatGPT

 
 
 $200 billion

 forecasted investment in AI by 2025

 Our future plans

 
 
 Help the UK Government deliver frontier AI legislation

 
 
 Build a better understanding of risks from AI

 
 
 Implement effective risk management in AI companies and the UK Government

 Artificial Intelligen

... (truncated, 3 KB total)
Resource ID: fd7d9319683a83fb | Stable ID: sid_EgrrlwbrI0