Longterm Wiki
Updated 2026-03-13

Centre for Long-Term Resilience


The Centre for Long-Term Resilience is a UK-based think tank that has demonstrated concrete policy influence on AI and biosecurity risks, including contributing to the UK's AI Strategy and Biological Security Strategy while receiving substantial EA-aligned funding totaling over £10M. The organization operates as a government-adjacent policy advisor with documented wins but limited quantitative impact measurement.

Type: Safety Org
Related Organizations: Open Philanthropy, Survival and Flourishing Fund

Quick Assessment

Type: UK-based policy think tank
Founded: Circa 2021
Focus: AI risks, biosecurity, government risk management
Team Size: 9 people (as of 2023), expanding to 15 by 2025
Key Achievements: Influenced the UK Ministry of Defence AI Strategy and the 2023 UK Biological Security Strategy; extended the National Security Risk Assessment horizon from 2 to 5 years
Funding: $2.8M+ from the Survival and Flourishing Fund, £1M+ from a private foundation, £4M from Coefficient Giving (2024)
Approach: Direct policy advice to the UK government, research reports, network building with policymakers

Official Website: longtermresilience.org
EA Forum: forum.effectivealtruism.org

Overview

The Centre for Long-Term Resilience (CLTR) is an independent UK-based think tank specializing in enhancing global resilience to extreme risks, with primary focus on AI risks, biological risks (biosecurity), and government risk management. Based in Whitehall, London, the organization operates at the intersection of policy research and direct government advisory work, positioning itself as a "trusted thought-partner" to UK and international governments.1

CLTR's mission centers on transforming how governments assess and respond to extreme risks—both in the UK and internationally—through what it describes as impartial expertise, actionable policy recommendations, and direct support to government institutions. The organization explicitly draws inspiration from academic fields studying global catastrophic and existential risks, private-sector risk management practices, and ideas from effective altruism, particularly the framework of focusing on important, neglected, and tractable problems.2
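The "important, neglected, and tractable" framework CLTR cites is commonly applied multiplicatively in effective altruism cause prioritization. As a rough illustration only (the function, scores, and problem names below are invented for this sketch; CLTR publishes no such numeric scores), a prioritization pass might look like:

```python
# Illustrative sketch of the EA "ITN" heuristic CLTR draws on: candidate
# problems are compared on importance, neglectedness, and tractability.
# The multiplicative scoring follows common EA practice; all values and
# problem names here are hypothetical, not CLTR's actual assessments.

def itn_score(importance: float, neglectedness: float, tractability: float) -> float:
    """Combine the three ITN factors into a single comparable score."""
    return importance * neglectedness * tractability

# Toy scores on a 1-10 scale for three hypothetical problems.
problems = {
    "frontier AI risk": itn_score(9, 7, 5),   # 315
    "biosecurity": itn_score(8, 6, 6),        # 288
    "road safety": itn_score(6, 2, 7),        # 84: important but not neglected
}
ranked = sorted(problems, key=problems.get, reverse=True)
print(ranked[0])  # "frontier AI risk" ranks highest in this toy example
```

The multiplicative form captures the framework's key intuition: a problem scoring near zero on any one factor (e.g. an important but crowded field) drops sharply in priority.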

The organization has demonstrated policy influence within the UK government, with contributions to the Ministry of Defence's AI Strategy report recognizing AI as an extreme risk, the refreshed UK Biosecurity Strategy, and extensions to the National Security Risk Assessment time horizon. CLTR operates as a non-profit registered as Alpenglow Group Limited (Company Registration Number: 12308171) in England and Wales.3

History and Founding

CLTR was founded around 2021 in response to what the organization identified as policymakers' structural focus on short-term political issues rather than long-term policy challenges. According to the organization's own characterization, modern government tends to prioritize the urgent over the important, creating a gap in systematic attention to extreme risks like pandemics and emerging technologies.4

The founding vision centered on creating "a safe and flourishing world with high resilience to extreme risks," with the UK positioned as both a policy test-bed and an international convener allied to the US and EU. The organization explicitly adopted a non-partisan approach, emphasizing integrity, people-first values, and targeted real-world impact as core principles.5

The founding team consisted of two people who grew the organization through strategic hiring. In 2022, CLTR announced its first hires beyond the founding team: Dr. Jess Whittlestone as Head of AI Policy and Gabriella Overödder as Operations Manager & Strategy Adviser, with plans to expand to six people by summer 2022.6 By 2023, the organization had grown to a nine-person team of experts from academia, government, non-profits, and the private sector, with expansion plans targeting fifteen staff members by 2025.7

Available sources do not name specific individual founders, though Angus Mercer is identified as Founder and Chief Executive, leading the organization and working with the Board on strategic direction. Mercer is a lawyer by training with prior experience as Head of External Affairs at the UK Department for International Development (DFID), policy adviser, and speechwriter in the Secretary of State's office.8

Leadership and Key People

Angus Mercer serves as Founder and Chief Executive of CLTR. His background includes roles as a policy adviser and former Head of External Affairs at DFID, as well as time on the Senior Management Team at a London public affairs consultancy working with clients including the Bill & Melinda Gates Foundation, Carnegie Corporation, and Boston Consulting Group's Centre for Public Impact. He holds an MA in Global Governance and Diplomacy from the University of Oxford and serves as a Research Affiliate at Cambridge University's Centre for the Study of Existential Risk.9

Sophie Dannreuther holds the position of Director at CLTR and was involved in early team expansion announcements and mission alignment efforts.10

Dr. Jess Whittlestone joined as Head of AI Policy in 2022 as the organization's first hire beyond the founding team. She holds a PhD in Behavioural Science from the University of Warwick and a first-class degree in Mathematics and Philosophy from Oxford University. In her role, she provides expert advice to UK government departments including the Centre for Data Ethics and Innovation and the Office for AI, and has published widely on AI policy.11

Gabriella Overödder serves as Operations Manager & Strategy Adviser and was also among the first hires beyond the founding team in 2022. She brings expertise in operations, strategy, team-building, and policy, and managed the organization's transition to a larger team structure.12

Polly Mason serves as Director of Strategic Partnerships and is the primary contact for collaborations and fundraising.13

Funding

CLTR has received substantial funding from multiple sources aligned with effective altruism and long-term risk reduction priorities:

  • Survival and Flourishing Fund: $2.8M+ (as of June 2022), general support
  • EA Infrastructure Fund: $100,000 (as of June 2022), general support
  • Private foundation: £1M+ (2022), impact investing, social responsibility, and grants for low/middle-income countries
  • Powoki Foundation: $100,000 (2022), safeguarding humanity from synthetic biology and advanced AI
  • Coefficient Giving: £4M (October 2024, 3-year commitment), supporting the mission to transform global resilience to extreme risks
  • Survival and Flourishing Fund: $527,000 + $38,000 (SFF-2025), via Founders Pledge
  • Survival and Flourishing Fund: $1,083,000 (SFF-2024), via Founders Pledge
  • Sentinel Bio: $400,000 (February 2025), AI-enabled biology risk analysis and safety frameworks

The October 2024 grant from Coefficient Giving included an additional matching fund commitment of up to £3 million, where Coefficient Giving pledged to match every £1 from other donors with an additional £1. CLTR characterized this funding as enabling a "crucial window of opportunity" for high-impact work while maintaining its independence and non-partisan approach.14

In August 2023, Founders Pledge published a profile recommending CLTR as a funding option, highlighting the organization's proven track record of UK policy influence. The profile noted that additional funding would enable CLTR to scale its team and enhance policy advice, research, and networks on AI, biosecurity, and risk management.15

Focus Areas and Work

Artificial Intelligence Risks

CLTR's AI work addresses risks from both misuse and unintended system behaviors. The organization focuses on potential harms including AI-enabled bioweapons development, disinformation campaigns, unintended behaviors in high-stakes domains like national security or critical infrastructure, and socioeconomic impacts such as power concentration.16

Recent AI-related outputs include:

  • "Securing a seat at the table: pathways for advancing the UK's global leadership in frontier AI governance" – A report examining UK AI governance strategy and international positioning17
  • "Preparing for AI security incidents" – Recommendations for emergency preparedness mechanisms via UK AI legislation18
  • "Strengthening Resilience to AI Risk" (with CETaS) – A briefing paper providing a framework to inform the UK Government's approach to understanding and responding to AI risks19

CLTR contributed to the Ministry of Defence's AI Strategy, with recommendations explicitly recognizing AI as an extreme risk and proposing safety measures. This represents one of the organization's documented policy wins.20

Biosecurity and Biological Risks

The biosecurity portfolio addresses threats from natural pandemics, laboratory leaks, bioweapons, and dual-use research. CLTR has developed expertise in synthetic biology risks and biological security strategy.21

Key biosecurity projects include:

  • "Gap consolidation of the mirror life evidence base" – A blog post and spreadsheet identifying safe knowledge gaps for mirror life preparedness, developed following a January 2025 UK government roundtable on mirror life risks attended by Dr. Paul-Enguerrand Fady. The framework classifies evidence gaps as either safe research or dual-use research of concern (DURC), aiming to guide policymakers without accelerating risks.22
  • "Cost-Benefit Analysis of Synthetic Nucleic Acid Screening for the UK" – Analysis recommending mandatory screening for sequences over 50 base pairs, funded by a $400,000 Sentinel Bio grant for AI-enabled biology risk analysis23
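The 50-base-pair cutoff in the screening recommendation is a simple length threshold on synthesis orders. The toy check below is purely illustrative (function and variable names are hypothetical; real screening regimes also match sequences against curated hazard databases, which is the substance of CLTR's analysis):

```python
# Purely illustrative: CLTR's recommendation is a regulatory threshold, not
# code. Real nucleic acid screening compares ordered sequences against hazard
# databases; this sketch only shows which orders a mandatory 50 bp cutoff
# would route into screening at all. All names here are hypothetical.

SCREENING_THRESHOLD_BP = 50

def requires_screening(sequence: str) -> bool:
    """True if a synthesis order exceeds the proposed 50 bp cutoff."""
    return len(sequence) > SCREENING_THRESHOLD_BP

orders = ["ATGC" * 10, "ATGC" * 13, "AT"]  # 40 bp, 52 bp, 2 bp
flagged = [seq for seq in orders if requires_screening(seq)]
print(len(flagged))  # only the 52 bp order crosses the threshold
```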

CLTR provided assistance with the Cabinet Office's Biosecurity Strategy Refresh, with expertise and networks incorporated into the UK's 2023 Biological Security Strategy. The organization also provided both written and oral evidence to the House of Lords Science and Technology Committee's Engineering Biology inquiry.24

CLTR has advocated for the success of the UK Microbial Forensics Consortium (UKMFC), formed out of the 2023 UK Biological Security Strategy, since the consortium's inception, and hired a literature review contractor for a project on microbial forensics, with the work scheduled between November 2025 and January 2026.25

Government Risk Management

CLTR's risk management work aims to improve how government institutions assess, prioritize, and respond to long-term and extreme risks. This includes both process improvements and horizon expansion for risk assessment frameworks.26

Major risk management contributions include:

  • "UK Resilience Action Plan: Ambitious Progress with Room to Go Further" – Assessment of UK government resilience planning27
  • "Ten Points to consider for the Resilience Strategy" – A 10-point manifesto published ahead of the UK Government's Resilience Strategy publication28
  • Contributions that extended the National Security Risk Assessment horizon from 2 to 5 years and introduced new exercises for longer-term chronic risks29
  • Response to the UK Government's National Resilience Framework, critiquing insufficient risk oversight separation, unclear vulnerability assessments and budgets, and the need for a Chief Risk Officer role30

CLTR provided oral evidence to the Joint Committee on National Security Strategy on the UK Resilience Framework and Integrated Review, and supported the Institute for Government's Managing Extreme Risks report.31

Policy Influence and Recent Work

The organization has demonstrated concrete policy impacts:

  • Ministry of Defence AI Strategy explicitly mentions AI as an extreme risk and proposes safety measures based on CLTR recommendations32
  • 2023 UK Biological Security Strategy incorporated CLTR expertise and networks33
  • National Security Risk Assessment time horizon extended from 2 to 5 years with new chronic risk exercises34

In 2024, CLTR informed UK regulation of frontier AI models, aided implementation of the 2023 Biological Security Strategy, and provided responses to the Covid-19 Inquiry on resilience and preparedness. The organization characterized itself as having expanded as a "trusted thought-partner" to UK and global governments during this period.35

CLTR has also published op-eds in Times Red Box and Financial Times, extending its influence beyond direct government advisory work.36

Approach and Methods

CLTR operates through three primary mechanisms: conducting research on extreme risks to generate policy recommendations and reports, building networks with policymakers and politicians to influence strategy and brief stakeholders, and helping institutions improve governance, emergency preparedness, and decision-making processes.37

The organization's location in Whitehall, London provides direct access to UK government institutions, enabling what it describes as providing "extra hands" to government alongside impartial expertise and actionable policy steps. CLTR emphasizes its independent, non-partisan status while maintaining this close relationship with government actors.38

The organization explicitly positions the UK as a policy test-bed and international convener allied to the US and EU, suggesting a strategic vision where UK policy innovations could influence broader international approaches to extreme risk governance.39

Connections to Effective Altruism and AI Safety Communities

CLTR draws explicit inspiration from effective altruism's framework of focusing on important, neglected, and tractable problems. The organization's funding base is heavily concentrated in EA-aligned sources, including the Survival and Flourishing Fund, EA Infrastructure Fund, and Coefficient Giving.40

The organization is profiled prominently on the EA Forum with a dedicated topic page and appears in EA organizational update posts alongside other EA-aligned organizations. Founders Pledge, an EA-affiliated organization, recommended CLTR as a funding option in August 2023.41

CLTR explicitly addresses existential risks through its research and policy work, drawing from academic disciplines studying global catastrophic and existential risks. Examples of extreme risks the organization considers include AI-engineered pandemics and major AI accidents in national infrastructure.42

The organization's focus on AI alignment and safety manifests through policy work on the UK's AI bill for emergency preparedness, frontier AI governance reports examining UK global leadership opportunities, and input to the Ministry of Defence AI Strategy addressing AI as an extreme risk with safety measures.43

Growth and Expansion Plans

CLTR has pursued systematic expansion from its two-person founding team. By 2022, the organization had made its first two hires beyond the founding team. By 2023, it had grown to nine people, with public plans to reach fifteen staff members by 2025 through the development of small policy units focused on AI, biosecurity, and risk management.44

Recent hiring initiatives (2024-2025) included positions for Director of AI Policy (£100k+ salary, reporting to Gabriella Overödder), Senior Adviser in Advocacy and Communications (with grant writing focus), and Operations Associate. In March 2025, CLTR was actively hiring a Policy and Operations Strategist for AI/biosecurity policies and operations.45

The organization has characterized additional funding as enabling enhanced policy advice, research, and networks across its three core focus areas, with the 2025 expansion target specifically tied to building capacity for greater policy influence.46

Community Reception and Reputation

CLTR appears to be well-regarded within effective altruism circles, with consistent funding from EA-aligned sources through 2025 and positive characterizations in EA Forum posts. The organization's ongoing funding from the Survival and Flourishing Fund through multiple grant cycles (SFF-2024, SFF-2025) signals sustained community endorsement.47

Founders Pledge's 2023 recommendation highlighted CLTR's proven UK policy influence as a key factor supporting the funding recommendation. The recommendation emphasized CLTR's ability to fill gaps in government policy on AI and biosecurity risks through expertise-driven recommendations, with the assessment that UK influence amplifies global resilience efforts.48

No dissenting opinions, debates, or criticisms of CLTR appear in available EA Forum or related community sources. The organization appears to fit within the EA long-term resilience ecosystem alongside similar organizations like the Center on Long-Term Risk (focused on AI s-risks, receiving $1.2M from SFF by 2022) and ALTER (receiving $423K from SFF by 2022).49

Limitations and Uncertainties

While CLTR has documented several policy wins, the available information does not include quantitative impact metrics such as estimates of risk reduction, lives saved, or other outcome measures. Effectiveness is inferred from policy adoptions, government endorsements, and funding recommendations rather than direct impact measurement.50

The organization's heavy concentration in UK government advisory work raises questions about scalability to other countries and the generalizability of its model. CLTR positions the UK as a test-bed for international policy innovation, but evidence of successful international influence is limited in available sources.51

The organization operates in a funding ecosystem heavily concentrated in effective altruism sources, with major grants from the Survival and Flourishing Fund and Coefficient Giving. This concentration could create dependencies or alignment pressures, though no specific conflicts of interest or controversies are documented in available sources.52

CLTR's 2023 report emphasized that despite progress, there remains a "crucial window" for AI and biosecurity policy work with ongoing needs. This suggests the organization views its work as incomplete and at a relatively early stage despite documented policy wins.53

The available sources provide limited insight into internal decision-making processes, governance structures beyond the Board, or how the organization prioritizes among competing policy opportunities. The small team size (nine people as of 2023) also raises questions about capacity constraints given the breadth of the organization's portfolio across AI, biosecurity, and risk management.54

Key Uncertainties

  • How does CLTR measure the counterfactual impact of its policy recommendations? Would similar government policy changes have occurred without CLTR's involvement?
  • What is the organization's theory of change for influencing governments beyond the UK? Does the UK test-bed model successfully transfer to other countries?
  • How does CLTR prioritize between AI risks, biosecurity, and risk management given limited team capacity?
  • What governance structures and decision-making processes guide CLTR's strategic choices?
  • How sustainable is the organization's model of close government collaboration while maintaining independence and non-partisan status?
  • What accountability mechanisms exist for evaluating whether CLTR's policy advice improves resilience outcomes?

Sources

Footnotes

  1. Claim reference cr-28d9 (data unavailable — rebuild with wiki-server access)

  2. Claim reference cr-49a1 (data unavailable — rebuild with wiki-server access)

  3. Claim reference cr-1b11 (data unavailable — rebuild with wiki-server access)

  4. Claim reference cr-bce7 (data unavailable — rebuild with wiki-server access)

  5. Claim reference cr-977e (data unavailable — rebuild with wiki-server access)

  6. Claim reference cr-8f6d (data unavailable — rebuild with wiki-server access)

  7. Claim reference cr-b173 (data unavailable — rebuild with wiki-server access)

  8. Claim reference cr-7155 (data unavailable — rebuild with wiki-server access)

  9. Claim reference cr-da17 (data unavailable — rebuild with wiki-server access)

  10. Claim reference cr-656e (data unavailable — rebuild with wiki-server access)

  11. Claim reference cr-d746 (data unavailable — rebuild with wiki-server access)

  12. Claim reference cr-7d26 (data unavailable — rebuild with wiki-server access)

  13. Claim reference cr-3c9b (data unavailable — rebuild with wiki-server access)

  14. Claim reference cr-faca (data unavailable — rebuild with wiki-server access)

  15. Claim reference cr-e659 (data unavailable — rebuild with wiki-server access)

  16. Claim reference cr-22e1 (data unavailable — rebuild with wiki-server access)

  17. Claim reference cr-a5ca (data unavailable — rebuild with wiki-server access)

  18. Claim reference cr-9422 (data unavailable — rebuild with wiki-server access)

  19. Claim reference cr-d480 (data unavailable — rebuild with wiki-server access)

  20. Claim reference cr-6af0 (data unavailable — rebuild with wiki-server access)

  21. Claim reference cr-6c31 (data unavailable — rebuild with wiki-server access)

  22. Claim reference cr-74fb (data unavailable — rebuild with wiki-server access)

  23. Claim reference cr-65d6 (data unavailable — rebuild with wiki-server access)

  24. Claim reference cr-5d9c (data unavailable — rebuild with wiki-server access)

  25. Claim reference cr-3481 (data unavailable — rebuild with wiki-server access)

  26. Claim reference cr-3b9d (data unavailable — rebuild with wiki-server access)

  27. Claim reference cr-11d8 (data unavailable — rebuild with wiki-server access)

  28. Claim reference cr-344c (data unavailable — rebuild with wiki-server access)

  29. Claim reference cr-beea (data unavailable — rebuild with wiki-server access)

  30. Claim reference cr-4d8a (data unavailable — rebuild with wiki-server access)

  31. Claim reference cr-f15f (data unavailable — rebuild with wiki-server access)

  32. Claim reference cr-8060 (data unavailable — rebuild with wiki-server access)

  33. Claim reference cr-7df9 (data unavailable — rebuild with wiki-server access)

  34. Claim reference cr-c865 (data unavailable — rebuild with wiki-server access)

  35. Claim reference cr-417f (data unavailable — rebuild with wiki-server access)

  36. Claim reference cr-b89d (data unavailable — rebuild with wiki-server access)

  37. Claim reference cr-9369 (data unavailable — rebuild with wiki-server access)

  38. Claim reference cr-db90 (data unavailable — rebuild with wiki-server access)

  39. Claim reference cr-6a6f (data unavailable — rebuild with wiki-server access)

  40. Centre for Long-Term Resilience - EA Forum

  41. Claim reference cr-a6d9 (data unavailable — rebuild with wiki-server access)

  42. Claim reference cr-c65f (data unavailable — rebuild with wiki-server access)

  43. Centre for Long-Term Resilience - Founders Pledge Research

  44. Claim reference cr-65e2 (data unavailable — rebuild with wiki-server access)

  45. Claim reference cr-f189 (data unavailable — rebuild with wiki-server access)

  46. Claim reference cr-306d (data unavailable — rebuild with wiki-server access)

  47. Claim reference cr-8e96 (data unavailable — rebuild with wiki-server access)

  48. Claim reference cr-349a (data unavailable — rebuild with wiki-server access)

  49. Claim reference cr-62a5 (data unavailable — rebuild with wiki-server access)

  50. Claim reference cr-a7f9 (data unavailable — rebuild with wiki-server access)

  51. Claim reference cr-fd4d (data unavailable — rebuild with wiki-server access)

  52. Claim reference cr-5f9d (data unavailable — rebuild with wiki-server access)

  53. Claim reference cr-b96a (data unavailable — rebuild with wiki-server access)

  54. Claim reference cr-34a2 (data unavailable — rebuild with wiki-server access)

References

Claims (1)
- "Cost-Benefit Analysis of Synthetic Nucleic Acid Screening for the UK" – Analysis recommending mandatory screening for sequences over 50 base pairs, funded by a \$400,000 Sentinel Bio grant for AI-enabled biology risk analysis
Claims (1)
- "Strengthening Resilience to AI Risk" (with CETaS) – A briefing paper providing a framework for UK AI risk response, highlighting over 10,000 reported AI safety incidents, 1.8 billion monthly ChatGPT visits, and \$200 billion forecasted AI investment by 2025
Unsupported0%Feb 22, 2026
This Briefing Paper from CETaS and the Centre for Long-Term Resilience aims to provide a clear framework to inform the UK Government’s approach to understanding and responding to the risks posed by Artificial Intelligence (AI).

The source does not contain any of the following information: 10,000 reported AI safety incidents, 1.8 billion monthly ChatGPT visits, and $200 billion forecasted AI investment by 2025.

Claims (7)
The organization explicitly draws inspiration from academic fields studying global catastrophic and existential risks, private-sector risk management practices, and ideas from effective altruism, particularly the framework of focusing on important, neglected, and tractable problems.
Accurate100%Feb 22, 2026
We draw our inspiration from a wide range of sources which include: The academic disciplines of global catastrophic risk and existential risk; Private sector risk management best practice; The policy areas of technology governance, civil service reform and global health security; Some of the ideas of effective altruism (primarily, its focus on trying to solve important, neglected and tractable problems) and social justice (in particular, concern for the victims of extreme risk events like Covid-19).
According to the organization's own characterization, modern government tends to prioritize the urgent over the important, creating a gap in systematic attention to extreme risks like pandemics and emerging technologies.
Accurate100%Feb 22, 2026
Prioritising the urgent over the important is a permanent feature of modern government.
The organization explicitly adopted a non-partisan approach, emphasizing integrity, people-first values, and targeted real-world impact as core principles.
Accurate100%Feb 22, 2026
We are independent and non-partisan, ensuring that Government receives genuinely impartial thinking.
+4 more claims
Claims (1)
- "Gap consolidation of the mirror life evidence base" – A blog post and spreadsheet identifying safe knowledge gaps for mirror life preparedness, developed following a January 2025 UK government roundtable on mirror life risks attended by Dr. Paul-Enguerrand Fady. The framework classifies evidence gaps as either safe research or dual-use research of concern (DURC), aiming to guide policymakers without accelerating risks.
Claims (1)
In late 2024 and early 2025, CLTR advocated for establishing a UK Microbial Forensics Consortium (UKMFC) under the 2023 UK Biological Security Strategy and hired a literature review contractor for a project on microbial forensics, bioinformatics, and biological hazards.
Minor issues80%Feb 22, 2026
The UK Microbial Forensics Consortium (UKMFC) was formed out of the 2023 UK Biological Security Strategy and aims to strengthen microbial forensics as a national capability.

The claim states that CLTR advocated for establishing the UKMFC, but the source says CLTR advocated for the success of the consortium since its inception. The claim states the literature review contractor was hired for a project on microbial forensics, bioinformatics, and biological hazards, but the source only mentions microbial forensics and doesn't explicitly mention bioinformatics or biological hazards. The claim states the events occurred in late 2024 and early 2025, but the source indicates the role will entail work between November 2025 and January 2026.

Claims (16)
Effectiveness is inferred from policy adoptions, government endorsements, and funding recommendations rather than direct impact measurement.
Unsupported0%Feb 22, 2026
CLTR has a proven track record of influencing UK policy on extreme risks.

The source does not discuss how the effectiveness of the Centre for Long-Term Resilience is measured or inferred.

Jess Whittlestone as Head of AI Policy and Gabriella Overödder as Operations Manager & Strategy Adviser, with plans to expand to six people by summer 2022. By 2023, the organization had grown to a nine-person team of experts from academia, government, non-profits, and the private sector, with expansion plans targeting fifteen staff members by 2025.
Minor issues80%Feb 22, 2026
CLTR is currently an nine-person team of experts, with small policy units. They plan to expand to fifteen by 2025, which will allow them to: Provide critical advice to relevant policymakers on AI, Biosecurity and Risk Management Generate research reports and input on AI, Biosecurity and Risk Management Continue developing a strong network with policymakers and politicians, to spot future opportunities and brief senior stakeholders on the critical importance of boosting resilience to extreme risks

The source does not mention Jess Whittlestone as Head of AI Policy or Gabriella Overödder as Operations Manager & Strategy Adviser. The source does not mention plans to expand to six people by summer 2022. The source does not mention the team being from academia, government, non-profits, and the private sector.

The organization's focus on AI alignment and safety manifests through policy work on the UK's AI bill for emergency preparedness, frontier AI governance reports examining UK global leadership opportunities, and input to the Ministry of Defence AI Strategy addressing AI as an extreme risk with safety measures.
Minor issues85%Feb 22, 2026
For example, on AI, CLTR have provided input on The Ministry of Defence’s AI Strategy , with many of CLTR’s recommendations adopted (as outlined here )

The claim mentions the UK's AI bill for emergency preparedness, but the source does not explicitly mention this. The source mentions input on The Ministry of Defence’s AI Strategy, which addresses AI as a potential extreme risk with safety measures, which is similar but not identical. The claim mentions frontier AI governance reports examining UK global leadership opportunities, but the source does not explicitly mention this. The source mentions the UK Prime Minister confirming that the UK would host a global AI safety summit autumn 2023 to evaluate and monitor AI's most significant risks, including those posed by frontier systems, and that he wanted to make the UK the home of global AI safety regulation.

+13 more claims
Claims (1)
- Response to the UK Government's National Resilience Framework, critiquing insufficient risk oversight separation, unclear vulnerability assessments and budgets, and the need for a Chief Risk Officer role
Claims (4)
Jess Whittlestone as Head of AI Policy and Gabriella Overödder as Operations Manager & Strategy Adviser, with plans to expand to six people by summer 2022. By 2023, the organization had grown to a nine-person team of experts from academia, government, non-profits, and the private sector, with expansion plans targeting fifteen staff members by 2025.
Inaccurate · 60% · Feb 22, 2026
Dr Jess Whittlestone appointed Head of AI Policy, and Gabriella Overödder appointed Operations Manager & Strategy Adviser. CLTR today is proud to announce its first two hires: Dr Jess Whittlestone joins in March as CLTR's Head of AI Policy.

Unsupported: the source does not mention the organization growing to a nine-person team by 2023. Unsupported: the source does not mention expansion plans targeting fifteen staff members by 2025.

Sophie Dannreuther holds the position of Director at CLTR and was involved in early team expansion announcements and mission alignment efforts.
Accurate · 100% · Feb 22, 2026
CLTR’s Director, Sophie Dannreuther, said: “I am really pleased to have Gabriella and Jess join CLTR. Their values, expertise and commitment to building great teams are exactly what we need as we pursue our mission to build the UK’s resilience to extreme risks.”
In her role as Head of AI Policy, Whittlestone provides expert advice to UK government departments including the Centre for Data Ethics and Innovation and the Office for AI, and has published widely on AI policy.
Accurate · 100% · Feb 22, 2026
Jess has published widely on subjects related to AI policy and provided expert advice to a number of government departments, including the Centre for Data Ethics and Innovation and Office for AI.
Claims (8)
Based in Whitehall, London, the organization operates at the intersection of policy research and direct government advisory work, positioning itself as a "trusted thought-partner" to UK and international governments.
Inaccurate · 30% · Feb 22, 2026
Our mission is to transform global resilience to extreme risks — both in the UK and internationally.

Unsupported: location in Whitehall, London. Unsupported: operates at the intersection of policy research and direct government advisory work. Misleading paraphrase: positioning itself as a 'trusted thought-partner' to UK and international governments.

CLTR operates through three primary mechanisms: conducting research on extreme risks to generate policy recommendations and reports, building networks with policymakers and politicians to influence strategy and brief stakeholders, and helping institutions improve governance, emergency preparedness, and decision-making processes.
Minor issues · 85% · Feb 22, 2026
We help governments and other institutions transform resilience to extreme risks by: Helping decision-makers and the wider public to understand extreme risks. Providing expert advice and red-teaming on policy decisions. Convening cross-sector conversations and workshops related to extreme risks. Developing and advocating for policy recommendations and effective risk management frameworks and systems.

The claim mentions 'conducting research on extreme risks to generate policy recommendations and reports,' which is similar to the source's description of 'Developing and advocating for policy recommendations and effective risk management frameworks and systems,' but the claim emphasizes research more strongly than the source. The claim mentions 'building networks with policymakers and politicians to influence strategy and brief stakeholders,' which is similar to the source's description of 'Convening cross-sector conversations and workshops related to extreme risks,' but the claim emphasizes building networks with policymakers and politicians more strongly than the source. The claim mentions 'helping institutions improve governance, emergency preparedness, and decision-making processes,' which is similar to the source's description of 'Helping decision-makers and the wider public to understand extreme risks' and 'Providing expert advice and red-teaming on policy decisions,' but the claim is more specific than the source.

CLTR has developed expertise in synthetic biology risks and biological security strategy.
Accurate · 100% · Feb 22, 2026
Biosecurity: Risks arising from natural pandemics, laboratory leaks, bioweapons, and ‘dual-use’ research — advancements with the potential for both beneficial and harmful applications.
Claims (2)
Mercer is a lawyer by training with prior experience as Head of External Affairs at the UK Department for International Development (DFID), policy adviser, and speechwriter in the Secretary of State's office.
He holds an MA in Global Governance and Diplomacy from the University of Oxford and serves as a Research Affiliate at Cambridge University's Centre for the Study of Existential Risk.
Claims (1)
The organization's ongoing funding from the Survival and Flourishing Fund through multiple grant cycles (SFF-2024, SFF-2025) signals sustained community endorsement.
Claims (2)
Polly Mason serves as Director of Strategic Partnerships and is the primary contact for collaborations and fundraising.
Accurate · 100% · Feb 22, 2026
If you would like to collaborate and support our work, please reach out to fundraising@longtermresilience.org or directly to our Director of Strategic Partnerships, Polly Mason, at polly@longtermresilience.org.
CLTR characterized this funding as enabling a "crucial window of opportunity" for high-impact work while maintaining its independence and non-partisan approach.
Accurate · 100% · Feb 22, 2026
We believe that there is now a crucial window of opportunity to shape the course of AI and biosecurity — to harness the extraordinary positive potential of new innovative technologies, whilst mitigating the extreme risks they pose. CLTR is independent and non-partisan, focused on our mission to transform global resilience to extreme risks.
13. Centre for Long-Term Resilience - EA Forum · forum.effectivealtruism.org · Blog post
Claims (3)
The profile noted that additional funding would enable CLTR to scale its team and enhance policy advice, research, and networks on AI, biosecurity, and risk management.
Unsupported · 0% · Feb 22, 2026
In August 2023 Founders Pledge published a profile on the Centre for Long-Term Resilience, recommending them as a funding option.

The source does not mention that additional funding would enable CLTR to scale its team and enhance policy advice, research, and networks on AI, biosecurity, and risk management.

The organization's funding base is heavily concentrated in EA-aligned sources, including the Survival and Flourishing Fund, EA Infrastructure Fund, and Coefficient Giving.
Minor issues · 85% · Feb 22, 2026
As of June 2022, the Centre for Long-Term Resilience has received over $2.8 million in funding from the Survival and Flourishing Fund, [1] [2] [3] and $100,000 from the EA Infrastructure Fund.

The source mentions funding from the Survival and Flourishing Fund and the EA Infrastructure Fund, but not Coefficient Giving. The claim states that the organization's funding base is 'heavily concentrated in EA-aligned sources,' which is a subjective assessment not directly supported by the source. The source only lists specific funding amounts from certain organizations.

This concentration could create dependencies or alignment pressures, though no specific conflicts of interest or controversies are documented in available sources.
14. Center on Long-Term Risk - EA Forum · forum.effectivealtruism.org · Blog post
Claims (1)
The organization appears to fit within the EA long-term resilience ecosystem alongside similar organizations like the Center on Long-Term Risk (focused on AI s-risks, receiving $1.2M from SFF by 2022) and ALTER (receiving $423K from SFF by 2022).
Accurate · 100% · Feb 22, 2026
As of June 2022, CLR has received over $1.2 million in funding from the Survival and Flourishing Fund.
Claims (2)
The organization characterized itself as having expanded as a "trusted thought-partner" to UK and global governments during this period.
Accurate · 100% · Feb 22, 2026
In 2024, CLTR continued to expand its influence as a trusted thought-partner to governments in the UK and around the globe.
In March 2025, CLTR was actively hiring a Policy and Operations Strategist for AI/biosecurity policies and operations.
16. EA Organization Updates: March 2025 · forum.effectivealtruism.org · Blog post
Claims (1)
Founders Pledge, an EA-affiliated organization, recommended CLTR as a funding option in August 2023.
Claims (1)
This suggests the organization views its work as incomplete and at a relatively early stage despite documented policy wins.
Accurate · 100% · Feb 22, 2026
Significant progress has been made in recent years, with the UK becoming a world leader in AI safety and announcing an ambitious biosecurity strategy in 2023. There remains, however, a huge amount of work to do and we look forward to sharing our next report with you for this present year.
Claims (1)
The organization focuses on potential harms including AI-enabled bioweapons development, disinformation campaigns, unintended behaviors in high-stakes domains like national security or critical infrastructure, and socioeconomic impacts such as power concentration.
Accurate · 100% · Feb 22, 2026
AI systems could pose a number of large-scale extreme risks to society. These include severe misuse in bioweapon development or disinformation, societal harms such as power concentration or threats to democracy, or key aspects of society being increasingly controlled by insufficiently trustworthy AI systems.
Citation verification: 29 verified, 2 flagged, 15 unchecked of 54 total

Structured Data

2 facts · View full profile →
Founded Date
2019

All Facts

Organization
Property · Value · As Of · Source
Founded Date · 2019
General
Property · Value · As Of · Source
Website · https://www.longtermresilience.org/

Related Pages

Top Related Pages

Analysis

Relative Longtermist Value Comparisons

Concepts

Biosecurity Overview · EA Shareholder Diversification from Anthropic

Organizations

Johns Hopkins Center for Health Security · ControlAI · Rethink Priorities · Swift Centre · US AI Safety Institute · Johns Hopkins University

Risks

Bioweapons Risk · AI-Enabled Biological Risks

Other

Dustin Moskovitz (AI Safety Funder) · Toby Ord · Holden Karnofsky

Historical

The MIRI Era