Centre for Long-Term Resilience
Quick Assessment
| Aspect | Assessment |
|---|---|
| Type | UK-based policy think tank |
| Founded | Circa 2021 |
| Focus | AI risks, biosecurity, government risk management |
| Team Size | 9 people (as of 2023), expanding to 15 by 2025 |
| Key Achievements | Influenced UK Ministry of Defence AI Strategy, 2023 UK Biological Security Strategy, extended National Security Risk Assessment horizon from 2 to 5 years |
| Funding | $2.8M+ from Survival and Flourishing Fund, £1M+ from a private foundation, £4M from Open Philanthropy (2024) |
| Approach | Direct policy advice to UK government, research reports, network building with policymakers |
Key Links
| Source | Link |
|---|---|
| Official Website | longtermresilience.org |
| EA Forum | forum.effectivealtruism.org |
Overview
The Centre for Long-Term Resilience (CLTR) is an independent UK-based think tank specializing in enhancing global resilience to extreme risks, with a primary focus on AI risks, biological risks (biosecurity), and government risk management. Based in Whitehall, London, the organization operates at the intersection of policy research and direct government advisory work, positioning itself as a “trusted thought-partner” to UK and international governments.1
CLTR’s mission centers on transforming how governments assess and respond to extreme risks—both in the UK and internationally—through what it describes as impartial expertise, actionable policy recommendations, and direct support to government institutions. The organization explicitly draws inspiration from academic fields studying global catastrophic and existential risks, private-sector risk management practices, and ideas from effective altruism, particularly the framework of focusing on important, neglected, and tractable problems.2
The organization has demonstrated policy influence within the UK government, with contributions to the Ministry of Defence’s AI Strategy report recognizing AI as an extreme risk, the refreshed UK Biosecurity Strategy, and extensions to the National Security Risk Assessment time horizon. CLTR operates as a non-profit registered as Alpenglow Group Limited (Company Registration Number: 12308171) in England and Wales.3
History and Founding
CLTR was founded around 2021 in response to what the organization identified as policymakers’ structural focus on short-term political issues rather than long-term policy challenges. According to the organization’s own characterization, modern government tends to prioritize the urgent over the important, creating a gap in systematic attention to extreme risks such as pandemics and emerging technologies.4
The founding vision centered on creating “a safe and flourishing world with high resilience to extreme risks,” with the UK positioned as both a policy test-bed and an international convener allied to the US and EU. The organization explicitly adopted a non-partisan approach, emphasizing integrity, people-first values, and targeted real-world impact as core principles.5
The founding team consisted of two people who grew the organization through strategic hiring. In 2022, CLTR announced its first hires beyond the founding team: Dr. Jess Whittlestone as Head of AI Policy and Gabriella Overödder as Operations Manager & Strategy Adviser, with plans to expand to six people by summer 2022.6 By 2023, the organization had grown to a nine-person team of experts from academia, government, non-profits, and the private sector, with expansion plans targeting fifteen staff members by 2025.7
Available sources do not name the full two-person founding team, though Angus Mercer is identified as Founder and Chief Executive, leading the organization and working with the Board on strategic direction. Mercer is a lawyer by training with prior experience as Head of External Affairs at the UK Department for International Development (DFID), and as a policy adviser and speechwriter in the Secretary of State’s office.8
Leadership and Key People
Angus Mercer serves as Founder and Chief Executive of CLTR. His background includes roles as a policy adviser and former Head of External Affairs at DFID, as well as time on the Senior Management Team at a London public affairs consultancy working with clients including the Bill & Melinda Gates Foundation, Carnegie Corporation, and Boston Consulting Group’s Centre for Public Impact. He holds an MA in Global Governance and Diplomacy from the University of Oxford and serves as a Research Affiliate at Cambridge University’s Centre for the Study of Existential Risk.9
Sophie Dannreuther holds the position of Director at CLTR and was involved in early team expansion announcements and mission alignment efforts.10
Dr. Jess Whittlestone joined as Head of AI Policy in 2022 as the organization’s first hire beyond the founding team. She holds a PhD in Behavioural Science from the University of Warwick and a first-class degree in Mathematics and Philosophy from Oxford University. In her role, she provides expert advice to UK government departments including the Centre for Data Ethics and Innovation and the Office for AI, and has published widely on AI policy.11
Gabriella Overödder serves as Operations Manager & Strategy Adviser and was also among the first hires beyond the founding team in 2022. She brings expertise in operations, strategy, team-building, and policy, and managed the organization’s transition to a larger team structure.12
Polly Mason serves as Director of Strategic Partnerships and is the primary contact for collaborations and fundraising.13
Funding
CLTR has received substantial funding from multiple sources aligned with effective altruism and long-term risk reduction priorities:
| Source | Amount | Date/Period | Purpose |
|---|---|---|---|
| Survival and Flourishing Fund | $2.8M+ | As of June 2022 | General support |
| EA Infrastructure Fund | $100,000 | As of June 2022 | General support |
| Private foundation | £1M+ | 2022 | Impact investing, social responsibility, grants for low/middle-income countries |
| Powoki Foundation | $100,000 | 2022 | Safeguarding humanity from synthetic biology and advanced AI |
| Open Philanthropy | £4M | October 2024 (3-year commitment) | Supporting mission to transform global resilience to extreme risks |
| Survival and Flourishing Fund | $527,000 + $38,000 | SFF-2025 | Via Founders Pledge |
| Survival and Flourishing Fund | $1,083,000 | SFF-2024 | Via Founders Pledge |
| Sentinel Bio | $400,000 | February 2025 | AI-enabled biology risk analysis and safety frameworks |
The October 2024 grant from Open Philanthropy included an additional matching-fund commitment of up to £3 million, under which Open Philanthropy pledged to match every £1 from other donors with an additional £1 (illustrated in the sketch below). CLTR characterized this funding as enabling a “crucial window of opportunity” for high-impact work while maintaining its independence and non-partisan approach.14
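As a minimal sketch of how that 1:1 matching arrangement works in practice, the snippet below models the cap and the match; the function name and example figures are hypothetical working assumptions, not CLTR’s or Open Philanthropy’s actual accounting:

```python
# Hypothetical sketch of a 1:1 matching pledge capped at £3M:
# every £1 from other donors is matched with an additional £1,
# up to the cap. Names and example figures are illustrative only.

MATCH_CAP_GBP = 3_000_000  # maximum matching funds pledged

def total_with_match(other_donations_gbp: int) -> int:
    """Return donor funds plus the 1:1 match, with the match capped."""
    match = min(other_donations_gbp, MATCH_CAP_GBP)
    return other_donations_gbp + match

# £2M raised from other donors -> £4M total (fully matched);
# £5M raised -> £8M total (match capped at £3M).
assert total_with_match(2_000_000) == 4_000_000
assert total_with_match(5_000_000) == 8_000_000
```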
In August 2023, Founders Pledge published a profile recommending CLTR as a funding option, highlighting the organization’s proven track record of UK policy influence. The profile noted that additional funding would enable CLTR to scale its team and enhance policy advice, research, and networks on AI, biosecurity, and risk management.15
Focus Areas and Work
Artificial Intelligence Risks
CLTR’s AI work addresses risks from both misuse and unintended system behaviors. The organization focuses on potential harms including AI-enabled bioweapons development, disinformation campaigns, unintended behaviors in high-stakes domains like national security or critical infrastructure, and socioeconomic impacts such as power concentration.16
Recent AI-related outputs include:
- “Securing a seat at the table: pathways for advancing the UK’s global leadership in frontier AI governance” – A report examining UK AI governance strategy and international positioning17
- “Preparing for AI security incidents” – Recommendations for emergency preparedness mechanisms via UK AI legislation18
- “Strengthening Resilience to AI Risk” (with CETaS) – A briefing paper providing a framework for the UK’s response to AI risk, highlighting over 10,000 reported AI safety incidents, 1.8 billion monthly ChatGPT visits, and $200 billion in forecast AI investment by 202519
CLTR contributed to the Ministry of Defence’s AI Strategy, with recommendations explicitly recognizing AI as an extreme risk and proposing safety measures. This represents one of the organization’s documented policy wins.20
Biosecurity and Biological Risks
The biosecurity portfolio addresses threats from natural pandemics, laboratory leaks, bioweapons, and dual-use research. CLTR has developed expertise in synthetic biology risks and biological security strategy.21
Key biosecurity projects include:
- “Gap consolidation of the mirror life evidence base” – A blog post and spreadsheet identifying which knowledge gaps in mirror life preparedness can be researched safely, developed following a January 2025 UK government roundtable on mirror life risks attended by Dr. Paul-Enguerrand Fady. The framework classifies evidence gaps as either safe research or dual-use research of concern (DURC), aiming to guide policymakers without accelerating risks.22
- “Cost-Benefit Analysis of Synthetic Nucleic Acid Screening for the UK” – Analysis recommending mandatory screening for sequences over 50 base pairs (see the sketch after this list), funded by a $400,000 Sentinel Bio grant for AI-enabled biology risk analysis23
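A minimal sketch of the over-50-base-pair threshold described above; the interface is a hypothetical illustration, since the actual recommendation concerns regulatory policy for synthesis providers, not software:

```python
# Hypothetical illustration of the recommended screening threshold:
# synthetic nucleic acid orders longer than 50 base pairs would be
# subject to mandatory screening. Names here are assumptions.

SCREENING_THRESHOLD_BP = 50  # threshold in base pairs, per the recommendation

def requires_screening(sequence: str) -> bool:
    """Return True if an ordered sequence exceeds the screening threshold."""
    return len(sequence) > SCREENING_THRESHOLD_BP

# A 60-bp order is flagged for screening; a 40-bp order is not.
assert requires_screening("ACGT" * 15)       # 60 bp
assert not requires_screening("ACGT" * 10)   # 40 bp
```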
CLTR assisted the Cabinet Office with its Biosecurity Strategy Refresh, and its expertise and networks were incorporated into the UK’s 2023 Biological Security Strategy. The organization also provided both written and oral evidence to the House of Lords Science and Technology Committee’s Engineering Biology inquiry.24
In late 2024 and early 2025, CLTR advocated for establishing a UK Microbial Forensics Consortium (UKMFC) under the 2023 UK Biological Security Strategy and hired a literature review contractor for a project on microbial forensics, bioinformatics, and biological hazards.25
Government Risk Management
CLTR’s risk management work aims to improve how government institutions assess, prioritize, and respond to long-term and extreme risks. This includes both process improvements and horizon expansion for risk assessment frameworks.26
Major risk management contributions include:
- “UK Resilience Action Plan: Ambitious Progress with Room to Go Further” – Assessment of UK government resilience planning27
- “Ten Points to consider for the Resilience Strategy” – A 10-point manifesto published ahead of the UK Government’s Resilience Strategy28
- Contributions that extended the National Security Risk Assessment horizon from 2 to 5 years and introduced new exercises for longer-term chronic risks29
- Response to the UK Government’s National Resilience Framework, critiquing insufficient separation of risk oversight and unclear vulnerability assessments and budgets, and calling for a Chief Risk Officer role30
CLTR provided oral evidence to the Joint Committee on National Security Strategy on the UK Resilience Framework and Integrated Review, and supported the Institute for Government’s Managing Extreme Risks report.31
Policy Influence and Recent Work
The organization has demonstrated concrete policy impacts:
- Ministry of Defence AI Strategy explicitly mentions AI as an extreme risk and proposes safety measures based on CLTR recommendations32
- 2023 UK Biological Security Strategy incorporated CLTR expertise and networks33
- National Security Risk Assessment time horizon extended from 2 to 5 years with new chronic risk exercises34
In 2024, CLTR informed UK regulation of frontier AI models, aided implementation of the 2023 Biological Security Strategy, and provided responses to the Covid-19 Inquiry on resilience and preparedness. The organization characterized itself as having expanded as a “trusted thought-partner” to UK and global governments during this period.35
CLTR has also published op-eds in Times Red Box and Financial Times, extending its influence beyond direct government advisory work.36
Approach and Methods
CLTR operates through three primary mechanisms: conducting research on extreme risks to generate policy recommendations and reports, building networks with policymakers and politicians to influence strategy and brief stakeholders, and helping institutions improve governance, emergency preparedness, and decision-making processes.37
The organization’s location in Whitehall, London provides direct access to UK government institutions, enabling what it describes as providing “extra hands” to government alongside impartial expertise and actionable policy steps. CLTR emphasizes its independent, non-partisan status while maintaining this close relationship with government actors.38
The organization explicitly positions the UK as a policy test-bed and international convener allied to the US and EU, suggesting a strategic vision where UK policy innovations could influence broader international approaches to extreme risk governance.39
Connections to Effective Altruism and AI Safety Communities
CLTR draws explicit inspiration from effective altruism’s framework of focusing on important, neglected, and tractable problems. The organization’s funding base is heavily concentrated in EA-aligned sources, including the Survival and Flourishing Fund, EA Infrastructure Fund, and Open Philanthropy.40
The organization is profiled prominently on the EA Forum with a dedicated topic page and appears in EA organizational update posts alongside other EA-aligned organizations. Founders Pledge, an EA-affiliated organization, recommended CLTR as a funding option in August 2023.41
CLTR explicitly addresses existential risks through its research and policy work, drawing from academic disciplines studying global catastrophic and existential risks. Examples of extreme risks the organization considers include AI-engineered pandemics and major AI accidents in national infrastructure.42
The organization’s focus on AI alignment and safety manifests through policy work on the UK’s AI bill for emergency preparedness, frontier AI governance reports examining UK global leadership opportunities, and input to the Ministry of Defence AI Strategy addressing AI as an extreme risk with safety measures.43
Growth and Expansion Plans
CLTR has pursued systematic expansion from its two-person founding team. By 2022, the organization had made its first two hires beyond the founding team. By 2023, it had grown to nine people, with public plans to reach fifteen staff members by 2025 through the development of small policy units focused on AI, biosecurity, and risk management.44
Recent hiring initiatives (2024-2025) included positions for Director of AI Policy (£100k+ salary, reporting to Gabriella Overödder), Senior Adviser in Advocacy and Communications (with grant writing focus), and Operations Associate. In March 2025, CLTR was actively hiring a Policy and Operations Strategist for AI/biosecurity policies and operations.45
The organization has characterized additional funding as enabling enhanced policy advice, research, and networks across its three core focus areas, with the 2025 expansion target specifically tied to building capacity for greater policy influence.46
Community Reception and Reputation
CLTR appears to be well-regarded within effective altruism circles, with consistent funding from EA-aligned sources through 2025 and positive characterizations in EA Forum posts. The organization’s ongoing funding from the Survival and Flourishing Fund through multiple grant cycles (SFF-2024, SFF-2025) signals sustained community endorsement.47
Founders Pledge’s 2023 recommendation highlighted CLTR’s proven UK policy influence as a key factor supporting the funding recommendation. The recommendation emphasized CLTR’s ability to fill gaps in government policy on AI and biosecurity risks through expertise-driven recommendations, with the assessment that UK influence amplifies global resilience efforts.48
No dissenting opinions, debates, or criticisms of CLTR appear in available EA Forum or related community sources. The organization appears to fit within the EA long-term resilience ecosystem alongside similar organizations like the Center on Long-Term Risk (focused on AI s-risks, receiving $1.2M from SFF by 2022) and ALTER (receiving $423K from SFF by 2022).49
Limitations and Uncertainties
While CLTR has documented several policy wins, the available information does not include quantitative impact metrics such as estimates of risk reduction, lives saved, or other outcome measures. Effectiveness is inferred from policy adoptions, government endorsements, and funding recommendations rather than direct impact measurement.50
The organization’s heavy concentration in UK government advisory work raises questions about scalability to other countries and the generalizability of its model. CLTR positions the UK as a test-bed for international policy innovation, but evidence of successful international influence is limited in available sources.51
The organization operates in a funding ecosystem heavily concentrated in effective altruism sources, with major grants from the Survival and Flourishing Fund and Open Philanthropy. This concentration could create dependencies or alignment pressures, though no specific conflicts of interest or controversies are documented in available sources.52
CLTR’s 2023 report emphasized that despite progress, there remains a “crucial window” for AI and biosecurity policy work with ongoing needs. This suggests the organization views its work as incomplete and at a relatively early stage despite documented policy wins.53
The available sources provide limited insight into internal decision-making processes, governance structures beyond the Board, or how the organization prioritizes among competing policy opportunities. The small team size (nine people as of 2023) also raises questions about capacity constraints given the breadth of the organization’s portfolio across AI, biosecurity, and risk management.54
Key Uncertainties
- How does CLTR measure the counterfactual impact of its policy recommendations? Would similar government policy changes have occurred without CLTR’s involvement?
- What is the organization’s theory of change for influencing governments beyond the UK? Does the UK test-bed model successfully transfer to other countries?
- How does CLTR prioritize between AI risks, biosecurity, and risk management given limited team capacity?
- What governance structures and decision-making processes guide CLTR’s strategic choices?
- How sustainable is the organization’s model of close government collaboration while maintaining independence and non-partisan status?
- What accountability mechanisms exist for evaluating whether CLTR’s policy advice improves resilience outcomes?
Sources
Footnotes
1. Centre for Long-Term Resilience - Founders Pledge Research
2. Centre for Long-Term Resilience - Founders Pledge Research
3. Centre for Long-Term Resilience - Founders Pledge Research
4. Centre for Long-Term Resilience - Founders Pledge Research
5. Literature review contractor - Centre for Long-Term Resilience
6. Centre for Long-Term Resilience - Founders Pledge Research
7. Response to the UK Government’s National Resilience Framework
8. Centre for Long-Term Resilience - Founders Pledge Research
9. Centre for Long-Term Resilience - Founders Pledge Research
10. Centre for Long-Term Resilience - Founders Pledge Research
11. Centre for Long-Term Resilience - Founders Pledge Research
12. Centre for Long-Term Resilience - Founders Pledge Research
13. Centre for Long-Term Resilience - Founders Pledge Research
14. Centre for Long-Term Resilience - Founders Pledge Research
15. Centre for Long-Term Resilience - Founders Pledge Research
16. Centre for Long-Term Resilience - Founders Pledge Research
17. Centre for Long-Term Resilience - Founders Pledge Research
18. Centre for Long-Term Resilience - Founders Pledge Research