Convergence Analysis
Quick Assessment
| Aspect | Details |
|---|---|
| Type | Research institute and think tank |
| Founded | 2017 (formal launch 2024) |
| Focus Areas | AI scenario planning, governance strategies, transformative AI timelines, existential risk mitigation |
| Key People | David Kristoffersson (Co-founder), Justin Shovelain (Co-founder, Chief Strategist) |
| Team Size | 9-10 interdisciplinary researchers (as of 2024) |
| Notable Work | “Pathways to short TAI timelines” (2025), AI scenario planning research, governance recommendations |
Key Links
| Source | Link |
|---|---|
| Official Website | convergenceanalysis.org |
| EA Forum | Announcement Post |
| LessWrong | Announcement Post |
Overview
Convergence Analysis is a research institute dedicated to reducing existential risk from transformative AI through scenario planning, governance strategy development, and policy research. The organization emerged from a research collaboration between David Kristoffersson and Justin Shovelain beginning in 2017, formally launching as an institute in 2024 with a team of approximately 10 researchers and professionals.[1][2]
The institute operates on the premise that technical AI alignment work, while crucial, is insufficient to prevent catastrophic outcomes from advanced AI systems. According to their theory of change, roughly 75% of AI safety researchers focus on technical alignment (ensuring AI systems follow human goals and preferences), but this approach does not address risks from misuse by malicious actors or broader governance failures.[3] Convergence Analysis aims to fill gaps in scenario planning, governance strategy evaluation, and awareness-raising among policymakers and the AI safety community.
The organization’s research focuses on identifying likely and neglected AI development trajectories, particularly scenarios involving rapid progress to transformative AI (TAI) within 15 years or less. Their interdisciplinary approach draws on philosophy, computer science, mathematics, sociology, cognitive science, and psychology to develop comprehensive frameworks for understanding and mitigating AI-related existential risks.[1][4]
History and Development
Early Collaboration (2017-2021)
Convergence Analysis originated as a research partnership between David Kristoffersson and Justin Shovelain in 2017, focused on existential risk strategy.[1][2] During this period, the founders engaged diverse collaborators to build foundational research on reducing existential risk from AI. They published their findings on the Effective Altruism Forum and LessWrong, establishing a presence within the AI safety community.[2]
From 2017 to 2021, the collaboration advised organizations such as Lionheart Ventures while steadily producing research output. This phase established the intellectual groundwork for what would eventually become a formal research institution.[2]
Institutional Development (2021-2023)
Between 2021 and 2023, Kristoffersson and Shovelain laid the foundation for a more structured research institution, expanding their team and developing organizational infrastructure.[1][2] This period focused on transitioning from an informal collaboration to a sustainable institutional model capable of supporting sustained research programs.
Formal Launch (2024)
The organization officially launched as Convergence Analysis in 2024 with a revamped vision and a team of 9-10 researchers and professionals.[1][2] The formal announcement included plans for two major technical reports on AI scenarios, with particular emphasis on short timelines to transformative AI (less than 15 years).[2]
Throughout 2024, the institute released multiple research outputs, including “Scenario Planning for AI X-risk” (February), “Transformative AI and Scenario Planning” (March), “Investigating the Role of Agency in AI X-risk” (April), “AI Clarity: An Initial Research Agenda” (April), “AI Governance Needs a Theory of Victory” (June), and “AI Emergency Preparedness” (July).[5]
Recent Developments (2025)
In February 2025, Convergence Analysis published “Pathways to short TAI timelines” by Zershaaneh Qureshi, analyzing compute scaling, recursive improvement, and seven distinct scenarios for achieving transformative AI by 2035.[6] The report concluded that short timelines to TAI are plausible despite potential bottlenecks, representing a significant contribution to the ongoing debate about AI development trajectories.
The institute also released its 2024 Impact Review in March 2025, summarizing the organization’s first full year of operation.[5] In April 2025, the team published recommendations for a US AI Action Plan, authored by David Kristoffersson and Gwyn Glasser.[7]
Core Research Programs
Scenario Planning
Convergence Analysis’s scenario planning research addresses what they characterize as a critical gap in AI safety work: the lack of concrete, well-developed scenarios for how transformative AI might emerge and create existential risks. According to their theory of change, high-level policy discussions often lack the specificity needed for effective coordination among policymakers, CEOs, and researchers.[4]
The organization’s scenario work identifies existential hazard pathways and evaluates governance strategies across different development trajectories. Their research emphasizes scenarios in which AI systems scale to transformative capabilities in less than 15 years, examining factors such as compute scaling, algorithmic improvements, and recursive self-improvement.[2][6]
In their 2025 report on pathways to short TAI timelines, researcher Zershaaneh Qureshi analyzed seven distinct scenarios involving different combinations of technological drivers. The analysis considered potential bottlenecks while concluding that multiple plausible pathways exist for reaching transformative AI by the mid-2030s.[6]
Governance Strategy Development
The institute’s governance work focuses on identifying and evaluating strategies to reduce existential risk across different AI development scenarios. This includes research on regulatory frameworks, international coordination, safety standards, and institutional arrangements.[4]
In 2024, Convergence Analysis examined the EU AI Act’s approach to high-risk AI systems, which requires third-party conformity assessments and imposes fines starting at $20 million for non-compliance. The Act classifies AI models by risk level and mandates requirements for risk management, human oversight, accuracy, robustness, and cybersecurity.[8]
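A minimal sketch of that tiered structure is below; the tier names and obligations are simplified paraphrases for illustration, not the Act’s legal taxonomy or penalty schedule.

```python
# Illustrative lookup loosely modeled on the EU AI Act's risk-based structure;
# tier names and obligations are simplified paraphrases, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "third-party conformity assessment, risk management, human oversight",
    "limited": "transparency obligations",
    "minimal": "no additional obligations",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligations associated with a risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(obligations_for("high"))
```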
The organization has also worked on emergency preparedness frameworks and theories of victory for AI governance, aiming to clarify how policy interventions might successfully reduce existential risks.[5]
AI Awareness and Community Building
Beyond technical research, Convergence Analysis seeks to raise awareness about AI risks among policymakers, the AI safety community, and the general public.[1] This includes building community consensus through working groups, seminars, and conferences focused on AI scenario analysis and governance.[4]
The institute offers a Fellowship Program, described as a 12-week intensive program focused on AI economic transition and policy interventions, though specific details about cohort sizes or selection processes are not publicly documented.[9]
Team and Leadership
Key Personnel
David Kristoffersson serves as co-founder of Convergence Analysis and led the early existential risk strategy research that formed the organization’s foundation.[1][2] He has co-authored recent publications, including the 2025 US AI Action Plan recommendations and the organization’s 2024 Impact Review.[7][5]
Justin Shovelain is the organization’s Chief Strategist and co-founder, having collaborated with Kristoffersson since the initial 2017 research partnership.[10] Prior to focusing full-time on Convergence Analysis, Shovelain served as an AI safety advisor to Lionheart Ventures.[10]
Zershaaneh Qureshi authored the institute’s February 2025 report on pathways to short TAI timelines, representing a significant contribution to the organization’s scenario planning research.[6]
Team Structure
As of 2024, Convergence Analysis employs approximately 9-10 researchers and professionals with interdisciplinary expertise spanning technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics.[4] The organization characterizes itself as bringing together diverse perspectives to address the multifaceted challenges of AI existential risk.
Research Themes and Findings
Technical Alignment Limitations
A core argument in Convergence Analysis’s work is that technical alignment, while necessary, is insufficient for preventing catastrophic AI outcomes. Their theory of change document argues that even successfully aligned AI systems remain vulnerable to misuse by malicious actors and cannot address risks arising from competitive dynamics between less-safety-conscious developers.[3][11]
The organization cites research indicating that AI agents’ long-term task performance has been growing exponentially, doubling approximately every seven months, which they interpret as accelerating progress toward artificial general intelligence.[12] This rapid capability growth, according to their analysis, creates governance challenges that purely technical safety approaches cannot address.
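To make the cited growth rate concrete, a back-of-the-envelope extrapolation is sketched below; the one-hour starting horizon and the 160-hour target are illustrative assumptions, not figures from the cited research.

```python
from math import log2

# Illustrative assumption: agents reliably complete tasks that take a human
# expert about 1 hour, and this horizon doubles every 7 months (the trend
# cited above). The starting point and target are placeholders.
doubling_time_months = 7
start_horizon_hours = 1.0

def horizon_after(months: float) -> float:
    """Task-horizon length in hours after `months` of exponential doubling."""
    return start_horizon_hours * 2 ** (months / doubling_time_months)

# Months until the horizon reaches ~160 hours (roughly one working month).
target_hours = 160
months_needed = doubling_time_months * log2(target_hours / start_horizon_hours)

print(f"Horizon after 5 years: {horizon_after(60):.0f} hours")
print(f"~{target_hours}-hour horizon reached after about {months_needed:.0f} months")
```

At a seven-month doubling time the horizon grows by roughly an order of magnitude every two years, which is the kind of trajectory the organization reads as pointing toward general capability.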
Alignment Techniques and Failure Modes
Convergence Analysis researchers have examined the relationship between different alignment approaches and potential failure modes. Work by Leonard Dung and Florian Mai analyzed seven alignment techniques and seven failure modes from a risk perspective, assessing correlations to evaluate defense-in-depth strategies.[13] Their analysis raises concerns that multiple alignment techniques may share common failure modes, potentially reducing the redundancy that defense-in-depth approaches are intended to provide.
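A toy calculation can illustrate why correlated failures matter for defense-in-depth; the three-layer stack and failure probabilities below are illustrative assumptions, not numbers from Dung and Mai’s analysis.

```python
from math import prod

# Toy model: three alignment techniques, each failing to catch a given
# failure mode with probability 0.2 (illustrative numbers only).
p_fail = [0.2, 0.2, 0.2]

def residual_risk(rho: float) -> float:
    """Residual risk of the layered stack under a simple correlation mixture:
    with probability rho the layers share a failure mode and fail together;
    otherwise they fail independently."""
    return rho * max(p_fail) + (1 - rho) * prod(p_fail)

for rho in (0.0, 0.3, 0.7):
    print(f"failure correlation {rho:.1f}: residual risk {residual_risk(rho):.3f}")
```

Even moderate correlation dominates the independent-failure term, which is precisely the redundancy-erosion concern raised above.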
The organization’s research connects to broader concerns in the AI safety community about deception, power-seeking, reward hacking, and the development of misaligned goals in advanced AI systems. These failure modes are characterized as particularly concerning because they may emerge as instrumental goals: intermediate objectives that help an AI system achieve almost any final goal.[11][12]
Scenario Analysis Methodology
Convergence Analysis’s approach to scenario planning involves identifying key drivers of AI capability development, mapping potential bottlenecks, and analyzing governance interventions across different trajectories. Their methodology aims to bridge what they characterize as a gap between high-level discussions of AI risk and concrete, actionable research questions.[4]
The organization’s work on short TAI timelines examines factors including compute scaling (following trends like those documented in analyses of GPU performance improvements), algorithmic efficiency gains, training paradigm innovations, and potential recursive self-improvement dynamics.[6]
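As a rough sketch of how these drivers compound, effective training compute can be modeled as the product of hardware scaling and algorithmic efficiency gains; the growth rates below are placeholder assumptions, not estimates from the report.

```python
# Sketch: effective compute = hardware compute growth x algorithmic efficiency
# gains. Both yearly multipliers are placeholder assumptions for illustration.
compute_growth_per_year = 4.0     # assumed yearly multiplier in physical compute
algo_efficiency_per_year = 2.0    # assumed yearly multiplier from algorithms

def effective_compute_multiplier(years: float) -> float:
    """Combined multiplier on effective training compute after `years`."""
    return (compute_growth_per_year * algo_efficiency_per_year) ** years

for years in (5, 10, 15):
    print(f"{years:>2} years: ~{effective_compute_multiplier(years):.1e}x effective compute")
```

Under these placeholder rates, effective compute grows by roughly nine orders of magnitude in a decade, illustrating why compounding drivers are central to short-timeline scenarios.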
Relationship to Broader AI Safety Ecosystem
Technical Alignment Community
While Convergence Analysis emphasizes governance and scenario planning over technical research, the organization positions itself as complementary to, rather than in opposition to, technical alignment work. Their theory of change acknowledges that technical alignment is “hard” and necessary, while arguing that it must be accompanied by robust governance frameworks.[3][11]
The organization’s perspective aligns with research from organizations like METR (formerly ARC Evals), which has developed evaluations for dangerous capabilities including “autonomous replication and adaptation”, the ability of AI systems to acquire resources, self-copy, and adapt to new environments.[8][12] These evaluation frameworks represent attempts to operationalize risk assessments that can inform both technical safety work and governance decisions.
Effective Altruism and Rationalist Communities
Convergence Analysis maintains strong connections to the Effective Altruism and rationalist communities through its founders’ backgrounds and its publication strategy. The organization’s major announcements and research outputs are regularly posted to the EA Forum and cross-posted to LessWrong, where they receive engagement from community members focused on existential risk reduction.[2][4][5]
Community reception on these platforms has been generally positive, with discussions emphasizing the organization’s contributions to AI scenario analysis and its efforts to fill perceived gaps in governance research. The organization’s work is viewed as valuable for clarifying AI development trajectories and identifying policy interventions, though direct critical commentary in public forums remains limited.[2][4][5]
Policy and Regulatory Engagement
Convergence Analysis has produced analyses of existing and proposed AI regulatory frameworks, including the EU AI Act and various proposals for mandatory pre-deployment safety evaluations.[8] The organization’s 2024 review of the AI regulatory landscape and its 2025 recommendations for US AI policy demonstrate engagement with concrete policy development processes.[7]
The institute has also examined governance proposals such as AI chip registration policies, evaluating their potential effectiveness for reducing risks across different scenarios.[7]
Criticisms and Limitations
Scope of Technical Alignment Claims
While Convergence Analysis argues that technical alignment alone is insufficient to prevent AI catastrophes, its claim that “roughly 75%” of AI safety researchers focus on technical alignment may oversimplify the field’s actual research distribution.[3] Many researchers working on technical alignment also engage with governance questions, and the boundaries between technical and governance work are often blurred in practice.
Scenario Planning Challenges
Scenario planning for transformative AI faces inherent difficulties given deep uncertainty about future technological developments. Critics of scenario-based approaches to long-term AI governance note that scenarios may systematically miss important considerations, anchor discussions around particular narratives, or create false confidence in predictions about highly uncertain futures.
Convergence Analysis’s focus on “short timelines” (less than 15 years to transformative AI) represents a particular stance in ongoing debates about AI development speed. While the organization provides arguments for why such timelines are plausible, forecasting transformative AI development remains highly contested, with expert estimates varying widely.
Defense-in-Depth Concerns
Research associated with Convergence Analysis has raised concerns that multiple alignment techniques may fail in correlated ways, reducing the effectiveness of layered safety approaches.[13] This represents an important challenge for defense-in-depth strategies, though the extent of failure mode correlation across different techniques remains an open empirical question.
Evaluation and Assessment Challenges
The organization advocates for improved pre-deployment safety and alignment assessments, but acknowledges that evaluation methodologies remain underdeveloped.[8] More powerful AI systems may amplify potential harms while simultaneously making their capabilities harder to assess comprehensively before deployment. This creates a tension between the need for robust evaluations and the practical limitations of current assessment approaches.
Key Uncertainties
Several fundamental uncertainties shape Convergence Analysis’s research agenda and the broader AI governance landscape:
Timelines to Transformative AI: Despite the organization’s focus on short-timeline scenarios, substantial disagreement persists among AI researchers and forecasters about when (or whether) transformative AI will be developed. Historical performance trends in specific task domains may not reliably predict broader capability development.
Governance Tractability: The extent to which governance interventions can effectively reduce existential risks from advanced AI systems remains uncertain. Some risk pathways may be difficult to address through policy mechanisms, particularly if they involve hard-to-detect capabilities or arise from unintended emergent behaviors.
Alignment Difficulty: While Convergence Analysis emphasizes that technical alignment is “hard,” the actual difficulty of aligning advanced AI systems with human values remains unknown. Recent developments in AI capabilities have sometimes proven easier to control than anticipated, while other aspects have revealed unexpected challenges.
Scenario Coverage: The completeness of current scenario planning efforts, including Convergence Analysis’s work, is difficult to assess. Important risk pathways may remain under-explored, and the relative likelihood of different scenarios is challenging to estimate given limited historical precedent.
International Coordination: The feasibility of international cooperation on AI governance—particularly during periods of geopolitical tension—significantly affects the viability of many proposed interventions. Whether coordination mechanisms can be established before transformative AI development remains uncertain.