
Justin Shovelain

Role: Co-founder and Chief Strategist of Convergence Analysis
Focus Areas: AI safety, existential risk strategy, cause prioritization, EA research
Key Contributions: Strategy research frameworks, AI alignment concepts, early COVID-19 risk analysis
Background: MS in Computer Science; BS degrees in Computer Science, Mathematics, and Physics (University of Minnesota)
Experience: 16+ years in x-risk research (since 2009); worked with MIRI, CFAR, EA Global, Founders Fund
Notable Positions: AI Safety Advisor at Lionheart Ventures; Treasurer of the Foresight Institute

Sources:
Official Website: convergenceanalysis.org
EA Forum: forum.effectivealtruism.org
LessWrong: lesswrong.com

Justin Shovelain is a researcher, strategist, and entrepreneur focused on reducing existential risks from artificial intelligence. As co-founder and Chief Strategist of Convergence Analysis, he leads research on quantitative long-term strategy, cause prioritization within existential risk domains, and AI safety governance.[1] His work emphasizes building foundational strategy research to clarify paths forward amid shortening AI timelines.

Shovelain has been active in the effective altruism and AI safety communities since 2009, working with organizations including the Machine Intelligence Research Institute (MIRI), the Center for Applied Rationality (CFAR), and EA Global.[2] He advised Founders Fund on their original investment in DeepMind and currently serves as AI Safety Advisor to Lionheart Ventures.[3] Beyond his x-risk work, Shovelain has significant technology industry experience, including roles at Blockfolio (acquired by FTX for $150 million in 2020) and Box, and as co-founder of FreshPay, an early venture-backed cryptocurrency startup.[3]

His research contributions span AI alignment theory, causal modeling of catastrophic risk pathways, and strategy research methodologies. He has published extensively on the EA Forum and LessWrong, co-authoring influential posts on topics including keeping humans in AI development loops, optimizing AI for wisdom, and improving the future through influencing actors’ benevolence, intelligence, and power.[4][5]

Shovelain holds an MS in Computer Science and BS degrees in Computer Science, Mathematics, and Physics, all from the University of Minnesota.[2][3] This interdisciplinary background has informed his approach to existential risk research, which combines technical understanding of AI systems with mathematical modeling of long-term outcomes and strategic analysis.

Based in the San Francisco Bay Area, Shovelain entered the existential risk field in 2009, during the early period of organized AI safety research.[2] His early work involved collaboration with MIRI, which was then pioneering technical approaches to the AI alignment problem, and CFAR, which focused on improving reasoning and decision-making within the rationalist and effective altruist communities.

Before focusing full-time on existential risk research, Shovelain built substantial experience in the technology industry. He was among the first dozen employees at Box, the enterprise cloud storage company that later went public on the NYSE.[3] He subsequently served as VP of Engineering at Miso Media, a music education technology company featured on Shark Tank.[3]

Shovelain co-founded FreshPay, described as one of the first venture capital-backed cryptocurrency startups, demonstrating early involvement in blockchain technology.[3] He later worked as Lead Software Architect at Blockfolio, a cryptocurrency portfolio tracking application that FTX acquired for $150 million in 2020.[3] This combination of startup experience and technical leadership roles provided practical insights into how emerging technologies develop and scale.

Convergence Analysis emerged from a research collaboration between Shovelain and David Kristoffersson that began in 2017. During the 2017-2021 period, they conducted foundational research on existential risk strategy, publishing on EA Forum and LessWrong while advising groups like Lionheart Ventures.[6] Topics included strategy research methodologies, cause prioritization frameworks, and information hazard policy.

From 2021 to 2023, the collaboration transitioned toward building a formal research institution, expanding the team and developing more structured research programs.[6] In 2024, Convergence formally relaunched with a team of ten academics and professionals, motivated by shortened timelines to advanced AI and the urgent need for strategic clarity.[6] The organization describes its mission as working toward “a safe and flourishing future” through research that serves as groundwork for cause prioritization within existential risk domains.[6]

Shovelain serves as Chairman and Senior Strategist at Convergence, which focuses on quantitative long-term strategy, AI scenario governance, and existential risk reduction. The organization’s theory of change emphasizes that strategy research—clarifying which interventions to pursue and how—is essential infrastructure for effective x-risk work.[6]

As AI Safety Advisor to Lionheart Ventures, Shovelain provides strategic guidance on AI-related investments and risks.[3] His advisory relationship with Founders Fund on their DeepMind investment predated DeepMind’s acquisition by Google and represented early recognition of the company’s significance for AI development.[3]

Shovelain also serves as Treasurer of the Foresight Institute, an organization focused on transformative future technologies, and has advised SentienceAI on global catastrophic risks from AI, applied cognitive science, and intelligence amplification.[7] These roles position him at the intersection of AI development, investment, and safety research.

Strategy Research and Cause Prioritization


A central theme in Shovelain’s work is the need for more rigorous strategy research within the effective altruism and existential risk communities. In “A case for strategy research” (co-authored with Siebe Rozendal and David Kristoffersson), he argues that strategy research—understanding which interventions to pursue and how—has been relatively neglected compared to other research types.[8] The paper distinguishes strategy research from values research, tactics research, and informational research, positioning it as essential groundwork for effective cause prioritization.

This emphasis on strategic clarity extends to Convergence’s institutional focus. According to the organization’s stated approach, shortened AI timelines create an urgent need for research that clarifies paths forward, rather than for pursuing interventions without strong strategic foundations.[6]

Shovelain has contributed several concepts to AI alignment discourse:

Aligning AI by Optimizing for Wisdom: In collaboration with Convergence researchers, Shovelain developed ideas around training AI systems to optimize for wisdom rather than narrower objectives.[9] This approach attempts to address alignment challenges by aiming at a more robust optimization target.

Keep Humans in the Loop: Shovelain contributed ideas (later written up by collaborators) advocating for human oversight in AI systems to limit the worst emergent effects and maintain alignment with human welfare.[10] This framework does not assume pure human selflessness but argues that human involvement constrains misalignment in ways that fully automated systems cannot.

Updating Utility Functions: Co-authored with Joar Skalse, this work advances the concept of “soft corrigibility” in AI systems: the ability of an AI system to appropriately update its objectives based on new information.[11]

Influencing Actors’ Traits Hierarchically: Shovelain has written on the importance of influencing actors’ benevolence, intelligence, and power, in that priority order, for robust future improvement.[12] The framework suggests that increasing benevolence should take priority over increasing intelligence or power, since empowering poorly-aligned actors creates greater risks (a toy formalization is sketched below).
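As a rough illustration of why the ordering matters (an invented toy model, not one taken from the cited post), an actor’s expected effect on the future can be caricatured as the product of its benevolence, intelligence, and power, with benevolence allowed to be negative:

\[
\text{Impact} \approx B \cdot I \cdot P, \qquad B \in [-1, 1], \quad I, P \ge 0.
\]

In this caricature, raising \(I\) or \(P\) only scales whatever sign \(B\) already has, so improving benevolence is the intervention that is robustly positive, while amplifying the intelligence or power of a poorly-aligned actor (\(B < 0\)) makes expected outcomes worse.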

Shovelain has developed frameworks for causal modeling of existential risk pathways. His work on Goodhart’s Law causal graphs builds on Scott Alexander’s taxonomy to provide what he describes as a cleaner ontological structure for understanding how optimization processes fail when targets diverge from true objectives.[11]

He has also worked on causal diagrams for catastrophic pathways and technology trees that map how different technological developments and interventions relate to existential risk outcomes.[11] This work aims to make strategic analysis more rigorous by explicitly modeling causal relationships.
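As a minimal sketch of what such explicit modeling can look like in practice (the nodes and edges below are hypothetical illustrations, not taken from Shovelain’s published diagrams), a technology tree or catastrophic-pathway diagram can be represented as a directed graph and queried for which upstream developments and interventions bear on a given outcome:

```python
# Illustrative only: a toy causal graph of technological developments,
# interventions, and one catastrophic outcome. All names are hypothetical.
import networkx as nx

pathways = nx.DiGraph()
pathways.add_edges_from([
    ("compute scaling", "advanced ML capabilities"),
    ("advanced ML capabilities", "autonomous agents"),
    ("autonomous agents", "loss-of-control scenario"),
    ("alignment research", "autonomous agents"),   # intervention acting on a node
    ("governance norms", "autonomous agents"),     # intervention acting on a node
])

# Every node upstream of the outcome is a candidate lever for intervention.
print(nx.ancestors(pathways, "loss-of-control scenario"))
```

Writing the claimed relationships down as explicit edges is what allows them to be examined and debated one causal link at a time.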

With Alexey Turchin, Shovelain co-authored “The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies,” which models how catastrophic risk probabilities change as technologies evolve exponentially.[13]
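The paper’s specific equations are not reproduced here, but the general shape of such a model can be sketched with a standard hazard-rate setup (an illustrative assumption, not necessarily the formulation Turchin and Shovelain use): if the annual probability of catastrophe grows in proportion to an exponentially advancing technology level, the cumulative probability of catastrophe by time \(t\) is

\[
P_{\text{cat}}(t) = 1 - \exp\!\left(-\int_0^t \lambda(s)\, ds\right),
\qquad \lambda(s) = \lambda_0 e^{k s},
\]

which evaluates to \(P_{\text{cat}}(t) = 1 - \exp\!\big(-\tfrac{\lambda_0}{k}\,(e^{k t} - 1)\big)\). Even a very small initial hazard rate \(\lambda_0\) pushes the cumulative probability toward 1 on a timescale set mainly by the technology growth rate \(k\).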

In mid-January 2020, Shovelain (along with Matthew Barnett, Dony Christie, and Louis Francini) raised early alarms about COVID-19 within the rationalist and EA communities, highlighting its high death and contagion rates, long incubation period, and uncontained spread.[14] This was later noted as prescient within these communities, demonstrating the value of serious probabilistic risk analysis for emerging threats.

Shovelain also participated in a public bet on COVID-19 modeling on EA Forum, which he described as intended to encourage serious analysis and counter learned helplessness within the EA community.[2] This reflected his broader interest in improving community epistemics and decision-making under uncertainty.

Skepticism of Some AI Safety Organizations


Shovelain has expressed skepticism about the net impact of some AI safety organizations. Regarding Anthropic, he has stated that he believes the company’s marginal safety contribution does not offset its role in accelerating the AI race, arguing there are “more worlds where it harms via racing than worlds where its safety edge secures a good AGI future.”[11] This reflects concern that even safety-focused AI companies may primarily contribute to capabilities advancement, potentially increasing overall risk.

Through Convergence Analysis, Shovelain has worked on information hazard policy for the EA community, addressing questions about when and how to share potentially dangerous information.[6] This includes research on publication norms, responsible disclosure practices, and frameworks for evaluating the tradeoffs between transparency and security.

In describing his typical work at Convergence, Shovelain emphasizes “resolving mysteries in AI/x-risk strategy” as a core activity, with one day per week spent on organizational administration.[15] He describes enjoying learning new things and achieving impact, while disliking bureaucracy and organizational politics.[15]

Shovelain’s roles span existential risk research, venture capital advising, and technology entrepreneurship, creating potential tensions:

VC and AI Advisory Relationships: His advisory work with Founders Fund and Lionheart Ventures involves organizations that invest in AI companies, while his research work at Convergence focuses on AI safety and risk reduction. This creates potential conflicts between profit motives in the AI industry and safety advocacy. However, his advisory role appears to focus specifically on AI safety considerations rather than general investment strategy.[3]

EA and Risk Research Ecosystem: As a longtime EA community member who consults for MIRI, publishes on EA platforms, and leads an organization (Convergence) that has advised firms investing in AI, Shovelain operates within overlapping networks of risk research and AI development funding.[2][6] These connections could create pressure toward AI development optimism to maintain advisory relationships.

No explicit disclosures of personal financial stakes in AI companies appear in available sources beyond his past roles at companies like Blockfolio.

Within the effective altruism and rationalist communities, Shovelain is frequently acknowledged in research posts for ideas, feedback, and collaboration. Multiple EA Forum posts thank him for contributions to strategy research, causal modeling frameworks, and conceptual development.[8][12] Convergence’s announcements and workshops have been well-received within EA circles, and his posts receive engagement through comments and agreement votes.[11]

His work style at Convergence—with a strong emphasis on resolving strategic uncertainties—appears valued within communities that prioritize cause prioritization and research into high-impact interventions. The 2024 relaunch of Convergence with an expanded team and institutional structure reflects growing recognition of the need for dedicated strategy research organizations.[6]

Effectiveness of Strategy Research: While Shovelain’s work emphasizes the importance of strategy research for x-risk reduction, the actual impact of this research on reducing existential risk remains difficult to measure. Questions remain about how insights from strategy research translate into concrete risk reduction.

AI Safety Organization Critiques: Shovelain’s skepticism about organizations like Anthropic represents one perspective in ongoing debates about whether AI safety work at capabilities-focused companies produces net benefit. The counterfactual impact of such organizations—what would happen in their absence—remains highly uncertain.

Optimal Balance of Roles: The combination of x-risk research, AI safety advising to venture capital firms, and technology entrepreneurship raises questions about optimal allocation of effort and potential conflicts. Whether advisory roles primarily enable safety-focused investments or primarily legitimize risky AI development is unclear.

Strategy Research Timelines: Convergence’s focus on shortened AI timelines raises questions about whether there is sufficient time for strategy research to inform interventions before transformative AI systems are developed. The relative value of strategy research versus direct technical safety work under time pressure remains debated.

References

1. Convergence Analysis Team - Justin Shovelain
2. EA Forum - Justin Shovelain User Profile
3. Lionheart Ventures Team
4. LessWrong - Keep humans in the loop
5. EA Forum - Improving the future by influencing actors’ benevolence, intelligence, and power
6. Convergence Analysis About Us
7. SentienceAI Team
8. EA Forum - A case for strategy research
9. LessWrong - Aligning AI by optimizing for wisdom
10. LessWrong - Keep humans in the loop
11. LessWrong - Justin Shovelain User Profile
12. EA Forum - Improving the future by influencing actors’ benevolence
13. PhilArchive - The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies
14. Qualia Computing - Mental Health Tag Archive
15. EA Forum - What is it like doing AI safety work?