Justin Shovelain
Quick Assessment
| Aspect | Assessment |
|---|---|
| Role | Co-founder and Chief Strategist of Convergence Analysis |
| Focus Areas | AI safety, existential risk strategy, cause prioritization, EA research |
| Key Contributions | Strategy research frameworks, AI alignment concepts, early COVID-19 risk analysis |
| Background | MS in Computer Science; BS in Computer Science, Mathematics, and Physics from the University of Minnesota |
| Experience | 16+ years in x-risk research (since 2009); worked with MIRI, CFAR, EA Global, Founders Fund |
| Notable Positions | AI Safety Advisor at Lionheart Ventures; Treasurer of the Foresight Institute |
Key Links
| Source | Link |
|---|---|
| Official Website | convergenceanalysis.org |
| EA Forum | forum.effectivealtruism.org |
| LessWrong | lesswrong.com |
Overview
Justin Shovelain is a researcher, strategist, and entrepreneur focused on reducing existential risks from artificial intelligence. As co-founder and Chief Strategist of Convergence Analysis, he leads research on quantitative long-term strategy, cause prioritization within existential risk domains, and AI safety governance.1 His work emphasizes building foundational strategy research to clarify paths forward amid shortening AI timelines.
Shovelain has been active in the effective altruism and AI safety communities since 2009, working with organizations including the Machine Intelligence Research Institute (MIRI), Center for Applied Rationality (CFAR), and EA Global.2 He advised Founders Fund on their original investment in DeepMind and currently serves as AI Safety Advisor to Lionheart Ventures.3 Beyond his x-risk work, Shovelain has significant technology industry experience, including roles at Blockfolio (acquired by FTX for $150 million in 2020), Box, and as co-founder of FreshPay, an early venture-backed cryptocurrency startup.3
His research contributions span AI alignment theory, causal modeling of catastrophic risk pathways, and strategy research methodologies. He has published extensively on the EA Forum and LessWrong, co-authoring influential posts on topics including keeping humans in AI development loops, optimizing AI for wisdom, and improving the future through influencing actors’ benevolence, intelligence, and power.45
Background and Education
Shovelain holds an MS in Computer Science and BS degrees in Computer Science, Mathematics, and Physics, all from the University of Minnesota.23 This interdisciplinary background has informed his approach to existential risk research, which combines technical understanding of AI systems with mathematical modeling of long-term outcomes and strategic analysis.
Based in the San Francisco Bay Area, Shovelain entered the existential risk field in 2009, during the early period of organized AI safety research.2 His early work involved collaboration with MIRI, which was then pioneering technical approaches to the AI alignment problem, and CFAR, which focused on improving reasoning and decision-making within the rationalist and effective altruist communities.
Career in Technology and AI Safety
Early Technology Roles
Before focusing full-time on existential risk research, Shovelain built substantial experience in the technology industry. He was among the first dozen employees at Box, the enterprise cloud storage company that later went public on the NYSE.3 He subsequently served as VP of Engineering at Miso Media, a music education technology company featured on Shark Tank.3
Shovelain co-founded FreshPay, described as one of the first venture capital-backed cryptocurrency startups, demonstrating early involvement in blockchain technology.3 He later worked as Lead Software Architect at Blockfolio, a cryptocurrency portfolio tracking application that FTX acquired for $150 million in 2020.3 This combination of startup experience and technical leadership roles provided practical insights into how emerging technologies develop and scale.
Convergence Analysis
Convergence Analysis emerged from a research collaboration between Shovelain and David Kristoffersson that began in 2017. During the 2017-2021 period, they conducted foundational research on existential risk strategy, publishing on EA Forum and LessWrong while advising groups like Lionheart Ventures.6 Topics included strategy research methodologies, cause prioritization frameworks, and information hazard policy.
From 2021 to 2023, the collaboration transitioned toward building a formal research institution, expanding the team and developing more structured research programs.6 In 2024, Convergence formally relaunched with a team of ten academics and professionals, motivated by shortened timelines to advanced AI and the urgent need for strategic clarity.6 The organization describes its mission as working toward “a safe and flourishing future” through research that serves as groundwork for cause prioritization within existential risk domains.6
Shovelain serves as Chairman and Senior Strategist at Convergence, which focuses on quantitative long-term strategy, AI scenario governance, and existential risk reduction. The organization’s theory of change emphasizes that strategy research—clarifying which interventions to pursue and how—is essential infrastructure for effective x-risk work.6
Advisory Roles
As AI Safety Advisor to Lionheart Ventures, Shovelain provides strategic guidance on AI-related investments and risks.3 His advisory relationship with Founders Fund on their DeepMind investment predated DeepMind’s acquisition by Google and represented early recognition of the company’s significance for AI development.3
Shovelain also serves as Treasurer of the Foresight Institute, an organization focused on transformative future technologies, and has advised SentienceAI on global catastrophic risks from AI, applied cognitive science, and intelligence amplification.7 These roles position him at the intersection of AI development, investment, and safety research.
Research Contributions
Strategy Research and Cause Prioritization
A central theme in Shovelain’s work is the need for more rigorous strategy research within the effective altruism and existential risk communities. In “A case for strategy research” (co-authored with Siebe Rozendal and David Kristoffersson), he argues that strategy research—understanding which interventions to pursue and how—has been relatively neglected compared to other research types.8 The paper distinguishes strategy research from values research, tactics research, and informational research, positioning it as essential groundwork for effective cause prioritization.
This emphasis on strategic clarity extends to Convergence’s institutional focus. According to the organization’s stated approach, shortened AI timelines create an urgent need for research that clarifies paths forward, rather than pursuing interventions without strong strategic foundations.6
AI Alignment and Safety Concepts
Shovelain has contributed several concepts to AI alignment discourse:
Aligning AI by Optimizing for Wisdom: In collaboration with Convergence researchers, Shovelain developed ideas around training AI systems to optimize for wisdom rather than narrower objectives.9 This approach attempts to address alignment challenges by targeting a more robust optimization target.
Keep Humans in the Loop: Shovelain contributed ideas (later written up by collaborators) advocating for human oversight in AI systems to limit worst emergent effects and maintain alignment with human welfare.10 This framework does not assume pure human selflessness but argues that human involvement constrains misalignment in ways that fully automated systems cannot.
Updating Utility Functions: Co-authored with Joar Skalse, this work advances concepts of “soft corrigibility” in AI systems—the ability for AI systems to appropriately update their objectives based on new information.11
Influencing Actors’ Traits Hierarchically: Shovelain has written on the importance of influencing actors’ benevolence, intelligence, and power in that priority order for robust future improvement.12 The framework suggests that increasing benevolence should take priority over increasing intelligence or power, since empowering poorly-aligned actors creates greater risks.
Causal Modeling and Risk Analysis
Shovelain has developed frameworks for causal modeling of existential risk pathways. His work on Goodhart’s Law causal graphs builds on Scott Alexander’s taxonomy to provide what he describes as a cleaner ontological structure for understanding how optimization processes fail when targets diverge from true objectives.11
He has also worked on causal diagrams for catastrophic pathways and technology trees that map how different technological developments and interventions relate to existential risk outcomes.11 This work aims to make strategic analysis more rigorous by explicitly modeling causal relationships.
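To make the idea of explicitly modeling causal relationships concrete, the sketch below is an illustrative toy only, not Shovelain’s actual diagrams: the node names, edges, and helper function are hypothetical. It represents a catastrophe pathway as a directed graph and enumerates the routes from a driver to the outcome, the kind of structure a proposed intervention would aim to cut.

```python
# Illustrative toy causal graph of catastrophe pathways (hypothetical nodes),
# stored as an adjacency list mapping each cause to its direct effects.
from typing import Dict, List

GRAPH: Dict[str, List[str]] = {
    "rapid capability gains": ["racing dynamics", "deployment without adequate oversight"],
    "racing dynamics": ["deployment without adequate oversight"],
    "deployment without adequate oversight": ["misaligned optimization at scale"],
    "misaligned optimization at scale": ["existential catastrophe"],
    "existential catastrophe": [],
}

def causal_paths(graph: Dict[str, List[str]], start: str, target: str) -> List[List[str]]:
    """Enumerate every directed path from `start` to `target` (depth-first)."""
    if start == target:
        return [[start]]
    paths: List[List[str]] = []
    for child in graph.get(start, []):
        for tail in causal_paths(graph, child, target):
            paths.append([start] + tail)
    return paths

if __name__ == "__main__":
    # Listing the pathways makes explicit which links an intervention must break.
    for path in causal_paths(GRAPH, "rapid capability gains", "existential catastrophe"):
        print(" -> ".join(path))
```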
With Alexey Turchin, Shovelain co-authored “The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies,” which models how catastrophic risk probabilities change as technologies evolve exponentially.13
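As a rough illustration of the general shape of such models (a minimal sketch assuming a generic exponentially growing hazard rate; the symbols are illustrative and not drawn from the paper itself), the survival-analysis form below shows why exponential technology growth compresses the time available before catastrophe becomes likely:

```latex
% Illustrative sketch only, not the paper's exact model: assume the
% instantaneous hazard of catastrophe grows exponentially with time,
% h(t) = h_0 e^{kt}. The probability of avoiding catastrophe through
% horizon T is then
P(\text{no catastrophe by } T)
  = \exp\!\left(-\int_{0}^{T} h_0 e^{kt}\, dt\right)
  = \exp\!\left(-\frac{h_0}{k}\left(e^{kT} - 1\right)\right)
% which falls toward zero far faster than under a constant hazard rate.
```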
COVID-19 Risk Analysis
In mid-January 2020, Shovelain (along with Matthew Barnett, Dony Christie, and Louis Francini) raised early alarms about COVID-19 within the rationalist and EA communities, highlighting its high death rate, contagiousness, long incubation period, and uncontained spread.14 This was later noted as prescient within these communities, demonstrating the value of serious probabilistic risk analysis for emerging threats.
Shovelain also participated in a public bet on COVID-19 modeling on EA Forum, which he described as intended to encourage serious analysis and counter learned helplessness within the EA community.2 This reflected his broader interest in improving community epistemics and decision-making under uncertainty.
Views and Positions
Skepticism of Some AI Safety Organizations
Shovelain has expressed skepticism about the net impact of some AI safety organizations. Regarding Anthropic, he has stated that he believes the company’s marginal safety contribution does not offset its role in accelerating the AI race, arguing there are “more worlds where it harms via racing than worlds where its safety edge secures a good AGI future.”11 This reflects concern that even safety-focused AI companies may primarily contribute to capabilities advancement, potentially increasing overall risk.
Information Hazards and Publication Norms
Through Convergence Analysis, Shovelain has worked on information hazard policy for the EA community, addressing questions about when and how to share potentially dangerous information.6 This includes research on publication norms, responsible disclosure practices, and frameworks for evaluating the tradeoffs between transparency and security.
Work Philosophy
In describing his typical work at Convergence, Shovelain emphasizes “resolving mysteries in AI/x-risk strategy” as a core activity, with one day per week spent on organizational administration.15 He describes enjoying learning new things and achieving impact, while disliking bureaucracy and organizational politics.15
Potential Conflicts of Interest
Shovelain’s roles span existential risk research, venture capital advising, and technology entrepreneurship, creating potential tensions:
VC and AI Advisory Relationships: His advisory work with Founders Fund and Lionheart Ventures involves organizations that invest in AI companies, while his research work at Convergence focuses on AI safety and risk reduction. This creates potential conflicts between profit motives in the AI industry and safety advocacy. However, his advisory role appears to focus specifically on AI safety considerations rather than general investment strategy.3
EA and Risk Research Ecosystem: As a longtime EA community member who consults for MIRI, publishes on EA platforms, and leads an organization (Convergence) that has advised firms investing in AI, Shovelain operates within overlapping networks of risk research and AI development funding.26 These connections could create pressure toward AI development optimism to maintain advisory relationships.
No explicit disclosures of personal financial stakes in AI companies appear in available sources beyond his past roles at companies like Blockfolio.
Influence and Community Reception
Within the effective altruism and rationalist communities, Shovelain is frequently acknowledged in research posts for ideas, feedback, and collaboration. Multiple EA Forum posts thank him for contributions to strategy research, causal modeling frameworks, and conceptual development.812 Convergence’s announcements and workshops have been well-received within EA circles, and his posts receive engagement through comments and agreement votes.11
His work style at Convergence—with a strong emphasis on resolving strategic uncertainties—appears valued within communities that prioritize cause prioritization and research into high-impact interventions. The 2024 relaunch of Convergence with an expanded team and institutional structure reflects growing recognition of the need for dedicated strategy research organizations.6
Key Uncertainties
Effectiveness of Strategy Research: While Shovelain’s work emphasizes the importance of strategy research for x-risk reduction, the actual impact of this research on reducing existential risk remains difficult to measure. Questions remain about how insights from strategy research translate into concrete risk reduction.
AI Safety Organization Critiques: Shovelain’s skepticism about organizations like Anthropic represents one perspective in ongoing debates about whether AI safety work at capabilities-focused companies produces net benefit. The counterfactual impact of such organizations—what would happen in their absence—remains highly uncertain.
Optimal Balance of Roles: The combination of x-risk research, AI safety advising to venture capital firms, and technology entrepreneurship raises questions about optimal allocation of effort and potential conflicts. Whether advisory roles primarily enable safety-focused investments or primarily legitimize risky AI development is unclear.
Strategy Research Timelines: Convergence’s focus on shortened AI timelines raises questions about whether there is sufficient time for strategy research to inform interventions before transformative AI systems are developed. The relative value of strategy research versus direct technical safety work under time pressure remains debated.