# Lionheart Ventures
## Quick Assessment

| Aspect | Assessment |
|---|---|
| Type | Venture capital firm |
| Founded | 2019 |
| Focus Areas | AI safety, frontier mental health technologies |
| Stage | Seed, Series A, some Series B |
| Check Size | $500K-$2M (historical average $791.9K) |
| Notable Investments | Anthropic, Calm, Reprompt AI |
| AI Safety Role | Explicit focus on reducing existential risks from advanced AI |
## Key Links

| Source | Link |
|---|---|
| Official Website | lionheart.vc |
## Overview

Lionheart Ventures is a venture capital firm founded in 2019 that focuses on early-stage investments in transformative technologies, with a particular emphasis on artificial intelligence safety and frontier mental health.1 Based in the San Francisco Bay Area (with offices in Bolinas, California), the firm explicitly positions its investments as addressing civilizational risks while aiming to enhance human flourishing, agency, and resilience.2
The firm’s investment thesis draws from Carl Sagan’s philosophy that wisdom must accompany technological power to prevent self-destruction.3 This philosophy manifests in a concentrated focus on two primary sectors: advanced AI systems (with particular attention to safety and alignment) and frontier mental health technologies including psychedelics research, neuromodulation, and digital therapeutics.4 As of 2024, Lionheart Ventures has made 33 investments with a maximum check size of $11.5 million.5
Lionheart Ventures distinguishes itself through its advisory team, which includes prominent figures in AI safety and existential risk reduction, connecting the firm directly to the broader AI safety ecosystem and effective altruism community.6
## History

### Founding and Background

Lionheart Ventures was founded in 2019 by David Langer, a two-time technology entrepreneur with over 10 years of CEO experience.7 Langer brought substantial credentials to the venture, having previously founded and led Zesty, a YC-backed healthy corporate catering company that raised $20 million from Founders Fund and others, served 7 million meals, and was acquired by Square in 2018.8 Before Zesty, he co-founded GroupSpaces, a UK-based SaaS product for clubs that hosted 5 million memberships across 100+ countries and was backed by Index Ventures.9 Langer holds an MA in Mathematics from the University of Oxford.10
### Team Development

The firm expanded its partnership team to include Shelby Clark, a repeat entrepreneur best known for founding Turo, the car-sharing marketplace that filed for IPO in January 2022.11 After leaving Turo, Clark trained as a yoga and meditation teacher and committed to dedicating his career to mental health, aligning with Lionheart’s focus areas.12 The team also includes partner Brandon Goldman, investor Ben Lee, Vice President of Finance Carlos López Enríquez, and partner Sierra Peterson (focused on AgTech and ClimateTech).13
### Portfolio Growth

By August 2024, Lionheart Ventures had deployed capital across 33 investments, the most recent closing that month.14 The inaugural $25 million fund was nearly fully deployed as of late 2024, with investments spanning companies like Calm, Reconnect Labs, Psylo, Journey Clinical, TRIPP, Sanmai, and Anthropic.15
### Recent Fund Activity

As of early 2026, Lionheart Ventures has two funds currently in market, with the most recent opening in December 2024.16 The firm also closed two previous funds in April 2023 and July 2022, though specific fund sizes have not been publicly disclosed.17
## Investment Focus

### Artificial Intelligence and AI Safety

Lionheart Ventures has positioned AI as a central investment thesis, viewing the technology as potentially as disruptive as the Industrial Revolution and requiring careful attention to safety and alignment.18 The firm invests in AI systems that “defend and enhance human flourishing” amid disruptive AI emergence, explicitly focusing on reducing existential risks.19
The most prominent example of this focus is the firm’s investment in Anthropic, an AI company founded by former OpenAI members that specializes in developing general AI systems with a focus on responsible AI usage and alignment.20 Anthropic’s work includes research on adversarial robustness, scalable oversight, and mechanistic interpretability—core AI safety research areas.21
Other AI safety-relevant investments include Reprompt AI, which develops “last mile guardrails” for AI chatbots to prevent violations of business and security policies, representing a practical approach to AI alignment in deployed systems.22
### Frontier Mental Health Technologies

The firm’s second major focus area encompasses psychedelics research, neuromodulation, digital therapeutics, and related mental health innovations.23 This sector is viewed not only as addressing a mental health crisis but also as improving human decision-making capacity—a consideration particularly relevant to navigating transformative technological change.24
Portfolio companies in this category include Calm (a mental health and wellness app for meditation, relaxation, anxiety, depression, insomnia, and stress relief), Mind Ease (a mental health startup providing free or discounted access in low and middle-income countries), and various companies working on psychedelic medicine and neurotech applications.25
### Investment Strategy and Criteria

Lionheart Ventures typically invests $500,000 to $2 million per deal in seed and Series A stage companies, with some Series B participation.26 The firm seeks mission-driven founders building companies that address civilizational risks while maintaining potential for strong financial returns.27 Investment decisions emphasize scalability, market size, scientific validation, and alignment with the firm’s thesis of enhancing human resilience.28
## AI Safety Ecosystem Connections

### Advisory Team

Lionheart Ventures has assembled a specialized advisory team with deep connections to AI safety research and existential risk reduction:29
- Richard Mallah: Head of the Center for AI Risk Management & Alignment and Strategist at the Future of Life Institute, focusing on managing extreme risks from general AI systems
- Justin Shovelain: Co-founder and CEO of Convergence Analysis (an existential risk strategy research group) and AI safety advisor who has worked with MIRI, CFAR, and EA Global since 2009
- Aaron Tucker: Technical Lead at FAR AI with a PhD in Machine Learning from Cornell and prior research at Microsoft, Berkeley’s Center for Human-Compatible AI, and the Centre for the Governance of AI
- Jeffrey Ladish: Executive Director of Palisade Research who has consulted on Anthropic’s information security and advised the White House and Department of Defense on emerging technology risks
- Cyrus Hodes: Venture Partner who co-founded Stability AI and The Future Society (an AI governance nonprofit) and manages AI Safety Connect for international gatherings and policy solutions
- Allison Duettmann: Listed as an AI safety advisor to the firm
This advisory network positions Lionheart Ventures within the broader AI safety ecosystem, providing deal flow, technical evaluation capabilities, and strategic guidance on existential risk considerations.
### Effective Altruism Connections

Milan Griffes serves as Principal at Lionheart Ventures and is an active EA Forum user with 4,566 karma points, indicating significant engagement with the effective altruism community.30 The firm conducted a detailed business analysis of Mind Ease (a mental health startup) that was featured as a case study in EA-aligned impact investing, demonstrating the firm’s integration with EA funding evaluation frameworks.31
The analysis assessed Mind Ease as comparable to other venture-financed startups, projecting user growth up to 1.5 million users in optimistic scenarios over 8 years and evaluating the company’s validated revenue model, product development, unit economics, and large user base.32 This approach reflects a hybrid model combining venture capital financial analysis with impact assessment common in the EA community.
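To make the optimistic scenario concrete, the 1.5-million-user projection can be converted into an implied compound annual growth rate. This is a back-of-the-envelope sketch only: the starting user base below is an assumed figure for illustration, not a number taken from the EA Forum analysis.

```python
# Implied compound annual growth rate (CAGR) for the optimistic scenario.
# The 10,000-user starting base is an assumption for illustration; the
# case study's actual inputs are not reproduced in this profile.
starting_users = 10_000      # assumed initial user base (hypothetical)
target_users = 1_500_000     # optimistic scenario from the case study
years = 8

cagr = (target_users / starting_users) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.0%}")  # ~87% per year
```

Under that assumed base, the optimistic scenario requires sustained growth near 87% per year, which illustrates why such projections are labeled optimistic.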
Lionheart Ventures has been recommended in AI alignment forums alongside organizations like Juniper Ventures as a for-profit entity focused on existential risk reduction.33 Broader EA Forum discussions position Lionheart-style impact investing as part of a diversification push in EA funding beyond traditional grantmaking organizations like Open Philanthropy.34
## Portfolio Analysis

### Climate and Other Impact Areas

Beyond AI safety and mental health, Lionheart Ventures has invested in climate protection technologies including:
- Charm Industrial: Converts biomass into bio-oil for carbon removal and steelmaking, aiming to return atmospheric CO₂ to 280 ppm
- Beam: Urban mobility solutions
- Various AgTech, FoodTech, and CleanTech companies35
These investments align with the firm’s broader thesis of mitigating civilizational risks, with climate change representing another category of existential or catastrophic risk.
### Investment Performance and Scale

Historical data show an average check size of $791,900 and a maximum single investment of $11.5 million.36 The firm’s 33 total investments as of August 2024 suggest a concentrated portfolio approach consistent with early-stage venture capital focused on specific theses rather than broad diversification.37
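These figures can be cross-checked against each other using only the numbers reported above. A minimal sanity check multiplies the average check by the investment count and compares the result to the $25 million inaugural fund:

```python
# Consistency check on the reported figures: 33 investments at an
# average check of $791.9K versus the $25M inaugural fund.
avg_check = 791_900          # historical average check size, USD
num_investments = 33         # total investments as of August 2024
fund_size = 25_000_000       # inaugural fund, USD

implied_deployment = avg_check * num_investments
print(f"Implied capital deployed: ${implied_deployment:,}")               # $26,132,700
print(f"Share of inaugural fund:  {implied_deployment / fund_size:.0%}")  # 105%
```

The implied total slightly exceeds the inaugural fund, which is consistent with that fund being nearly fully deployed and some later checks coming from the subsequent vehicles noted above.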
The firm operates primarily in the United States market, though with a stated global scope; its team’s international experience includes GroupSpaces, which Langer previously co-founded and which served 100+ countries.38
## Relationship to AI Safety Field

### Positioning Within AI Safety Funding Landscape

Lionheart Ventures occupies a distinctive position in the AI safety funding ecosystem as a for-profit venture capital firm explicitly focused on existential risk reduction. This contrasts with traditional grantmaking organizations that dominate AI safety funding, offering an alternative model that combines financial returns with safety-focused missions.
The firm has been discussed in AI safety entrepreneurship resources as one of the few venture capital firms with an explicit AI safety focus.39 This positioning attracts founders who seek both commercial validation and mission alignment, potentially expanding the pool of entrepreneurs working on safety-relevant problems beyond those willing to operate in purely nonprofit or grant-funded contexts.
### Support for Anthropic

The firm’s investment in Anthropic represents its most significant connection to mainstream AI safety research. Anthropic, founded by former OpenAI safety team members including Dario Amodei, focuses on developing safe, steerable AI systems and conducting research on topics like Constitutional AI, AI scheming risks, and mechanistic interpretability.40 Lionheart’s support for the Anthropic Fellows Program, which funds research on adversarial robustness, scalable oversight, and mechanistic interpretability, demonstrates engagement beyond pure capital provision.41
### Practical AI Safety Applications

The investment in Reprompt AI reflects attention to near-term, practical AI safety challenges. Reprompt develops guardrails for AI chatbots to prevent policy violations, addressing deployment safety issues that arise as AI systems are integrated into business operations.42 This suggests the firm’s AI safety focus encompasses both long-term alignment research (via Anthropic) and immediate practical safety tooling.
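Reprompt’s actual product and API are not documented in this profile, but the general “last mile guardrail” pattern is straightforward: screen each model response against business and security policies before it reaches the user. The sketch below is purely illustrative; all rule patterns and function names are hypothetical.

```python
import re

# Hypothetical policy rules; a real deployment would load these from
# a vendor- or business-specific policy configuration.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like strings (data leak)
    re.compile(r"(?i)\brefund guaranteed\b"),  # unauthorized commitments
]

def apply_guardrail(model_output: str, fallback: str) -> str:
    """Screen a chatbot reply before it reaches the user.

    Returns the original reply if it passes every policy check,
    otherwise a safe fallback message.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return fallback
    return model_output

# Example: the draft reply violates a business policy, so the user
# sees the fallback instead.
print(apply_guardrail(
    "Your refund guaranteed by Friday!",
    "Let me connect you with a human agent.",
))
```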
## Criticisms and Limitations

### Limited Public Information

Lionheart Ventures has not publicly disclosed specific fund sizes, assets under management, internal rate of return, or detailed performance metrics.43 This opacity makes it difficult for external observers to assess the firm’s financial performance or the success of its hybrid mission-financial model. The lack of transparency about funding sources also limits understanding of whether limited partners share the firm’s existential risk reduction thesis or are primarily motivated by financial returns.
### Concentration Risk

With only 33 investments as of August 2024 and a $25 million inaugural fund, Lionheart Ventures operates at a relatively small scale compared to major AI safety funders like Open Philanthropy or mainstream venture capital firms investing in AI.44 This limited scale constrains the firm’s ability to support a broad portfolio of safety-relevant companies or to lead large funding rounds for mature startups.
### Portfolio Diversification Questions

While the firm articulates a clear focus on AI safety and mental health, the portfolio also includes companies in climate tech, urban mobility, AgTech, and other sectors.45 This diversification may reflect practical considerations (deal flow, financial returns, limited partners’ preferences) but could dilute focus on the firm’s stated core mission of addressing existential risks from AI specifically.
### Venture Capital Model Limitations

The venture capital funding model requires financial returns, potentially creating tension with safety-focused missions. Companies developing AI safety solutions may face slower commercialization timelines, smaller addressable markets, or business models less compatible with venture-scale outcomes than general AI capabilities companies. This structural tension is not unique to Lionheart Ventures but applies to any for-profit entity attempting to combine safety work with investor return requirements.
### EA Community Funding Dynamics

Lionheart Ventures operates within the effective altruism funding ecosystem, which faced significant disruption following the FTX collapse. EA Forum discussions highlight community concerns about funding concentration, nepotism risks, perception issues with abundant funding, and the need for diversification.46 While Lionheart represents one form of diversification (for-profit impact investing versus traditional grantmaking), the firm remains embedded in EA networks through its advisory team and personnel, potentially subject to similar ecosystem risks.
## Key Uncertainties

Several important questions remain unresolved about Lionheart Ventures’ role and impact:
- Financial Performance: Without disclosed fund returns or portfolio company outcomes, it remains unclear whether the firm’s hybrid mission-financial model achieves competitive venture capital returns, which would be necessary to attract future capital and demonstrate model viability.
- Impact Measurement: The firm has not published frameworks or metrics for assessing existential risk reduction impact from portfolio companies. The relationship between commercial success of investments like Calm (a meditation app) and AI safety outcomes is indirect and difficult to quantify.
- Counterfactual Impact: It is unclear whether Lionheart Ventures’ investments enable safety work that would not otherwise occur, or whether the firm primarily funds companies that would have received funding from other sources. The counterfactual impact question is particularly relevant for investments like Anthropic, which has raised over $7 billion from multiple sources.
- Scale and Influence: Whether a $25 million inaugural fund and 33 investments can meaningfully influence AI safety outcomes at the scale of the broader AI industry (which involves hundreds of billions in capital deployment) remains uncertain.
- Mental Health Connection: The strategic relationship between frontier mental health investments and AI safety is not fully articulated. While improved mental health and decision-making capacity could plausibly contribute to better navigation of AI risks, the causal pathway and magnitude of impact are speculative.
- Advisory Team Engagement: The depth of engagement between the firm’s prominent AI safety advisors and portfolio companies is not publicly documented. Whether advisory relationships translate into substantive safety improvements in portfolio companies beyond capital allocation is unclear.
## Sources

### Footnotes

- Partner, Mental Health/Psychedelic Focus - Lionheart Ventures Job Posting
- Anthropic Fellows Program Lead, Alignment Science - Job Posting
- EA-Aligned Impact Investing: Mind Ease Case Study - EA Forum