Lionheart Ventures

Type: Venture capital firm
Founded: 2019
Focus Areas: AI safety, frontier mental health technologies
Stage: Seed, Series A, some Series B
Check Size: $500K-$2M (historical average $791.9K)
Notable Investments: Anthropic, Calm, Reprompt AI
AI Safety Role: Explicit focus on reducing existential risks from advanced AI
Official Website: lionheart.vc

Lionheart Ventures is a venture capital firm founded in 2019 that focuses on early-stage investments in transformative technologies, with a particular emphasis on artificial intelligence safety and frontier mental health.1 Based in the San Francisco Bay Area (with offices in Bolinas, California), the firm explicitly positions its investments as addressing civilizational risks while aiming to enhance human flourishing, agency, and resilience.2

The firm’s investment thesis draws from Carl Sagan’s philosophy that wisdom must accompany technological power to prevent self-destruction.3 This philosophy manifests in a concentrated focus on two primary sectors: advanced AI systems (with particular attention to safety and alignment) and frontier mental health technologies including psychedelics research, neuromodulation, and digital therapeutics.4 As of 2024, Lionheart Ventures has made 33 investments with a maximum check size of $11.5 million.5

Lionheart Ventures distinguishes itself through its advisory team, which includes prominent figures in AI safety and existential risk reduction, connecting the firm directly to the broader AI safety ecosystem and effective altruism community.6

Lionheart Ventures was founded in 2019 by David Langer, a two-time technology entrepreneur with over 10 years of CEO experience.7 Langer brought substantial credentials to the venture, having previously founded and led Zesty, a YC-backed healthy corporate catering company that raised $20 million from Founders Fund and others, served 7 million meals, and was acquired by Square in 2018.8 Before Zesty, he co-founded GroupSpaces, a UK-based SaaS product for clubs that hosted 5 million memberships across 100+ countries and was backed by Index Ventures.9 Langer holds an MA in Mathematics from the University of Oxford.10

The firm expanded its partnership team to include Shelby Clark, a repeat entrepreneur best known for founding Turo, the car-sharing marketplace that filed for IPO in January 2022.11 After leaving Turo, Clark trained as a yoga and meditation teacher and committed to dedicating his career to mental health, aligning with Lionheart's focus areas.12 Other team members include partners Brandon Goldman and Sierra Peterson (focused on AgTech and ClimateTech), investor Ben Lee, and Vice President of Finance Carlos López Enríquez.13

By August 2024, Lionheart Ventures had deployed capital across 33 investments, the most recent of which closed that month.14 The inaugural $25 million fund was nearly fully deployed as of late 2024, with investments spanning companies such as Calm, Reconnect Labs, Psylo, Journey Clinical, TRIPP, Sanmai, and Anthropic.15

As of early 2026, Lionheart Ventures has two funds currently in market, with the most recent opening in December 2024.16 The firm also closed two previous funds in April 2023 and July 2022, though specific fund sizes have not been publicly disclosed.17

Lionheart Ventures has positioned AI as a central investment thesis, viewing the technology as potentially as disruptive as the Industrial Revolution and requiring careful attention to safety and alignment.18 The firm invests in AI systems that “defend and enhance human flourishing” amid disruptive AI emergence, explicitly focusing on reducing existential risks.19

The most prominent example of this focus is the firm’s investment in Anthropic, an AI company founded by former OpenAI members that specializes in developing general AI systems with a focus on responsible AI usage and alignment.20 Anthropic’s work includes research on adversarial robustness, scalable oversight, and mechanistic interpretability—core AI safety research areas.21

Other AI safety-relevant investments include Reprompt AI, which develops “last mile guardrails” for AI chatbots to prevent violations of business and security policies, representing a practical approach to AI alignment in deployed systems.22
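Reprompt's actual implementation is not described in public sources, but the "last mile guardrail" concept can be illustrated with a minimal sketch: intercept a chatbot's draft reply, check it against explicit business and security policies, and substitute a safe fallback on violation. The policy names, patterns, and `guarded_reply` helper below are hypothetical illustrations, not Reprompt AI's product or API.

```python
# Illustrative sketch of a "last mile guardrail": a wrapper that checks a
# chatbot's draft reply against simple business/security policies before it
# reaches the user. All policies here are hypothetical examples.
import re
from dataclasses import dataclass

@dataclass
class PolicyViolation:
    policy: str
    detail: str

# Hypothetical policies: no internal hostnames, no unapproved discount offers.
POLICIES = [
    ("no_internal_hosts", re.compile(r"\b[\w-]+\.internal\.example\.com\b")),
    ("no_discount_offers", re.compile(r"\b\d{1,2}% off\b", re.IGNORECASE)),
]

def check_reply(draft: str) -> list[PolicyViolation]:
    """Return all policy violations found in a draft chatbot reply."""
    violations = []
    for name, pattern in POLICIES:
        match = pattern.search(draft)
        if match:
            violations.append(PolicyViolation(name, match.group(0)))
    return violations

def guarded_reply(draft: str, fallback: str = "I can't share that.") -> str:
    """Pass the draft through if clean; otherwise return a safe fallback."""
    return fallback if check_reply(draft) else draft

if __name__ == "__main__":
    print(guarded_reply("Our staging box is db1.internal.example.com"))  # blocked
    print(guarded_reply("Happy to help with your order!"))               # passes
```

Real deployment-guardrail products typically layer classifier models on top of rule checks like these, but the interception point (between model output and user) is the same.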

The firm’s second major focus area encompasses psychedelics research, neuromodulation, digital therapeutics, and related mental health innovations.23 This sector is viewed not only as addressing a mental health crisis but also as improving human decision-making capacity—a consideration particularly relevant to navigating transformative technological change.24

Portfolio companies in this category include Calm (a mental health and wellness app for meditation, relaxation, anxiety, depression, insomnia, and stress relief), Mind Ease (a mental health startup providing free or discounted access in low and middle-income countries), and various companies working on psychedelic medicine and neurotech applications.25

Lionheart Ventures typically invests $500,000 to $2 million per deal in seed and Series A stage companies, with some Series B participation.26 The firm seeks mission-driven founders building companies that address civilizational risks while maintaining potential for strong financial returns.27 Investment decisions emphasize scalability, market size, scientific validation, and alignment with the firm’s thesis of enhancing human resilience.28

Lionheart Ventures has assembled a specialized advisory team with deep connections to AI safety research and existential risk reduction:29

  • Richard Mallah: Head of the Center for AI Risk Management & Alignment and Strategist at the Future of Life Institute, focusing on managing extreme risks from general AI systems
  • Justin Shovelain: Co-founder and CEO of Convergence Analysis (an existential risk strategy research group) and AI safety advisor who has worked with MIRI, CFAR, and EA Global since 2009
  • Aaron Tucker: Technical Lead at FAR AI with a PhD in Machine Learning from Cornell and prior research at Microsoft, Berkeley’s Center for Human-Compatible AI, and the Centre for the Governance of AI
  • Jeffrey Ladish: Executive Director of Palisade Research who has consulted on Anthropic’s information security and advised the White House and Department of Defense on emerging technology risks
  • Cyrus Hodes: Venture Partner who co-founded Stability AI and The Future Society (an AI governance nonprofit) and manages AI Safety Connect for international gatherings and policy solutions
  • Allison Duettmann: Listed as an AI safety advisor to the firm

This advisory network positions Lionheart Ventures within the broader AI safety ecosystem, providing deal flow, technical evaluation capabilities, and strategic guidance on existential risk considerations.

Milan Griffes serves as Principal at Lionheart Ventures and is an active EA Forum user with 4,566 karma points, indicating significant engagement with the effective altruism community.30 The firm conducted a detailed business analysis of Mind Ease (a mental health startup) that was featured as a case study in EA-aligned impact investing, demonstrating the firm’s integration with EA funding evaluation frameworks.31

The analysis assessed Mind Ease as comparable to other venture-financed startups, projecting growth to as many as 1.5 million users over eight years in optimistic scenarios and evaluating the company's validated revenue model, product development, unit economics, and large user base.32 This approach reflects a hybrid model combining venture capital financial analysis with the impact assessment common in the EA community.
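The case study's full methodology is not reproduced here, but a scenario-weighted projection of the kind described can be sketched in a few lines. Only the ~1.5 million optimistic eight-year user figure traces to the source; the starting user base, growth rates, and scenario probabilities below are illustrative assumptions.

```python
# Minimal sketch of a scenario-weighted user-growth projection, in the spirit
# of the Mind Ease case study. Only the ~1.5M optimistic 8-year figure comes
# from the source; all other numbers are illustrative assumptions.
SCENARIOS = {
    # name: (annual growth rate, probability)
    "pessimistic": (0.10, 0.3),
    "baseline":    (0.45, 0.5),
    "optimistic":  (0.90, 0.2),  # ~1.7M users after 8 years from a 10K base
}
STARTING_USERS = 10_000
YEARS = 8

def project(users: float, rate: float, years: int) -> float:
    """Compound user growth at a constant annual rate."""
    return users * (1 + rate) ** years

expected_users = 0.0
for name, (rate, prob) in SCENARIOS.items():
    outcome = project(STARTING_USERS, rate, YEARS)
    expected_users += prob * outcome
    print(f"{name:>11}: {outcome:>12,.0f} users after {YEARS} years")
print(f"{'expected':>11}: {expected_users:>12,.0f} users (probability-weighted)")
```

The probability-weighted expected value, rather than any single scenario, is what an EA-style impact assessment would typically feed into its cost-effectiveness comparison.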

Lionheart Ventures has been recommended in AI alignment forums alongside organizations like Juniper Ventures as a for-profit entity focused on existential risk reduction.33 Broader EA Forum discussions position Lionheart-style impact investing as part of a diversification push in EA funding beyond traditional grantmaking organizations like Open Philanthropy.34

Beyond AI safety and mental health, Lionheart Ventures has invested in climate protection technologies including:

  • Charm Industrial: Converts biomass into bio-oil for carbon removal and steelmaking, aiming to return atmospheric CO₂ to 280 ppm
  • Beam: Urban mobility solutions
  • Various AgTech, FoodTech, and CleanTech companies35

These investments align with the firm’s broader thesis of mitigating civilizational risks, with climate change representing another category of existential or catastrophic risk.

Historical data shows Lionheart Ventures’ average check size of $791,900 with a maximum investment of $11.5 million.36 The firm’s 33 total investments as of August 2024 suggest a concentrated portfolio approach consistent with early-stage venture capital focused on specific theses rather than broad diversification.37
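These disclosed figures can be cross-checked with simple arithmetic; the calculation below is a back-of-envelope estimate, not a number reported by the firm.

```python
# Back-of-envelope check: 33 investments at a $791.9K average check imply
# roughly $26.1M deployed, broadly consistent with a nearly fully deployed
# $25M inaugural fund plus later vehicles. The $11.5M maximum well exceeds
# the typical $500K-$2M range, suggesting one large follow-on or SPV-style
# outlier (an inference from the numbers, not a disclosed fact).
n_investments = 33
avg_check = 791_900           # dollars (reported historical average)
inaugural_fund = 25_000_000   # dollars

implied_deployed = n_investments * avg_check
print(f"Implied capital deployed: ${implied_deployed:,}")   # $26,132,700
print(f"Inaugural fund size:      ${inaugural_fund:,}")
```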

The firm operates primarily in the United States market, though it claims a global scope; its international orientation is reflected in ventures like GroupSpaces, Langer's earlier company, which served memberships across 100+ countries.38

Positioning Within AI Safety Funding Landscape


Lionheart Ventures occupies a distinctive position in the AI safety funding ecosystem as a for-profit venture capital firm explicitly focused on existential risk reduction. This contrasts with traditional grantmaking organizations that dominate AI safety funding, offering an alternative model that combines financial returns with safety-focused missions.

The firm has been discussed in AI safety entrepreneurship resources as one of the few venture capital firms with an explicit AI safety focus.39 This positioning attracts founders who seek both commercial validation and mission alignment, potentially expanding the pool of entrepreneurs working on safety-relevant problems beyond those willing to operate in purely nonprofit or grant-funded contexts.

The firm’s investment in Anthropic represents its most significant connection to mainstream AI safety research. Anthropic, founded by former OpenAI safety team members including Dario Amodei, focuses on developing safe, steerable AI systems and conducting research on topics like Constitutional AI, AI scheming risks, and mechanistic interpretability.40 Lionheart’s support for the Anthropic Fellows Program, which funds research on adversarial robustness, scalable oversight, and mechanistic interpretability, demonstrates engagement beyond pure capital provision.41

The investment in Reprompt AI reflects attention to near-term, practical AI safety challenges. Reprompt develops guardrails for AI chatbots to prevent policy violations, addressing deployment safety issues that arise as AI systems are integrated into business operations.42 This suggests the firm’s AI safety focus encompasses both long-term alignment research (via Anthropic) and immediate practical safety tooling.

Lionheart Ventures has not publicly disclosed specific fund sizes, assets under management, internal rate of return, or detailed performance metrics.43 This opacity makes it difficult for external observers to assess the firm’s financial performance or the success of its hybrid mission-financial model. The lack of transparency about funding sources also limits understanding of whether limited partners share the firm’s existential risk reduction thesis or are primarily motivated by financial returns.

With only 33 investments as of August 2024 and a $25 million inaugural fund, Lionheart Ventures operates at a relatively small scale compared to major AI safety funders like Open Philanthropy or mainstream venture capital firms investing in AI.44 This limited scale constrains the firm’s ability to support a broad portfolio of safety-relevant companies or to lead large funding rounds for mature startups.

While the firm articulates a clear focus on AI safety and mental health, the portfolio also includes companies in climate tech, urban mobility, AgTech, and other sectors.45 This diversification may reflect practical considerations (deal flow, financial returns, limited partners’ preferences) but could dilute focus on the firm’s stated core mission of addressing existential risks from AI specifically.

The venture capital funding model requires financial returns, potentially creating tension with safety-focused missions. Companies developing AI safety solutions may face slower commercialization timelines, smaller addressable markets, or business models less compatible with venture-scale outcomes than general AI capabilities companies. This structural tension is not unique to Lionheart Ventures but applies to any for-profit entity attempting to combine safety work with investor return requirements.

Lionheart Ventures operates within the effective altruism funding ecosystem, which faced significant disruption following the FTX collapse. EA Forum discussions highlight community concerns about funding concentration, nepotism risks, perception issues with abundant funding, and the need for diversification.46 While Lionheart represents one form of diversification (for-profit impact investing versus traditional grantmaking), the firm remains embedded in EA networks through its advisory team and personnel, potentially subject to similar ecosystem risks.

Several important questions remain unresolved about Lionheart Ventures’ role and impact:

  1. Financial Performance: Without disclosed fund returns or portfolio company outcomes, it remains unclear whether the firm’s hybrid mission-financial model achieves competitive venture capital returns, which would be necessary to attract future capital and demonstrate model viability.

  2. Impact Measurement: The firm has not published frameworks or metrics for assessing existential risk reduction impact from portfolio companies. The relationship between commercial success of investments like Calm (a meditation app) and AI safety outcomes is indirect and difficult to quantify.

  3. Counterfactual Impact: It is unclear whether Lionheart Ventures’ investments enable safety work that would not otherwise occur, or whether the firm primarily funds companies that would have received funding from other sources. The counterfactual impact question is particularly relevant for investments like Anthropic, which has raised over $7 billion from multiple sources.

  4. Scale and Influence: Whether a $25 million inaugural fund and 33 investments can meaningfully influence AI safety outcomes at the scale of the broader AI industry (which involves hundreds of billions in capital deployment) remains uncertain.

  5. Mental Health Connection: The strategic relationship between frontier mental health investments and AI safety is not fully articulated. While improved mental health and decision-making capacity could plausibly contribute to better navigation of AI risks, the causal pathway and magnitude of impact are speculative.

  6. Advisory Team Engagement: The depth of engagement between the firm’s prominent AI safety advisors and portfolio companies is not publicly documented. Whether advisory relationships translate into substantive safety improvements in portfolio companies beyond capital allocation is unclear.

References

  1. Lionheart Ventures - WaveUp Hub

  2. Lionheart Ventures - F6S Profile

  3. Lionheart Ventures Official Website

  4. Lionheart Ventures - WaveUp Hub

  5. Lionheart Ventures - WaveUp Hub

  6. Lionheart Ventures Team Page

  7. Lionheart Ventures Team Page

  8. Lionheart Ventures Team Page

  9. Lionheart Ventures Team Page

  10. Lionheart Ventures Team Page

  11. Lionheart Ventures Team Page

  12. Lionheart Ventures Team Page

  13. Lionheart Ventures - Private Equity International

  14. Lionheart Ventures - WaveUp Hub

  15. Partner, Mental Health/Psychedelic Focus - Lionheart Ventures Job Posting

  16. Lionheart Ventures - Private Equity International

  17. Lionheart Ventures - Private Equity International

  18. Lionheart Ventures Official Website

  19. Some for-profit AI alignment org ideas - LessWrong

  20. Lionheart Ventures - WaveUp Hub

  21. Anthropic Fellows Program Lead, Alignment Science - Job Posting

  22. Lionheart Ventures Portfolio

  23. Lionheart Ventures - WaveUp Hub

  24. Lionheart Ventures Official Website

  25. Lionheart Ventures - WaveUp Hub

  26. Lionheart Ventures - Octa

  27. Lionheart Ventures - Venture Capital Archive

  28. EA-Aligned Impact Investing: Mind Ease Case Study - EA Forum

  29. Lionheart Ventures Team Page

  30. Milan Griffes - EA Forum Profile

  31. EA-Aligned Impact Investing: Mind Ease Case Study - EA Forum

  32. EA-Aligned Impact Investing: Mind Ease Case Study - EA Forum

  33. AI Safety and Entrepreneurship v1.0 - EA Forum

  34. Linkpost: An update from Good Ventures - EA Forum

  35. Lionheart Ventures Portfolio

  36. Lionheart Ventures - WaveUp Hub

  37. Lionheart Ventures - WaveUp Hub

  38. Lionheart Ventures - F6S Profile

  39. AI Safety and Entrepreneurship - Alignment Forum

  40. Lionheart Ventures - WaveUp Hub

  41. Anthropic Fellows Program Lead, Alignment Science - Job Posting

  42. Lionheart Ventures Portfolio

  43. Lionheart Ventures - Private Equity International

  44. Partner, Mental Health/Psychedelic Focus - Lionheart Ventures Job Posting

  45. Lionheart Ventures Portfolio

  46. The Funding Conversation We Left Unfinished - EA Forum