Seldon Lab

At a glance:

  • Type: AI security accelerator and research lab
  • Founded: Early 2025
  • Location: San Francisco, California
  • Focus: AGI security infrastructure, hardware governance, AI safety startups
  • Key innovation: First AI security-focused startup accelerator
  • Funding model: Invests up to $500k per startup, 5-10 companies per batch
  • Notable achievement: Batch 1 companies raised $10M+ and sold to xAI and Anthropic

Sources:

  • Official website: seldonlab.com
  • GitHub: github.com

Seldon Lab is a San Francisco-based organization functioning as both an AI security research lab and the first startup accelerator specifically focused on AGI security technologies.1 Founded in early 2025 by Esben Kran and Finn Metz, Seldon Lab positions AGI security as a field that will become “the second largest industry in history” and works to develop existential security technologies for humanity’s coexistence with superintelligence.23

The organization operates as a full-stack entity that simultaneously publishes research papers, builds technical systems, funds and accelerates startups, and shapes the AI safety field through direct work and portfolio companies.4 Seldon Lab’s approach combines academic research (with founders who have published 25+ papers at conferences like NeurIPS, ICLR, and ICML) with practical entrepreneurship, investing in companies building infrastructure for safe AGI deployment at society scale.5

Seldon Lab runs a 3-month accelerator program in San Francisco, investing in frontier AI safety and security startups focused on areas like hardware-level AI governance, export controls for chips, long-term agent coherence, and technologies to train billions of personalized AI models.6 The organization emphasizes technical solutions over surveillance-based approaches, developing infrastructure including supply chain verification, air-gap capabilities, manipulation detection, and silicon-level security.7

History

Seldon Lab was founded in early 2025 by Esben Kran and Finn Metz, who brought complementary expertise in AI security research and venture funding.8 Kran had previously founded an AI security lab with over 25 publications at major machine learning conferences (NeurIPS, ICLR, ICML) and worked as a lead data scientist.9 Metz brought experience supporting over $20 million in funding rounds, with a background in private equity and venture capital, having raised seven figures specifically for AI security research.10

The founders describe themselves as part of a group of global experts who realized the world was unprepared for superintelligence and decided to accelerate AI research and build ventures rather than wait for others to act.11 The organization initially considered a conventional for-profit structure but evolved to emphasize mission-driven work, with portfolio companies structured as Public Benefit Corporations.12

Seldon Lab began operations in early 2025 by gathering founders in a San Francisco warehouse to build AGI security infrastructure.13 The organization launched its first 3-month pilot program (Batch 1) in mid-2025 with four initial companies:

  • Andon Labs: Building safe autonomous AI organizations with alignment and safety guarantees14
  • Lucid Computing: Developing a hardware-rooted, zero-trust platform for verifying AI chip usage and data location (CEO: Kristian Rönn)15
  • Workshop Labs: Enabling personal sovereignty through per-person AI model training and ownership16
  • DeepResponse: Creating autonomous cyber defense systems for the AGI era17

The pilot program achieved significant early traction. Batch 1 companies collectively raised over $10 million in funding, sold security solutions to both xAI and Anthropic, and patented verifiable compute inventions.18 The companies were featured in Time magazine and converted to Public Benefit Corporation structures.19

Following the success of the pilot program, Seldon Lab announced Batch 2 in late 2025 and early 2026, continuing to recruit founders focused on AI security infrastructure.20 The organization has received $53,000 in non-dilutive funding from the Survival and Flourishing Fund (SFF) over 12 months, along with in-kind contributions and investments from private investors.21

In December 2025, Seldon Lab released an “Inside Seldon Lab” video highlighting their dual model of publishing research while building companies and systems.22 The organization emphasizes that Batch 2 funding is routed through Manifund as dilutive investment using YC SAFE agreements, with returns flowing to funders’ balances.23

Research Focus Areas

Seldon Lab’s research and development work centers on several key technical areas for AGI security:

Hardware-Level Governance: The organization prioritizes technologies that provide security guarantees at the hardware and infrastructure level, including supply chain verification with privacy preservation, air-gap capabilities enabling millisecond-level isolation and shutdown, redundant sovereignty across geopolitical boundaries, and attack-resilient design with runtime monitoring that assumes breaches will occur.24
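
Seldon Lab has not published implementations of these mechanisms, but the “assume breaches will occur” runtime-monitoring idea can be illustrated with a deliberately simplified watchdog: a supervisor polls a worker process's telemetry and severs it (the software analogue of tripping an air gap) the moment a policy check fails. Everything below, from the egress budget to the telemetry schema and poll interval, is a hypothetical placeholder rather than Seldon Lab's design:

```python
# Illustrative "assume breach" watchdog: poll worker telemetry, hard-stop on
# violation. The policy predicate and telemetry schema are hypothetical.
import json
import subprocess
import time

POLICY_MAX_EGRESS_BYTES = 1_000_000  # hypothetical per-tick egress budget

def violates_policy(telemetry) -> bool:
    """Fail closed: malformed or missing telemetry counts as a violation."""
    try:
        return telemetry["egress_bytes"] > POLICY_MAX_EGRESS_BYTES
    except (KeyError, TypeError):
        return True

def supervise(cmd: list[str], poll_ms: int = 5) -> None:
    """Run `cmd`, which emits one JSON telemetry line per tick on stdout."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            try:
                telemetry = json.loads(line)
            except json.JSONDecodeError:
                telemetry = None
            if violates_policy(telemetry):
                proc.kill()  # millisecond-scale hard isolation of the worker
                raise RuntimeError("policy violation: worker terminated")
            time.sleep(poll_ms / 1000)
    finally:
        if proc.poll() is None:
            proc.kill()
```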

Long-Term Agent Coherence: A recurring focus involves demonstrating and measuring long-term coherence in AI agents, examining how AI systems maintain consistent task pursuit over time. This work includes dataset development in collaboration with organizations such as METR and the UK AI Security Institute.25
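
Seldon Lab has not published a coherence metric; one simple way such a measurement could be operationalized is to embed the agent's stated goal and each of its subsequent actions, then watch whether action-goal similarity decays over an episode. The embed() function below is a hypothetical stand-in for a real sentence-embedding model (it returns deterministic random vectors, so its outputs are only illustrative):

```python
# Illustrative coherence curve: similarity of each agent action to its goal.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def coherence_curve(goal: str, actions: list[str]) -> list[float]:
    """Cosine similarity of each action to the original goal, in order.
    A flat curve suggests stable task pursuit; steady decay suggests drift."""
    g = embed(goal)
    return [float(embed(a) @ g) for a in actions]

scores = coherence_curve(
    "summarize the quarterly security audit",
    ["open audit report", "extract key findings", "draft summary"],
)
print(scores)  # with a real embedder, watch for monotone decay over long runs
```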

Export Controls and Chip Security: Seldon Lab works on improving export controls and security mechanisms for AI compute infrastructure, enhancing chip-level security and export compliance.26

Societal Autonomy Protection: Research includes manipulation detection (distinguishing influence from information), authenticity verification to differentiate human from AI-generated content, cognitive firewalls to filter AI access to human attention, and autonomy audits measuring reductions in human autonomy.27

The organization has accelerated the production of over 20 research papers through what they describe as “50 hackathons” involving more than 5,000 participants.28 This research accelerator model combines intensive collaborative events with startup development to rapidly advance both academic knowledge and practical systems.

One line of ML monitoring research sometimes associated with the Seldon name (two-sample tests for drift detection conditional on context, applied to images from the CelebA dataset, ECG time series, and Adult Census tabular data, with applications to GANs and autoencoders for classification and regression tasks) comes from Seldon.io, the unrelated MLOps company covered in the disambiguation section below, rather than from Seldon Lab.29
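
For readers unfamiliar with the underlying technique, a kernel two-sample test asks whether two samples plausibly come from the same distribution; the context-conditional variant additionally conditions the comparison on covariates such as time of day or subpopulation. Below is a minimal NumPy sketch of the unconditional version, with all parameters chosen for illustration:

```python
# Minimal kernel two-sample (MMD) drift test with a permutation p-value.
# All parameters are illustrative; this is the unconditional variant.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Pairwise RBF kernel matrix between the rows of a and b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased estimate of squared Maximum Mean Discrepancy."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

def mmd_permutation_test(x_ref, x_new, n_perm=200, gamma=1.0, seed=0) -> float:
    """p-value for H0: reference and serving windows share a distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2(x_ref, x_new, gamma)
    pooled = np.vstack([x_ref, x_new])
    n = len(x_ref)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        exceed += mmd2(pooled[perm[:n]], pooled[perm[n:]], gamma) >= observed
    return (exceed + 1) / (n_perm + 1)

# Example: a mean-shifted serving window should yield a small p-value.
x_ref = np.random.default_rng(1).normal(0.0, 1.0, size=(100, 5))
x_new = np.random.default_rng(2).normal(0.5, 1.0, size=(100, 5))
print(f"p-value: {mmd_permutation_test(x_ref, x_new):.3f}")
```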

Beyond research, Seldon Lab emphasizes building practical deployment infrastructure including:

  • Capability attestation and verification systems
  • Rollback mechanisms for AI systems (a minimal sketch follows this list)
  • Silicon-level security measures
  • AI network defenses
  • Verification-as-a-service platforms30
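
These items are described in the source material only at the level of goals. As a concreteness aid (not Seldon Lab's actual design), a rollback mechanism can be as small as a versioned model registry whose “serving” pointer can be reverted in one operation:

```python
# Minimal versioned model registry with rollback (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    versions: dict = field(default_factory=dict)
    history: list = field(default_factory=list)  # deployment order

    def deploy(self, version: str, artifact: bytes) -> None:
        self.versions[version] = artifact
        self.history.append(version)

    @property
    def serving(self) -> str:
        return self.history[-1]

    def rollback(self) -> str:
        """Atomically revert the serving pointer to the previous version."""
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.history.pop()
        return self.serving

reg = ModelRegistry()
reg.deploy("v1", b"...weights...")
reg.deploy("v2", b"...weights...")
assert reg.rollback() == "v1"  # one operation swaps serving back to v1
```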

The organization advocates for transparent, cryptography-based governance alternatives rather than repurposing surveillance systems from entities like the NSA or authoritarian governments for AI monitoring.31

Accelerator Program

Seldon Lab operates as the first AI security-focused startup accelerator, running 3-month programs in San Francisco that require full-time, in-person participation from founders.32 The accelerator invests up to $500,000 per startup in 5-10 companies per batch, focusing on early-stage ventures developing AI security and assurance technologies.33

The program combines several elements:

  • Capital Investment: Direct funding for pre-seed AI safety infrastructure companies34
  • Technical Guidance: Access to founders and researchers with deep AI security expertise35
  • Network Effects: Connections to leading startups, investors, and AI safety trailblazers36
  • Event Programming: Seldon Founder Dinners and culminating events like the Seldon Grande Finale37

Seldon Lab positions itself as backing “fundable defensive tech” that can scale to address existential AI risks while building viable businesses.38 The organization’s Request for Startups (RFS) emphasizes underrepresented areas in AI safety infrastructure such as:

  • GPU monitoring and verification
  • AI insurance and risk assessment
  • Supply chain security
  • Hardware-level governance mechanisms
  • Resilience and redundancy systems39

The accelerator model explicitly aims to prove that AI security work can be simultaneous with paper publication, system-building, and field-shaping, rather than requiring sequential focus on these activities.40

Key People

Esben Kran (Co-Founder): Kran founded an AI security lab that produced over 25 publications before starting Seldon Lab. He has published research at major machine learning conferences including NeurIPS, ICLR, and ICML, and previously worked as a lead data scientist. He also co-founded Apart Research, contributing to his extensive connections in AI safety labs, among researchers, and with funders.41

Finn Metz (Co-Founder): Metz brings experience from private equity and venture capital, having supported funding rounds totaling over $20 million. He raised seven figures specifically for AI security research and co-founded the AI Safety Founders Community before launching Seldon Lab.42

Kristian Rönn (CEO, Lucid Computing): Rönn leads one of Batch 1’s most prominent companies, developing hardware-rooted verification for AI compute. He has publicly praised Seldon Lab as a top hub for AI security work.43

Axel Backlund (Co-founder, Andon Labs): Backlund describes Seldon as a “spiritual home” for his work on safe autonomous AI organizations, participating in the inaugural cohort.44

Seldon Lab has engaged prominent figures from AI safety and venture capital as guests and advisors, including:

  • Geoff Ralston: Former Y Combinator president and founder of SAIF.vc45
  • Joe Allen: Transhumanism editor at Steve Bannon’s Warroom and author of Dark Aeon46
  • Eric Ho: CEO of Goodfire47
  • Buck Shlegeris: CEO of Redwood Research48
  • Jeremie Harris: Co-founder of Gladstone AI49
  • Nitarshan Rajkumar: UK AISI founder50

Batch 1 Portfolio Outcomes

The four companies from Seldon Lab’s pilot program achieved significant early milestones:

Lucid Computing developed hardware-level guarantees via remote attestation, enabling cryptographic verification that AI compute is used as specified and data remains in designated locations.51 This addresses compliance and security concerns for organizations deploying AI systems.
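
Lucid Computing's actual protocol is not public. The sketch below shows only the general shape of remote attestation: a device signs a measurement of its workload state with a hardware-protected key, and a verifier checks both that the signature is authentic and that the measurement matches an approved configuration. The cryptography library calls are real; the measurement format is invented for the example, and a production key would live in a TPM or TEE rather than in Python memory:

```python
# General shape of remote attestation (not Lucid Computing's protocol):
# the device signs a measurement; the verifier checks signature + value.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device side. In practice the key lives in a TPM/TEE, not process memory.
device_key = Ed25519PrivateKey.generate()
measurement = hashlib.sha256(b"model=foo.bin;region=eu-west;fw=1.2.3").digest()
quote = device_key.sign(measurement)

# Verifier side. The device public key would be trusted via a cert chain.
def verify_quote(public_key, quote: bytes, measurement: bytes,
                 expected: bytes) -> bool:
    try:
        public_key.verify(quote, measurement)  # is the device authentic?
    except InvalidSignature:
        return False
    return measurement == expected             # is the config approved?

expected = hashlib.sha256(b"model=foo.bin;region=eu-west;fw=1.2.3").digest()
assert verify_quote(device_key.public_key(), quote, measurement, expected)
```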

Andon Labs created automated real-world incident systems, conducting physical agent evaluations for Anthropic and xAI.52 Their work focuses on ensuring AI-run autonomous organizations maintain alignment and safety properties.

DeepResponse built systems for real-time monitoring and response speed-up in cyber defense, preparing for autonomous threats in the AGI era.53

Workshop Labs developed cryptographic verification for private data and model pipelines, enabling billions of personalized AI models with individual ownership and sovereignty.54

By the conclusion of Batch 1, the portfolio companies had collectively:

  • Raised over $10 million in follow-on funding55
  • Closed hundreds of thousands of dollars in annual recurring revenue56
  • Secured customer contracts with xAI and Anthropic57
  • Patented new inventions for verifiable compute58
  • Converted to Public Benefit Corporation structures59
  • Been featured in Time magazine60

The organization describes achieving over $50 million in value by 2025 and securing a 10x return on an investment in Goodfire via Juniper Ventures.61

Funding and Investment Model

Seldon Lab operates with a mixed funding model. The organization has received $53,000 in non-dilutive grant funding from the Survival and Flourishing Fund (SFF) over 12 months, along with undisclosed in-kind contributions and investments from private investors.62 The accelerator itself is structured as Seldon Labs PBC (Public Benefit Corporation).63

For Batch 2, Seldon Lab routes funding through Manifund using dilutive investment structures (YC SAFE agreements), with returns flowing to funders’ balances rather than being treated as pure grants.64 This hybrid model allows the organization to maintain mission focus while creating potential financial returns.
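
Seldon Lab's actual SAFE terms are not disclosed; the worked example below uses invented numbers only to show the mechanics of a post-money valuation-cap SAFE, the instrument defined by YC's standard documents:

```python
# Worked example of a post-money valuation-cap SAFE (invented numbers;
# Seldon Lab's actual terms are not public).
investment = 500_000          # accelerator's SAFE check
post_money_cap = 10_000_000   # negotiated post-money valuation cap

# Under YC's post-money SAFE, ownership at conversion is investment / cap.
ownership = investment / post_money_cap            # 0.05 -> 5.0%

# If the company later exits at $100M, proceeds flow back to funders:
exit_value = 100_000_000
proceeds = ownership * exit_value                  # $5,000,000
multiple = proceeds / investment                   # 10.0x

print(f"ownership={ownership:.1%}, proceeds=${proceeds:,.0f}, {multiple:.0f}x")
```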

Seldon Lab invests up to $500,000 per startup in 5-10 companies per batch.65 The organization emphasizes early-stage, pre-seed investment in AI safety infrastructure, targeting companies that can scale to “unicorn-scale” valuations within 12 months.66

Finn Metz’s background includes supporting funding rounds totaling over $20 million, though these figures represent his broader professional experience rather than Seldon Lab’s specific capital deployment.67

Disambiguation from Seldon.io

Seldon Lab is entirely distinct from Seldon.io (also known as Seldon Technologies Limited), a British MLOps firm founded in 2014. Seldon.io has raised significantly more venture capital, including €3 million in seed funding (2019, led by Amadeus Capital Partners), £7.1 million in Series A (November 2020, led by AlbionVC and Cambridge Innovation Capital), and $20 million in Series B (led by Bright Pixel Capital), but operates in the machine learning deployment space rather than AI safety.68 These are separate organizations with no connection beyond similar names.

Alignment and Existential Risk Connections

Seldon Lab explicitly positions its work within the context of existential risk from advanced AI systems. The organization’s stated mission is to develop “existential security technologies” for humanity’s coexistence with superintelligence.69 This framing connects directly to concerns about AI alignment and the potential for catastrophic outcomes from misaligned AGI systems.

The organization’s focus areas address several key alignment challenges:

  • Long-term agent coherence: Understanding whether AI systems maintain consistent goals over extended periods relates directly to instrumental convergence and goal stability concerns in alignment research.70
  • Hardware-level verification: Cryptographic and physical guarantees about AI system behavior provide potential solutions to monitoring and containment challenges.71
  • Privacy-preserving capability evaluation: Extending frameworks like AISI’s Inspect so that AI systems can prove safety properties without revealing internal details addresses transparency-security tradeoffs (a minimal Inspect task is sketched after this list).72
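
For context, Inspect is the UK AI Security Institute's open-source evaluation framework. A minimal task looks roughly like the following (the sample content and scorer choice are illustrative; the privacy-preserving extensions described above would layer on top of this API rather than replace it):

```python
# Minimal Inspect task (inspect_ai). The sample content is illustrative.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def safety_probe():
    return Task(
        dataset=[Sample(input="What is 2 + 2?", target="4")],
        solver=generate(),   # query the model under evaluation
        scorer=includes(),   # pass if the target string appears in the output
    )

# Run with: inspect eval safety_probe.py --model <provider/model-name>
```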

Seldon Lab emphasizes risks including AI systems “chasing human performance leading to catastrophe” and cyber offense capabilities demonstrated by incidents such as Chinese state-sponsored hackers abusing Anthropic’s Claude models to automate attacks.73

Within the AI safety field-building landscape, Seldon Lab occupies a distinctive position as the first accelerator specifically focused on AI security startups. Analysis from the Effective Altruism community notes that AI safety incubators like Seldon Lab arrived “late relative to need” and often occupy “low status positions” compared to research organizations, despite their contributions to building the field.74

The organization addresses what some observers identify as systematic undervaluing of founders and field-builders in AI safety, where research and writing receive more status than entrepreneurship and organization-building.75 Seldon Lab’s model explicitly combines research output (20+ papers) with company creation and systems deployment, challenging assumptions that these activities must be sequential.76

Open-Source Contributions

Seldon Lab contributes to and collaborates on several open-source AI safety initiatives:

  • Model Context Protocol (MCP): An open standard for connecting AI systems to data sources77
  • Inspect evaluation framework: Open-source AI testing tools78
  • OpenMined: Privacy-preserving machine learning community79
  • Differential Privacy Library: Production-ready privacy tools80
  • SOLID Web Decentralization Project: Tim Berners-Lee’s initiative for personal data repository decentralization81

Criticisms and Limitations

Seldon Lab acknowledges the risk that accelerator companies may shift away from safety-critical problems despite guidance, leading to “lower impact density” while still producing AI-security-adjacent tools.82 This represents a fundamental tension in the accelerator model: companies need product-market fit and revenue to survive, which may pull them toward commercially viable but less impactful directions.

Critics note that AI safety incubators like Seldon Lab emerged “late relative to need,” with the community’s hesitation about “rapid org scaling” and “mass movement building” potentially delaying the creation of such programs.83 The 2022 memo opposing mass movement building to avoid dilution occurred while frontier AI firms were scaling staff 2-3x per year, suggesting possible strategic miscalculations in the AI safety ecosystem.84

As of early 2026, Seldon Lab has completed only one full batch, with Batch 2 recently launched. While Batch 1 achieved impressive early metrics ($10M+ raised, sales to xAI and Anthropic), the long-term impact of portfolio companies on AGI security remains uncertain. The organization’s claim that AGI security will become “the second largest industry in history” represents an ambitious prediction rather than an established fact.85

Commentary on the AI safety field suggests that programs like Seldon Lab help address gaps for founders lacking access to elite networks (organizations like Constellation and LISA are mentioned as comparison points).86 However, with only 4 companies in Batch 1 and 5-10 planned per batch, the accelerator can only serve a small fraction of potential AI safety entrepreneurs, raising questions about selection processes and accessibility.

Open Questions

Several important questions remain about Seldon Lab’s approach and potential impact:

  1. Theory of Change: How does funding AI security startups translate to reduced existential risk from AGI? The causal pathway from profitable security companies to existential safety remains underspecified.

  2. Commercial Viability: Can AI security infrastructure companies achieve sustainable business models while maintaining focus on the most critical safety challenges, or will market incentives inevitably pull them toward less impactful but more profitable directions?

  3. Timeline Assumptions: Seldon Lab’s emphasis on deploying “world-scale AGI deployment tech” implies assumptions about AGI timelines and deployment scenarios. The optimal allocation of resources depends heavily on whether transformative AI arrives in 5, 10, or 20+ years.

  4. Competitive Landscape: How will Seldon Lab-backed startups compete with well-funded incumbents (major AI labs, cybersecurity firms, cloud providers) who may develop similar infrastructure with vastly more resources?

  5. Technical Feasibility: Several focus areas (hardware-level verification, cryptographic safety proofs, manipulation detection) involve unsolved technical problems. The feasibility of developing effective solutions on startup timelines and budgets remains uncertain.

  6. Regulatory Interaction: How will AI security infrastructure interact with emerging regulatory frameworks? Companies may need to navigate complex compliance requirements while maintaining security properties.

  7. Scale and Coverage: With 5-10 companies per batch and $500k investments, can Seldon Lab achieve sufficient scale and coverage across the many dimensions of AGI security to meaningfully reduce risk?

References

  1. Seldon Lab

  2. Seldon Lab Mission

  3. Inside Seldon Lab - Luma

  4. Inside Seldon Lab - YouTube

  5. Inside Seldon Lab - Luma

  6. Seldon Lab - OpenVC

  7. Seldon Lab Mission

  8. Inside Seldon Lab - Luma

  9. Inside Seldon Lab - Luma

  10. Inside Seldon Lab - Luma

  11. Seldon Lab Mission

  12. Seldon Lab Mission

  13. Seldon Lab Mission

  14. Seldon Lab Mission

  15. Seldon Lab

  16. Seldon Lab Mission

  17. Seldon Lab Mission

  18. Seldon Lab Mission

  19. Seldon Lab Mission

  20. Seldon Lab

  21. AI Security Startup Accelerator Batch 2 - Manifund

  22. Inside Seldon Lab - YouTube

  23. AI Security Startup Accelerator Batch 2 - Manifund

  24. Seldon Lab Mission

  25. Inside Seldon Lab - YouTube

  26. Seldon Lab

  27. Seldon Lab Mission

  28. Inside Seldon Lab - YouTube

  29. Seldon.io Research

  30. Seldon Lab Mission

  31. Seldon Lab Mission

  32. Inside Seldon Lab - Luma

  33. AI Security Startup Accelerator Batch 2 - Manifund

  34. AI Security Startup Accelerator - Effective Altruism

  35. Seldon Lab

  36. Seldon Lab

  37. Seldon Lab

  38. Inside Seldon Lab - Luma

  39. AI Security Startup Accelerator Batch 2 - Manifund

  40. Inside Seldon Lab - YouTube

  41. AI Security Startup Accelerator Batch 2 - Manifund

  42. AI Security Startup Accelerator Batch 2 - Manifund

  43. Seldon Lab

  44. Seldon Lab Mission

  45. Seldon Lab Mission

  46. Seldon Lab Mission

  47. Seldon Lab Mission

  48. Seldon Lab Mission

  49. Inside Seldon Lab - YouTube

  50. Inside Seldon Lab - YouTube

  51. Seldon Lab Mission

  52. Seldon Lab Mission

  53. Seldon Lab Mission

  54. Seldon Lab Mission

  55. Seldon Lab Mission

  56. AI Security Startup Accelerator Batch 2 - Manifund

  57. Seldon Lab Mission

  58. Seldon Lab Mission

  59. Seldon Lab Mission

  60. Seldon Lab Mission

  61. Seldon Summer 2025 Batch

  62. AI Security Startup Accelerator Batch 2 - Manifund

  63. AI Security Startup Accelerator Batch 2 - Manifund

  64. AI Security Startup Accelerator Batch 2 - Manifund

  65. AI Security Startup Accelerator Batch 2 - Manifund

  66. Seldon Summer 2025 Batch

  67. Inside Seldon Lab - Luma

  68. Seldon Series B Announcement

  69. Seldon Lab Mission

  70. Seldon Lab

  71. Seldon Lab Mission

  72. Seldon Lab Mission

  73. Inside Seldon Lab - YouTube

  74. AI Safety Undervalues Founders - EA Forum

  75. AI Safety Undervalues Founders - EA Forum

  76. Inside Seldon Lab - YouTube

  77. Seldon Lab Mission

  78. Seldon Lab Mission

  79. Seldon Lab Mission

  80. Seldon Lab Mission

  81. Seldon Lab Mission

  82. AI Security Startup Accelerator Batch 2 - Manifund

  83. AI Safety Undervalues Founders - EA Forum

  84. AI Safety Undervalues Founders - EA Forum

  85. Inside Seldon Lab - YouTube

  86. AI Safety Undervalues Founders - EA Forum