Secure AI Project
Quick Assessment
| Attribute | Assessment |
|---|---|
| Type | Policy advocacy organization |
| Founded | ≈2022-2023 (exact date unspecified) |
| Leadership | Nick Beckstead (Co-Founder & CEO) |
| Focus | Legislative AI safety and security requirements |
| Funding | Individual donors and nonprofit institutions (no corporate funding) |
| Key Approach | Mandatory safety protocols, whistleblower protections, legal incentives |
| Impact | Rated a high funding priority by evaluators; credited with confidential safety improvements at a major AI lab |
Overview
The Secure AI Project is a San Francisco-based organization that develops and advocates for pragmatic policies to reduce risks from advanced AI systems. Co-founded and led by Nick Beckstead as CEO, the organization distinguishes itself by focusing on legislative and regulatory interventions rather than purely voluntary industry commitments.1
The organization operates on the premise that the AI ecosystem will be stronger and more secure if large AI developers are legally required to publish safety and security protocols, if whistleblowers are protected from retaliation, and if developers have clear incentives to mitigate risk in accordance with industry best practices.2 Rather than relying on voluntary commitments, which some major AI developers have already made, Secure AI Project pushes for these principles to be codified in state and federal law.
Secure AI Project explicitly does not accept corporate funding or funds from foreign governments, instead relying on individual donors and nonprofit institutions aligned with its mission.2 According to a 2025 nonprofit review by Zvi Mowshowitz, the organization has achieved “big wins” including enhancing safety practices at a major AI lab, with details remaining confidential.3
Nick Beckstead: Background and Path to Founding
Nick Beckstead brings extensive experience in AI safety, governance, and effective altruism philanthropy to his role as Secure AI Project’s co-founder and CEO. Born in 1985, he earned a bachelor’s degree in mathematics and philosophy from the University of Minnesota before completing a Ph.D. in Philosophy at Rutgers University.1 His doctoral dissertation made important early contributions to longtermism, focusing on existential risk, population ethics, space colonization, and differential progress.4
As a graduate student, Beckstead co-founded the first US chapter of Giving What We Can, pledging to donate half of his post-tax income to cost-effective organizations fighting global poverty.4 This early commitment to effective altruism would shape his subsequent career trajectory.
Beckstead served as a Research Fellow at Oxford University’s Future of Humanity Institute before joining Open Philanthropy (which rebranded to Coefficient Giving in November 2025) as a Program Officer in 2014.1 At Open Philanthropy, he oversaw research and grantmaking related to global catastrophic risk reduction, with particular focus on advanced AI risks. His work included grants such as $590,000 to the University of Tübingen for robustness research, $11.35 million to the Center for Human-Compatible AI for organizational support, and $265,000 to UC Santa Cruz for adversarial robustness research.5
During his time at Open Philanthropy, Beckstead co-authored sections on AI alignment risks and noted ongoing challenges in finding qualified people to work on “the strategic aspect of potential risks from advanced AI,” indicating significant funding availability but limited talent pools.6 He emphasized two key risk categories: alignment problems (misaligned powerful AI) and power concentration (bad actors gaining AI advantage).7
After Open Philanthropy, Beckstead served as Policy Lead at the Center for AI Safety and became CEO of the Future Fund (part of the FTX Foundation) in November 2021.1 He resigned from the Future Fund in November 2022 when FTX collapsed.4 Following this, he co-founded the Secure AI Project, applying his experience in AI safety grantmaking, policy, and governance to legislative advocacy work.
Mission and Policy Approach
Secure AI Project’s core mission centers on three legislative priorities:
1. Mandatory Safety and Security Protocols (SSPs): The organization advocates for legal requirements that large AI developers must publish and implement protocols to assess, test, and mitigate severe risks from their systems. This goes beyond voluntary commitments by making such protocols legally enforceable.2
2. Whistleblower Protections: Recognizing that internal voices may be critical for identifying safety issues, the organization pushes for legal protections against retaliation for those who report AI safety concerns.2
3. Risk Mitigation Incentives: Rather than relying purely on compliance, Secure AI Project advocates for creating clear legal and economic incentives that reward developers for implementing industry best practices in risk mitigation.2
The organization acknowledges that current AI systems offer substantial societal benefits while also creating risks—some well understood and others still being discovered as technology advances.2 This balanced perspective informs their pragmatic approach to policy advocacy.
The organization’s work aligns with broader AI governance discussions about model weight security, access controls, and screening procedures to prevent bad actors from exploiting powerful AI systems.8 Their 2024 reports and scenario planning work have been highlighted as examples of high-quality strategic thinking in this space.3
Recent Developments and Impact
According to a 2025 nonprofit assessment, Secure AI Project was rated as deserving “high” funding priority, with evaluators expressing high confidence in the organization’s continued leverage and impact.3 The assessment verified “big wins” including private improvements to safety practices at a major AI lab, though specific details remain confidential to protect ongoing relationships.
The organization’s scenario planning work has been noted as particularly strong, with 2024 reports cited as evidence of quality strategic analysis.3 Evaluators praised the detail-oriented approach and results achieved by the team.
Secure AI Project’s advocacy may have influenced legislative developments. In 2025, California approved an AI safety law, effective January 1, 2026, that requires AI developers to implement safeguards. The law aligns closely with the organization’s push for mandatory safety and security protocols, though available sources do not explicitly confirm a direct connection.9
The organization continues to pursue state and federal legislative advocacy while maintaining its independence through selective funding sources. Interested parties can contact the organization at info@secureaiproject.org for potential partnerships.2
Relationship to Broader AI Safety Ecosystem
Beckstead’s career trajectory places Secure AI Project at the intersection of several key AI safety institutions. His prior roles at Open Philanthropy, the Center for AI Safety, and the Future of Humanity Institute connect the organization to major nodes in the AI safety research and funding landscape.1
At Open Philanthropy, Beckstead co-authored views on AI alignment risks and loss of control scenarios, citing work by Nick Bostrom on potential large-scale harms from advanced AI.10 He advocated for “AI for AI safety” approaches to strengthen safety progress, risk evaluation, and capability restraint.7
In public discussions, Beckstead has acknowledged challenges in mainstream AI researcher engagement with technical safety work, noting “Kuhnian barriers” in the machine learning field that favor empirical results over philosophical safety motivations.11 He described how work that doesn’t resemble traditional ML papers can trigger “pseudoscience alarms” that hinder progress.11
Within the effective altruism community, Beckstead has been featured positively on platforms like the EA Forum and in 80,000 Hours podcasts discussing high-impact career paths in AI safety.4 He has recommended paths such as deep learning residencies (like the Google Brain Residency) over PhDs for quick industry entry into AI safety work.12
Criticisms and Concerns
While no direct criticisms of Secure AI Project itself appear in available sources, Beckstead’s career has intersected with some controversial areas within effective altruism. In 2017, while serving as a trustee of the Centre for Effective Altruism (CEA), he approved Open Philanthropy grants to CEA, creating an unacknowledged conflict of interest; he later stepped down but rejoined in 2021.13
More significantly, Beckstead’s role as CEO of the FTX Future Fund ended with his resignation in November 2022 when FTX collapsed.4 While the collapse stemmed from FTX’s broader financial misconduct rather than from the Future Fund’s own operations, the association marked a significant disruption in his career.
Some critics of effective altruism have questioned the movement’s shift toward longtermism and existential risk focus, including AI risk, as a core priority.14 Beckstead’s defense of AI and biosecurity funding as cost-effective drew scrutiny in 2018 discussions about whether such focus might harm EA’s broader public image.11
Additionally, during his time managing the Long-Term Future Fund, Beckstead made one small AI safety grant but conserved 96% of available funds, drawing indirect critique in AI safety funding reviews.15
Funding and Operations
Secure AI Project is funded exclusively by individual donors and nonprofit institutions aligned with its mission.2 The organization explicitly states it does not accept corporate funding or funds from foreign governments—a policy designed to maintain independence in its advocacy work.
Evaluators have recommended the organization for substantial donations, with large grants directed through tyler@lasst.org and smaller donations available through designated links.3 No specific funding amounts have been publicly disclosed.
This selective funding approach contrasts with Beckstead’s prior work at Open Philanthropy, where he oversaw grants totaling millions of dollars, including a £13.4 million (≈$16.2 million USD) portfolio for AI risks, biosecurity, and macrostrategy, of which roughly $12 million had been allocated by September 2019.5
Key Uncertainties
Several important aspects of Secure AI Project remain uncertain or undisclosed:
Founding details: The exact founding date and full list of co-founders beyond Nick Beckstead are not specified in available sources.1
Team composition: Beyond Beckstead as CEO, the organization’s staff size, key personnel, and board members are not publicly detailed.2
Specific achievements: While evaluators cite “big wins” including safety improvements at a major AI lab, these details remain confidential.3
Legislative success: The extent to which the organization’s advocacy directly influenced specific legislation, such as California’s 2025 AI safety law, has not been explicitly confirmed.9
Future strategy: How the organization will scale its efforts and whether it will expand beyond California and federal advocacy remains unclear.
Measurement: The organization has not published public impact assessments or metrics for evaluating policy influence.