Helen Toner

| Dimension | Assessment | Notes |
|---|---|---|
| Primary Role | AI Governance Researcher | Georgetown CSET Interim Executive Director |
| Global Recognition | TIME 100 AI 2024 | Listed among the most influential people in AI |
| OpenAI Board | 2021-2023 | Voted to remove Sam Altman; resigned after his reinstatement |
| Policy Influence | High | Congressional testimony, Foreign Affairs, The Economist |
| Research Focus | U.S.-China AI competition, AI safety, governance | CSET publications and grants |
| Academic Credentials | MA Security Studies (Georgetown), BSc Chemical Engineering (Melbourne) | Strong interdisciplinary background |
| EA Movement | Early leader | Founded EA Melbourne chapter; worked at GiveWell and Coefficient Giving |
| Attribute | Information |
|---|---|
| Birth Year | 1992 |
| Birthplace | Melbourne, Victoria, Australia |
| Nationality | Australian |
| Education | BSc Chemical Engineering, University of Melbourne (2014); Diploma in Languages, University of Melbourne; MA Security Studies, Georgetown University (2021) |
| High School | Melbourne Girls Grammar School |
| University Admission Score | 99.95 (Australian university admission rank) |
| Current Position | Interim Executive Director, Georgetown CSET (September 2025-present) |
| Previous Positions | Director of Strategy and Foundational Research Grants, CSET; Senior Research Analyst, Coefficient Giving; OpenAI Board Member |
| Languages | English, Mandarin Chinese (studied in Beijing) |

Helen Toner is an Australian AI governance researcher who became one of the most prominent figures in AI policy after her role in the November 2023 removal of Sam Altman as OpenAI’s CEO. She serves as Interim Executive Director of Georgetown University’s Center for Security and Emerging Technology (CSET), a think tank she helped establish in 2019 with $55 million in funding from Coefficient Giving (then Open Philanthropy).

Her career trajectory represents one of the most successful examples of effective altruism’s strategy of placing safety-focused individuals in positions of influence over AI development. Her path, from leading a student effective altruism group in Melbourne to sitting on the board of one of the world’s most powerful AI companies, demonstrates both the opportunities and the limitations of this approach.

Toner’s expertise spans U.S.-China AI competition, AI safety research, and technology governance. She has testified before multiple Congressional committees, written for Foreign Affairs and The Economist, and was named to TIME’s 100 Most Influential People in AI in 2024. Her work emphasizes that AI governance requires active government intervention rather than relying on industry self-regulation.

| Period | Role | Organization | Key Activities |
|---|---|---|---|
| 2014 | Chapter Founder/Leader | Effective Altruism Melbourne | Introduced to the EA movement as a university student; initially skeptical of AI risk before becoming convinced |
| 2015-2016 | Research Analyst | GiveWell | Researched AI policy issues including military applications and geopolitics |
| 2016-2017 | Senior Research Analyst | Coefficient Giving (then Open Philanthropy) | Advised policymakers on AI policy; recommended $1.76M+ in grants for AI governance |
| 2018 | Research Affiliate | Oxford Center for the Governance of AI | Spent 9 months in Beijing studying the Chinese AI ecosystem and Mandarin |
| Jan 2019 | Director of Strategy | Georgetown CSET | Helped found CSET and shape its research agenda |
| 2021-2023 | Board Member | OpenAI | Invited by Holden Karnofsky to replace him on the board |
| Mar 2022 | Director of Strategy & Foundational Research Grants | Georgetown CSET | Led a multimillion-dollar technical grantmaking function |
| Sep 2025 | Interim Executive Director | Georgetown CSET | Appointed to lead the center |

The most consequential moment of Toner’s career came on November 17, 2023, when she and three other OpenAI board members voted to remove Sam Altman as CEO. The five-day crisis that followed revealed deep tensions between AI safety governance and commercial AI development.

| Date | Time (PST) | Event | Details |
|---|---|---|---|
| Nov 17, 2023 | ≈12:00 PM | Board votes to remove Altman | Four board members (Toner, McCauley, D’Angelo, Sutskever) vote to fire Altman |
| Nov 17, 2023 | ≈12:05 PM | Altman learns of removal | Informed on a Google Meet while watching the Las Vegas Grand Prix; told 5-10 minutes before the announcement |
| Nov 17, 2023 | Afternoon | Public announcement | Board says Altman was “not consistently candid in his communications” |
| Nov 18, 2023 | | Anthropic merger discussions | Active discussions about merging OpenAI with Anthropic; Toner “most supportive” per Sutskever testimony |
| Nov 18-21, 2023 | | Pressure campaign | Microsoft and investors press for reversal; 95% of OpenAI employees threaten to leave |
| Nov 21, 2023 | | Altman reinstated | Returns as CEO; Toner and McCauley resign from the board |

The board’s official statement said Altman had “not been consistently candid in his communications.” In her May 2024 TED AI Show interview, Toner provided more detailed allegations:

| Allegation | Toner’s Claim | OpenAI Response |
|---|---|---|
| ChatGPT launch | Board learned of the ChatGPT release from Twitter in November 2022 rather than being informed in advance | ChatGPT was “released as a research project” built on GPT-3.5, which had already been available for 8 months |
| Startup Fund ownership | Altman did not disclose that he owned the OpenAI Startup Fund while presenting himself as an independent board member | Not addressed |
| Safety processes | Altman gave “inaccurate information” about the company’s safety processes | Independent review found the firing was “not based on concerns regarding product safety” |
| Executive complaints | Two executives reported “psychological abuse” from Altman, with screenshots and documentation | Bret Taylor: review concluded the decision was not based on safety concerns |
| Pattern of behavior | “For years, Sam had made it really difficult for the board… withholding information, misrepresenting things… in some cases outright lying” | Disputed by OpenAI’s current leadership |

In October 2025, Ilya Sutskever’s deposition in the Musk v. Altman lawsuit revealed additional details:

  • Sutskever prepared a 52-page memo for independent board members (Toner, McCauley, D’Angelo) weeks before the removal
  • The memo stated: “Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another”
  • “Most or all” supporting material came from OpenAI CTO Mira Murati
  • Altman was not shown the memo because Sutskever “felt that, had he become aware of these discussions, he would just find a way to make them disappear”

One of the most striking revelations was that within 48 hours of Altman’s firing, discussions were underway to potentially merge OpenAI with Anthropic:

| Aspect | Details |
|---|---|
| Timing | Saturday, November 18, 2023 |
| Toner’s Position | According to Sutskever, Toner was “most supportive” of the merger direction |
| Sutskever’s Position | “Very unhappy” about it; “really did not want OpenAI to merge with Anthropic” |
| Rationale | When warned the company would collapse without Altman, Toner allegedly responded that destroying OpenAI “could be consistent with its safety mission” |
| Toner’s Response | Disputed Sutskever’s account on social media after the deposition’s release |

| Outcome | Description |
|---|---|
| Immediate | Toner and McCauley resigned from the board; Altman reinstated |
| Governance changes | OpenAI reformed its board structure and added new independent directors |
| SEC investigation | February 2024: SEC reportedly investigating whether Altman misled investors |
| Toner’s influence | Named to TIME 100 AI 2024; increased requests from policymakers worldwide |
| Policy impact | Crisis highlighted tensions between AI safety governance and commercial interests |

Toner’s research at CSET spans three primary domains:

| Research Area | Description | Key Publications |
|---|---|---|
| U.S.-China AI Competition | Analysis of Chinese AI capabilities, military applications, and competitive dynamics | Congressional testimony, Foreign Affairs articles |
| AI Safety Research | Robustness, interpretability, reward learning, uncertainty quantification | CSET AI Safety series |
| AI Governance | Standards, testing, safety processes, accident prevention | Policy briefs, congressional testimony |

| Year | Type | Publication/Outlet | Topic |
|---|---|---|---|
| 2019 | Testimony | U.S.-China Economic and Security Review Commission | China’s Pursuit of AI |
| 2023 | Research Paper | CSET | “Artificial Intelligence and Costly Signals” (co-authored with Andrew Imbrie and Owen Daniels) |
| 2024 | Op-Ed | Foreign Affairs | “The Illusion of China’s AI Prowess” |
| 2024 | Op-Ed | The Economist | U.S.-China bilateral meetings on AI |
| 2024 | Testimony | Senate Judiciary Subcommittee | AI Oversight: Insider Perspectives |
| 2024 | Talk | TED2024 | “How to Govern AI, Even if it’s Hard to Predict” |
| 2025 | Testimony | House Judiciary Subcommittee | Trade Secrets and the Global AI Arms Race |

Toner has authored or contributed to multiple papers examining AI safety:

| Topic | Key Findings |
|---|---|
| Robustness | Research tracking how ML systems behave under distribution shift and adversarial conditions |
| Interpretability | Analysis of research trends in understanding ML system decision-making |
| Reward Learning | Study of how systems can be trained to align with human intentions |
| Uncertainty Quantification | Work introducing the concept to non-technical audiences |

She has stated: “Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems.”

According to Google Scholar, Toner’s research has drawn 3,286+ citations, indicating significant academic influence in the AI governance field.

Toner has testified before multiple Congressional committees on AI policy and U.S.-China competition.

| Date | Committee | Topic | Key Arguments |
|---|---|---|---|
| June 2019 | U.S.-China Economic and Security Review Commission | China’s Pursuit of AI | AI research is unusually open and collaborative; strategic immigration policy is critical; China’s approach to data privacy differs |
| September 2024 | Senate Judiciary Subcommittee | AI Oversight | The case that regulation would slow U.S. innovation is “not nearly as strong as it seems”; China is “far from being poised to overtake the United States” |
| May 2025 | House Judiciary Subcommittee | Trade Secrets and AI Arms Race | “AI IP is as core to U.S. competitiveness as rapid innovation”; adversaries must not have easy access to U.S. technology |

Based on her testimony and public statements, Toner advocates for:

| Policy Area | Position |
|---|---|
| Immigration | Access to skilled researchers and engineers is key; the U.S. ability to attract foreign talent is a critical advantage |
| Federal Research | Unlike China, the U.S. has mounted no major federal effort to strengthen fundamental AI research during the current deep learning wave |
| Regulation | Government must actively regulate AI; self-governance by companies “doesn’t actually work” |
| Safety Requirements | Supports mandatory safety testing and oversight for advanced AI systems |
| International Coordination | “Laboratory of democracy” approach: different jurisdictions should try different approaches and learn from the experiments |

Toner takes a nuanced position on AI existential risk:

| Aspect | Her View |
|---|---|
| Existential scenarios | Acknowledges the “whole discourse around existential risk from AI” while noting there are already “people who are being directly impacted by algorithmic systems and AI in really serious ways” |
| Polarization concern | Worried about polarization in which some want to “keep those existential or catastrophic issues totally off the table” while others are easily “freaked out about the more cataclysmic possibilities” |
| Industry concentration | Notes a “natural tension” between the view that fewer AI players aids coordination and regulation and concerns about power concentration |
| Government role | Believes government regulation is necessary; industry self-governance is insufficient |

Based on her TED2024 talk and public statements:

| Principle | Explanation |
|---|---|
| Adaptive Regulation | “Different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way” |
| Epistemic Humility | Policy should be developed despite uncertainty about AI capabilities and timelines |
| International Learning | “Laboratory of democracy has always seemed pretty valuable to me”; countries should try different approaches |
| Implementation Focus | “We’re shifting from a year of initial excitement to a year more of implementation, and coming back to earth” |

In her Foreign Affairs article “The Illusion of China’s AI Prowess,” Toner argued:

| Point | Assessment |
|---|---|
| Regulation Impact | Concerns that U.S. regulation would enable Chinese dominance are “overblown” |
| Chinese Capabilities | Chinese AI development “lags behind” the U.S.; Chinese LLMs “heavily rely on American research and technology” |
| Chinese Regulation | China is already imposing AI regulations of its own |
| Macro Headwinds | China faces significant economic and demographic challenges |
| U.S. Advantage | Strength in fundamental research is the “backbone of American advantage” |

| Period | Role | Activities |
|---|---|---|
| 2014 | University student | Introduced to the EA movement by organizers of EA Melbourne |
| 2014 | Initial skepticism | “Initially skeptical, dismissed them as philosophically confused and overly enthusiastic science fiction enthusiasts” |
| 2014 | Conversion | “Eventually embraced their perspective” and assumed leadership of the Melbourne chapter |
| 2015-2017 | Professional | Worked at GiveWell and Coefficient Giving (then Open Philanthropy), both EA-aligned organizations |
| 2019-Present | CSET | CSET was established through a $55 million grant from Coefficient Giving |

Toner’s career exemplifies the EA approach of:

  • Career capital building: Gaining expertise and credentials in a high-impact area
  • Institutional leverage: Positioning within influential organizations (OpenAI board, CSET)
  • Longtermism: Focus on AI risk as a priority concern for humanity’s future
  • Impact-focused grantmaking: Recommending grants while at Coefficient Giving ($1.5M to UCLA for AI governance fellowship, $260K to CNAS for advanced technology risk research)

Key Grant Recommendations at Coefficient Giving

| Date | Amount | Recipient | Purpose |
|---|---|---|---|
| May 2017 | $1,500,000 | UCLA School of Law | Fellowship, research, and meetings on AI governance and policy |
| August 2017 | $260,000 | CNAS (Richard Danzig) | Publication on potential risks from advanced technologies |

Toner’s trajectory from EA student organizer to influential AI governance figure represents a model the EA movement has promoted for “building career capital” in high-impact areas. Her path illustrates several key elements:

| Career Capital Element | Toner’s Example |
|---|---|
| Early commitment | Joined the EA movement as an undergraduate; took a leadership role immediately |
| Skills development | Chemical engineering degree provided an analytical foundation; security studies MA added policy expertise |
| Network building | GiveWell and Coefficient Giving connected her to funders and researchers |
| International experience | Beijing research affiliate role built China expertise few Western researchers possess |
| Institutional positioning | CSET founding role and OpenAI board seat provided levers of influence |

The CSET founding exemplifies the EA strategy of building institutions: Coefficient Giving (then Open Philanthropy) provided $55 million over five years specifically to create a think tank that would shape AI policy from within Washington’s foreign policy establishment. Toner was positioned as Director of Strategy from the beginning, allowing her to shape the center’s research agenda toward AI safety and governance concerns.

| Aspect | Details |
|---|---|
| Funding source | Coefficient Giving ($55M founding grant) |
| Mission alignment | CSET focuses on AI safety, security, and governance, all core EA longtermist concerns |
| Staff pipeline | Multiple CSET researchers have EA movement connections |
| Research priorities | U.S.-China competition, AI accidents, and standards/testing align with EA cause areas |
| Policy influence | Government briefings and congressional testimony extend EA ideas into policy |

Note: 80,000 Hours, the EA career advice organization that has featured Toner in multiple podcast episodes, is also funded by the same major donor (Coefficient Giving) that funds CSET.

TIME 100 Most Influential People in AI (2024)


TIME’s profile noted:

“In mid-November of 2023, Helen Toner made what will likely be the most pivotal decision of her career… One outcome of the drama was that Toner, a formerly obscure expert in AI governance, now has the ear of policymakers around the world trying to regulate AI.”

| Recognition Aspect | Details |
|---|---|
| Category | 100 Most Influential People in AI 2024 |
| Impact | “More senior officials have requested her insights than in any previous year” |
| Stated Mission | Her “life’s work” is to consult with lawmakers on sensible AI policy |

| Type | Details |
|---|---|
| Podcast Features | 80,000 Hours (multiple appearances), TED AI Show, Cognitive Revolution, Clearer Thinking |
| Media Platforms | ChinaFile contributor, Sourcelist expert |
| Government Briefings | Has briefed senior officials across the U.S. government |

| Person | Relationship | Context |
|---|---|---|
| Holden Karnofsky | Mentor/predecessor | Invited Toner to replace him on the OpenAI board in 2021 |
| Tasha McCauley | Board colleague | Co-voted to remove Altman; co-authored a post-crisis Economist piece |
| Adam D’Angelo | Board colleague | Remained on the OpenAI board after the crisis; received the 52-page memo |
| Ilya Sutskever | Board colleague | Co-voted to remove Altman; later disputed Toner’s account of events |
| Sam Altman | Adversary | Removed as OpenAI CEO by Toner and board colleagues |
| Jason Matheny | CSET colleague | CSET founding director; Toner was an early hire |
| Strength | Evidence |
|---|---|
| Policy expertise | Congressional testimony, Foreign Affairs publications, TIME 100 recognition |
| Interdisciplinary background | Engineering, security studies, and China expertise |
| Institutional access | Built relationships across government, academia, and industry |
| Research impact | 3,286+ Google Scholar citations |
| Risk awareness | Early EA convert; focused her career on AI governance |

| Criticism | Context |
|---|---|
| OpenAI board outcome | Altman reinstated within 5 days; the governance intervention failed to achieve lasting change |
| Communication | The board’s initial silence created an “information vacuum” that enabled the pressure campaign |
| Process | Independent review reportedly found the firing was not based on product safety or security concerns |
| Disputed accounts | Sutskever and Toner give conflicting accounts of the merger discussions and other events |

| Question | Relevance |
|---|---|
| Was the removal justified? | Evidence remains contested; no public resolution |
| Did safety concerns exist? | Toner claims safety-process misrepresentations; OpenAI’s review reportedly found otherwise |
| What were the alternatives? | Could the board have achieved its safety goals through different approaches? |
| Long-term impact? | Did the crisis ultimately help or hurt AI safety governance? |

As of September 2025, Toner serves as Interim Executive Director of Georgetown CSET, leading a research center with approximately 30 researchers focused on:

| Focus Area | Description |
|---|---|
| AI Safety Research | Robustness, interpretability, testing, standards |
| National Security | Military AI applications, intelligence implications |
| China Analysis | Chinese AI ecosystem, U.S.-China technology competition |
| Policy Development | Congressional testimony, government briefings, public writing |

She continues to advocate for active government regulation of AI, arguing that the “laboratory of democracy” approach of trying different regulatory experiments across jurisdictions is preferable to either inaction or one-size-fits-all approaches.

| Initiative | Description | Status |
|---|---|---|
| AI Safety Series | Publications on robustness, interpretability, reward learning | Ongoing |
| China AI Tracker | Monitoring Chinese AI ecosystem developments | Active |
| Congressional Engagement | Regular testimony and briefings | Active |
| Foundational Research Grants | Multimillion-dollar grantmaking for technical AI safety research | Expanded since 2022 |
| Government Fellowships | Placing researchers in policy positions | Ongoing |

Based on public statements, CSET under Toner’s leadership is expanding focus on:

| Area | Rationale |
|---|---|
| AI Standards and Testing | Need for rigorous evaluation before deployment in high-stakes settings |
| Accident Investigation | Learning from AI failures, analogous to aviation safety processes |
| Military AI Applications | Autonomous weapons, intelligence analysis, command and control |
| Compute Governance | Hardware controls as a lever for AI governance |
| International Coordination | Mechanisms for global AI governance despite geopolitical tensions |

The “Artificial Intelligence and Costly Signals” Paper Controversy


In October 2023, shortly before the OpenAI board crisis, Toner co-authored a paper with Andrew Imbrie and Owen Daniels that reportedly caused tension with Sam Altman.

| Aspect | Details |
|---|---|
| Title | “Artificial Intelligence and Costly Signals” |
| Publication | CSET, October 2023 |
| Co-authors | Andrew Imbrie, Owen Daniels |
| Topic | International signaling theory applied to AI development |

According to reports, the paper contained analysis that Altman viewed as unfavorable to OpenAI or as potentially undermining the company’s position. While the specific nature of the disagreement has not been fully disclosed, it illustrates the inherent tensions of having safety-focused researchers on commercial AI company boards:

| Tension | Description |
|---|---|
| Academic freedom | Researchers expect to publish without corporate approval |
| Fiduciary duty | Board members owe a duty to the organization |
| Competitive concerns | Analysis may affect the company’s competitive position |
| Governance role | Board members must maintain independence for effective oversight |

Toner’s experience on the OpenAI board, while ending in resignation, offers several lessons for AI governance:

| Challenge | Description | Toner’s Experience |
|---|---|---|
| Information asymmetry | Boards depend on management for information | Board allegedly not informed of the ChatGPT launch or other key developments |
| Resource imbalance | Management has full-time staff; board members serve part-time | Board lacked the resources to verify management claims |
| Stakeholder pressure | Employees, investors, and customers may oppose board actions | The 95% employee letter and Microsoft’s pressure reversed the board’s decision |
| Nonprofit/for-profit tension | OpenAI’s unusual structure created conflicts | The safety mission and commercial success proved difficult to balance |

Based on Toner’s public statements and the crisis outcome:

| Lesson | Implication |
|---|---|
| Communication matters | The board’s silence created a vacuum filled by critics |
| Coalition building | Safety-focused board members were isolated when the crisis hit |
| Structural power | Legal and financial structures determine who wins disputes |
| Transparency norms | AI companies may need new norms around board-management communication |

In her September 2024 Senate testimony, Toner stated:

“This technology would be enormously consequential, potentially extremely dangerous, and should only be developed with careful forethought and oversight.”

She has advocated for:

| Recommendation | Rationale |
|---|---|
| External oversight | Company self-governance is insufficient |
| Mandatory safety testing | Prevent deployment of dangerous systems |
| Whistleblower protections | Enable internal critics to raise concerns |
| Regulatory experimentation | Try different approaches across jurisdictions to learn what works |

Comparative Analysis: Toner vs. Other AI Safety Figures

| Figure | Background | Current Role | Primary Focus |
|---|---|---|---|
| Helen Toner | Chemical engineering + security studies | Georgetown CSET Interim ED | Governance, U.S.-China |
| Holden Karnofsky | Economics (Harvard) | Former Coefficient Giving co-CEO | Funding strategy, risk prioritization |
| Dario Amodei | Physics PhD (Princeton) | Anthropic CEO | Technical safety, constitutional AI |
| Jan Leike | ML PhD (ANU) | Anthropic Alignment Lead | Technical alignment research |
| Paul Christiano | CS PhD (UC Berkeley) | ARC founder | AI alignment, evaluation |

| Approach | Toner | Karnofsky | Amodei |
|---|---|---|---|
| Primary lever | Policy/governance | Grantmaking | Lab leadership |
| Technical focus | Low (policy-oriented) | Medium (strategy) | High (research) |
| China focus | High | Low | Low |
| Government engagement | Very high | Medium | Medium |
| Public communication | High | High | Medium |

| Figure | Mechanism | Estimated Impact |
|---|---|---|
| Toner | Congressional testimony, CSET research, media | Moderate policy influence; limited influence on technical development |
| Karnofsky | $300M+ in grants | High influence on field direction and funding |
| Amodei | Controls Anthropic’s resources | Very high influence on one major lab’s approach |
| Podcast | Host | Date | Topic |
|---|---|---|---|
| 80,000 Hours | Rob Wiblin | 2019 | CSET’s founding and AI policy careers |
| 80,000 Hours | Rob Wiblin | 2024 | Geopolitics of AI in China and the Middle East |
| TED AI Show | Bilawal Sidhu | May 2024 | OpenAI board crisis, AI regulation |
| Cognitive Revolution | Nathan Labenz | 2024 | AI safety, regulatory approaches |
| Clearer Thinking | Spencer Greenberg | 2024 | AI, U.S.-China relations, OpenAI board |
| Foresight Institute | | 2024 | “Who gets to decide AI’s future?” |
| Publication | Type | Topics |
|---|---|---|
| Foreign Affairs | Op-eds | U.S.-China competition, Chinese AI |
| The Economist | Op-eds | U.S.-China bilateral relations |
| TIME | Op-eds | AI governance |
| GiveWell Blog | Analysis | AI policy research (2015-2016) |
| CSET Publications | Research | AI safety, China, standards |

Toner maintains an active presence on X (formerly Twitter) as @hlntnr, where she shares research, responds to coverage, and occasionally disputes inaccurate reporting about her role in the OpenAI crisis.

| Aspect | Details |
|---|---|
| Duration | 9 months |
| Affiliation | Research Affiliate, Oxford University’s Center for the Governance of AI |
| Focus | Chinese AI ecosystem, AI and defense |
| Language Study | Mandarin Chinese |
| Outcome | Built firsthand expertise on Chinese AI that is rare among Western researchers |

| Area | Key Findings |
|---|---|
| AI Capabilities | Chinese AI lags the U.S. and relies heavily on American research and technology |
| Data Governance | Different approach to privacy; potential training-data advantages |
| Military AI | Military-civil fusion creates different development dynamics |
| Talent | Competition for researchers is a key variable |
| Regulation | China is implementing AI regulations despite perceptions otherwise |

Toner’s China expertise shapes her policy recommendations:

| Policy Area | Toner’s Position Based on China Research |
|---|---|
| Export Controls | Supports protecting AI IP; “adversaries cannot have easy access” |
| Immigration | The U.S. must maintain its talent advantage; China competes for researchers |
| Regulation | U.S. regulation won’t cede leadership to China; such concerns are “overblown” |
| Research Funding | The U.S. needs major federal investment in fundamental AI research |
| Type | Source | Description |
|---|---|---|
| Profile | CSET Staff Page | Official biography and publication list |
| Profile | TIME 100 AI 2024 | TIME’s profile on Toner’s influence |
| Interview | TED AI Show (May 2024) | First long-form interview after the OpenAI investigation |
| Interview | 80,000 Hours Podcast | In-depth discussion of AI geopolitics |
| Testimony | Senate Judiciary (Sep 2024) | Written testimony on AI oversight |
| Testimony | USCC (June 2019) | Testimony on China’s pursuit of AI |

| Type | Source | Description |
|---|---|---|
| Wikipedia | Helen Toner | Comprehensive biographical article |
| News | CNBC (May 2024) | Coverage of Toner’s TED AI Show revelations |
| News | Fortune (May 2024) | Details on the ChatGPT launch disclosure |
| Analysis | Decrypt (2025) | Coverage of the Sutskever deposition revelations |
| Article | Foreign Affairs | Toner’s op-ed on China’s AI capabilities |
| News | Fast Company | Profile on Toner’s AI safety advocacy |
| Interview | Journal of Political Risk | In-depth interview on AI risks |
| Announcement | CSET | Interim Executive Director appointment announcement |
  • Holden Karnofsky - Former Coefficient Giving co-CEO who invited Toner to OpenAI board
  • Ilya Sutskever - OpenAI co-founder and board member who co-voted to remove Altman
  • Sam Altman - OpenAI CEO removed and reinstated in November 2023
  • Dario Amodei - Anthropic CEO; Anthropic was discussed as potential merger partner
| Type | Citation |
|---|---|
| Scholar Profile | Google Scholar (3,286+ citations) |
| EA Forum | Helen Toner: Building Organizations |
| EA Forum | Helen Toner: Sustainable Motivation |
| Outlet | Article | Date |
|---|---|---|
| Bloomberg | Ex-OpenAI Director Says Board Learned of ChatGPT Launch on Twitter | May 2024 |
| South China Morning Post | Former OpenAI director details ousting of CEO Sam Altman | May 2024 |
| Engadget | OpenAI’s board allegedly learned about ChatGPT launch on Twitter | May 2024 |
| The Wire China | Helen Toner on Setting the Rules for AI | October 2024 |
| Axios | Helen Toner on the AI risk “you could not really talk about” | September 2025 |
| Year | Event |
|---|---|
| 1992 | Born in Melbourne, Victoria, Australia |
| 2014 | BSc Chemical Engineering, University of Melbourne; founded EA Melbourne chapter |
| 2015-2016 | Research Analyst at GiveWell |
| 2016-2017 | Senior Research Analyst at Coefficient Giving (then Open Philanthropy) |
| 2017 | Recommended $1.76M in AI governance grants |
| 2018 | Research Affiliate at Oxford GovAI; lived in Beijing studying Chinese AI |
| Jan 2019 | Joined Georgetown CSET as Director of Strategy at its founding |
| 2021 | MA Security Studies, Georgetown University; joined OpenAI board |
| Mar 2022 | Became CSET Director of Strategy and Foundational Research Grants |
| Oct 2023 | Co-authored “AI and Costly Signals” paper, reportedly creating tension with Altman |
| Nov 17, 2023 | Voted to remove Sam Altman as OpenAI CEO |
| Nov 21, 2023 | Resigned from OpenAI board after Altman’s reinstatement |
| May 2024 | First public interview about the OpenAI crisis (TED AI Show) |
| Sep 2024 | Testified before Senate Judiciary Subcommittee |
| 2024 | Named to TIME 100 Most Influential People in AI |
| May 2025 | Testified before House Judiciary Subcommittee |
| Sep 2025 | Appointed CSET Interim Executive Director |

“Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems. If we’re going to end up with trustworthy AI systems, we’ll need far greater investment and research progress in these areas.”

“The laboratory of democracy has always seemed pretty valuable to me. I hope that these different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way.”

“For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”

According to Sutskever’s deposition testimony, when warned that OpenAI would collapse without Altman, Toner allegedly responded that destroying OpenAI “could be consistent with its safety mission.” Toner has disputed this characterization.

“Looking at Chinese AI development, the AI regulations they are already imposing, and the macro headwinds they face leads her to conclude they are far from being poised to overtake the United States.”

“My life’s work is to consult with lawmakers to help them design AI policy that is sensible and connected to the realities of the technology.”