| Dimension | Assessment | Notes |
|---|---|---|
| Primary Role | AI Governance Researcher | Georgetown CSET Interim Executive Director |
| Global Recognition | TIME 100 AI 2024 | Listed among most influential people in AI |
| OpenAI Board | 2021-2023 | Voted to remove Sam Altman; resigned after his reinstatement |
| Policy Influence | High | Congressional testimony, Foreign Affairs, The Economist |
| Research Focus | U.S.-China AI competition, AI safety, governance | CSET publications and grants |
| Academic Credentials | MA Security Studies (Georgetown), BSc Chemical Engineering (Melbourne) | Strong interdisciplinary background |
| EA Movement | Early leader | Founded EA Melbourne chapter, worked at GiveWell and Coefficient Giving |
| Attribute | Information |
|---|---|
| Birth Year | 1992 |
| Birthplace | Melbourne, Victoria, Australia |
| Nationality | Australian |
| Education | BSc Chemical Engineering, University of Melbourne (2014); Diploma in Languages, University of Melbourne; MA Security Studies, Georgetown University (2021) |
| High School | Melbourne Girls Grammar School |
| University Entrance Score | 99.95 (Australian tertiary admission ranking) |
| Current Position | Interim Executive Director, Georgetown CSET (September 2025-present) |
| Previous Positions | Director of Strategy and Foundational Research Grants, CSET; Senior Research Analyst, Coefficient Giving (formerly Open Philanthropy); OpenAI Board Member |
| Languages | English, Mandarin Chinese (studied in Beijing) |
Helen Toner is an Australian AI governance researcher who became one of the most prominent figures in AI policy after her role in the November 2023 removal of Sam Altman as OpenAI’s CEO. She serves as Interim Executive Director of Georgetown University’s Center for Security and Emerging Technology (CSET), a think tank she helped establish in 2019 with $55 million in funding from Coefficient Giving (then Open Philanthropy).
Her career trajectory represents one of the most successful examples of effective altruism’s strategy of placing safety-focused individuals in positions of influence over AI development. From leading a student effective altruism group in Melbourne to sitting on the board of one of the world’s most powerful AI companies, Toner’s path demonstrates both the opportunities and limitations of this approach.
Toner’s expertise spans U.S.-China AI competition, AI safety research, and technology governance. She has testified before multiple Congressional committees, written for Foreign Affairs and The Economist, and was named to TIME’s 100 Most Influential People in AI in 2024. Her work emphasizes that AI governance requires active government intervention rather than relying on industry self-regulation.
| Period | Role | Organization | Key Activities |
|---|---|---|---|
| 2014 | Chapter Founder/Leader | Effective Altruism Melbourne | Introduced to EA movement as a university student; initially skeptical of AI risk before coming to take it seriously |
| 2015-2016 | Research Analyst | GiveWell | Researched AI policy issues including military applications and geopolitics |
| 2016-2017 | Senior Research Analyst | Coefficient Giving (then Open Philanthropy) | Advised policymakers on AI policy; recommended $1.76M in grants for AI governance |
| 2018 | Research Affiliate | Oxford Centre for the Governance of AI | Spent 9 months in Beijing studying Chinese AI ecosystem and Mandarin |
| Jan 2019 | Director of Strategy | Georgetown CSET | Helped found and shape CSET’s research agenda |
| 2021-2023 | Board Member | OpenAI | Invited by Holden Karnofsky to replace him on board |
| Mar 2022 | Director of Strategy & Foundational Research Grants | Georgetown CSET | Led multimillion-dollar technical grantmaking function |
| Sep 2025 | Interim Executive Director | Georgetown CSET | Appointed to lead the center |
The most consequential moment of Toner’s career came on November 17, 2023, when she and three other OpenAI board members voted to remove Sam Altman as CEO. The five-day crisis that followed revealed deep tensions between AI safety governance and commercial AI development.
| Date | Time | Event | Details |
|---|---|---|---|
| Nov 17, 2023 | ≈12:00 PM PST | Board votes to remove Altman | 4 board members (Toner, McCauley, D’Angelo, Sutskever) vote to fire Altman |
| Nov 17, 2023 | ≈12:05 PM | Altman learns of removal | Informed on Google Meet while watching Las Vegas Grand Prix; told 5-10 minutes before announcement |
| Nov 17, 2023 | Afternoon | Public announcement | Board cites Altman “not consistently candid in his communications” |
| Nov 18, 2023 | | Anthropic merger discussions | Active discussions about merging OpenAI with Anthropic; Toner “most supportive” per Sutskever testimony |
| Nov 18-21 | | Pressure campaign | Microsoft, VCs, 95% of OpenAI employees threaten to leave |
| Nov 21, 2023 | | Altman reinstated | Returns as CEO; Toner, McCauley resign from board |
The board’s official statement said Altman had “not been consistently candid in his communications.” In her May 2024 TED AI Show interview, Toner provided more detailed allegations:
| Allegation | Toner’s Claim | OpenAI Response |
|---|---|---|
| ChatGPT launch | Board learned about ChatGPT release from Twitter in November 2022, not informed in advance | ChatGPT was “released as a research project” built on GPT-3.5 already available for 8 months |
| Startup Fund ownership | Altman did not disclose he owned the OpenAI Startup Fund while claiming to be an independent board member | Not addressed |
| Safety processes | Altman gave “inaccurate information” about company’s safety processes | Independent review found firing “not based on concerns regarding product safety” |
| Executive complaints | Two executives reported “psychological abuse” from Altman with screenshots and documentation | Bret Taylor (board chair): review concluded decision was not based on safety concerns |
| Pattern of behavior | “For years, Sam had made it really difficult for the board… withholding information, misrepresenting things… in some cases outright lying” | Disputed by OpenAI current leadership |
In October 2025, Ilya Sutskever’s deposition in the Musk v. Altman lawsuit revealed additional details:
- Sutskever prepared a 52-page memo for independent board members (Toner, McCauley, D’Angelo) weeks before the removal
- The memo stated: “Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another”
- “Most or all” supporting material came from OpenAI CTO Mira Murati
- Altman was not shown the memo because Sutskever “felt that, had he become aware of these discussions, he would just find a way to make them disappear”
One of the most striking revelations was that within 48 hours of Altman’s firing, discussions were underway to potentially merge OpenAI with Anthropic:
| Aspect | Details |
|---|---|
| Timing | Saturday, November 18, 2023 |
| Toner’s Position | According to Sutskever, Toner was “most supportive” of merger direction |
| Sutskever’s Position | “Very unhappy” about it; “really did not want OpenAI to merge with Anthropic” |
| Rationale | When warned company would collapse without Altman, Toner allegedly responded that destroying OpenAI “could be consistent with its safety mission” |
| Toner’s Response | Disputed Sutskever’s account on social media after deposition release |
| Outcome | Description |
|---|---|
| Immediate | Toner and McCauley resigned from board; Altman reinstated |
| Governance changes | OpenAI reformed board structure; added new independent directors |
| SEC investigation | February 2024: SEC reportedly investigating whether Altman misled investors |
| Toner’s influence | Named to TIME 100 AI 2024; increased requests from policymakers worldwide |
| Policy impact | Crisis highlighted tensions between AI safety governance and commercial interests |
Toner’s research at CSET spans three primary domains:
| Research Area | Description | Key Publications |
|---|---|---|
| U.S.-China AI Competition | Analysis of Chinese AI capabilities, military applications, and competitive dynamics | Congressional testimony, Foreign Affairs articles |
| AI Safety Research | Robustness, interpretability, reward learning, uncertainty quantification | CSET AI Safety series |
| AI Governance | Standards, testing, safety processes, accident prevention | Policy briefs, congressional testimony |
| Year | Type | Publication/Outlet | Topic |
|---|---|---|---|
| 2019 | Testimony | U.S.-China Economic and Security Review Commission | China’s Pursuit of AI |
| 2023 | Research Paper | CSET | “Artificial Intelligence and Costly Signals” (co-authored with Andrew Imbrie, Owen Daniels) |
| 2024 | Op-Ed | Foreign Affairs | “The Illusion of China’s AI Prowess” |
| 2024 | Op-Ed | The Economist | U.S.-China bilateral meetings on AI |
| 2024 | Testimony | Senate Judiciary Subcommittee | AI Oversight: Insider Perspectives |
| 2024 | Talk | TED2024 | “How to Govern AI, Even if it’s Hard to Predict” |
| 2025 | Testimony | House Judiciary Subcommittee | Trade Secrets and the Global AI Arms Race |
Toner has authored or contributed to multiple papers examining AI safety:
| Topic | Key Findings |
|---|---|
| Robustness | Research tracking how ML systems behave under distribution shift and adversarial conditions |
| Interpretability | Analysis of research trends in understanding ML system decision-making |
| Reward Learning | Study of how systems can be trained to align with human intentions |
| Uncertainty Quantification | Work introducing the concept to non-technical audiences |
She has stated: “Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems.”
According to Google Scholar, Toner’s research has been cited more than 3,200 times, indicating significant academic influence in the AI governance field.
Toner has testified before multiple Congressional committees on AI policy and U.S.-China competition.
| Date | Committee | Topic | Key Arguments |
|---|---|---|---|
| June 2019 | U.S.-China Economic and Security Review Commission | China’s Pursuit of AI | AI research is unusually open/collaborative; strategic immigration policy critical; China’s approach to data privacy differs |
| September 2024 | Senate Judiciary Subcommittee | AI Oversight | The case that regulation would slow U.S. innovation is “not nearly as strong as it seems”; China is “far from being poised to overtake the United States” |
| May 2025 | House Judiciary Subcommittee | Trade Secrets and AI Arms Race | “AI IP is as core to U.S. competitiveness as rapid innovation”; adversaries cannot have easy access to U.S. technology |
Based on her testimony and public statements, Toner advocates for:
| Policy Area | Position |
|---|---|
| Immigration | Access to skilled researchers and engineers is key; U.S. ability to attract foreign talent is critical advantage |
| Federal Research | Unlike China, the U.S. has mounted no major federal effort to strengthen fundamental AI research during the current deep learning wave |
| Regulation | Government must actively regulate AI; self-governance by companies “doesn’t actually work” |
| Safety Requirements | Supports mandatory safety testing and oversight for advanced AI systems |
| International Coordination | “Laboratory of democracy” approach: different jurisdictions should try different approaches and learn from experiments |
Toner takes a nuanced position on AI existential risk:
| Aspect | Her View |
|---|---|
| Existential scenarios | Acknowledges the “whole discourse around existential risk from AI” while noting there are already “people who are being directly impacted by algorithmic systems and AI in really serious ways” |
| Polarization concern | Worried about polarization where some want to “keep those existential or catastrophic issues totally off the table” while others are easily “freaked out about the more cataclysmic possibilities” |
| Industry concentration | Notes “natural tension” between view that fewer AI players helps coordination/regulation vs. concerns about power concentration |
| Government role | Believes government regulation is necessary; industry self-governance insufficient |
Based on her TED2024 talk and public statements:
| Principle | Explanation |
|---|---|
| Adaptive Regulation | “Different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way” |
| Epistemic Humility | Policy should be developed despite uncertainty about AI capabilities and timelines |
| International Learning | “Laboratory of democracy has always seemed pretty valuable to me”; countries should try different approaches |
| Implementation Focus | “We’re shifting from a year of initial excitement to a year more of implementation, and coming back to earth” |
In her Foreign Affairs article “The Illusion of China’s AI Prowess,” Toner argued:
| Point | Assessment |
|---|---|
| Regulation Impact | Concerns about U.S. regulation enabling Chinese dominance are “overblown” |
| Chinese Capabilities | Chinese AI development “lags behind” U.S.; Chinese LLMs “heavily rely on American research and technology” |
| Chinese Regulation | China is already imposing AI regulations of its own |
| Macro Headwinds | China faces significant economic and demographic challenges |
| U.S. Advantage | Strength in fundamental research is “backbone of American advantage” |
| Period | Role | Activities |
|---|---|---|
| 2014 | University student | Introduced to EA movement by organizers of EA Melbourne |
| 2014 | Initial skepticism | “Initially skeptical, dismissed them as philosophically confused and overly enthusiastic science fiction enthusiasts” |
| 2014 | Conversion | “Eventually embraced their perspective” and assumed leadership of Melbourne chapter |
| 2015-2017 | Professional | Worked at GiveWell and Coefficient Giving (then Open Philanthropy), both EA-aligned organizations |
| 2019-Present | CSET leadership | Helped found CSET, which was established through a $55 million grant from Coefficient Giving |
Toner’s career exemplifies the EA approach of:
- Career capital building: Gaining expertise and credentials in a high-impact area
- Institutional leverage: Positioning within influential organizations (OpenAI board, CSET)
- Longtermism: Focus on AI risk as a priority concern for humanity’s future
- Impact-focused grantmaking: Recommending grants while at Coefficient Giving ($1.5M to UCLA for AI governance fellowship, $260K to CNAS for advanced technology risk research)
| Year | Amount | Recipient | Purpose |
|---|---|---|---|
| May 2017 | $1,500,000 | UCLA School of Law | Fellowship, research, and meetings on AI governance and policy |
| August 2017 | $260,000 | CNAS (Richard Danzig) | Publication on potential risks from advanced technologies |
Toner’s trajectory from EA student organizer to influential AI governance figure represents a model the EA movement has promoted for “building career capital” in high-impact areas. Her path illustrates several key elements:
| Career Capital Element | Toner’s Example |
|---|---|
| Early commitment | Joined EA movement as undergraduate; took leadership role immediately |
| Skills development | Chemical engineering degree provided analytical foundation; security studies MA added policy expertise |
| Network building | GiveWell and Coefficient Giving connected her to funders and researchers |
| International experience | Beijing research affiliate role built China expertise few Western researchers possess |
| Institutional positioning | CSET founding role and OpenAI board provided influence levers |
The CSET founding exemplifies the EA strategy of building institutions: Coefficient Giving (then Open Philanthropy) provided $55 million over five years specifically to create a think tank that would shape AI policy from within Washington’s foreign policy establishment. Toner was positioned as Director of Strategy from the beginning, allowing her to shape the center’s research agenda toward AI safety and governance concerns.
| Aspect | Details |
|---|---|
| Funding source | Coefficient Giving ($55M founding grant) |
| Mission alignment | CSET focuses on AI safety, security, and governance, all core EA longtermist concerns |
| Staff pipeline | Multiple CSET researchers have EA movement connections |
| Research priorities | U.S.-China competition, AI accidents, standards/testing align with EA cause areas |
| Policy influence | Government briefings and congressional testimony extend EA ideas into policy |
Note: 80,000 Hours, the EA career advice organization that has featured Toner in multiple podcast episodes, is also funded by the same major donor (Coefficient Giving) that funds CSET.
TIME’s profile noted:
“In mid-November of 2023, Helen Toner made what will likely be the most pivotal decision of her career… One outcome of the drama was that Toner, a formerly obscure expert in AI governance, now has the ear of policymakers around the world trying to regulate AI.”
| Recognition Aspect | Details |
|---|---|
| Category | 100 Most Influential People in AI 2024 |
| Impact | “More senior officials have requested her insights than in any previous year” |
| Stated Mission | Her “life’s work” is to consult with lawmakers on sensible AI policy |
| Type | Details |
|---|---|
| Podcast Features | 80,000 Hours (multiple appearances), TED AI Show, Cognitive Revolution, Clearer Thinking |
| Media Platforms | ChinaFile contributor, Sourcelist expert |
| Government Briefings | Has briefed senior officials across U.S. government |
| Person | Relationship | Context |
|---|---|---|
| Holden Karnofsky | Mentor/predecessor | Karnofsky invited Toner to replace him on OpenAI board in 2021 |
| Tasha McCauley | Board colleague | Co-voted to remove Altman; co-authored post-crisis Economist piece |
| Adam D’Angelo | Board colleague | Remained on OpenAI board after crisis; received 52-page memo |
| Ilya Sutskever | Board colleague | Co-voted to remove Altman; later disputed Toner’s account of events |
| Sam Altman | Adversary | Removed as OpenAI CEO by Toner and board colleagues |
| Jason Matheny | CSET colleague | CSET founding director; Toner was early hire |
| Strength | Evidence |
|---|---|
| Policy expertise | Congressional testimony, Foreign Affairs publications, TIME 100 recognition |
| Interdisciplinary background | Engineering + security studies + China expertise |
| Institutional access | Built relationships across government, academia, and industry |
| Research impact | 3,286+ Google Scholar citations |
| Risk awareness | Early EA convert; focused career on AI governance |
| Criticism | Context |
|---|---|
| OpenAI board outcome | Altman reinstated within 5 days; governance approach failed to achieve lasting change |
| Communication | Board’s initial silence created “information vacuum” that enabled pressure campaign |
| Process | Independent review reportedly found firing not based on product safety or security concerns |
| Disputed accounts | Sutskever and Toner have conflicting accounts of merger discussions and other events |
| Question | Relevance |
|---|---|
| Was removal justified? | Evidence remains contested; no public resolution |
| Did safety concerns exist? | Toner claims safety process misrepresentations; OpenAI review reportedly found otherwise |
| What were alternatives? | Could board have achieved safety goals through different approaches? |
| Long-term impact? | Did crisis ultimately help or hurt AI safety governance? |
As of September 2025, Toner serves as Interim Executive Director of Georgetown CSET, leading a research center with approximately 30 researchers focused on:
| Focus Area | Description |
|---|---|
| AI Safety Research | Robustness, interpretability, testing, standards |
| National Security | Military AI applications, intelligence implications |
| China Analysis | Chinese AI ecosystem, U.S.-China technology competition |
| Policy Development | Congressional testimony, government briefings, public writing |
She continues to advocate for active government regulation of AI, arguing that the “laboratory of democracy” approach of trying different regulatory experiments across jurisdictions is preferable to either inaction or one-size-fits-all approaches.
| Initiative | Description | Status |
|---|---|---|
| AI Safety Series | Publications on robustness, interpretability, reward learning | Ongoing |
| China AI Tracker | Monitoring Chinese AI ecosystem developments | Active |
| Congressional Engagement | Regular testimony and briefings | Active |
| Foundational Research Grants | Multimillion-dollar grantmaking for technical AI safety research | Expanded since 2022 |
| Government Fellowships | Placing researchers in policy positions | Ongoing |
Based on public statements, CSET under Toner’s leadership is expanding focus on:
| Area | Rationale |
|---|---|
| AI Standards and Testing | Need for rigorous evaluation before deployment in high-stakes settings |
| Accident Investigation | Learning from AI failures similar to aviation safety processes |
| Military AI Applications | Autonomous weapons, intelligence analysis, command and control |
| Compute Governance | Hardware controls as a lever for AI governance |
| International Coordination | Mechanisms for global AI governance despite geopolitical tensions |
In October 2023, shortly before the OpenAI board crisis, Toner co-authored a paper with Andrew Imbrie and Owen Daniels that reportedly caused tension with Sam Altman.
| Aspect | Details |
|---|---|
| Title | “Artificial Intelligence and Costly Signals” |
| Publication | CSET, October 2023 |
| Co-authors | Andrew Imbrie, Owen Daniels |
| Topic | International signaling theory applied to AI development |
According to reports, the paper contained analysis that Altman viewed as unfavorable to OpenAI or as potentially undermining the company’s position. While the specific nature of the disagreement has not been fully disclosed, it illustrates the inherent tensions of having safety-focused researchers on commercial AI company boards:
| Tension | Description |
|---|---|
| Academic freedom | Researchers expect to publish without corporate approval |
| Fiduciary duty | Board members owe duty to the organization |
| Competitive concerns | Analysis may affect company’s competitive position |
| Governance role | Board members need to maintain independence for effective oversight |
Toner’s experience on the OpenAI board, while ending in resignation, offers several lessons for AI governance:
| Challenge | Description | Toner’s Experience |
|---|---|---|
| Information asymmetry | Boards depend on management for information | Board allegedly not informed of ChatGPT launch or other key developments |
| Resource imbalance | Management has full-time staff; board members serve part-time | Board lacked resources to verify management claims |
| Stakeholder pressure | Employees, investors, customers may oppose board actions | 95% employee letter, Microsoft pressure reversed board decision |
| Nonprofit/for-profit tension | OpenAI’s unusual structure created conflicts | Safety mission vs. commercial success difficult to balance |
Based on Toner’s public statements and the crisis outcome:
| Lesson | Implication |
|---|---|
| Communication matters | Board’s silence created vacuum filled by critics |
| Coalition building | Safety-focused board members were isolated when crisis hit |
| Structural power | Legal and financial structures determine who wins disputes |
| Transparency norms | AI companies may need new norms around board-management communication |
In her September 2024 Senate testimony, Toner stated:
“This technology would be enormously consequential, potentially extremely dangerous, and should only be developed with careful forethought and oversight.”
She has advocated for:
| Recommendation | Rationale |
|---|---|
| External oversight | Company self-governance insufficient |
| Mandatory safety testing | Prevent deployment of dangerous systems |
| Whistleblower protections | Enable internal critics to raise concerns |
| Regulatory experimentation | Different approaches across jurisdictions to learn what works |
| Figure | Background | Current Role | Primary Focus |
|---|---|---|---|
| Helen Toner | Chemical engineering + security studies | Georgetown CSET Interim ED | Governance, U.S.-China |
| Holden Karnofsky | Economics (Harvard) | Former Coefficient Giving co-CEO | Funding strategy, risk prioritization |
| Dario Amodei | Physics PhD (Princeton) | Anthropic CEO | Technical safety, constitutional AI |
| Jan Leike | ML PhD (Toronto) | Anthropic Alignment Lead | Technical alignment research |
| Paul Christiano | CS PhD (UC Berkeley) | ARC founder | AI alignment, evaluation |
| Approach | Toner | Karnofsky | Amodei |
|---|---|---|---|
| Primary lever | Policy/governance | Grantmaking | Lab leadership |
| Technical focus | Low (policy-oriented) | Medium (strategy) | High (research) |
| China focus | High | Low | Low |
| Government engagement | Very high | Medium | Medium |
| Public communication | High | High | Medium |
| Figure | Mechanism | Estimated Impact |
|---|---|---|
| Toner | Congressional testimony, CSET research, media | Moderate policy influence; limited on technical development |
| Karnofsky | $300M+ in grants | High influence on field direction and funding |
| Amodei | Controls Anthropic resources | Very high on one major lab’s approach |
| Podcast | Host | Date | Topic |
|---|---|---|---|
| 80,000 Hours | Rob Wiblin | 2019 | CSET founding and AI policy careers |
| 80,000 Hours | Rob Wiblin | 2024 | Geopolitics of AI in China and Middle East |
| TED AI Show | Bilawal Sidhu | May 2024 | OpenAI board crisis, AI regulation |
| Cognitive Revolution | Nathan Labenz | 2024 | AI safety, regulatory approaches |
| Clearer Thinking | Spencer Greenberg | 2024 | AI, U.S.-China relations, OpenAI board |
| Foresight Institute | | 2024 | “Who gets to decide AI’s future?” |
| Publication | Type | Topics |
|---|---|---|
| Foreign Affairs | Op-eds | U.S.-China competition, Chinese AI |
| The Economist | Op-eds | U.S.-China bilateral relations |
| TIME | Op-eds | AI governance |
| GiveWell Blog | Analysis | AI policy research (2015-2016) |
| CSET Publications | Research | AI safety, China, standards |
Toner maintains active presence on X (formerly Twitter) at @hlntnr, where she shares research, responds to coverage, and occasionally disputes inaccurate reporting about her role in the OpenAI crisis.
| Aspect | Details |
|---|---|
| Duration | 9 months |
| Affiliation | Oxford University’s Centre for the Governance of AI (Research Affiliate) |
| Focus | Chinese AI ecosystem, AI and defense |
| Language Study | Mandarin Chinese |
| Outcome | Built rare firsthand expertise on Chinese AI among Western researchers |
| Area | Key Findings |
|---|---|
| AI Capabilities | Chinese AI lags U.S.; heavily relies on American research/technology |
| Data Governance | Different approach to privacy; potential training data advantages |
| Military AI | Military-civil fusion creates different development dynamics |
| Talent | Competition for researchers is key variable |
| Regulation | China is implementing AI regulations despite perception otherwise |
Toner’s China expertise shapes her policy recommendations:
| Policy Area | Toner’s Position Based on China Research |
|---|---|
| Export Controls | Supports protecting AI IP; “adversaries cannot have easy access” |
| Immigration | U.S. must maintain talent advantage; China competes for researchers |
| Regulation | U.S. regulation won’t cede leadership to China; concerns “overblown” |
| Research Funding | U.S. needs major federal investment in fundamental AI research |
- Holden Karnofsky - Former Coefficient Giving co-CEO who invited Toner to OpenAI board
- Ilya Sutskever - OpenAI co-founder and board member who co-voted to remove Altman
- Sam Altman - OpenAI CEO removed and reinstated in November 2023
- Dario Amodei - Anthropic CEO; Anthropic was discussed as potential merger partner
| Year | Event |
|---|---|
| 1992 | Born in Melbourne, Victoria, Australia |
| 2014 | BSc Chemical Engineering, University of Melbourne; founded EA Melbourne chapter |
| 2015-2016 | Research Analyst at GiveWell |
| 2016-2017 | Senior Research Analyst at Coefficient Giving (then Open Philanthropy) |
| 2017 | Recommended $1.76M in AI governance grants |
| 2018 | Research Affiliate at Oxford GovAI; lived in Beijing studying Chinese AI |
| Jan 2019 | Joined Georgetown CSET as Director of Strategy at founding |
| 2021 | MA Security Studies, Georgetown University; joined OpenAI board |
| Mar 2022 | Became CSET Director of Strategy and Foundational Research Grants |
| Oct 2023 | Co-authored “AI and Costly Signals” paper creating reported tension with Altman |
| Nov 17, 2023 | Voted to remove Sam Altman as OpenAI CEO |
| Nov 21, 2023 | Resigned from OpenAI board after Altman’s reinstatement |
| May 2024 | First public interview about OpenAI crisis (TED AI Show) |
| Sep 2024 | Testified before Senate Judiciary Subcommittee |
| 2024 | Named to TIME 100 Most Influential People in AI |
| May 2025 | Testified before House Judiciary Subcommittee |
| Sep 2025 | Appointed CSET Interim Executive Director |
“Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems. If we’re going to end up with trustworthy AI systems, we’ll need far greater investment and research progress in these areas.”
“The laboratory of democracy has always seemed pretty valuable to me. I hope that these different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way.”
“For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”
According to Sutskever’s deposition testimony, when warned that OpenAI would collapse without Altman, Toner allegedly responded that destroying OpenAI “could be consistent with its safety mission.” Toner has disputed this characterization.
Looking at Chinese AI development, the AI regulations China is already imposing, and the macro headwinds it faces, Toner concludes that China is far from being poised to overtake the United States.
“My life’s work is to consult with lawmakers to help them design AI policy that is sensible and connected to the realities of the technology.”