Leopold Aschenbrenner
Comprehensive biographical profile of Leopold Aschenbrenner, covering his trajectory from Columbia valedictorian to OpenAI researcher to founder of a $1.5B hedge fund, with detailed documentation of his controversial "Situational Awareness" essay predicting AGI by 2027, his disputed firing from OpenAI over security concerns, and the substantial criticisms of his epistemics and potential conflicts of interest.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Primary Role | AI researcher, investor, writer |
| Key Affiliation | Former OpenAI Superalignment team; founder of Situational Awareness LP |
| Main Contribution | "Situational Awareness: The Decade Ahead" essay series predicting AGI by 2027 |
| Controversy Level | High - fired from OpenAI over disputed leak allegations; polarizing AGI timeline predictions |
| Current Influence | Manages $1.5B+ hedge fund; prominent voice in AGI discourse |
Key Links
| Source | Link |
|---|---|
| Official Website | forourposterity.com |
| Wikipedia | en.wikipedia.org |
Overview
Leopold Aschenbrenner (born 2001 or 2002) is a German AI researcher and investor, former OpenAI employee, and founder of the AI-focused hedge fund Situational Awareness LP.1 He gained prominence after publishing the viral essay series "Situational Awareness: The Decade Ahead" in June 2024, a 165-page work that analyzes AI capability trends, forecasts that by 2027 AI systems will have the capacity to conduct their own AI research, and warns that the United States needs to defend against the use of AI technologies by countries such as Russia and China.12
Aschenbrenner graduated as valedictorian from Columbia University at age 19 in 2021, having started his studies at age 15.3 He joined OpenAI's Superalignment team in 2023, working on technical methods to align superintelligent AI systems. His tenure ended abruptly in April 2024 when he was fired over what OpenAI characterized as leaking internal information—a characterization Aschenbrenner disputes, claiming the leak in question was a brainstorming document on preparedness and safety measures he shared with three external researchers for feedback, and that he was retaliated against for raising security concerns.45
Following his departure from OpenAI, Aschenbrenner leveraged his viral essay to launch Situational Awareness LP, a hedge fund focused on AGI-related investments. Backed by prominent tech figures including Stripe founders Patrick and John Collison, as well as investor Nat Friedman and his partner Daniel Gross, the fund manages over $1.5 billion and achieved approximately 47% returns, after fees, in the first half of 2025.67 He remains a polarizing figure in AI safety circles: praised by some as prescient about AGI timelines and risks, and criticized by others for relying on dubious, nationalistic narratives and for promoting what they characterize as a self-fulfilling "race to AGI" dynamic.89
Early Life and Education
Aschenbrenner was born in Germany to parents who were both doctors and attended the John F. Kennedy School in Berlin.10 At age 17, he received a grant from economist Tyler Cowen's Emergent Ventures program, and Cowen described him as an "economics prodigy."11
He enrolled at Columbia University, majoring in economics and mathematics-statistics. During his time at Columbia, he co-founded and co-organized the university's Effective Altruism chapter and was a member of the Columbia Debate Society.1012 He graduated as valedictorian in 2021 at age 19.13
While at Columbia and shortly after graduation, Aschenbrenner conducted research on long-run economic growth and existential risks as a research affiliate at Oxford University's Global Priorities Institute (GPI).14 In 2024, he co-authored with economist Philip Trammell a working paper titled "Existential Risk and Growth," which models how technological acceleration may create an "existential risk Kuznets curve"—where risks initially rise with growth but can fall with optimal policy.15
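The paper's formal model is more involved, but the qualitative shape of such a curve is easy to illustrate. The following is a minimal, hypothetical Python sketch; every functional form and parameter below is an invented assumption chosen only to display the inverted-U shape, not the paper's actual model:

```python
# An illustrative inverted-U hazard curve (all functional forms and
# parameters are invented for illustration; they are not taken from the
# paper): risk first rises with output, then falls once a growing share
# of resources goes to safety.
import numpy as np

t = np.linspace(0, 100, 400)                     # time
output = np.exp(0.03 * t)                        # exogenous economic growth
safety_share = 1 / (1 + np.exp(-(t - 50) / 10))  # rising safety investment

# Hazard increases with risky production, decreases with safety effort.
hazard = output**0.5 / (1 + (safety_share * output)**0.7)

print(f"hazard peaks at t = {t[np.argmax(hazard)]:.0f} "
      "and declines thereafter: an 'existential risk Kuznets curve'")
```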
OpenAI and the Superalignment Team
In 2023, Aschenbrenner joined OpenAI's Superalignment team, a research initiative led by Jan Leike and Ilya Sutskever focused on developing technical methods to control AI systems that might become smarter than humans.16 The team's core research question was how to use weaker AI systems to supervise and align stronger ones—a critical challenge given that future superintelligent systems could be difficult for humans to directly oversee.
During his tenure, Aschenbrenner co-authored the paper "Weak-to-Strong Generalization: Eliciting Strong Capabilities with Weak Supervision," which proposed leveraging deep learning's generalization properties to control strong AI models using weak supervisors.17 The paper was released as an arXiv preprint in 2023 and has been cited more than 450 times.18
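The paper's actual experiments use large language models, but the basic setup can be shown with a toy sketch. The following hypothetical Python example uses stand-in models and synthetic data (the feature-restriction trick for weakening the supervisor is an illustrative device, not the paper's method):

```python
# A toy illustration of the weak-to-strong setup: a deliberately limited
# "weak" supervisor is trained on ground truth, its imperfect labels then
# supervise a higher-capacity "strong" student, and we test whether the
# student generalizes beyond its supervisor.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=6000, n_features=40,
                           n_informative=20, random_state=0)
X_gt, X_transfer, X_test = X[:2000], X[2000:4000], X[4000:]
y_gt, y_test = y[:2000], y[4000:]

# Weak supervisor: restricted to 4 features to mimic limited capability.
weak = LogisticRegression().fit(X_gt[:, :4], y_gt)
weak_labels = weak.predict(X_transfer[:, :4])   # noisy supervision

# Strong student: higher capacity, sees all features, but is trained
# only on the weak supervisor's labels (never on ground truth).
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                       random_state=0).fit(X_transfer, weak_labels)

print(f"weak supervisor accuracy: {weak.score(X_test[:, :4], y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
# Weak-to-strong generalization: the student outperforming its supervisor.
```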
According to Aschenbrenner, he raised internal concerns about what he viewed as inadequate security measures at OpenAI to protect against industrial espionage, particularly from foreign state actors. He claims he wrote a memo warning that OpenAI's security was "egregiously insufficient" to prevent theft of model weights or algorithmic secrets by adversaries like the Chinese Communist Party.1920
Firing and Disputed Circumstances
In April 2024, OpenAI fired Aschenbrenner. The official reason given was that he had leaked internal information by sharing what he described as a "brainstorming document on preparedness, safety, and security" with three external researchers for feedback—something he characterized as "totally normal" practice at OpenAI.2122
Aschenbrenner disputes this characterization, claiming the firing was retaliation for his security concerns. He had previously written an internal memo warning about the need to secure model weights and algorithmic secrets against espionage, which he shared with board members. He alleges that OpenAI's HR department called his memo warning about foreign espionage "racist" and "unconstructive," and that when he was fired, it was made explicit that the security memo was a major reason—with the company stating "the reason this is a firing and not a warning is because of the security memo."2324 He also claims he was offered close to a million dollars in equity if he signed exit documents, which he refused.25
OpenAI has stated that he was "unforthcoming" during their investigation into the alleged information leak.26
The firing occurred just before Aschenbrenner's equity cliff and amid broader turmoil at OpenAI. The Superalignment team dissolved shortly after, with both Jan Leike and Ilya Sutskever departing the company. Leike publicly stated he had been "sailing against the wind" and that safety concerns were not being adequately prioritized.27
"Situational Awareness: The Decade Ahead"
Two months after leaving OpenAI, in June 2024, Aschenbrenner published "Situational Awareness: The Decade Ahead," a 165-page essay series that went viral in AI and tech circles.2829 The essay makes several bold predictions and arguments:
Core Predictions
The essay forecasts that AGI—defined as AI systems capable of performing the work of AI researchers and engineers—will likely arrive by 2027.30 This prediction is based on extrapolating three trends:
- Compute scaling: Continued exponential growth in training compute (approximately 0.5 orders of magnitude per year)
- Algorithmic efficiency: Continued improvements in algorithms (another 0.5 OOM/year in effective compute)
- "Unhobbling": Improvements in converting base models into useful agent systems that can complete complex tasks
According to Aschenbrenner, these trends combine to project a 100,000x increase in effective compute between 2024 and 2027.31 He argues that by 2025-26, AI systems will surpass college graduates on many benchmarks, and that superintelligence could emerge by the end of the decade through recursive self-improvement.32
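The projection is order-of-magnitude (OOM) arithmetic. Below is a minimal sketch using the per-year rates from the bullets above; the three-year span and the extra-OOM allowance are illustrative assumptions on my part, not figures quoted from the essay:

```python
# Back-of-the-envelope OOM arithmetic for the extrapolation above.
def effective_compute_multiplier(compute_oom_per_yr: float,
                                 algo_oom_per_yr: float,
                                 years: float,
                                 extra_ooms: float = 0.0) -> float:
    """Multiplier implied by compounding orders-of-magnitude-per-year trends."""
    total_ooms = (compute_oom_per_yr + algo_oom_per_yr) * years + extra_ooms
    return 10 ** total_ooms

# The two listed trends alone give 3 OOMs (~1,000x) over three years;
# a ~100,000x (5 OOM) total needs roughly 2 further OOMs, e.g. from a
# longer baseline and the "unhobbling" gains the essay describes.
print(f"{effective_compute_multiplier(0.5, 0.5, 3):,.0f}x")
print(f"{effective_compute_multiplier(0.5, 0.5, 3, extra_ooms=2):,.0f}x")
```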
National Security Framing
A central theme of the essay is that AGI development represents a national security competition comparable to the Manhattan Project. Aschenbrenner argues that the United States must prepare to defend against AI misuse by geopolitical rivals, particularly China, and warns that leading AI labs are inadvertently sharing key algorithmic secrets with the Chinese Communist Party through insufficient security.3334
He calls for a U.S. government "Project for AGI" with massive computing clusters and advocates for keeping AGI development within a "free world" coalition rather than allowing open dissemination of capabilities.35 This nationalist framing has proven controversial, with critics arguing it promotes a self-fulfilling arms race dynamic.36
Alignment Optimism
Despite warning about AI risk more broadly, Aschenbrenner argues that alignment is a solvable problem, though he notes that far fewer people are working on it than might be expected and that existing alignment research is not on track.37 He argues that getting the right people and resources focused on the problem is key to making progress. Critics counter that his broader framing in Situational Awareness constructs a national security narrative around AI development that may undermine safety, that he is overly optimistic in some areas, and that he gives insufficient consideration to alternatives such as an international pause on development.38
Situational Awareness LP
Following the viral success of his essay, Aschenbrenner founded Situational Awareness LP, an AI-focused hedge fund named after his publication.39 The fund is not a venture capital firm but rather invests in companies set to benefit from AI adoption, including chipmakers, infrastructure providers, and power suppliers, and has also backed startups such as Anthropic.40
The fund secured anchor investments from prominent Silicon Valley figures including Patrick Collison and John Collison (co-founders of Stripe), Daniel Gross, and Nat Friedman (former GitHub CEO).41 The fund has global investors, including West Coast founders, family offices, institutions, and endowments, and manages over $1.5 billion in assets.4243
According to reports, the fund achieved approximately 47% returns (after fees) in the first half of 2025, significantly outperforming the S&P 500's 6% gain over the same period.44 Aschenbrenner has stated he has almost all of his personal net worth invested in the fund.45
The fund positions itself not just as an investment vehicle but as what Aschenbrenner describes as "the top think-tank in the AI field," aiming to contribute to understanding AGI trajectories while profiting from the transition.46
Track Record on Predictions
A June 2025 retrospective analysis on LessWrong examined how Aschenbrenner's predictions from "Situational Awareness" were tracking one year later:47
Predictions largely on track:
- Global AI investment, electricity consumption for AI, and chip production followed forecasted trends through June 2025
- Compute scaling, algorithmic efficiency gains, and "unhobbling" improvements aligned with projections (though with higher uncertainty)
- Models began outpacing college graduates on homework, exams, and mathematical reasoning tasks, including achieving gold medal performance at the International Math Olympiad
- Nvidia stock continued its "rocketship ride" as predicted
- AI revenue reached $10 billion annualized by early 2025 as forecasted
Areas of uncertainty or partial misses:
- Base model improvements (like GPT-4.5) were underwhelming, contradicting his prediction of a temporary post-GPT-4 lull, though "unhobbling" (agent capabilities) proved stronger than expected
- The $20-40 billion revenue target for year-end 2025 remained unproven, with slower doubling times than projected (see the doubling-time sketch after this list)
- Predictions about specific capabilities like "internal monologue" for textbook understanding remained speculative
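The gap on the revenue item is simple doubling-time arithmetic, sketched below using only the figures reported in this section:

```python
# Doubling-time check for the revenue target above (illustrative
# arithmetic only; no figures beyond those reported in this section).
import math

start = 10e9                      # ~$10B annualized, early 2025
for target in (20e9, 40e9):       # year-end 2025 target range
    doublings = math.log2(target / start)
    print(f"${target / 1e9:.0f}B requires {doublings:.1f} doubling(s) "
          "of revenue within roughly a year")
```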
The analysis concluded that most key drivers remained on track for the AGI-by-2027 timeline, though significant uncertainties persist.48
Views on AI Safety and Alignment
Aschenbrenner advocates what he calls "AGI realism"—the position that AGI will likely emerge within the current decade and poses significant risks that require urgent preparation.49 His views on addressing these risks include:
Alignment Strategy
Aschenbrenner expresses optimism that AI alignment is solvable through iterative development building on current systems. He argues for dedicating massive compute resources to alignment research and potentially offering billion-dollar prizes for breakthroughs.50 However, he acknowledges significant challenges, particularly around supervising systems that become smarter than humans and the risk of deceptive alignment where models learn to provide desired outputs without actually being aligned.51
In a blog post titled "Nobody's On the Ball on AGI Alignment," Aschenbrenner criticizes the current state of alignment efforts, arguing that despite substantial funding within the effective altruism community, there are few serious attempts to solve the core problems of alignment.52 He estimates the risk of AI existential catastrophe at approximately 5% over the next 20 years.53
Security and Competition
A major focus of Aschenbrenner's writing is information security around frontier AI systems. He argues that model weights and algorithmic secrets represent strategic assets comparable to nuclear weapons, and that current security practices at leading labs are inadequate to prevent theft by sophisticated state actors.54 This concern was central to his disputed memo at OpenAI and remains a theme in his public writing.
He frames AGI development as an inevitable geopolitical competition, arguing that the United States must maintain a lead over rivals like China to ensure AGI is developed and deployed by democratic rather than authoritarian powers.55 This perspective has been characterized by critics as promoting a nationalist, securitized approach that may be counterproductive to global AI safety.56
Criticisms and Controversies
Aschenbrenner has become a polarizing figure in AI discourse, with critics raising several concerns:
Epistemics and Timeline Predictions
Critics argue that Aschenbrenner's AGI timeline predictions rely on questionable extrapolations that ignore potential obstacles. A LessWrong post titled "Questionable Narratives of Situational Awareness" characterizes his essay as building on "questionable and sometimes conspiracy-esque narratives, nationalist feelings, and low-quality argumentation."57 The post critiques his approach as emphasizing vibes and speculation over rigorous analysis, though defenders note that predictions about unprecedented events necessarily involve significant uncertainty.58
National security experts have argued that Aschenbrenner's analysis ignores social, policy, and institutional constraints that could slow AI development, and that his historical analogies (such as to the Manhattan Project) overstate the inevitability of rapid AGI development.59
Self-Fulfilling Race Dynamics
Several commentators in effective altruism circles have expressed concern that Aschenbrenner's framing promotes a self-fulfilling "race to AGI" narrative. An EA Forum post notes that many in the community are "annoyed" with Aschenbrenner, particularly for promoting the narrative that there is a "race to AGI" that "becomes a self-fulfilling prophecy."60
Alignment Overconfidence
Critics argue that Aschenbrenner's project is insufficiently justified, particularly his failure to adequately consider a development pause.61 His framing of security has also been characterized as overly weighted toward national security concerns, while being overly pessimistic about international collaboration and overly optimistic that AGI development would not lead to nuclear war.62
Conflicts of Interest
The founding of Situational Awareness LP immediately after publishing his AGI essay has raised questions about potential conflicts of interest. Critics note that Aschenbrenner's public predictions about rapid AGI development and his advocacy for continued AI investment directly benefit his hedge fund's positioning and returns.63 His transition from OpenAI researcher to hedge fund founder managing $1.5 billion has led some to question whether his public warnings serve partly as marketing for his investment vehicle.64
Personality and Interpersonal Dynamics
According to Fortune's reporting, Aschenbrenner was described by some OpenAI colleagues as "politically clumsy," "arrogant," "astringent," and "abrasive" in meetings, with a willingness to challenge higher-ups that created friction.65 However, others defend him as principled in raising legitimate security concerns that were dismissed by the organization.
Influence and Reception
Despite controversies, Aschenbrenner has become a significant voice in discussions about AGI timelines and AI policy. His essay "Situational Awareness" was praised by figures ranging from Ivanka Trump to various AI researchers and was widely discussed in Silicon Valley.66 His predictions have influenced thinking about AI investment strategies and the urgency of AI safety work.
The Center for AI Policy praised his evidence-based analysis and called for increased federal AI regulation and permanent funding for explainability research based on concerns he raised.67 His work has been featured in major media outlets and he has appeared on prominent podcasts including a 4.5-hour interview with Dwarkesh Patel.68
However, his influence remains contested. Within the effective altruism and AI safety communities, responses range from viewing him as correctly identifying crucial dynamics to seeing his work as epistemically problematic and potentially harmful to AI safety efforts.69
Key Uncertainties
Several major uncertainties remain about Aschenbrenner's predictions and influence:
- AGI Timeline Accuracy: Whether his 2027 AGI forecast will prove accurate depends on whether current scaling trends continue and whether unforeseen obstacles emerge. Historical technology predictions suggest significant uncertainty around specific timelines.
- Alignment Solvability: The degree to which alignment can be solved through iterative methods on current architectures remains deeply uncertain, with expert opinion divided.
- Geopolitical Dynamics: Whether framing AGI as a U.S.-China competition accelerates or slows overall AI development, and whether it helps or hinders international cooperation on safety, remains unclear.
- Impact on AI Safety Field: The net effect of Aschenbrenner's work on AI safety efforts is debated; some argue it raises important concerns and urgency, while others contend it promotes counterproductive race dynamics.
- Personal Trajectory: How Aschenbrenner's dual role as AI safety commentator and hedge fund manager will evolve, and whether conflicts between these roles will intensify, remains to be seen.
Sources
Footnotes
1. Leopold Aschenbrenner - Wikipedia
2. The AI investing boom gets its posterboy: Meet Leopold Aschenbrenner - Fortune
3. Valedictorian in Special Times - Columbia College
4. Leopold Aschenbrenner - All American Speakers
5. Former OpenAI researcher Leopold Aschenbrenner interview about firing - Business Insider
6. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
7. $1.5B AI Hedge Fund Launches, Surges in First Year - Litquidity
8. Response to Aschenbrenner's Situational Awareness - EA Forum
9. Questionable Narratives of Situational Awareness - LessWrong
10. Leopold Aschenbrenner - Wikipedia
11. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
12. Valedictorian in Special Times - Columbia College
13. Valedictorian in Special Times - Columbia College
14. Leopold Aschenbrenner - All American Speakers
15. Existential Risk and Growth - Global Priorities Institute
16. Leopold Aschenbrenner - All American Speakers
17. Leopold Aschenbrenner - Google Scholar
18. Leopold Aschenbrenner - Google Scholar
19. Who is Leopold Aschenbrenner - Max Read
20. OpenAI 8: The Right to Warn - Zvi Mowshowitz
21. Former OpenAI researcher Leopold Aschenbrenner interview about firing - Business Insider
22. Leopold Aschenbrenner - Wikipedia
23. OpenAI 8: The Right to Warn - Zvi Mowshowitz
24. Who is Leopold Aschenbrenner - Max Read
25. Ex-OpenAI Researcher Explains Why He Was Fired - Business Insider
26. Former OpenAI researcher Leopold Aschenbrenner interview about firing - Business Insider
27. Influential Safety Researcher Sounds Alarm on OpenAI's Failure - Center for AI Policy
28. Situational Awareness: The Decade Ahead
29. Situational Awareness: Understanding the Rapid Advancement of AGI - NorthBayBiz
30. Summary of Situational Awareness: The Decade Ahead - EA Forum
31. Situational Awareness: The Decade Ahead - FluidSelf
32. Who is Leopold Aschenbrenner - Max Read
33. Leopold Aschenbrenner - All American Speakers
34. Situational Awareness About the Coming AGI - The New Atlantis
35. Response to Aschenbrenner's Situational Awareness - EA Forum
36. Against Aschenbrenner: How Situational Awareness Constructs a Narrative - LessWrong
37. Leopold Aschenbrenner - Wikipedia
38. $1.5B AI Hedge Fund Launches, Surges in First Year - Litquidity
39. Leopold Aschenbrenner - Wikipedia
40. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
41. 23-year-old Leopold Aschenbrenner launches $1.5B AI hedge fund - 36Kr
42. $1.5B AI Hedge Fund Launches, Surges in First Year - Litquidity
43. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
44. 23-year-old Leopold Aschenbrenner launches $1.5B AI hedge fund - 36Kr
45. Situational Awareness: A One Year Retrospective - LessWrong
46. Situational Awareness: A One Year Retrospective - LessWrong
47. Situational Awareness - by Leopold Aschenbrenner
48. Response to Aschenbrenner's Situational Awareness - EA Forum
49. Against Aschenbrenner: How Situational Awareness Constructs a Narrative - LessWrong
50. Nobody's On the Ball on AGI Alignment - For Our Posterity
51. Response to Aschenbrenner's Situational Awareness - EA Forum
52. Who is Leopold Aschenbrenner - Max Read
53. Citation rc-198b (source data unavailable)
54. Against Aschenbrenner: How Situational Awareness Constructs a Narrative - LessWrong
55. Questionable Narratives of Situational Awareness - EA Forum
56. Questionable Narratives of Situational Awareness - LessWrong
57. AI Timelines and National Security: The Obstacles to AGI by 2027 - Lawfare
58. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
59. Against Aschenbrenner: How Situational Awareness Constructs a Narrative - LessWrong
60. Against Aschenbrenner: How Situational Awareness Constructs a Narrative - LessWrong
61. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
62. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
63. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
64. Leopold Aschenbrenner: From OpenAI, FTX to a $1.5 Billion Hedge Fund - Fortune
65. Influential Safety Researcher Sounds Alarm on OpenAI's Failure - Center for AI Policy
66. Situational Awareness: The Decade Ahead - FluidSelf
67. Response to Aschenbrenner's Situational Awareness - EA Forum