Longterm Wiki · Updated 2026-03-12
Summary

Biography of Connor Leahy, CEO and co-founder of Conjecture, an AI safety company based in London. Previously co-founded EleutherAI in 2020, which produced GPT-J and GPT-NeoX. Leahy holds a high P(doom) estimate and short AGI timeline beliefs, and has advocated for mechanistic interpretability and international AI governance through proposals such as MAGIC.

Connor Leahy

Person

Affiliation: Conjecture
Role: CEO & Co-founder
Known For: Founding Conjecture, AI safety advocacy, interpretability research
Related:
Safety Agendas: Interpretability
People: Chris Olah · Neel Nanda

2.9k words · 4 backlinks

Quick Assessment

Dimension | Assessment
Primary Role | CEO and co-founder of Conjecture (2022–present)
Key Contributions | Co-founded EleutherAI, producing GPT-J and GPT-NeoX; founded Conjecture to pursue scalable AI alignment research; co-authored the MAGIC international governance proposal
Key Publications | "Cognitive Emulation: A Naive AI Safety Proposal" (2023); "Conjecture: A Retrospective After 8 Months of Work" (2022); MAGIC governance proposal on arXiv (2023)
Institutional Affiliation | Conjecture (London, UK)
Influence on AI Safety | Holds a high subjective probability of catastrophic AI outcomes and short AGI timelines; has publicly advocated for interpretability and international coordination, and has appeared on numerous podcasts and in press coverage on AI risk

Overview

Connor Leahy is the CEO and co-founder of Conjecture, a for-profit AI alignment research company based in London, founded in March 2022.1 Before founding Conjecture, Leahy was a co-founder of EleutherAI, a grassroots open-source AI research collective he helped establish in July 2020 alongside Sid Black and Leo Gao.2 EleutherAI produced several large open-source language models, including GPT-J and GPT-NeoX, which were among the largest openly available models at the time of their release.3

Leahy is one of the more publicly prominent voices in the AI safety community arguing for short AGI timelines and a high probability of catastrophic outcomes from advanced AI. He has described his P(doom) estimate as near the high end of the doomer range, characterizing himself in a 2022 interview as "not as pessimistic as Eliezer Yudkowsky but pretty close."4 His public advocacy spans podcasts, press interviews, and written posts on platforms such as LessWrong and the AI Alignment Forum. He has also been active in UK AI policy outreach.5

Conjecture's research has included mechanistic interpretability, a pivot to a framework called Cognitive Emulation (CoEm) in early 2023, and a governance proposal for a Multinational AGI Consortium (MAGIC). The company has attracted funding from a range of prominent technology investors and has been the subject of both positive and critical commentary in the effective altruism and AI safety communities.6

Background

Education and Early Career

Leahy studied computer science at the Technical University of Munich from 2017 to 2020; no specific degree completion is confirmed in public sources.7 Prior to his involvement in EleutherAI, he worked as an AI researcher at Aleph Alpha GmbH, a German AI startup, from January 2019 to October 2021, with an overlapping period as a machine learning engineer from September 2019 to January 2021.7 A 2021 podcast appearance described him as "AI Researcher at German startup Aleph Alpha and founding member of EleutherAI."8

In 2019, Leahy partially replicated GPT-2, a result that has been discussed both as an early demonstration of his technical engagement and, by some critics, as an overstatement of accomplishment.5

EleutherAI (2020–2023)

Leahy co-founded EleutherAI in July 2020. It began as a Discord server tentatively named "LibreAI" before rebranding to EleutherAI, a reference to the Greek word for liberty.3 The collective grew into a significant open-source AI research community, producing GPT-Neo, GPT-J, and GPT-NeoX, among other artifacts. EleutherAI's work made large language model research more accessible outside major commercial laboratories.

Leahy stepped down as organizer and operational leader of EleutherAI on March 7, 2023, alongside co-founder Sid Black, to focus full attention on Conjecture.9 He retained a position on EleutherAI's board of directors following his operational departure.9 New operational leadership passed to Stella Biderman, Curtis Huebner, and Shivanshu Purohit, and EleutherAI formally incorporated as a non-profit research institute in early 2023.9

Founding Conjecture (2022)

Leahy co-founded Conjecture in March 2022 with Sid Black and Gabriel Alfour.1 The company emerged from the EleutherAI network: its founders and early staff were described at launch as "mostly EleutherAI alumni and previously independent researchers."10 Conjecture was incorporated as a for-profit company, with a stated intent to develop products alongside alignment research.11

Conjecture

Organization and Funding

Conjecture is headquartered in London, England.12 At its April 2022 public launch, the company announced venture capital backing from Nat Friedman (then ex-CEO of GitHub), Patrick and John Collison (co-founders of Stripe), Daniel Gross, Andrej Karpathy (then at OpenAI), Arthur Breitman, and Sam Bankman-Fried.10 A June 2023 EA Forum critique reported total funding at approximately $10 million as of 2022–2023.5 PitchBook lists a seed round of $25M completed in November 2022, though figures across financial databases differ and some data may be estimated.12 LinkedIn reports a $5.0M seed round closing in December 2022, with additional funding rounds through at least May 2024.13

The company grew from approximately 4 employees in late 2021 to at least 22 as of June 2023, according to a June 2023 EA Forum post.5 PitchBook listed 13 employees as of its 2025 profile.12 Conjecture has fiscally sponsored AI safety field-building programs MATS and ARENA, both based in London.5

Research Output

Conjecture's early research agenda focused on mechanistic interpretability and new conceptual frameworks for understanding large language models. Its retrospective after eight months of operation (November 2022) described a candid self-assessment: "most of our efforts to date have not made meaningful progress on the alignment problem."14 Specific outputs from this period included:

  • The Polytope Lens (2022): Interpretability research identifying polytopes rather than neurons as a potentially fundamental unit of neural networks, finding that polysemanticity is reduced at the polytope level. The retrospective acknowledged "no clear implications of how to use this framework" in practice and that the project was overinvested in relative to output.14
  • Simulators (September 2, 2022): Published on LessWrong by Conjecture researcher "janus," this post introduced the "Simulators" framework arguing that GPT-like models trained on self-supervised loss are best understood as simulators rather than agents. Conjecture's retrospective described it as the organization's "most visible output" from its first eight months.14
  • Current Themes in Mechanistic Interpretability Research (November 2022): A summary of discussions among interpretability researchers across Anthropic, Conjecture, Google DeepMind, OpenAI, and Redwood Research, covering topics including superposition, non-linear representations, and field coordination.15

In February 2023, Conjecture announced a pivot in its primary research direction:

  • Cognitive Emulation (CoEm) (February 25, 2023): Authored by Connor Leahy and Gabriel Alfour, this proposal announced Conjecture's new primary direction. The core idea is to "build emulations of human-like things" rather than end-to-end black-box systems, with the goal of producing "predictably boundable systems, not directly aligned AGIs." The post introduced "Magic" (capitalized) as a term for "blackbox or not-understood computation."16 Following this pivot, several mechanistic interpretability researchers departed Conjecture.5

In late 2023, Conjecture contributed to AI Governance discussions:

  • MAGIC — Multinational AGI Consortium (October 13, 2023): A paper by Jason Hausenloy, Andrea Miotti, and Claire Dennis (Conjecture), proposing an exclusive international body as the only institution permitted to develop advanced AI, enforced through a global moratorium on other advanced AI development. The proposal described MAGIC as exclusive, safety-focused, highly secure, and collectively supported by member states, with benefits distributed equitably. It was selected by the Future of Life Institute as one of six funded global AI governance proposals.17

In December 2024, Conjecture published a longer strategic document:

  • "A Roadmap for Cognitive Software and A Humanist Future of AI" (December 2, 2024): Authored by Connor Leahy and Gabriel Alfour, this post outlines a five-phase roadmap for what Conjecture calls "cognitive software," moving from "comfortably unsound practices" toward "21st-century cognitive engineering."18

Views on AI Risk

AGI Timelines

Leahy has publicly stated short AGI timeline estimates across multiple appearances. In a February 2023 Future of Life Institute podcast, he described his estimate as: "the joke timeline I usually give people is like 30% in like the next four or five years, 50% by like 2030, or like 2035, something like that... 99% by 2100. If it's not by 2100, something truly f***ed up has happened. I mean, we had like a nuclear war or something."19 In the same interview, he stated: "We have hardware, we have most of the software, we're about two to five insights away from full blown AGI."19

These figures are consistent with estimates he gave in a July 2022 interview: "20 to 30% in the next five years, 50% by 2030, 99% by 2100, 1% had already happened."4 A 2023 internal survey of Conjecture employees found that all respondents expected AGI before 2035.20

Leahy has noted he withholds his full causal model for his timeline estimates, citing "info hazard practices."19

P(doom)

Leahy holds a high subjective probability of catastrophic or fatal outcomes from advanced AI development. In a July 2022 interview, when asked if he could be "at 99% probability of doom, but have like five or ten percent error bars," Leahy responded: "Yes, that's kind of my thinking."4 He has described himself as "not as pessimistic as Eliezer [Yudkowsky] but pretty close."4 An EA Forum post on AI x-risk camps groups Leahy with Yudkowsky as prominent representatives of the "doomer" position, noting that Leahy "is so worried precisely because he thinks that there is no secret sauce left" — i.e., that no fundamental breakthrough is needed to reach dangerous capability levels, as sufficient scale already provides most of what is needed.21

In April 2024, Leahy appeared on Azeem Azhar's Exponential View podcast, where Azhar framed him as "a guest who thinks we are (almost) doomed."22

Risk Assessment Overview

Assessment | Leahy's Stated Position | Source
AGI timeline (50%) | ≈2030–2035 | FLI Podcast, Feb 2023
AGI timeline (99%) | By 2100 | FLI Podcast, Feb 2023
P(doom) | Near 99%, with ≈5–10% error bars | The Inside View, Jul 2022
Default trajectory | Catastrophic without major alignment progress | ClearerThinking Podcast

Stated Views on Alignment

Leahy has articulated several positions on how alignment research should be conducted, which he has expressed across podcasts and written posts:

  • Interpretability as prerequisite: Leahy has argued that mechanistic understanding of AI systems is a necessary condition for safely deploying them, though he has not consistently claimed it is alone sufficient.
  • Skepticism of black-box approaches: He has characterized purely empirical approaches — testing alignment without understanding internal mechanisms — as insufficient for systems approaching AGI capabilities.
  • Cognitive Emulation as strategy: Following Conjecture's 2023 pivot, he has argued for building systems that emulate human cognitive processes in bounded and auditable ways, rather than pursuing end-to-end "magical" systems.16
  • Urgency over gradualism: He has argued that the field cannot afford to wait for theoretical completeness before acting, while simultaneously arguing against purely empirical tinkering without mechanistic grounding.
  • International governance: The MAGIC proposal reflects his view that purely national or voluntary governance is insufficient and that a global moratorium on frontier AI development may be necessary.17

On the Machine Learning Street Talk podcast, Leahy stated: "AI alignment is philosophy with a deadline" and "AI will go wrong by default."23 In the ClearerThinking podcast with Spencer Greenberg, he elaborated on his instrumental convergence reasoning: any sufficiently capable goal-directed system will have incentives to resist shutdown and acquire resources, not from malice but from the logic of goal pursuit.24

Leahy has also introduced the concept of "algorithmic cancer" — AI-generated content crowding out human-created content through algorithms optimizing for engagement rather than authenticity — as a near-term concern distinct from existential risk.25

Public Communication

Podcast and Media Appearances

Leahy has appeared on numerous podcasts and in press coverage. Confirmed appearances include:

  • The Inside View (Michaël Trazzi, host): First appearance May 4, 2021; second appearance July 21, 2022, covering P(doom), AGI timelines, Yudkowsky's "Die With Dignity" framework, and Conjecture.4
  • Future of Life Institute Podcast: Multiple appearances, including "Connor Leahy on AI Progress, Chimps, Memes, and Markets" (February 10, 2023), covering AGI definitions, timelines, and prediction markets.19
  • Machine Learning Street Talk: Multiple episodes, including "AI Alignment & AGI Fire Alarm" (2021–2022 era), "#112 Avoiding AGI Apocalypse" (December 2022), "e/acc, AGI and the Future" (February 2024), and "The Compendium" (with Gabriel Alfour, 2023–2024).232627
  • ClearerThinking Podcast (Spencer Greenberg, host): "Will AI Destroy Civilization in the Near Future?" — a debate-style discussion on existential AI risk.24
  • The Great Simplification (Nate Hagens, host): "Algorithmic Cancer: Why AI Development Is Not What You Think," covering AI-generated content displacement, job disruption, and regulatory approaches.25
  • Eye On A.I. (Craig Smith, host): Episode #158, "The Unspoken Risks of Centralizing AI Power," published November 29, 2023, covering monopolization of AI technology and governance.28
  • Exponential View (Azeem Azhar, host): "Does AI Present an Existential Risk?" (April 2024).22
  • Jim Rutt Show (Currents 038, 2021 era): Wide-ranging conversation on AI, in which Leahy was described as "AI Researcher at German startup Aleph Alpha and founding member of EleutherAI."8

Note: As of the time of this writing, no confirmed Connor Leahy appearance on the Lex Fridman Podcast has been published; a prediction market on Manifold listed a 75% probability of such an interview occurring by end of 2025, implying it had not yet happened at market creation.29

Leahy has also been active in UK policy outreach to government officials and to capabilities researchers at other AI companies.5

Communication Style

Leahy is generally characterized in press coverage and community discussion as direct and willing to express views outside the mainstream of AI development discourse. He debated Guillaume Verdon ("Beff Jezos"), a prominent e/acc (effective accelerationism) proponent, in a public forum, with behind-the-scenes footage released via Machine Learning Street Talk.27 He has engaged with critics in written form on LessWrong and the Alignment Forum, including responses to Conjecture's organizational retrospectives.

Criticism and Debates

Community Criticism

A June 2023 EA Forum post titled "Critiques of Prominent AI Safety Labs: Conjecture" raised a number of concerns about Leahy and Conjecture specifically:5

  • The post expressed concern about Leahy's "character and trustworthiness," citing "a lack of attention to rigor and engagement with risky behavior" and "an unwillingness to take external feedback" from him and other staff.
  • It noted that Leahy's technical background — a computer science undergraduate degree and approximately two years of professional machine learning experience — is less extensive than some other safety lab leaders, and questioned whether partial replication of GPT-2 in 2019 had been overstated.
  • It reported that growth slowed in 2023 primarily because Conjecture was unable to raise adequate funding for its expansion plans.
  • It observed that many mechanistic interpretability researchers left Conjecture following the pivot to Cognitive Emulation in early 2023.

The EA Forum critique also noted that Conjecture "recruits heavily from the EA movement" and that some technical AI safety researchers have expressed positive views of Conjecture's work, indicating a divided reception within the community.5

Debates on Timelines and P(doom)

Some AI safety researchers and commentators disagree with Leahy's timeline estimates and P(doom) figures. His 50% by 2030 estimate is at the shorter end of expert distributions; survey evidence from AI safety researchers in 2021 found a mean P(doom) of approximately 30%, substantially below Leahy's stated position.30 Critics of short-timeline positions typically argue that remaining barriers to AGI — including reasoning reliability, robustness, and sample efficiency — are more significant than Leahy's "two to five insights away" framing suggests.

Leahy has responded to such critiques by arguing that uncertainty about remaining barriers cuts in both directions, and that the costs of complacency outweigh the costs of excessive caution. He stated in the ClearerThinking podcast: "if you want to achieve a goal, whatever the goal is, it's useful to have resources... by default, if you have a system which is achieving whatever goal, and if it's very intelligent, it will disempower any other intelligent things that are around, that are in its way."24

Debates on Research Strategy

Leahy's view that interpretability and mechanistic understanding are prerequisites for safe deployment has been contested by researchers who argue that empirical alignment techniques (such as RLHF, Constitutional AI, and Scalable Oversight) can improve safety even without complete mechanistic understanding. Organizations such as Anthropic and OpenAI pursue both interpretability and empirical alignment in parallel, rather than treating the former as a strict prerequisite for the latter. Conjecture's own retrospective acknowledged that its interpretability work produced "no clear implications of how to use this framework" in practice, which critics have cited as evidence of the difficulty of the interpretability-first strategy.14

The pivot to Cognitive Emulation in 2023 was itself controversial: staff departures following the pivot suggest disagreement within Conjecture about the strategic direction.5

Evolution of Views and Organizational Strategy

Leahy's public positions have remained broadly consistent since Conjecture's founding: short AGI timelines, high P(doom), and emphasis on the need for mechanistic understanding. The most significant shift has been at the organizational level, from an early focus on mechanistic interpretability (2022) to Cognitive Emulation as the primary research framework (from February 2023 onward).16

Conjecture's own public retrospectives reflect an unusual degree of candor about limitations. The November 2022 retrospective acknowledged that "most of our efforts to date have not made meaningful progress on the alignment problem" while arguing that the candid accounting was itself evidence of the organization's approach to calibration.14 A December 2024 roadmap post continued in this vein, with Leahy commenting in the discussion thread: "I do not think we are going to make it, but it's worth taking our best shot."18

Current Priorities

Conjecture's publicly stated research directions as of 2024–2025 include:

  1. Cognitive Emulation: Building AI systems that emulate human cognitive processes in bounded, auditable ways, rather than deploying end-to-end systems whose internal operation is not understood.
  2. AI governance: Contributing to international governance proposals, including the MAGIC framework, and engaging with UK policymakers.
  3. Public communication: Continuing podcast appearances, written posts, and media engagement on AI risk.
  4. Organizational sustainability: Continuing fundraising after slower growth in 2023.5

Footnotes

  1. Conjecture About Page. Conjecture. Accessed 2024.

  2. About — EleutherAI. EleutherAI. Accessed 2024. "Founded in July 2020 by Connor Leahy, Sid Black, and Leo Gao."

  3. EleutherAI — Wikipedia. "EleutherAI began as a Discord server on July 7, 2020, under the tentative name 'LibreAI'... Its founding members are Connor Leahy, Leo Gao, and Sid Black."

  4. Connor Leahy on Dignity and Conjecture — The Inside View. Michaël Trazzi (host). July 21, 2022. "I am not as pessimistic as Eliezer but I'm pretty close." On P(doom): "Yes, that's kind of my thinking." Timeline: "20 to 30% in the next five years, 50% by 2030, 99% by 2100, 1% had already happened."

  5. Critiques of Prominent AI Safety Labs: Conjecture — EA Forum. June 12, 2023.

  6. AMA: Conjecture, A New Alignment Startup — AI Alignment Forum. Conjecture team. April 9, 2022.

  7. Connor Leahy — Imagination in Action Speaker Profile. "From 2017 to 2020, Connor Leahy attended the Technical University of Munich... No specific degree was mentioned. Worked as AI researcher at Aleph Alpha GmbH from January 2019 to October 2021."

  8. Currents 038: Connor Leahy on Artificial Intelligence — The Jim Rutt Show. Jim Rutt. 2021.

  9. Citation rc-6986 (data unavailable — rebuild with wiki-server access)

  10. AMA: Conjecture, A New Alignment Startup — AI Alignment Forum. April 9, 2022. "We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried."

  11. Import AI 291: Conjecture, A New AI Alignment Company — Jack Clark. April 11, 2022.

  12. Conjecture — PitchBook Company Profile. PitchBook. 2025. Lists 13 employees; seed round of $25M on November 8, 2022.

  13. Conjecture — LinkedIn Company Page. Updated 2024. "Last Seed round on December 8, 2022 raised $5.0M." Also: Crunchbase lists latest funding as May 2024 Seed round.

  14. Citation rc-2539 (data unavailable — rebuild with wiki-server access)

  15. Citation rc-3e6a (data unavailable — rebuild with wiki-server access)

  16. Cognitive Emulation: A Naive AI Safety Proposal — LessWrong. Connor Leahy and Gabriel Alfour. February 25, 2023.

  17. Multinational AGI Consortium (MAGIC): A Proposal for International Coordination on AI — arXiv. Jason Hausenloy, Andrea Miotti, Claire Dennis. October 13, 2023. Selected by the Future of Life Institute as one of six funded global AI governance proposals.

  18. Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI — LessWrong. Connor Leahy and Gabriel Alfour. December 2, 2024. Comment by Leahy: "I do not think we are going to make it, but it's worth taking our best shot."

  19. Connor Leahy on AI Progress, Chimps, Memes, and Markets — Future of Life Institute Podcast. February 10, 2023. Transcript on Alignment Forum.

  20. When Do Experts Think Human-Level AI Will Be Created? — EA Forum. "A 2023 survey of employees at Conjecture found that all of the respondents expected AGI before 2035."

  21. Three Camps in AI X-Risk Discussions — EA Forum. "Connor Leahy is so worried precisely because he thinks that there is no secret sauce left."

  22. Does AI Present an Existential Risk? — Exponential View (Azeem Azhar). April 2024.

  23. AI Alignment & AGI Fire Alarm — Connor Leahy | Machine Learning Street Talk. 2021–2022. Quotes: "AI alignment is philosophy with a deadline"; "AI will go wrong by default."

  24. Will AI Destroy Civilization in the Near Future? (with Connor Leahy) — ClearerThinking Podcast. Spencer Greenberg (host).

  25. Connor Leahy — Algorithmic Cancer: Why AI Development Is Not What You Think — The Great Simplification. Nate Hagens (host). 2023–2024.

  26. #112 Avoiding AGI Apocalypse — Connor Leahy | Machine Learning Street Talk. December 2022.

  27. Citation rc-3de4 (data unavailable — rebuild with wiki-server access)

  28. #158 Connor Leahy: The Unspoken Risks of Centralizing AI Power — Eye On A.I. November 29, 2023.

  29. Will Lex Fridman Interview Connor Leahy Before End of 2025? — Manifold Markets. "As of market creation, no confirmed Connor Leahy appearance on Lex Fridman Podcast had been published."

  30. List of P(doom) Values — PauseAI. Aggregated P(doom) estimates from named individuals. Mean from 44 AI safety researchers in 2021: approximately 30%.


Structured Data

Employed By: Conjecture (as of Mar 2022)
Role / Title: CEO & Co-founder, Conjecture (as of Mar 2022; earlier value: Co-founder, 2020)
Birth Year: 1998
Notable For: CEO of Conjecture; co-founder of EleutherAI; prominent AI safety advocate; testified before UK Parliament and EU on AI risks
Social Media: @NPCollapse

Career History

Organization | Title | Start | End
EleutherAI | Co-founder | 2020 | 2023
Conjecture | CEO & Co-founder | Mar 2022 | present

Related Pages

Top Related Pages

Organizations: Redwood Research · Anthropic · Centre for Effective Altruism
Approaches: AI Alignment · Mechanistic Interpretability · Constitutional AI
Safety Research: Scalable Oversight
Concepts: RLHF · AGI Timeline · AI Doomer Worldview · Agentic AI
Risks: Emergent Capabilities · AI Capability Sandbagging
Policy: Voluntary AI Safety Commitments
Key Debates: AI Alignment Research Agendas · Technical AI Safety Research
Analysis: Model Organisms of Misalignment · Capability-Alignment Race Model
Historical: Deep Learning Revolution Era · Mainstream Era