Longterm Wiki

AI Panic News - The Rationality Trap

web

A critical Substack essay by Nirit Weiss-Blatt examining psychological harms within the MIRI/rationalist community; relevant to understanding sociological critiques of the AI safety movement and its institutional culture.

Metadata

Importance: 38/100 · opinion piece · commentary

Summary

A critical investigative essay examining the psychological and social harms within the rationalist community centered around MIRI and CFAR, documenting cases of psychosis, suicide, and cult-like dynamics. The piece explores how the community's extreme commitment to rationality and AI existential risk created environments harmful to mental health. It raises broader questions about whether the cultural pathologies of the rationalist community undermine its credibility on AI safety.

Key Points

  • Multiple MIRI/CFAR-adjacent community members experienced psychotic episodes; at least two died by suicide, suggesting systemic psychological harm in the community.
  • The rationalist community exhibited cult-like behaviors: isolating 'normies,' confrontational 'debugging' sessions, psychedelic experimentation, and pushing social norm boundaries.
  • The piece questions whether the same epistemic community driving AI existential risk discourse is itself epistemically and psychologically dysfunctional.
  • Bloomberg's 2023 reporting on EA and rationalist community harms is cited as evidence these issues extend beyond isolated incidents.
  • The essay uses the term 'rationality trap' to suggest that extreme rationalist frameworks can paradoxically produce irrational and harmful outcomes.

Cited by 1 page

Page | Type | Quality
Center for Applied Rationality | Organization | 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 43 KB
THE RATIONALITY TRAP - by Nirit Weiss-Blatt - AI PANIC 
THE RATIONALITY TRAP

 Nirit Weiss-Blatt · Sep 12, 2025

 Prologue

 Jessica Taylor joined the Machine Intelligence Research Institute (MIRI) in August 2015. 

 Located in Berkeley, California, it was the epicenter of a small community that calls itself the rationalist community. Its leader is writer Eliezer Yudkowsky, the founder of the LessWrong forum and MIRI, who hired her as a research fellow. MIRI was preoccupied with the idea of “friendly” artificial intelligence (AI). Taylor had an MSc in computer science from Stanford and wanted to work on software agents “that can acquire human concepts.” 

 Two years later, she had a psychotic break. “I believed that I was intrinsically evil,” she would later write, “[and that I] had destroyed significant parts of the world with my demonic powers.”

 After she shared the details of her experience on the community’s forum in 2021, other testimonies surfaced. 

 In 2023, Bloomberg gave the wider public a glimpse with “The Real-Life Consequences of Silicon Valley’s AI Obsession.”

 “Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth, and even conscious thought,” reported Bloomberg journalist Ellen Huet. “They referred to outsiders as normies and NPCs [non-player characters]. At house parties, they spent time ‘debugging’ each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts.” It didn't stop there. “Sometimes, to probe further, they experimented with psychedelics and tried ‘jailbreaking’ their minds, to crack open their consciousness and make them more influential, or ‘agentic.’” Taylor shared that several people in her sphere had similar psychotic episodes. “One died by suicide in 2018, and another in 2021.” 

 When Taylor joined the rationalist community, it had a seductive promise: we are the select few, who are so intelligent that we could save the world through our superpowered reasoning. They preached a new gospel from the heart of Silicon Valley: Superintelligent AI is coming, and we are the only ones clear-eyed and wise enough to stop the apocalypse before it’s too late. But reality proved very different: a trail of traumatic events, psychotic breakdowns, high-control groups, and, in the most extrem

... (truncated, 43 KB total)
Resource ID: 67b1f575de456581 | Stable ID: sid_YGAtPskcNi