Longterm Wiki

The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence — Timnit Gebru and Émile P. Torres, First Monday (April 2024)

web

A critical outside-looking-in perspective on AI safety culture from Timnit Gebru and Émile Torres; relevant for understanding ideological critiques of longtermism and EA-aligned AI safety movements, though contested by many within those communities.

Metadata

Importance: 58/100 · journal article · analysis

Summary

Torres and Gebru critique the ideological cluster they term 'TESCREAL' (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism), arguing these movements share eugenic roots and use AGI as a vehicle for utopian promises that risk marginalizing present-day populations. The paper contends that this ideological bundle disproportionately shapes AI safety and development discourse, embedding historically problematic assumptions about human optimization and population control into mainstream AI governance conversations.

Key Points

  • Coins the 'TESCREAL bundle' to describe seven overlapping techno-utopian ideologies that share philosophical and historical roots in eugenics.
  • Argues that longtermism and EA-influenced AI safety framing prioritizes speculative future humans over current marginalized populations.
  • Contends that AGI narratives within TESCREAL function as secularized eschatology, promising utopia while legitimizing harmful present-day tradeoffs.
  • Highlights how TESCREAL ideologies have gained outsized influence in AI labs and policy circles, shaping research priorities and governance debates.
  • Calls for critical scrutiny of the ideological assumptions embedded in mainstream AI safety and AGI discourse.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The Wayback Machine - http://web.archive.org/web/20260228012243/https://firstmonday.org/ojs/index.php/fm/article/view/13636

Volume 29, Number 4 - 1 April 2024
Authors

Timnit Gebru
Émile P. Torres

DOI: https://doi.org/10.5210/fm.v29i4.13636

Abstract

The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.

Downloads

HTML
PDF

Published: 2024-04-14

How to Cite

Gebru, T., & Torres, Émile P. (2024)

... (truncated, 4 KB total)
Resource ID: d9c5ebff69e9f067 | Stable ID: sid_3VvLKDaUgy