Longterm Wiki

The content intelligence: an argument against the lethality of artificial intelligence | Discover Artificial Intelligence | Springer Nature Link

paper

Data Status

Not fetched

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Eliezer Yudkowsky | Person | 35.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 45 KB

# The content intelligence: an argument against the lethality of artificial intelligence

- Perspective
- [Open access](https://www.springernature.com/gp/open-science/about/the-fundamentals-of-open-access-and-open-research)
- Published: 22 February 2024

- Volume 4, article number 13 (2024)


- [Cite this article](https://link.springer.com/article/10.1007/s44163-024-00112-9#citeas)


[Download PDF](https://link.springer.com/content/pdf/10.1007/s44163-024-00112-9.pdf)


## Abstract

This paper navigates artificial intelligence’s recent advancements and increasing media attention. A notable focus is placed on Eliezer Yudkowsky, a leading figure within the domain of artificial intelligence alignment, who aims to bridge the understanding gap between public perceptions and rationalist viewpoints on artificial intelligence technology. The analysis examines his predicted course of action for artificial intelligence, outlined in his unpublished paper _AGI Ruin: A List of Lethalities._ This is achieved by attempting to understand the concept of intelligence itself and identifying a reasonable working definition of that concept. The concept of intelligence is then applied to contemporary artificial intelligence capabilities and developments to assess its applicability to these technologies. This paper finds that contemporary artificial intelligence systems are, to some extent, intelligent. However, it argues that both weak and strong artificial intelligence systems, devoid of human-defined goals, would not inherently pose existential threats to humanity, challenging notions of artificial intelligence alignment and bringing into question the validity of Nick Bostrom’s Orthogonality Thesis. Furthermore, the possibility of artificial life created through the method of assembling various modules each emula

... (truncated, 45 KB total)
Resource ID: a4572a8ebeee9ecd | Stable ID: OWMyYmYwN2