2023 AI researcher survey
Reference Credibility Rating
3/5
Good (3) — Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Wikipedia
Data Status
Not fetched
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| The Case For AI Existential Risk | Argument | 66.0 |
| AI Welfare and Digital Minds | Concept | 63.0 |
| Controlled Vocabulary for Longtermist Analysis | -- | 55.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 23, 2026 · 98 KB
Existential risk from artificial intelligence - Wikipedia
From Wikipedia, the free encyclopedia
Hypothesized risk to human existence
Existential risk from artificial intelligence, or AI x-risk, refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.[1][2][3][4]
One argument for the validity of this concern and the importance of this risk references how human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable.[5] Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.[6]
Experts disagree on whether artificial general intelligence (AGI) can achieve the capabilities needed for human extinction. Debates center on AGI's technical feasibility, the speed of self-improvement,[7] and the effectiveness of alignment strategies.[8] Concerns about superintelligence have been voiced by researchers including Geoffrey Hinton,[9] Yoshua Bengio,[10] Demis Hassabis,[11] and Alan Turing,[a] and AI company CEOs such as Dario Am
... (truncated, 98 KB total)
Resource ID: 9f9f0a463013941f | Stable ID: NjhjYjI5OG