Longterm Wiki
Entities (87)
Anthropic: 274 (+180)
Microsoft AI: 379 (+19)
OpenAI: 28 (+359)
Deep Learning Revolution Era: 353
OpenAI Foundation: 327
Why Alignment Might Be Hard: 312
Epoch AI: 302 (+9)
The Case Against AI Existential Risk: 279
Large Language Models: 274
Future of Life Institute (FLI): 261 (+4)
Evan Hubinger: 258
Kalshi (Prediction Market): 238 (+1)
Stuart Russell: 213 (+9)
Scheming: 219
AI Timelines: 206
Show all 87 entities…
AI Risk Critical Uncertainties Model (data page)
No claims found for this entity yet.
Page Resources (13)
AI Impacts 2023 survey (paper) ★★★☆☆
Metaculus (web) ★★★☆☆
Epoch AI (web) ★★★★☆
Pew Research: Public and AI Experts (web) ★★★★☆
International AI Safety Report 2025 (web)
Stanford AI Index 2025 (web) ★★★★☆
McKinsey State of AI 2025 (web) ★★★☆☆
IAPP AI Governance (web)
Infosys research (web)
safety funding gap (web) ★★☆☆☆
Mechanistic Interpretability for AI Safety — A Review (web)
80,000 Hours AGI Timelines Review (web) ★★★☆☆
An Overview of the AI Safety Funding Situation (LessWrong) (blog) ★★★☆☆