Cold Takes – Holden Karnofsky's Blog
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Cold Takes
Influential blog by an Open Philanthropy co-CEO; the 'Most Important Century' series is widely read in the AI safety community and provides strategic framing for why AI safety work is urgent.
Metadata
Summary
Cold Takes is Holden Karnofsky's (co-CEO of Open Philanthropy) personal blog exploring big-picture questions about AI, existential risk, effective altruism, and how to think about the most important challenges of our time. It features in-depth essays on AI timelines, transformative AI scenarios, and philanthropic strategy. The blog is notable for its 'Most Important Century' series arguing that we may be living at a uniquely pivotal moment in history.
Key Points
- Hosts the influential 'Most Important Century' series arguing current decades may be uniquely pivotal for humanity's long-term future.
- Covers AI timelines, transformative AI risk, and the implications of advanced AI from a longtermist perspective.
- Written by Holden Karnofsky, co-CEO of Open Philanthropy, a major funder of AI safety research.
- Explores how individuals and philanthropists should prioritize actions given uncertainty about AI development trajectories.
- Bridges technical AI safety concerns with broader existential risk, policy, and effective altruism considerations.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Holden Karnofsky | Person | 40.0 |
| Sharp Left Turn | Risk | 69.0 |
Cached Content Preview
Cold Takes
For audio version, search for "Cold Takes Audio" in your podcast app
Latest post: Good job opportunities for helping with the most important century
Featured posts
What does Bing Chat tell us about AI risk?
Jobs that can help with the most important century
Spreading messages to help with the most important century
How we could stumble into AI catastrophe
Transformative AI issues (not just misalignment): an overview
Racing through a minefield: the AI deployment problem
High-level hopes for AI alignment
AI Safety Seems Hard to Measure
Why Would AI "Aim" To Defeat Humanity?
The Track Record of Futurists Seems ... Fine
Nonprofit Boards are Weird
AI Could Defeat All Of Us Combined
Useful Vices for Wicked Problems
Ideal governance (for companies, countries and more)
The Wicked Problem Experience
Learning By Writing
Future-proof ethics
Where's Today's Beethoven?
Visualizing Utopia
Why Describing Utopia Goes Badly
Minimal-trust investigations
Rowing, Steering, Anchoring, Equity, Mutiny
Was life better in hunter-gatherer times?
Pre-agriculture gender relations seem bad
Has Life Gotten Better?
Summary of history (empowerment and well-being lens)
The Most Important Century (in a nutshell)
Why AI alignment could be hard with modern deep learning
One Cold Link: “The Past and Future of Economic Growth: A Semi-Endogenous Perspective”
AI Timelines: Where the Arguments, and the "Experts," Stand
Give Sports a Chance
Why talk about 10,000 years from now?
This Can't Go On
Does X cause Y? An in-depth evidence review
Phil Birnbaum's "bad regression" puzzles
All Possible Views About Humanity's Future Are Wild
All posts
Jan 18: Good job opportunities for helping with the most important century (5 min read)
Feb 28: What does Bing Chat tell us about AI risk? (3 min read)
Feb 24: How major governments can help with the most important century (6 min read)
Feb 2: ... (truncated, 14 KB total)