Dwarkesh Podcast 2024
dwarkeshpatel.com/
A popular podcast homepage; individual episodes featuring AI safety researchers can be valuable primary sources for understanding current thinking on alignment and AI risk, though the homepage itself carries little specific content of its own.
Metadata
Importance: 45/100 · Type: homepage
Summary
The Dwarkesh Podcast features long-form interviews with leading researchers, economists, and thinkers, including prominent AI safety and capabilities researchers. Episodes frequently cover AI development trajectories, alignment challenges, and the implications of advanced AI systems.
Key Points
- Long-form interviews with top AI researchers, including those from Anthropic, OpenAI, and DeepMind, on safety and capabilities
- Covers topics ranging from technical AI alignment to broader societal and existential implications of advanced AI
- Features in-depth conversations that often surface novel perspectives not found in formal publications
- Guests have included Ilya Sutskever, Demis Hassabis, Dario Amodei, and other leading figures in AI development
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Dario Amodei | Person | 41.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 1 KB
Dwarkesh Podcast | Substack — Deeply researched interviews. By Dwarkesh Patel · Over 73,000 subscribers. "Deeply researched interviews with obscure intellectuals." — gwern, Gwern.net Newsletter. "great interviewer" — Razib Khan, Razib Khan's Unsupervised Learning.
Resource ID:
e46ec6f080a1f2a4 | Stable ID: sid_SwmK5lt8k9