Eliezer Yudkowsky — Notable For: Founder of MIRI; pioneer of AI alignment as a field; author of 'The Sequences' on rationality; author of Harry Potter and the Methods of Rationality; prominent AI doomer
The claim lists five items. Four are confirmed by the source:
- Founder of MIRI: confirmed ('He is the founder of and a research fellow at the Machine Intelligence Research Institute').
- Pioneer of AI alignment: confirmed (the source discusses his work on AI safety and alignment extensively).
- Author of 'The Sequences': confirmed ('Over 300 blog posts...were released as an ebook, Rationality: From AI to Zombies...This book is also referred to as The Sequences').
- Author of Harry Potter and the Methods of Rationality: confirmed.

The fifth item, 'prominent AI doomer', is not explicitly stated in the Wikipedia article. While the source discusses his views on AI risk, including his 2023 Time op-ed advocating a halt to AI development, it does not use the label 'AI doomer' or any similar characterization. The claim is therefore partial: most elements are confirmed, but one is unverifiable from this source.