Longterm Wiki

David Sacks - Footnote 52

Verdict: partial (85%)
1 check · 4/3/2026

The claim that the EA Forum criticized Sacks is not directly supported by the article. The article mentions Matthew Adelstein, who writes an EA-focused Substack, disagreeing with Sacks and Weiss-Blatt's portrayal, but an individual blogger's disagreement is not the same as criticism issued by the EA Forum itself. The claim that EA views AI risks as comparable to nuclear and biological threats is also a slight oversimplification: the article says longtermism prioritizes preventing existential risks such as pandemics, nuclear war, or rogue AI, but it does not explicitly state that these risks are viewed as directly comparable.

Our claim


No record data available.

Source evidence

1 src · 1 check
partial (85%) · Haiku 4.5 · 4/3/2026


Case № page:david-sacks:fn52 · Filed 4/3/2026 · Confidence 85%