White House feud with Anthropic reveals broader AI safety concerns | Semafor
Reed Albergotti, Tech Editor, Semafor
Oct 17, 2025, 10:59am EDT
Technology | North America

Reed’s view

The cold war between Anthropic and the White House broke out into public view this week when White House AI czar David Sacks attacked Anthropic co-founder Jack Clark on X, accusing him of concealing a “sophisticated regulatory capture strategy based on fear-mongering.” The core claim: the “AI safety” conversation is a mere commercial tactic by one of the four main US players in the AI race.

That’s a convenient line for the accelerationists in the White House and at some of the other companies, but my own assessment, based on many years of experience weighing corporate BS, is that people like Clark and Anthropic CEO Dario Amodei aren’t faking it. This is a real philosophical divide about the nature of this technology, one that has evolved since it burst out of subreddits three years ago.

A concept I find helpful in this context is Moravec’s paradox: even as AI makes huge leaps in performance with more data and more compute (contributing to scientific discovery, transforming Hollywood, and automating corporate workflows), it continues to have trouble with simple tasks. These very basic mistakes mean that while AI might help us cure cancer, it will still require human supervision for most tasks for the foreseeable future.
From Sacks’ point of view, this is a “goldilocks” situation, where all the predictions of AI destroying humanity and replacing most human workers were wrong, and we’re instead seeing gradual improvements and a competitive marketplace primed to enable a wave of innovation.

But from Clark’s vantage point, up close to some of the top researchers in the world, Sacks’ assessment is premature. As its capabilities increase, AI may eventually learn how to improve itself, leading to more rapid gains. “This technology really is more akin to something grown than something made,” Clark writes in his essay. “We are growing extremely powerful systems that we do not fully understand.”

It’s possible that AI models trained with today’s architecture could continue to improve with scale, yet make little progress toward the kind of reliability needed for those simple tasks. In that case, researchers would focus their brainpower on the last hurdles standing in the way of the AI holy grail: predictability and interpretability. Pass those, and you’d find a fundamentally different and much safer kind of AI than we have today. If not,
… (truncated, 120 KB total)