The AI industry turns against its favorite philosophy, effective altruism | Semafor
Contextualizes the November 2023 OpenAI leadership crisis as a flashpoint in the tension between EA-driven AI safety philosophy and the commercial AI industry, relevant to understanding governance and organizational dynamics in AI development.
Metadata
Importance: 52/100 | Type: news article
Summary
This Semafor article examines how the November 2023 OpenAI boardroom crisis—centered on Sam Altman's brief ouster—reflected broader tensions between effective altruism's influence on AI safety culture and the commercial AI industry. It analyzes how EA-aligned board members clashed with Altman's growth-oriented leadership, and how the fallout triggered a backlash against EA's dominant role in shaping AI governance and safety norms.
Key Points
- The OpenAI board crisis of November 2023 was partly driven by EA-aligned members who prioritized safety concerns over commercial momentum.
- The incident accelerated a backlash in the AI industry against effective altruism's outsized influence on AI safety and governance discourse.
- EA's focus on long-term existential risk was criticized as clashing with near-term, pragmatic approaches to AI development and deployment.
- The crisis exposed philosophical and strategic fault lines between "doomers" and more commercially oriented AI developers within the broader AI safety community.
- The episode marked a potential turning point in which EA's grip on AI safety culture began to loosen in favor of alternative governance frameworks.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Anthropic | Organization | 74.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 8 KB
The AI industry turns against its favorite philosophy, effective altruism | Semafor
Louise Matsakis and Reed Albergotti | Updated Nov 21, 2023, 3:29pm EST | Technology
The Scene
One of the most prominent backers of the “effective altruism” movement at the heart of the ongoing turmoil at OpenAI, Skype co-founder Jaan Tallinn, told Semafor he is now questioning the merits of running companies based on the philosophy.
“The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” said Tallinn, who has poured millions into effective altruism-linked nonprofits and AI startups. “So the world should not rely on such governance working as intended.”
His comments are part of a growing backlash against effective altruism and its arguments about the risks AI poses to humanity, which has snowballed over the last few days into the movement’s second major crisis in a year.
The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.
EA then played a role in the meltdown at OpenAI when its nonprofit board of directors — tasked solely with ensuring the company’s artificial intelligence models are “broadly beneficial” to all of humanity — abruptly fired CEO Sam Altman on Friday, creating a standoff that currently threatens its entire existence.
Three of the six seats on OpenAI’s board are occupied by people with deep ties to effective altruism: think tank researcher Helen Toner, Quora CEO Adam D’Angelo, and RAND scientist Tasha McCauley. A fourth member, OpenAI co-founder and chief scientist Ilya Sutskever, also holds views on AI that are generally sympathetic to EA.
Until a few days ago, OpenAI and its biggest corporate backers didn’t seem to think there was anything worrisome about this unusual governance structure. The president of Microsoft, which has invested $13 billion into OpenAI, argued earlier this month that the ChatGPT maker’s status as a nonprofit was what made it more trustworthy than competitors like Meta.
“People get hung up on structure,” Vinod Khosla, whose venture capital firm was among the first to invest in OpenAI’s for-profit subsidiary in 2019, said at an AI conference last week. “If you’re talking about changing the world, who freaking cares?”
Less than a week later, Khosla and other OpenAI investors are now left with shares of uncertain value. So far, it looks like M
... (truncated, 8 KB total)
Resource ID: 7f104ac618c94ccc | Stable ID: sid_MXq0dWffp8