Harvard Law Review: Amoral Drift in AI Corporate Governance
harvardlawreview.org/print/vol-138/amoral-drift-in-ai-cor...
Published in the Harvard Law Review, this piece offers a legal and corporate governance lens on AI safety failures, relevant for those studying institutional design, regulatory approaches, and why AI companies may fail to self-govern effectively.
Metadata
Importance: 72/100 · journal article · analysis
Summary
This Harvard Law Review article examines how AI companies exhibit "amoral drift" — a structural tendency to deprioritize ethical considerations as commercial pressures intensify — and analyzes why current corporate governance mechanisms fail to constrain it. It argues that existing legal and organizational structures are insufficient to ensure AI development remains aligned with public interests.
Key Points
- AI companies face structural incentives that systematically erode ethical commitments over time, a phenomenon the article terms "amoral drift"
- Standard corporate governance mechanisms (boards, fiduciary duties, internal ethics teams) are inadequate to constrain AI-specific harms
- The article treats high-profile governance failures at major AI labs as symptoms of deeper structural problems, not isolated incidents
- Proposes legal and regulatory reforms to create binding accountability mechanisms for AI developers
- Argues that voluntary safety commitments without enforceable legal backstops are insufficient to govern transformative AI systems
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Anthropic Long-Term Benefit Trust | Organization | 70.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 85 KB
Amoral Drift in AI Corporate Governance - Harvard Law Review
Developments in the Law
ChatGPT’s debut in November of 2022 set off a race in Silicon Valley to develop and monetize artificial intelligence (AI). 1 Within a few months, Microsoft invested $10 billion in OpenAI, the company behind ChatGPT. 2 Anthropic, a competitor of OpenAI, raised similarly impressive amounts of money from companies and investors hoping to participate in the AI revolution. 3
Well before ChatGPT emerged, commentators warned of the risks advanced AI might pose. 4 Observers who predict existential threats to humanity from superintelligent AI point to the difficulty of precisely controlling it. 5 They reason that superintelligent AI might pursue a human-directed goal without balancing its goal against general human values. 6 For example, with access to enough tools, a superintelligent AI instructed to maximize paperclip production might end up “converting . . . large chunks of the observable universe into paperclips.” 7 Alternatively, a superintelligent AI may develop its own unexpected goals — goals that do not necessarily account for human wellbeing. 8 The proposed solution to these types of existential AI risks is “AI alignment”: the challenging task of ensuring that the values of an AI align with human values. 9 Critics believe AI startups are moving much faster than AI alignment research can keep up, at great risk to humanity. 10
Even if these existential risks sound far-fetched, AI certainly does present a challenge to existing legal and social frameworks. Companies have already demonstrated that AI can learn from and reflect human racial and gender biases. 11 Both the training inputs and the creative outputs of AI raise complicated questions of intellectual property law. 12 The current spotlight on AI also brings into focus the question of how to protect privacy in the era of Big Data, 13 especially as AI promises to massively boost data collection. 14 More gravely, malicious actors might use AI for terrorism, disinformation, and oppression. 15 AI startups need to confront these legal, ethical, and security issues implicated by AI as they advance the technology, including whether and how to implement guardrails to prevent the misuse of their products.
The risks posed by AI development have revived the question of how to deal with the negative externalities of corporations. Doubtful of the traditional profit motive, AI company founders have adopted some of the most ambitious versions of “prosocial” corporate governance mechanisms detailed in corporate governance literature. 16 To counterbalance the pressure to maximize profit, OpenAI and Anthropic have granted their boards outsized discretion in a manner consistent with a stakeholderist corpor
... (truncated, 85 KB total)
Resource ID: ab0dc9abee0cef4d | Stable ID: sid_hmgyYpa98m