OpenAI's advocacy for licensing
Credibility Rating
4/5 — High
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is an official OpenAI position piece by Sam Altman, Greg Brockman, and Ilya Sutskever outlining their views on superintelligence governance; it is useful for understanding how leading AI labs frame the need for licensing and international oversight.
Metadata
Importance: 62/100 · blog post · primary source
Summary
OpenAI's blog post argues that superintelligence may arrive sooner than expected and calls for new governance frameworks, including international coordination and licensing regimes for the most powerful AI systems. It outlines OpenAI's views on how society should prepare for and oversee AI systems that could surpass human-level capabilities across most domains.
Key Points
- Superintelligence could arrive within the current decade, requiring proactive governance structures before systems become ungovernable.
- OpenAI advocates for licensing requirements for frontier AI developers to ensure accountability and safety standards.
- International coordination is deemed essential to prevent races to the bottom on safety among competing nations or companies.
- Existing regulatory institutions may be insufficient; new bodies analogous to the IAEA or nuclear regulators may be needed for AI.
- The post acknowledges tension between open development and the need for safety-focused control over the most powerful systems.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Governance and Policy | Crux | 66.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Governance of superintelligence
COLLECTED BY
Organization: Internet Archive
Collection: time.com
TIMESTAMPS
The Wayback Machine - http://web.archive.org/web/20240501033453/https://openai.com/blog/governance-of-superintelligence
Governance of superintelligence
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
Illustration: Justin Jay Wang × DALL·E
May 22, 2023
Authors
Sam Altman
Greg Brockman
Ilya Sutskever
Safety & Alignment
Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
A starting point
There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current effo
... (truncated, 7 KB total)
Resource ID: 825843053766d808 | Stable ID: sid_FV8eIBpc0p