Longterm Wiki

OpenAI. Governance of superintelligence

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OpenAI

A high-profile 2023 statement from OpenAI's founders calling for international governance of superintelligent AI; notable as an industry-insider perspective advocating for external oversight of their own technology.

Metadata

Importance: 72/100 · blog post · primary source

Summary

A policy statement by OpenAI's leadership (Sam Altman, Greg Brockman, Ilya Sutskever) arguing that superintelligence may arrive within a decade and requires new international governance frameworks beyond existing AI oversight approaches. It proposes coordination among leading AI labs, government involvement, and an international watchdog body analogous to the IAEA. The piece acknowledges the transformative and potentially dangerous nature of superintelligence while arguing development should continue under improved oversight.

Key Points

  • Superintelligence could arrive within a decade and will be qualitatively different from current AI, requiring new governance approaches.
  • Proposes an international authority (like the IAEA) to oversee and inspect the most powerful AI systems globally.
  • Argues that leading AI developers should coordinate on safety and governance even as they compete commercially.
  • Emphasizes that any governance framework must prevent catastrophic misuse while preserving the benefits of advanced AI.
  • Acknowledges that normal regulatory processes may be too slow and that proactive institution-building is urgently needed.

Cited by 1 page

Page                Type      Quality
Superintelligence   Concept   92.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 10 KB
Governance of superintelligence | OpenAI

The Wayback Machine - https://web.archive.org/web/20260404143900/https://openai.com/index/governance-of-superintelligence/

Table of contents

  • A starting point
  • What’s not in scope
  • Public input and potential

May 22, 2023
Safety

Governance of superintelligence

Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.


Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

A starting point

There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are like

... (truncated, 10 KB total)
Resource ID: c2e3d7e5c92d5689 | Stable ID: sid_1FmbZxeN2t