Longterm Wiki

Collective Alignment: Public Input on Our Model Spec

Source type: web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OpenAI

This is an official OpenAI update on their collective alignment research program, detailing how public survey data informed changes to the Model Spec—relevant to governance, participatory AI design, and the operationalization of broad-based value alignment.

Metadata

Importance: 62/100 · blog post · primary source

Summary

OpenAI surveyed over 1,000 people worldwide to gather public input on how their AI models should behave, comparing responses to their existing Model Spec. The study found broad agreement with the Spec but used disagreements to drive targeted updates, and released the dataset publicly on HuggingFace to support further research.

Key Points

  • Surveyed 1,000+ people globally to assess alignment between public values and OpenAI's Model Spec, finding mostly agreement but identifying areas for clarification.
  • Disagreements between participant preferences and the Model Spec were transformed into proposals, with some adopted, some deferred, and others set aside based on principle or feasibility.
  • OpenAI released the public input dataset on HuggingFace to enable broader AI research community work on collective alignment methods (a loading sketch follows this list).
  • The effort reflects a stated principle that no single person or institution should define ideal AI behavior, emphasizing diverse global representation.
  • Complements other OpenAI alignment inputs (expert feedback, listening sessions) and connects to ongoing work on model personalization and default behavior governance.
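Since the dataset is released publicly on HuggingFace, a natural first step for researchers is to pull it down and tally agreement. The sketch below uses the real `datasets.load_dataset` API, but the repository ID (`openai/collective-alignment-public-input`) and the `agreement` column are placeholders assumed for illustration; neither the wiki entry nor the cached preview names the actual repository or schema, so check the dataset card before running this.

```python
# Sketch: a first look at OpenAI's public-input dataset on HuggingFace.
# The repository ID and column name below are assumptions for illustration;
# consult the actual dataset card for the real name and schema.
from collections import Counter

from datasets import load_dataset  # pip install datasets

# Hypothetical repository ID; the post only says the data is on HuggingFace.
ds = load_dataset("openai/collective-alignment-public-input", split="train")
print(f"{len(ds)} survey responses")

# Tally how often participants agreed with the Model Spec's default stance,
# assuming an 'agreement' label column exists in the released schema.
counts = Counter(row["agreement"] for row in ds)
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / len(ds):.1%})")
```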

Cited by 1 page

Page | Type | Quality
Why Alignment Might Be Hard | Argument | 69.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 27 KB
Collective alignment: public input on our Model Spec | OpenAI

Archived by the Wayback Machine: http://web.archive.org/web/20260311155935/https://openai.com/index/collective-alignment-aug-2025-updates/

 


Table of contents

Model Spec changes

What we did

Conclusion

Appendix: Demographic data

August 27, 2025
Publication

Collective alignment: public input on our Model Spec

We surveyed over 1,000 people worldwide on how our models should behave and compared their views to our Model Spec. We found they largely agree with the Spec, and we adopted changes from the disagreements.


No single person or institution should define how an ideal AI should behave for everyone.

To fulfill our mission of ensuring that AGI benefits all of humanity, OpenAI needs to build systems that reflect the wide range of values and priorities of all the people we serve. We approach this in many ways, including external feedback forms, expert input, and global listening sessions. Another way we do this is through collective alignment, a research effort that gathers a variety of perspectives on how our models should behave. The question of which values an AI system should follow is complex and we don’t have all the answers, especially in subjective, contentious or high-stakes situations. As AI becomes more capable and integrated into people’s lives, it’s important that their default behavior—and the boundaries of personalization—reflects a wide range of perspectives and values.

There will likely never be a single AI behavior set that suits everyone’s needs. This is why we also invest in personalization and custom personalities. However, the defaults of

... (truncated, 27 KB total)
Resource ID: 75b66340eb2fadc2 | Stable ID: ZDgyZWYwOD