Alignment Assemblies — The Collective Intelligence Project
The Collective Intelligence Project's Alignment Assemblies initiative uses democratic deliberation processes to involve the public in AI governance decisions, partnering with Anthropic, OpenAI, and others to surface collective values for AI behavior and risk assessment.
Summary
Alignment Assemblies is a program by the Collective Intelligence Project that runs democratic deliberation processes to give the public meaningful input into AI governance decisions. Pilots have included collaborations with OpenAI on LLM risk prioritization, Anthropic on collectively-designed AI behavioral principles (Constitutional AI), Taiwan's Ministry of Digital Affairs on generative AI policy, and Creative Commons on AI training data licensing. The initiative argues that consequential AI decisions should not be left solely to a small group of developers and policymakers.
Key Points
- Ran a pilot with Anthropic to train a model on a "collective constitution" co-written by 1,000 representative Americans (Collective Constitutional AI).
- Partnered with OpenAI to use wikisurvey tools to rank the LLM risks most concerning to the US public, informing model evaluations and regulation.
- Collaborated with Taiwan's Ministry of Digital Affairs to adapt the vTaiwan process to generative AI policy questions.
- Worked with the Creative Commons Foundation on how to respond to CC-licensed work being used in AI training.
- Argues that democratic processes can surface necessary information and ensure collective accountability for high-impact AI decisions.
Alignment Assemblies
AI is on track to lead to profound societal shifts.
Choices that are consequential for all of us are already being made: how and when to release models, what constitutes appropriate risk, and how to determine underlying principles for model behavior. By default, these decisions fall to a small fraction of those likely to be affected. This disconnect between high-impact decisions and meaningful collective input will only grow as AI capabilities accelerate.
We believe that we can do better. Experimentation with collective intelligence processes can surface necessary information for decision-making, ensure collective accountability, and better align with human values. We are partnering with allies and collaborators from around the world to prove it. Read our blog post for more on the vision for alignment assemblies, and see our pilot processes, partnership principles, and vision for the future below. Read the results from our processes with Anthropic and OpenAI, which showed that democracy can do a good job deciding how to govern AI. And join us!
2023 Roadmap
v0: Summit for Democracy, March 2023
Core question: What do global policymakers think about the impact of generative AI on democracy?
This pilot surfaced broad opinions from a wide set of participants, pulled from the White House’s Summit for Democracy, on the relationship between generative AI and the future of democracy. Read about this pilot in the New York Times.
v1: Democratizing Risk Assessments, June 2023
Core question: What does the US public want to measure and mitigate when it comes to LLM risks and harms?
Partner: OpenAI
This pilot used state-of-the-art wikisurvey tools to produce a ranked list of risks that are most concerning to the US public. The outcomes of this process will be used to inform model evaluations and release criteria, standards-setting processes, and AI regulation. Read our report here.
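The report describes the mechanics in detail; as a rough intuition for how wikisurvey-style tools turn many individual pairwise choices into a ranked list, here is a minimal sketch. It scores each item by the fraction of head-to-head comparisons it wins; the risk labels and the win-rate scoring are illustrative assumptions, not the pilot's actual methodology or data.

```python
from collections import defaultdict

def rank_by_win_rate(votes):
    """Aggregate pairwise 'which concerns you more?' votes into a ranked list.

    votes: iterable of (winner, loser) pairs, one per participant comparison.
    Returns items sorted by the fraction of their comparisons they won.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return sorted(appearances,
                  key=lambda item: wins[item] / appearances[item],
                  reverse=True)

# Hypothetical votes (risk labels are made up for illustration):
votes = [
    ("disinformation", "job displacement"),
    ("disinformation", "privacy loss"),
    ("privacy loss", "job displacement"),
]
ranking = rank_by_win_rate(votes)
# → ["disinformation", "privacy loss", "job displacement"]
```

Real wikisurvey platforms use more robust estimators than a raw win rate (to handle items shown unequal numbers of times), but the core idea — many small pairwise judgments aggregated into a collective ranking — is the same.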
v2: Democratizing AI Futures, July 2023
Core question: How should the Ideathon, and Taiwan’s governmental policy more broadly, respond to generative AI?
Partner: Ministry of Digital Affairs, Taiwan
This pilot adapted the vTaiwan process to the question of generative AI, covering questions of copyright, due compensation, bias and discrimination, fair use, public service, and broader societal impacts. The results will directly structure the Ideathon and will be incorporated into policy over the next year.
... (truncated, 4 KB total)