Cooperative AI
Cooperative AI research addresses multi-agent coordination failures through game theory and mechanism design, with roughly $1–20M/year of investment, primarily at DeepMind and academic groups. The field remains largely theoretical, with limited production deployment, and faces fundamental challenges in defining cooperation itself.
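As an illustration of the game-theoretic framing (a textbook toy model, not drawn from any particular lab's work), the iterated prisoner's dilemma shows the kind of coordination failure this research targets: two individually rational agents can lock into mutual defection, while a reciprocal strategy like tit-for-tat sustains cooperation. Payoffs and strategy names below are the standard textbook choices.

```python
# Iterated prisoner's dilemma: a minimal sketch of multi-agent
# coordination failure. C = cooperate, D = defect.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opp_history[-1] if opp_history else "C"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Mutual defection yields the worst joint outcome; reciprocity
# sustains the cooperative one:
print(play(always_defect, always_defect))  # (10, 10)
print(play(tit_for_tat, tit_for_tat))      # (30, 30)
```

Mechanism-design work in this area then asks how to change the payoff structure or the agents' commitments so that cooperation becomes the stable outcome.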
Related Pages
Multi-Agent Safety
Multi-agent safety research addresses coordination failures, conflict, and collusion risks when multiple AI systems interact.
Autonomous Cooperative Agents
AI agents that act cooperatively on behalf of a principal — delegation of cooperation, multi-agent cooperation dynamics, and alignment implications
Cooperate-Bot
A personal AI agent with a monthly budget that maintains cooperative relationships on your behalf — design analysis, failure modes, and the spectru...
Cooperative Funding Mechanisms
Survey of mechanisms for cooperative resource allocation — from traditional structures (ROSCAs, mutual aid) through modern innovations (quadratic f...
Center for Human-Compatible AI (CHAI)
UC Berkeley research center founded by Stuart Russell developing cooperative AI frameworks and preference learning approaches to ensure AI systems ...