Longterm Wiki

Center for Human-Compatible Artificial Intelligence - Wikipedia

reference

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Useful background reference for understanding CHAI's organizational structure, research agenda, and key personnel; relevant to anyone studying the AI safety research landscape and major institutional players.

Metadata

Importance: 55/100 · wiki page · reference

Summary

Wikipedia overview of CHAI, a UC Berkeley research center founded by Stuart Russell focused on ensuring AI systems are beneficial and aligned with human values. The center conducts research on value alignment, cooperative AI, and the technical and philosophical challenges of building AI that understands and respects human preferences.

Key Points

  • CHAI was founded in 2016 at UC Berkeley by Stuart Russell, author of the influential AI textbook and proponent of the 'assistance game' framework for alignment.
  • The center's core mission is to develop AI systems that are provably beneficial to humans by learning and respecting human preferences and values.
  • CHAI pursues both technical research (reward learning, inverse reinforcement learning) and policy/governance work on AI safety.
  • Notable researchers affiliated with CHAI include Pieter Abbeel, Anca Dragan, and others working on value alignment and human-robot interaction.
  • CHAI is one of the major academic institutions dedicated to long-term AI safety research, distinct from industry labs.

Cited by 1 page

Page | Type | Quality
Stuart Russell | Person | 30.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 6 KB
Center for Human-Compatible Artificial Intelligence - Wikipedia 
 From Wikipedia, the free encyclopedia 
 US AI safety research center 
Center for Human-Compatible Artificial Intelligence
Formation: 2016
Headquarters: Berkeley, California
Director: Stuart J. Russell
Executive director: Mark Nitzberg
Parent organization: University of California, Berkeley
Website: humancompatible.ai
The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell.[1][2] Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.

CHAI's faculty membership includes Russell, Pieter Abbeel and Anca Dragan from Berkeley, Bart Selman and Joseph Halpern from Cornell,[3] Michael Wellman and Satinder Singh Baveja from the University of Michigan, and Tom Griffiths and Tania Lombrozo from Princeton.[4] In 2016, the Open Philanthropy Project (OpenPhil) recommended that Good Ventures provide CHAI support of $5,555,550 over five years.[5] CHAI has since received additional grants from OpenPhil and Good Ventures of over $12,000,000, including for collaborations with the World Economic Forum and Global AI Council.[6][7][8]

 
 Research

CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior.[9] It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding.[10]
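The core idea of inverse reinforcement learning can be illustrated with a toy sketch: rather than being told a reward function, the learner observes a demonstrator's choices and asks which hypothesized reward best explains them. Everything below (the snack domain, the two candidate reward functions, the greedy-optimality scoring rule) is an invented, heavily simplified illustration of the concept, not any algorithm actually used at CHAI.

```python
# Toy illustration of the idea behind inverse reinforcement learning (IRL):
# infer a reward function from observed choices, rather than being given one.

def infer_preferences(observed_choices, candidate_rewards):
    """Score each candidate reward by the fraction of observed choices it
    explains, where a choice is 'explained' if the chosen option is the
    best available option under that reward."""
    scores = {}
    for name, reward in candidate_rewards.items():
        explained = sum(
            1 for options, chosen in observed_choices
            if chosen == max(options, key=reward)
        )
        scores[name] = explained / len(observed_choices)
    return scores

# We never observe the demonstrator's reward function, only their choices
# among the options available at each step.
observed = [
    (["apple", "cake"], "apple"),
    (["kale", "cake"], "kale"),
    (["apple", "kale"], "kale"),
]

# Hand-crafted hypothesis space of reward functions (purely illustrative).
candidates = {
    "likes_sweet":   lambda x: {"cake": 2, "apple": 1, "kale": 0}[x],
    "likes_healthy": lambda x: {"kale": 2, "apple": 1, "cake": 0}[x],
}

scores = infer_preferences(observed, candidates)
best = max(scores, key=scores.get)  # the reward that best explains behavior
```

Here every observed choice is optimal under "likes_healthy" and none under "likes_sweet", so the learner infers the health-oriented reward. Real IRL methods work over Markov decision processes with noisy, long-horizon behavior and far richer reward spaces, but the inference direction (behavior to values) is the same.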

 See also

 Existential risk from artificial general intelligence 

 Future of Humanity Institute 

 Future of Life Institute 

 Human Compatible 

 Machine Intelligence Research Institute 
 
 References

 
^ Norris, Jeffrey (Aug 29, 2016). "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Retrieved Dec 27, 2019.

^ Solon, Olivia (Aug 30, 2016). "The rise of robots: forget evil AI – the real risk is far more insidious". The Guardian. Retrieved Dec 27, 2019.

^ Cornell University. "Human-Compatible AI". Retrieved Dec 27, 2019.

^ Center for Human-Compatible Artificial Intelligence. "People". Retrieved Dec 27, 2019.

^ Open Philanthropy Project (Aug 2016). "UC Berkeley — Center for Human-Compatible AI (2016)". Retrieved Dec 27, 2019.

^ Open Philanthropy Project (Nov 2019). "UC Berkeley — Cent

... (truncated, 6 KB total)
Resource ID: 0dcffc6765044969 | Stable ID: sid_eHnDInnC8G