Longterm Wiki

Case Law for AI

Status: active

AIDF project exploring judicial-inspired approaches for incorporating deliberative public input into AI alignment and inference systems. Uses case-law methods to create knowledge representations that preserve deliberative nuance.

Organizations (1)
AI & Democracy Foundation: Nonprofit designing and testing democratic processes for AI governance and alignment. Led by Aviv Ovadya. Key projects include the Democracy Levels framework (evaluating how democratically decisions are made), Case Law for AI (judicial-inspired approaches to connecting deliberative input into alignment), and Safeguarded AI (ARIA partnership for formal safety specifications through deliberation). 11 staff members.

People (3)
Aviv Ovadya: Founder and CEO of the AI & Democracy Foundation. Coined "infocalypse" (2016) to describe the AI-generated misinformation crisis. Created the Democracy Levels framework for evaluating democratic AI governance. Leads the ARIA-funded "Deliberative AI Specifications and Infrastructure" project. Affiliated with the Harvard Berkman Klein Center, the Centre for the Governance of AI, and the newDemocracy Foundation. MIT MEng in CS.
Luke Thorburn: Researcher at the AI & Democracy Foundation and UCL. Works on deliberative AI specifications and connecting public input to AI alignment. Co-author of "Prosocial Media" with Glen Weyl and Audrey Tang.
Vincent Conitzer: Professor at CMU and the University of Oxford. Director of FOCAL (Foundations of Cooperative AI Lab). Leading figure bridging AI and social choice theory. ACM Fellow (2019). His key paper "Social Choice Should Guide AI Alignment" (ICML 2024) connects Arrow's impossibility theorem to AI alignment.

Related Wiki Pages


Clusters

governance

Tags

judicial, ai-alignment, deliberation, governance
