Council of Europe Framework Convention on Artificial Intelligence
Quick Assessment
| Dimension | Rating/Details |
|---|---|
| Type | International treaty on AI governance |
| Adopted | May 17, 2024 |
| Opened for Signature | September 5, 2024 |
| Status | Awaiting entry into force (requires ratification by 5 signatories, including 3 CoE member states) |
| Geographic Reach | 46 Council of Europe member states + EU + 11 non-member observers |
| Primary Focus | Human rights, democracy, and rule of law in AI systems |
| Enforcement | Conference of the Parties oversight; binding obligations on ratifying states |
| Relation to AI Safety | Addresses human rights risks but not technical AI safety or existential risks |
Overview
The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) is the world’s first legally binding international treaty specifically addressing artificial intelligence.1 Adopted by the Council of Europe on May 17, 2024, and opened for signature on September 5, 2024, in Vilnius, the convention establishes binding standards to ensure AI systems align with human rights, democratic processes, and the rule of law throughout their entire lifecycle, from design through decommissioning.2
The treaty emerged from multi-year consultations involving governments, civil society, academia, and industry representatives, beginning with the establishment of the Ad Hoc Committee on Artificial Intelligence (CAHAI) in 2019.3 The convention comprises eight chapters and 36 articles, setting out seven core principles including transparency, accountability, non-discrimination, privacy protection, and safe innovation.4 Unlike purely advisory frameworks, it creates enforceable obligations for public authorities and private actors operating on their behalf, while giving signatories flexibility to implement measures appropriate to their domestic legal systems.5
Notably, the convention employs a risk-based approach requiring parties to conduct dynamic risk and impact assessments and implement graduated mitigation measures—including potential bans or moratoria—for AI uses incompatible with human rights.6 Negotiated by more than 50 states and open to non-CoE members, the treaty has attracted signatures from the European Union and countries including the United States and Israel, reflecting its broad international appeal.7 While it complements the EU AI Act’s more prescriptive regulations, the convention focuses specifically on human rights impacts rather than technical risk categories, creating a principles-based framework with global reach.8
History
Origins and Development Process
The convention’s origins trace to the Council of Europe’s recognition that existing human rights frameworks required adaptation for the AI era. In May 2019, the Committee of Ministers’ 1346th meeting in Helsinki acknowledged the need to assess whether existing standards adequately addressed AI’s implications for human rights, democracy, and the rule of law.9 This led to the formal establishment of the Ad Hoc Committee on Artificial Intelligence (CAHAI) in September 2019, with a mandate to examine the feasibility of a legal framework through broad multi-stakeholder consultations.10
CAHAI conducted extensive work from 2019 through 2021, publishing a progress report in September 2020 that affirmed the Council of Europe’s critical role in ensuring AI development aligned with human rights protections.11 In December 2021, CAHAI released its final report titled “Possible Elements of a Legal Framework on Artificial Intelligence,” which recommended proceeding with a binding international treaty.12
Drafting and Negotiation
In 2022, the Committee on Artificial Intelligence (CAI) succeeded CAHAI and assumed responsibility for drafting and negotiating the convention text.13 This phase involved intensive negotiations among all 46 Council of Europe member states, the European Union, and 11 non-member observer states (Argentina, Australia, Canada, Costa Rica, Holy See, Israel, Japan, Mexico, Peru, United States, and Uruguay).14 The CAI’s January 2023 third plenary meeting established a drafting group, though notably, observers including civil society organizations were excluded from this stage—a decision criticized by some participants including AlgorithmWatch.15
The negotiations reflected significant compromises to achieve consensus among diverse stakeholders with varying regulatory philosophies. According to analysis by the ENSURED project, the United States and European Commission dominated discussions, with the final text mirroring elements of both EU and US regulatory priorities, particularly the risk-based approach of the EU AI Act.16
Key Timeline
- May 2019: Committee of Ministers recognizes need for AI governance framework
- September 2019: CAHAI formally established
- September 2020: CAHAI progress report approved
- December 2021: CAHAI recommends binding treaty
- 2022: CAI begins drafting and negotiations
- January 2023: CAI establishes drafting group
- March 2024: CAI agrees on final convention text
- May 17, 2024: Convention adopted by Committee of Ministers
- September 5, 2024: Convention opens for signature in Vilnius
Key Figures
Marija Pejčinović Burić, Secretary General of the Council of Europe, played a prominent role in promoting the convention, describing it as a “first-of-its-kind, global treaty” designed to uphold human rights while fostering responsible innovation and mitigating AI risks.17 The treaty also received endorsements from Theodoros Roussopoulos (President of the Parliamentary Assembly) and Christopher Holmes (UK House of Lords member), along with support from professional organizations like the International Bar Association through its president Almudena Arpón de Mendívil.18
Structure and Core Provisions
The convention establishes a comprehensive regulatory framework organized into eight chapters covering objectives, scope, principles, obligations, procedural safeguards, international cooperation, implementation mechanisms, and final provisions.19
Scope and Application
The treaty applies to activities within the AI lifecycle conducted by public authorities and private actors operating on their behalf.20 This includes AI systems used in contexts affecting human rights, democracy, and the rule of law, such as employment, healthcare, education, law enforcement, migration control, and public surveillance. However, the convention contains significant exemptions:
| Included | Excluded |
|---|---|
| Public sector AI affecting human rights/democracy | Activities related to “national interests” (Article 3.2) |
| Private actors working on behalf of public authorities | National defense operations |
| AI systems in healthcare, employment, justice | Pure research and development not yet available for use (Article 3.3) |
| Law enforcement and border control AI | National security applications |
These exclusions have drawn criticism from organizations like the European Network of National Human Rights Institutions, which argue the broad national security exemption creates substantial gaps in protection.21
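Purely as an illustration of how the inclusions and exclusions above interact, the sketch below encodes them as a simplified check. The categories, field names, and logic are assumptions made for exposition, not the convention’s own text, and real scope questions turn on legal analysis rather than a lookup.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified encoding of the scope rules above (Articles 3.2-3.4).
EXCLUDED_GROUNDS = {
    "national_security",          # Article 3.2 exemption
    "research_and_development",   # Article 3.3: systems not yet available for use
    "national_defence",           # Article 3.4 exemption
}

@dataclass
class AIActivity:
    operator: str                            # "public_authority", "private_on_behalf", or "private"
    claimed_exemption: Optional[str] = None  # exemption ground claimed by the state, if any
    affects_rights: bool = True              # touches human rights, democracy, or the rule of law

def in_scope(activity: AIActivity) -> bool:
    """Return True if the activity falls within the convention's binding scope."""
    if activity.claimed_exemption in EXCLUDED_GROUNDS:
        return False  # the broad exemptions criticised above
    if not activity.affects_rights:
        return False
    # Binding obligations cover public authorities and private actors acting on
    # their behalf; purely private-sector AI is left to each party's discretion.
    return activity.operator in {"public_authority", "private_on_behalf"}

# A public-sector eligibility system is in scope; a defence application claiming
# the Article 3.4 exemption is not.
print(in_scope(AIActivity("public_authority")))                                        # True
print(in_scope(AIActivity("public_authority", claimed_exemption="national_defence")))  # False
```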
Seven Core Principles
The convention mandates adherence to seven fundamental principles throughout the AI lifecycle:
- Respect for human dignity and individual autonomy: Ensuring AI systems do not undermine human agency or intrinsic worth
- Transparency and oversight: Making AI systems and their decision-making processes understandable, traceable, and subject to adequate oversight
- Accountability and responsibility: Establishing clear responsibility for AI impacts, with mechanisms for redress
- Equality and non-discrimination: Preventing discriminatory outcomes and ensuring fair treatment
- Privacy and personal data protection: Safeguarding personal information and data rights
- Reliability: Ensuring AI systems function as intended without undue risks
- Safe innovation: Allowing AI systems to be developed and tested under controlled conditions, such as regulatory sandboxes22
Risk Assessment and Mitigation Requirements
Article 16 establishes binding obligations for parties to “take measures for the identification, assessment, prevention and mitigation of risks and impacts to human rights, democracy and the rule of law arising from the design, development, use and decommissioning of artificial intelligence systems.”23 This risk-based framework requires:
- Dynamic assessments: Continuous evaluation adapted to evolving AI capabilities and deployment contexts
- Graduated measures: Responses proportional to the severity and probability of identified risks
- Extreme measures: Authority to implement bans or moratoria on AI uses fundamentally incompatible with human rights
- Lifecycle coverage: Application from initial design through final decommissioning
The convention takes a technology-neutral approach, avoiding prescriptive technical requirements to remain relevant as AI technology evolves.24
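To make the graduated, lifecycle-wide logic concrete, here is a minimal sketch of how a party’s assessment process might be modelled. The thresholds, categories, and scoring below are invented for illustration and do not appear anywhere in the treaty.

```python
from dataclasses import dataclass
from enum import Enum

class Measure(Enum):
    MONITOR = "continue with routine monitoring"
    MITIGATE = "apply targeted mitigations and re-assess"
    RESTRICT = "restrict deployment pending stronger safeguards"
    BAN = "ban or moratorium: use incompatible with human rights"

@dataclass
class RiskAssessment:
    severity: float       # 0.0-1.0, gravity of potential harm to rights, democracy, rule of law
    probability: float    # 0.0-1.0, likelihood that the harm occurs
    lifecycle_stage: str  # "design", "development", "use", or "decommissioning"

def graduated_measure(a: RiskAssessment) -> Measure:
    """Map an assessment to a response proportional to severity and probability.
    The thresholds below are arbitrary placeholders, not drawn from the treaty."""
    score = a.severity * a.probability
    if a.severity >= 0.9:  # harms treated as fundamentally incompatible with human rights
        return Measure.BAN
    if score >= 0.5:
        return Measure.RESTRICT
    if score >= 0.2:
        return Measure.MITIGATE
    return Measure.MONITOR

# "Dynamic" assessment: the same system is re-scored at each lifecycle stage.
for stage, severity, probability in [("design", 0.4, 0.3), ("use", 0.7, 0.8)]:
    decision = graduated_measure(RiskAssessment(severity, probability, stage))
    print(f"{stage}: {decision.value}")
```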
Transparency and Procedural Rights
The treaty mandates specific transparency measures including:
- Labeling AI-generated content and synthetic media
- Notice when individuals are interacting with AI systems
- Clear documentation of AI system capabilities, limitations, and training data
- Access to information about AI decision-making processes affecting individuals25
Individuals must have rights to challenge AI-based decisions affecting them, with access to effective remedies and complaint mechanisms.26
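A hypothetical record that a deployer might keep to support these transparency and redress obligations could look like the sketch below; the field names and structure are illustrative assumptions, not anything the convention prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionNotice:
    """Hypothetical record supporting the transparency and redress measures above.
    Field names are illustrative, not mandated by the convention."""
    system_name: str
    decision_summary: str        # what the system decided about the individual
    ai_generated_content: bool   # whether outputs must be labelled as synthetic
    user_notified_of_ai: bool    # notice that the person was interacting with an AI system
    documentation_url: str       # capabilities, limitations, training-data summary
    appeal_channel: str          # where to challenge the decision and seek a remedy
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

notice = AIDecisionNotice(
    system_name="benefits-triage-v2",  # hypothetical public-sector system
    decision_summary="Application routed for manual review",
    ai_generated_content=False,
    user_notified_of_ai=True,
    documentation_url="https://example.org/ai-systems/benefits-triage",
    appeal_channel="appeals@example.org",
)
print(notice.system_name, "->", notice.appeal_channel)
```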
Oversight and Implementation
The convention establishes a Conference of the Parties as its primary oversight body, responsible for monitoring implementation, issuing recommendations, and adapting the framework to technological developments.27 The CAI’s mandate ran through the end of 2025, after which oversight responsibility transitions to this new governance structure.28
Relationship to Other AI Regulations
Section titled “Relationship to Other AI Regulations”EU AI Act
The convention complements but differs significantly from the EU AI Act, which entered into force in August 2024. While both employ risk-based approaches, they differ in scope and methodology:
| Dimension | CoE Convention | EU AI Act |
|---|---|---|
| Geographic scope | Global (open to non-CoE states) | EU market (including non-EU providers placing systems on it) |
| Approach | Principles-based, flexible implementation | Prescriptive technical requirements |
| Focus | Human rights impacts across all AI | Risk tiers (unacceptable, high, limited, minimal) |
| Enforcement | Conference of the Parties, national implementation | European Commission and national authorities; fines up to 7% of global turnover |
| Private sector | States have discretion on coverage | Direct obligations on providers and deployers |
The EU Commission has indicated it plans to implement the convention primarily through the AI Act and related regulations like GDPR and the Digital Services Act, creating a layered governance architecture.29
Broader International Context
The convention is positioned as a template for global AI governance, potentially influencing future efforts at the United Nations level.30 It builds on existing Council of Europe instruments including the European Convention on Human Rights and Convention 108+ on data protection, extending these frameworks to address AI-specific challenges like algorithmic bias, automated surveillance, and threats to democratic processes.31
Implementation and Status
Section titled “Implementation and Status”Entry into Force
The convention will enter into force on the first day of the month following a three-month period after five signatories, including at least three Council of Europe member states, have ratified it.32 As of early 2026, the convention remains in the signature and ratification phase, with states proceeding through their domestic approval processes.
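To make the entry-into-force rule concrete, the sketch below computes the date from the deposit of the fifth qualifying ratification. It illustrates only the date arithmetic; the authoritative formulation is the one in the treaty’s final clauses.

```python
from datetime import date

def entry_into_force(fifth_qualifying_ratification: date) -> date:
    """First day of the month following a three-month period after the date on which
    the fifth ratification (including at least three CoE member states) is deposited.
    Illustrative paraphrase of the rule; the treaty text is authoritative."""
    # Step three calendar months forward from the ratification date...
    month = fifth_qualifying_ratification.month + 3
    year = fifth_qualifying_ratification.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # ...then take the first day of the following month.
    if month == 12:
        return date(year + 1, 1, 1)
    return date(year, month + 1, 1)

# Example: a fifth qualifying ratification deposited on 10 March 2026 would put
# entry into force on 1 July 2026.
print(entry_into_force(date(2026, 3, 10)))  # 2026-07-01
```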
Initial Signatories
At the September 5, 2024 opening ceremony, the following entities signed the convention:
Council of Europe Members: Andorra, Georgia, Iceland, Norway, Republic of Moldova, San Marino, United Kingdom
Non-CoE Members: Israel, United States
Regional Organization: European Union (represented by Vice-President Věra Jourová)33
Additional signatures are expected as states complete their domestic approval procedures, with the European Parliament’s consent required for EU participation.34
National Implementation
The convention requires signatories to translate international principles into concrete domestic laws and regulations.35 States must establish:
- National frameworks for AI risk assessment and oversight
- Mechanisms for public consultation on high-impact AI systems
- Enforcement authorities with adequate resources and expertise
- Pathways for individuals to seek redress for AI-related harms
- Safe innovation “sandboxes” to test AI systems under controlled conditions (Article 13)36
This flexibility allows adaptation to different legal traditions but may result in varying levels of protection across jurisdictions.
Criticisms and Limitations
Section titled “Criticisms and Limitations”Enforcement Gaps
The convention faces criticism for lacking robust enforcement mechanisms. Unlike the EU AI Act with its substantial fines (up to €35 million or 7% of global turnover for the most serious violations), the convention has “no proper enforcement regime,” potentially limiting its ability to deliver concrete results beyond voluntary compliance.37
Scope Exclusions
The national security and defense exemptions (Articles 3.2 and 3.4) have drawn particular scrutiny. Article 3.2 provides a blanket exemption for “all activities within the lifecycle of artificial intelligence systems related to the protection of national interests,” giving states wide discretion to claim exemptions for sensitive applications.38 This creates potential loopholes for government surveillance, military AI, and border control systems that may pose significant human rights risks.
The research and development exclusion (Article 3.3) is similarly controversial. By exempting AI systems “not yet available for use,” the convention prevents initial oversight of whether emerging technologies comply with human rights standards by design, potentially allowing development of inherently problematic systems before any regulatory intervention.39
Public vs. Private Sector Standards
Critics note the convention imposes differential obligations on public versus private actors. While binding requirements apply to public authorities, Article 2.2 merely requires states to “take measures to address the risks and impacts” from private sector AI “in a manner that conforms with the object and purpose of this Convention.”40 This lower threshold for private entities—which develop and deploy the vast majority of AI systems—creates a regulatory imbalance that may limit the treaty’s practical impact.
Minimal Standards Approach
By seeking consensus across diverse jurisdictions with different regulatory philosophies, the convention adopts what critics describe as a “common denominator” approach that may represent “its greatest weakness.”41 The emphasis on flexibility and proportionality could enable signatories to implement minimal compliance measures rather than robust protections.
Limited Technical AI Safety Focus
The convention focuses exclusively on human rights, democracy, and rule of law impacts, explicitly not regulating all AI aspects or technologies.42 This means it does not address:
- Technical AI safety and robustness challenges
- AI alignment with human values
- Catastrophic or existential risks from advanced AI systems
- Narrow technical issues in machine learning systems
- General-purpose AI capabilities
While this focused scope was deliberate—staying within the Council of Europe’s mandate—it leaves significant gaps for AI risks beyond immediate human rights concerns.
Significance and Impact
Despite its limitations, the convention represents several important achievements in international AI governance:
First Binding Global Treaty
As the world’s first legally binding international AI treaty, the convention establishes a precedent for multilateral cooperation on AI governance extending beyond regional blocs.43 Its openness to non-CoE members enables broader participation than EU-centric initiatives, with early signatories including major AI developers like the United States.
Human Rights Baseline
The treaty establishes international legal standards specifically addressing AI’s impacts on fundamental rights, filling gaps in existing human rights instruments that predate modern AI capabilities.44 This creates a normative framework that can influence domestic legislation, court decisions, and corporate practices globally.
Risk-Based Framework
The convention’s lifecycle and risk-based approach provides a flexible methodology that other jurisdictions may adapt, potentially serving as a template for future regional and international agreements.45 Its emphasis on dynamic risk assessment acknowledges that appropriate governance must evolve alongside rapidly advancing AI capabilities.
Multi-Stakeholder Model
The development process—involving governments, international organizations, civil society, academia, and industry across 57 participating entities—demonstrates a relatively inclusive model for negotiating AI governance frameworks, despite noted limitations in the final drafting stage.46
Key Uncertainties
Several important questions remain about the convention’s future implementation and effectiveness:
- Ratification timeline: When will sufficient states complete ratification to trigger entry into force? Will major AI-developing nations like the United States, China, or India ultimately join?
- Enforcement effectiveness: Without strong penalties or compliance mechanisms, how will the Conference of the Parties ensure meaningful implementation rather than superficial adherence?
- National security exemptions: How broadly will states interpret the “national interests” exception? Will this undermine protections for AI surveillance, military applications, or border control systems?
- Private sector coverage: Will states exercise their discretion to extend binding obligations to private AI developers and deployers, or will most adopt minimal approaches?
- Coordination with other frameworks: How will implementation harmonize with the EU AI Act, proposed U.S. regulations, and other emerging national frameworks? Will this create complementary layers of protection or regulatory fragmentation?
- Technical evolution: As AI capabilities advance toward more powerful and general systems, will the convention’s flexible framework prove adaptable, or will its technology-neutral approach prove insufficient for novel risks?
- Relationship to AI safety: Will future protocols or amendments address technical AI safety, alignment, or catastrophic risks beyond human rights concerns, or will these remain outside the convention’s scope?
- Global South participation: How will the convention engage countries outside Europe and North America, and will its governance structure adequately represent diverse perspectives on AI development priorities?
Sources
Footnotes
1. Framework Convention on Artificial Intelligence - Wikipedia
2. The Framework Convention on AI: Embedding Human Rights in the Digital Age
3. Framework Convention on Artificial Intelligence - Wikipedia
4. The Framework Convention on AI: A Landmark Agreement for Ethical AI
5. Framework Convention on Artificial Intelligence - Wikipedia
6. The Framework Convention on AI: Embedding Human Rights in the Digital Age
7. Europe Committee Artificial Intelligence Draft Framework Convention
8. Framework Convention on Artificial Intelligence - Wikipedia
9. Framework Convention on Artificial Intelligence - Wikipedia
10. Framework Convention on Artificial Intelligence - Wikipedia
11. Framework Convention on Artificial Intelligence - Wikipedia
12. Council of Europe Opens First Global AI Treaty for Signatures - ASIL
13. Framework Convention on Artificial Intelligence - Wikipedia
14. Europe Committee Artificial Intelligence Draft Framework Convention
15. Framework Convention on Artificial Intelligence - Wikipedia
16. The Framework Convention on AI: Embedding Human Rights in the Digital Age
17. Framework Convention on Artificial Intelligence - Wikipedia
18. EU Moves to Ratify International Treaty on AI and Human Rights
19. What the Council of Europe’s New Treaty Tells Us About Global AI Governance
20. The Framework Convention on AI: Embedding Human Rights in the Digital Age
21. Framework Convention on Artificial Intelligence - Wikipedia
22. Europe Committee Artificial Intelligence Draft Framework Convention
23. Framework Convention on Artificial Intelligence - Full Text
24. First Step on Long Road to Global AI Regulation - Lowy Institute
25. Understanding the Scope of the CoE Framework Convention on AI
26. Understanding the Scope of the CoE Framework Convention on AI
27. Understanding the Scope of the CoE Framework Convention on AI
28. First Step on Long Road to Global AI Regulation - Lowy Institute
29. The World’s First Ever International AI Treaty - NYU Journal
30. The Framework Convention on AI: Embedding Human Rights in the Digital Age