Council of Europe Framework Convention on Artificial Intelligence

Type: International treaty on AI governance
Adopted: May 17, 2024
Opened for signature: September 5, 2024
Status: Awaiting entry into force (requires ratification by five signatories, including three CoE member states)
Geographic reach: 46 Council of Europe member states + EU + 11 non-member observers
Primary focus: Human rights, democracy, and rule of law in AI systems
Enforcement: Conference of the Parties oversight; binding obligations on ratifying states
Relation to AI safety: Addresses human rights risks but not technical AI safety or existential risks

The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) represents a landmark achievement in international AI governance as the world’s first legally binding treaty specifically addressing artificial intelligence.1 Adopted by the Council of Europe on May 17, 2024, and opened for signature on September 5, 2024, in Vilnius, the convention establishes binding standards to ensure AI systems align with human rights, democratic processes, and the rule of law throughout their entire lifecycle—from design through decommissioning.2

The treaty emerged from multi-year consultations involving governments, civil society, academia, and industry representatives, beginning with the establishment of the Ad Hoc Committee on Artificial Intelligence (CAHAI) in 2019.3 The convention comprises eight chapters and 26 articles outlining seven core principles including transparency, accountability, non-discrimination, privacy protection, and safe innovation.4 Unlike purely advisory frameworks, this convention creates enforceable obligations for public authorities and private actors operating on their behalf, while providing flexibility for signatories to implement measures appropriate to their domestic legal systems.5

Notably, the convention employs a risk-based approach requiring parties to conduct dynamic risk and impact assessments and to implement graduated mitigation measures, including potential bans or moratoria, for AI uses incompatible with human rights.6 The treaty has been signed by the European Union and a growing list of states, including non-CoE members such as the United States and Israel, reflecting its international appeal beyond Europe.7 While it complements the EU AI Act’s more prescriptive regulations, the convention focuses specifically on human rights impacts rather than technical risk categories, creating a principles-based framework with global reach.8

The convention’s origins trace to the Council of Europe’s recognition that existing human rights frameworks required adaptation for the AI era. In May 2019, the Committee of Ministers’ 1346th meeting in Helsinki acknowledged the need to assess whether existing standards adequately addressed AI’s implications for human rights, democracy, and the rule of law.9 This led to the formal establishment of the Ad Hoc Committee on Artificial Intelligence (CAHAI) in September 2019, with a mandate to examine the feasibility of a legal framework through broad multi-stakeholder consultations.10

CAHAI conducted extensive work from 2019 through 2021, publishing a progress report in September 2020 that affirmed the Council of Europe’s critical role in ensuring AI development aligned with human rights protections.11 In December 2021, CAHAI released its final report titled “Possible Elements of a Legal Framework on Artificial Intelligence,” which recommended proceeding with a binding international treaty.12

In 2022, the Committee on Artificial Intelligence (CAI) succeeded CAHAI and assumed responsibility for drafting and negotiating the convention text.13 This phase involved intensive negotiations among all 46 Council of Europe member states, the European Union, and 11 non-member observer states (Argentina, Australia, Canada, Costa Rica, Holy See, Israel, Japan, Mexico, Peru, United States, and Uruguay).14 The CAI’s January 2023 third plenary meeting established a drafting group, though notably, observers including civil society organizations were excluded from this stage—a decision criticized by some participants including AlgorithmWatch.15

The negotiations reflected significant compromises to achieve consensus among diverse stakeholders with varying regulatory philosophies. According to analysis by the ENSURED project, the United States and European Commission dominated discussions, with the final text mirroring elements of both EU and US regulatory priorities, particularly the risk-based approach of the EU AI Act.16

  • May 2019: Committee of Ministers recognizes need for AI governance framework
  • September 2019: CAHAI formally established
  • September 2020: CAHAI progress report approved
  • December 2021: CAHAI recommends binding treaty
  • 2022: CAI begins drafting and negotiations
  • January 2023: CAI establishes drafting group
  • March 2024: CAI agrees on final convention text
  • May 17, 2024: Convention adopted by Committee of Ministers
  • September 5, 2024: Convention opens for signature in Vilnius

Marija Pejčinović Burić, Secretary General of the Council of Europe, played a prominent role in promoting the convention, describing it as a “first-of-its-kind, global treaty” designed to uphold human rights while fostering responsible innovation and mitigating AI risks.17 The treaty also received endorsements from Theodoros Roussopoulos (President of the Parliamentary Assembly) and Christopher Holmes (UK House of Lords member), along with support from professional organizations like the International Bar Association through its president Almudena Arpón de Mendívil.18

The convention establishes a comprehensive regulatory framework organized into eight chapters covering objectives, scope, principles, obligations, procedural safeguards, international cooperation, implementation mechanisms, and final provisions.19

The treaty applies to activities within the AI lifecycle conducted by public authorities and private actors operating on their behalf.20 This includes AI systems used in contexts affecting human rights, democracy, and the rule of law such as employment, healthcare, education, law enforcement, migration control, and public surveillance. However, the convention contains significant exemptions:

Included:
  • Public sector AI affecting human rights and democracy
  • Private actors working on behalf of public authorities
  • AI systems in healthcare, employment, and justice
  • Law enforcement and border control AI

Excluded:
  • Activities related to the protection of “national interests” (Article 3.2)
  • National defense operations
  • Research and development not yet available for use (Article 3.3)
  • National security applications

These exclusions have drawn criticism from organizations such as the European Network of National Human Rights Institutions, which argues that the broad national security exemption creates substantial gaps in protection.21

The convention mandates adherence to seven fundamental principles throughout the AI lifecycle:

  1. Respect for human dignity and individual autonomy: Ensuring AI systems do not undermine human agency or intrinsic worth
  2. Transparency: Making AI systems and their decision-making processes understandable and traceable
  3. Accountability: Establishing clear responsibility for AI impacts, with mechanisms for redress
  4. Equality and non-discrimination: Preventing discriminatory outcomes and ensuring fair treatment
  5. Privacy and personal data protection: Safeguarding personal information and data rights
  6. Reliability and safety: Ensuring AI systems function as intended without undue risks
  7. Democratic participation and oversight: Protecting electoral integrity and democratic institutions from AI-related threats22

Risk Assessment and Mitigation Requirements

Article 16 establishes binding obligations for parties to “take measures for the identification, assessment, prevention and mitigation of risks and impacts to human rights, democracy and the rule of law arising from the design, development, use and decommissioning of artificial intelligence systems.”23 This risk-based framework requires:

  • Dynamic assessments: Continuous evaluation adapted to evolving AI capabilities and deployment contexts
  • Graduated measures: Responses proportional to the severity and probability of identified risks
  • Extreme measures: Authority to implement bans or moratoria on AI uses fundamentally incompatible with human rights
  • Lifecycle coverage: Application from initial design through final decommissioning

The convention takes a technology-neutral approach, avoiding prescriptive technical requirements to remain relevant as AI technology evolves.24

The treaty mandates specific transparency measures including:

  • Labeling AI-generated content and synthetic media
  • Notice when individuals are interacting with AI systems
  • Clear documentation of AI system capabilities, limitations, and training data
  • Access to information about AI decision-making processes affecting individuals25

Individuals must have rights to challenge AI-based decisions affecting them, with access to effective remedies and complaint mechanisms.26

The convention establishes a Conference of the Parties as its primary oversight body, responsible for monitoring implementation, issuing recommendations, and adapting the framework to technological developments.27 The CAI’s mandate ran through the end of 2025, after which oversight transitions to this new governance structure.28

The convention complements but differs significantly from the EU AI Act, which entered into force in August 2024. While both employ risk-based approaches, they differ in scope and methodology:

| Dimension | CoE Convention | EU AI Act |
| --- | --- | --- |
| Geographic scope | Global (open to non-CoE states) | EU member states (with extraterritorial reach to providers serving the EU market) |
| Approach | Principles-based, flexible implementation | Prescriptive technical requirements |
| Focus | Human rights impacts across all AI | Risk tiers (unacceptable, high, limited, minimal) |
| Enforcement | Conference of the Parties, national implementation | EU Commission, harmonized rules, fines up to 7% of global turnover |
| Private sector | States have discretion on coverage | Direct obligations on providers and deployers |

The EU Commission has indicated it plans to implement the convention primarily through the AI Act and related regulations like GDPR and the Digital Services Act, creating a layered governance architecture.29

The convention is positioned as a template for global AI governance, potentially influencing future efforts at the United Nations level.30 It builds on existing Council of Europe instruments including the European Convention on Human Rights and Convention 108+ on data protection, extending these frameworks to address AI-specific challenges like algorithmic bias, automated surveillance, and threats to democratic processes.31

The convention will enter into force on the first day of the month following three months after five signatories, including at least three Council of Europe member states, complete ratification.32 As of early 2026, the convention remains in the signature and ratification phase, with states proceeding through their domestic approval processes.
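The entry-into-force rule is mechanical enough to compute: take the date on which the fifth qualifying ratification is deposited, count forward three months, and move to the first day of the following month. A minimal sketch of that calendar arithmetic, with a purely hypothetical deposit date (no actual ratification dates are implied):

```python
from datetime import date

def entry_into_force(fifth_ratification: date) -> date:
    """First day of the month following three months after the
    fifth qualifying ratification is deposited."""
    # End of the three-month period falls in this month.
    month = fifth_ratification.month + 3
    year = fifth_ratification.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Advance to the first day of the *following* month.
    month += 1
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, 1)

# Hypothetical example: fifth ratification deposited mid-March.
print(entry_into_force(date(2025, 3, 15)))  # -> 2025-07-01
```

Under this rule a fifth ratification deposited on any day in March would yield a July 1 entry into force, since the three-month period ends in June and the convention takes effect on the first day of the next month.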

At the September 5, 2024 opening ceremony, the following entities signed the convention:

Council of Europe Members: Andorra, Georgia, Iceland, Norway, Republic of Moldova, San Marino, United Kingdom

Non-CoE Members: Israel, United States

Regional Organization: European Union (represented by Vice-President Věra Jourová)33

Additional signatures are expected as states complete their domestic approval procedures, with the European Parliament’s consent required for EU participation.34

The convention requires signatories to translate international principles into concrete domestic laws and regulations.35 States must establish:

  • National frameworks for AI risk assessment and oversight
  • Mechanisms for public consultation on high-impact AI systems
  • Enforcement authorities with adequate resources and expertise
  • Pathways for individuals to seek redress for AI-related harms
  • Safe innovation “sandboxes” to test AI systems under controlled conditions (Article 13)36

This flexibility allows adaptation to different legal traditions but may result in varying levels of protection across jurisdictions.

The convention faces criticism for lacking robust enforcement mechanisms. Unlike the EU AI Act with its substantial fines (up to €35 million or 7% of global turnover for the most serious violations), the convention has “no proper enforcement regime,” potentially limiting its ability to deliver concrete results beyond voluntary compliance.37

The national security and defense exemptions (Articles 3.2 and 3.4) have drawn particular scrutiny. Article 3.2 provides a blanket exemption for “all activities within the lifecycle of artificial intelligence systems related to the protection of national interests,” giving states wide discretion to claim exemptions for sensitive applications.38 This creates potential loopholes for government surveillance, military AI, and border control systems that may pose significant human rights risks.

The research and development exclusion (Article 3.3) is similarly controversial. By exempting AI systems “not yet available for use,” the convention prevents initial oversight of whether emerging technologies comply with human rights standards by design, potentially allowing development of inherently problematic systems before any regulatory intervention.39

Critics note the convention imposes differential obligations on public versus private actors. While binding requirements apply to public authorities, Article 2.2 merely requires states to “take measures to address the risks and impacts” from private sector AI “in a manner that conforms with the object and purpose of this Convention.”40 This lower threshold for private entities—which develop and deploy the vast majority of AI systems—creates a regulatory imbalance that may limit the treaty’s practical impact.

By seeking consensus across diverse jurisdictions with different regulatory philosophies, the convention adopts what critics describe as a “common denominator” approach that may represent “its greatest weakness.”41 The emphasis on flexibility and proportionality could enable signatories to implement minimal compliance measures rather than robust protections.

The convention focuses exclusively on human rights, democracy, and rule of law impacts, explicitly not regulating all AI aspects or technologies.42 This means it does not address:

  • Technical AI safety and robustness challenges
  • AI alignment with human values
  • Catastrophic or existential risks from advanced AI systems
  • Narrow technical issues in machine learning systems
  • General-purpose AI capabilities

While this focused scope was deliberate—staying within the Council of Europe’s mandate—it leaves significant gaps for AI risks beyond immediate human rights concerns.

Despite its limitations, the convention represents several important achievements in international AI governance:

As the world’s first legally binding international AI treaty, the convention establishes a precedent for multilateral cooperation on AI governance extending beyond regional blocs.43 Its openness to non-CoE members enables broader participation than EU-centric initiatives, with early signatories including major AI developers like the United States.

The treaty establishes international legal standards specifically addressing AI’s impacts on fundamental rights, filling gaps in existing human rights instruments that predate modern AI capabilities.44 This creates a normative framework that can influence domestic legislation, court decisions, and corporate practices globally.

The convention’s lifecycle and risk-based approach provides a flexible methodology that other jurisdictions may adapt, potentially serving as a template for future regional and international agreements.45 Its emphasis on dynamic risk assessment acknowledges that appropriate governance must evolve alongside rapidly advancing AI capabilities.

The development process—involving governments, international organizations, civil society, academia, and industry across 57 participating entities—demonstrates a relatively inclusive model for negotiating AI governance frameworks, despite noted limitations in the final drafting stage.46

Several important questions remain about the convention’s future implementation and effectiveness:

  1. Ratification timeline: When will sufficient states complete ratification to trigger entry into force? Will major AI-developing nations like the United States, China, or India ultimately join?

  2. Enforcement effectiveness: Without strong penalties or compliance mechanisms, how will the Conference of the Parties ensure meaningful implementation rather than superficial adherence?

  3. National security exemptions: How broadly will states interpret the “national interests” exception? Will this undermine protections for AI surveillance, military applications, or border control systems?

  4. Private sector coverage: Will states exercise their discretion to extend binding obligations to private AI developers and deployers, or will most adopt minimal approaches?

  5. Coordination with other frameworks: How will implementation harmonize with the EU AI Act, proposed U.S. regulations, and other emerging national frameworks? Will this create complementary layers of protection or regulatory fragmentation?

  6. Technical evolution: As AI capabilities advance toward more powerful and general systems, will the convention’s flexible framework prove adaptable, or will its technology-neutral approach prove insufficient for novel risks?

  7. Relationship to AI safety: Will future protocols or amendments address technical AI safety, alignment, or catastrophic risks beyond human rights concerns, or will these remain outside the convention’s scope?

  8. Global South participation: How will the convention engage countries outside Europe and North America, and will its governance structure adequately represent diverse perspectives on AI development priorities?

  1. Council of Europe Convention on Artificial Intelligence

  2. Framework Convention on Artificial Intelligence - Wikipedia

  3. The Framework Convention on AI: Embedding Human Rights in the Digital Age

  4. Framework Convention on Artificial Intelligence - Wikipedia

  5. The Framework Convention on AI: A Landmark Agreement for Ethical AI

  6. Council of Europe Convention on Artificial Intelligence

  7. Framework Convention on Artificial Intelligence - Wikipedia

  8. The Framework Convention on AI: Embedding Human Rights in the Digital Age

  9. Europe Committee Artificial Intelligence Draft Framework Convention

  10. Framework Convention on Artificial Intelligence - Wikipedia

  11. COE AI Treaty - CAIDP

  12. COE AI Treaty - CAIDP

  13. Framework Convention on Artificial Intelligence - Wikipedia

  14. The Convention on AI and Human Rights - EuroDIG Wiki

  15. Council of Europe Convention on AI - AlgorithmWatch

  16. Anchoring Global AI Governance - ENSURED

  17. COE AI Treaty - CAIDP

  18. Framework Convention on Artificial Intelligence - Wikipedia

  19. Framework Convention on Artificial Intelligence - Wikipedia

  20. Council of Europe Convention on Artificial Intelligence

  21. Council of Europe Opens First Global AI Treaty for Signatures - ASIL

  22. Framework Convention on Artificial Intelligence - Wikipedia

  23. Europe Committee Artificial Intelligence Draft Framework Convention

  24. UK Signs First Treaty on AI and Human Rights

  25. Framework Convention on Artificial Intelligence - Wikipedia

  26. The Framework Convention on AI: Embedding Human Rights in the Digital Age

  27. Framework Convention on Artificial Intelligence - Wikipedia

  28. Anchoring Global AI Governance - ENSURED

  29. EU Moves to Ratify International Treaty on AI and Human Rights

  30. What the Council of Europe’s New Treaty Tells Us About Global AI Governance

  31. The Framework Convention on AI: Embedding Human Rights in the Digital Age

  32. Framework Convention on Artificial Intelligence - Wikipedia

  33. European Commission Signs Framework Convention on AI

  34. European Commission Signs Framework Convention on AI

  35. Europe Committee Artificial Intelligence Draft Framework Convention

  36. Framework Convention on Artificial Intelligence - Full Text

  37. First Step on Long Road to Global AI Regulation - Lowy Institute

  38. Understanding the Scope of the CoE Framework Convention on AI

  39. Understanding the Scope of the CoE Framework Convention on AI

  40. Understanding the Scope of the CoE Framework Convention on AI

  41. First Step on Long Road to Global AI Regulation - Lowy Institute

  42. Framework Convention on AI - Cambridge Journals

  43. The World’s First Ever International AI Treaty - NYU Journal

  44. The Framework Convention on AI: Embedding Human Rights in the Digital Age

  45. Council of Europe’s Convention on AI - Crowell

  46. The Convention on AI and Human Rights - EuroDIG Wiki