
Global Partnership on Artificial Intelligence (GPAI)

  • Type: International multistakeholder governance initiative
  • Founded: June 2020 (proposed 2018)
  • Members: 29 (28 countries and the European Union)
  • Structure: Council, Steering Committee, 4 Working Groups, 2 Centres of Expertise
  • Host: Organisation for Economic Co-operation and Development (OECD)
  • Focus: Responsible AI, Data Governance, Future of Work, Innovation
  • Authority: Non-binding policy guidance and recommendations
  • Key feature: First major multilateral AI governance cooperation effort

The Global Partnership on Artificial Intelligence (GPAI) is an international initiative launched in June 2020 as the world’s first major multilateral effort for AI governance cooperation.[1] Proposed by Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron at the 2018 G7 summit, GPAI brings together 29 members, including the European Union, alongside experts from governments, academia, civil society, industry, and international organizations.[2][3]

GPAI operates as a policy laboratory fostering practical, values-based approaches to AI governance rather than binding regulations, bridging theory and practice through collaboration on shared challenges that transcend borders.[4] The partnership’s mission is to support the responsible adoption of AI grounded in human rights, inclusion, diversity, gender equality, innovation, economic growth, and environmental and societal benefits, while contributing to the UN 2030 Agenda and Sustainable Development Goals.[5]

Hosted by the OECD in Paris with support from centres of expertise in Montreal and Paris, GPAI facilitates multistakeholder, multidisciplinary projects; shares analysis on AI impacts; maximizes coordination to reduce duplication; and prioritizes perspectives from emerging and developing countries.[6] The organization produces practical resources such as reports, policy guidance, case studies, and toolkits, rather than legally binding regulations.[7]

GPAI emerged from discussions at the 2018 G7 summit in Charlevoix, Canada, where Prime Minister Trudeau and President Macron proposed creating an international body to guide responsible AI development.[8] The initiative ultimately received support from all G7 members, though the United States was initially hesitant to join.[9]

The partnership was officially launched on June 15, 2020, with 15 founding members: Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States, and the European Union.[10][11] The launch represented a diplomatic achievement amid growing recognition that AI governance challenges transcend national borders and require coordinated international responses.

GPAI has expanded significantly since its founding:

  • November 2021: Membership reached 25, with new members including the Czech Republic, Israel, and several EU countries.[12]
  • November 2022: Membership reached 29; post-founding additions across these expansion rounds included Argentina, Belgium, Brazil, Denmark, Ireland, the Netherlands, Poland, Senegal, Serbia, Spain, Sweden, and Türkiye.[13]
  • Invited members: Austria, Chile, Finland, Malaysia, Norway, Slovakia, and Switzerland have been invited but remain pending as of recent reports.[14]

The expansion to 29 members brought together diverse nations across continents to collaborate on AI governance, itself a diplomatic achievement amid geopolitical tensions.[15]

GPAI operates with a rotating presidency structure similar to the G7:[16]

  • 2020: Canada (founding presidency)
  • 2021: France
  • 2022: Japan
  • 2023: India
  • 2024: Republic of Serbia (incoming chair)
  • 2025: Republic of Serbia (lead chair)[17]

The Multistakeholder Experts Group (MEG) has had the following co-chairs:[18]

  • 2020-2021: Jordan Zed and Baroness Joanna Shields (Shields as MEG chair)
  • 2021-2022: Joanna Shields and Renaud Vedel (Shields as MEG chair)
  • 2023-2024: Yoichi Iida and Inma Martinez (Martinez as MEG chair)

In a significant development announced in July 2024, GPAI formally merged with the OECD’s AI policy work while retaining the GPAI brand.[19] This integration incorporated the OECD’s Working Party on AI Governance and Committee on Digital Policy, creating synergies between GPAI’s multistakeholder approach and OECD’s institutional stability. The merger provided GPAI with access to the OECD’s mixed budget of stable member dues and voluntary project contributions, addressing previous funding complications.[20]

The GPAI Summit held on December 3-4, 2024, in Belgrade, Serbia, at the Palace of Serbia, marked a transition point under the new integrated structure, gathering experts, members, industry representatives, and academics to focus on responsible AI for societal benefit.[21]

GPAI’s governance is organized hierarchically:[22]

GPAI Council: The ultimate decision-making body that provides strategic direction and makes major decisions on membership and participation. The Council meets annually at ministerial level and is led by three members in staggered one-year terms: a Lead Chair, an Outgoing Support Chair, and an Incoming Support Chair. Chairs are elected annually by simple majority.[23]

Steering Committee: Composed of five government members and six non-government representatives, the Steering Committee is responsible for developing work plans and establishing working groups. The committee includes the Council Chairs plus three additional government representatives recommended by government members, and six non-government representatives recommended by the MEG and appointed by the Council.[24]

Secretariat: Hosted by the OECD in Paris, the Secretariat provides administrative support to GPAI’s governing bodies and facilitates coordination.[25]

GPAI operates through two specialized centres that support its working groups:[26]

Montreal Centre (CEIMIA): The International Centre of Expertise in Montreal on Artificial Intelligence supports the Responsible AI and Data Governance working groups. As of 2025, CEIMIA has conducted thematic mapping of GPAI members’ National AI Strategies, analyzing 44 governments’ public documents on strategies, applications, and enforcement agencies to position itself as a partner for applied projects and cross-country synergies.[27]

Paris Centre (Inria): Hosted by Inria in Paris, this centre supports the Future of Work and Innovation & Commercialization working groups. Inria hosted the 2021 GPAI Summit in Paris on November 11-12.[28]

GPAI organizes its substantive work through four specialized expert working groups:[29]

  1. Responsible AI: Addresses ethical considerations, accountability, and trustworthy AI development. Includes an ad-hoc subgroup on AI and Pandemic Response.
  2. Data Governance: Focuses on data management, sharing, and governance frameworks for AI systems.
  3. Future of Work: Examines AI’s impact on employment, labor markets, and workforce development.
  4. Innovation & Commercialization: Supports AI innovation while addressing commercialization challenges.

Each working group selects three or more GPAI Experts as Project Leads per project, with decisions made by a two-thirds majority. Co-Chairs serve one-year terms and can be re-elected twice.[30]
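
The two voting thresholds mentioned above can be made concrete with a small illustration. This is a minimal sketch with hypothetical tallies and invented function names, not an official GPAI procedure:

```python
# Illustrative sketch of the two voting thresholds described above.
# The tallies are hypothetical; GPAI does not publish vote counts.

def two_thirds_majority(votes_for: int, votes_cast: int) -> bool:
    """Working-group project decisions: at least two thirds of votes cast in favour."""
    return votes_cast > 0 and 3 * votes_for >= 2 * votes_cast

def simple_majority(votes_for: int, votes_cast: int) -> bool:
    """Council chair elections: strictly more than half of votes cast in favour."""
    return 2 * votes_for > votes_cast

if __name__ == "__main__":
    print(two_thirds_majority(votes_for=14, votes_cast=20))  # True  (70% of votes)
    print(two_thirds_majority(votes_for=13, votes_cast=20))  # False (65% of votes)
    print(simple_majority(votes_for=11, votes_cast=20))      # True  (55% of votes)
```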

GPAI distinguishes itself through its emphasis on multistakeholder engagement, bringing together diverse perspectives from governments, academia, civil society, industry, labor unions, and international organizations.[31] This model contrasts with traditional intergovernmental initiatives and reflects the recognition that AI governance requires input from multiple sectors due to the technology’s complexity and cross-cutting impacts.

The multistakeholder approach aims to create shared vocabulary and frameworks to enable coordination and avoid fragmented governance, while ensuring that policies reflect diverse societal perspectives rather than solely government or industry interests.[32]

GPAI explicitly positions itself as a “policy laboratory” rather than a regulatory body.[33] The organization produces practical, adaptable guidance that member countries can voluntarily implement according to their own legal frameworks and contexts. This approach prioritizes flexibility and experimentation over binding rules, allowing members to learn from each other’s experiences.

Key output types include:[34]

  • Policy guidance documents on cross-cutting AI governance challenges
  • Case studies examining specific implementations and lessons learned
  • Toolkits providing practical resources for policymakers
  • Research reports assessing scientific, technical, and socio-economic AI impacts

A notable example is the 2020 report “The Role of Data in AI” produced by the Data Governance Working Group in collaboration with the DCC, the University of Edinburgh, and Trilateral Research, which addressed data governance, international sharing, and anti-discrimination considerations.[35]

GPAI’s work aligns closely with the OECD AI Principles adopted in 2019, which emphasize inclusive growth, human rights, transparency, robustness, safety, security, and accountability.[36] These principles have been adopted by 44 countries and serve as the basis for the G20 AI principles and GPAI’s own framework.

The partnership also supports the UN 2030 Agenda and Sustainable Development Goals, positioning responsible AI development as a tool for advancing sustainable development objectives.[37]

GPAI operates on a funding model in which members contribute in equal shares as the default principle, with exceptions approved by the GPAI Council’s Lead Chair.[38] However, no specific funding amounts or detailed sources are publicly disclosed in available documentation.

Following the 2024 merger with the OECD, GPAI gained access to the OECD’s mixed budget consisting of stable dues from member countries plus voluntary contributions for specific projects.[39] This funding structure aims to provide both stability for core operations and flexibility for rapid implementation of new initiatives.
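
To make the equal-shares default concrete, the following is a minimal sketch using an entirely hypothetical budget figure; actual GPAI and OECD contribution amounts are not publicly disclosed, and the function name is invented for illustration:

```python
# Hypothetical illustration of the equal-shares default described above.
# The budget figure is invented; real contribution amounts are not public.

def equal_share(total_core_budget: float, contributing_members: int) -> float:
    """Split a core budget evenly across contributing members (the default principle)."""
    if contributing_members <= 0:
        raise ValueError("need at least one contributing member")
    return total_core_budget / contributing_members

if __name__ == "__main__":
    # e.g. a notional core budget of 10,000,000 (arbitrary currency units) and 29 members
    print(round(equal_share(10_000_000, 29), 2))  # 344827.59 per member
```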

GPAI represents the first major multilateral effort specifically focused on AI governance cooperation, establishing a model for international collaboration on emerging technology governance.[40] The partnership has contributed to the international AI governance landscape by:

  • Bridging theory and practice: Connecting academic research, policy development, and practical implementation through expert collaboration across sectors.[41]
  • Facilitating knowledge exchange: Creating forums for peer-to-peer learning and sharing of best practices among member countries at different stages of AI development.
  • Creating shared frameworks: Developing common vocabulary and conceptual frameworks that enable coordination and reduce fragmentation in global AI governance approaches.[42]
  • Informing national strategies: GPAI outputs have been incorporated into national AI strategies, as documented in CEIMIA’s 2025 analysis of 44 governments’ AI strategy documents.[43]

As of 2025, GPAI’s working groups have produced numerous practical resources:[44]

  • Policy guidance documents on responsible AI development
  • Toolkits for implementing data governance frameworks
  • Case studies examining AI impacts on labor markets
  • Research reports on innovation and commercialization challenges

The organization’s annual summits, such as the 2021 Paris Summit hosted by Inria and the 2024 Belgrade Summit, have gathered international experts to advance discussions on AI governance priorities.[45]

The expansion from 15 founding members to 29 full members demonstrates growing international interest in collaborative AI governance, with representation expanding beyond traditional Western democracies to include countries such as Argentina, Brazil, India, Senegal, and Serbia.[46] This geographic diversity enhances the legitimacy and relevance of GPAI’s work for different regional contexts.

Critics have identified several structural issues with GPAI’s organization:[47]

Complicated funding and coordination: The dual centres in Paris and Montreal have “vastly complicated” funding processes for working groups, according to analysis from the Brookings Institution. Substantive work has been primarily overseen by the two founding countries (France and Canada), reducing agency and ownership for other members.[48]

Limited participation channels: The structure leaves few meaningful contribution paths for entities beyond the founding nations, limiting the ability of other members or even the OECD itself to substantially shape the agenda. This has created perceptions of control by the founders despite the formally equal membership structure.[49]

Decision-making inefficiencies: The reliance on rotating chairs and presidencies can cause incoherence during transitions, while the two-thirds majority requirement for expert groups and simple majorities for Council decisions may slow decision-making.[50]

Non-binding nature: GPAI explicitly lacks regulatory power or enforcement mechanisms. Outputs like policy guidance and toolkits are optional for members to adapt nationally, limiting the partnership’s influence to the quality of its work rather than any coercive authority.[51] This voluntary approach allows flexibility but reduces accountability for implementation.

Limited global representation: While membership has grown to 29 members, GPAI still excludes some major AI-developing nations, undermining claims to represent global consensus on AI governance approaches.[52] For example, Southeast Asia is represented only by Singapore, excluding Indonesia, Malaysia, Thailand, Vietnam, and other countries with growing AI ecosystems.[53]

Lack of enforcement: As a policy laboratory producing non-binding recommendations, GPAI cannot compel members to adopt its guidance or sanction non-compliance. The partnership’s impact depends entirely on the persuasiveness of its outputs and members’ voluntary implementation.

The partnership’s membership process has been criticized as restrictive, emphasizing countries’ political systems over their focus on AI development.[54] Suggested improvements include amending membership criteria to include G20 non-members, allowing intergovernmental organizations beyond the EU, and emphasizing human capital development over ethics frameworks to attract participation from regions like Southeast Asia.[55]

Prior to the 2024 OECD merger, GPAI lacked a stable secretariat and institutional continuity, with rotating leadership potentially causing strategic discontinuities.[56] While the OECD integration aims to address this issue by providing institutional stability, it also raises questions about maintaining GPAI’s distinctive multistakeholder character within a traditional intergovernmental organization.

GPAI’s work does not explicitly address advanced AI safety, alignment, or existential risks as understood in the technical AI safety community.[57] The partnership’s focus areas—responsible AI, data governance, labor market impacts, and innovation—emphasize near-term governance challenges related to fairness, transparency, accountability, and societal benefit rather than long-term catastrophic risks.

The organization’s alignment with the OECD AI Principles emphasizes “robustness, safety, and security,” but this primarily refers to conventional cybersecurity and system reliability rather than concerns about misaligned superintelligent systems or existential risks from advanced AI.[58] GPAI’s working groups have not produced outputs specifically addressing risks from transformative AI, scheming in AI systems, or challenges in aligning advanced AI systems with human values.

This focus reflects GPAI’s origins in broader technology policy communities and its emphasis on multilateral consensus-building, which may favor near-term, widely-acknowledged challenges over speculative long-term risks. The partnership’s policy laboratory model and multistakeholder structure may also make it difficult to develop strong positions on controversial or technically complex topics like existential risk from AI.

Several important questions remain about GPAI’s role and effectiveness:

Influence and adoption: To what extent have GPAI’s policy recommendations and toolkits actually influenced national AI strategies and regulatory frameworks? While the 2025 CEIMIA analysis examines national strategies, the causal impact of GPAI outputs remains unclear.

Post-merger structure: How will GPAI’s distinctive multistakeholder character and informal policy laboratory approach be maintained within the more traditional OECD institutional structure following the 2024 merger? Will the integration enhance or dilute GPAI’s unique contributions?

Global representation: Can GPAI expand membership to include more countries from underrepresented regions (especially Southeast Asia, Africa, and Latin America) while maintaining effective decision-making? How can the partnership balance inclusivity with operational efficiency?

Relationship to other governance initiatives: How does GPAI coordinate with other international AI governance efforts, such as the UN’s AI Scientific Panel and Global Dialogue established in 2025, the G7 Hiroshima Process, and regional initiatives like the EU AI Act? Is there effective division of labor or problematic duplication?

Long-term sustainability: Will GPAI maintain relevance as AI capabilities advance and governance challenges evolve? Can the partnership adapt to address emerging issues like advanced AI systems, autonomous weapons, or transformative AI scenarios?

Implementation gap: What mechanisms exist to ensure member countries actually implement GPAI recommendations domestically, given the non-binding nature of outputs? How can the partnership measure and improve the real-world impact of its work?

  1. Global Partnership on AI (GPAI) - VerifyWise

  2. Global Partnership on Artificial Intelligence - Wikipedia

  3. Global Partnership on Artificial Intelligence - Dig.watch

  4. Global Partnership on AI (GPAI) - VerifyWise

  5. Global Partnership on Artificial Intelligence - OECD

  6. GPAI Terms of Reference - OECD

  7. Global Partnership on AI (GPAI) - VerifyWise

  8. Global Partnership on Artificial Intelligence - Wikipedia

  9. A new institution for governing AI: Lessons from GPAI - Brookings

  10. Global Partnership on Artificial Intelligence - Wikipedia

  11. Joint Statement from Founding Members - Government of Canada

  12. Global Partnership on Artificial Intelligence - Wikipedia

  13. Global Partnership on Artificial Intelligence - Dig.watch

  14. Global Partnership on Artificial Intelligence - Wikipedia

  15. A new institution for governing AI: Lessons from GPAI - Brookings

  16. Global Partnership on Artificial Intelligence - Wikipedia

  17. GPAI Summit 2024 - Government of Serbia

  18. Global Partnership on Artificial Intelligence - Wikipedia

  19. A new institution for governing AI: Lessons from GPAI - Brookings

  20. A new institution for governing AI: Lessons from GPAI - Brookings

  21. GPAI Summit 2024 - Government of Serbia

  22. GPAI Terms of Reference - OECD

  23. GPAI Terms of Reference - OECD

  24. Global Partnership on Artificial Intelligence - Dig.watch

  25. Global Partnership on Artificial Intelligence - Wikipedia

  26. Global Partnership on Artificial Intelligence - Wikipedia

  27. Highest Ranked AI Priorities Around the Globe - CEIMIA

  28. GPAI 2021 - Inria

  29. Global Partnership on Artificial Intelligence - Wikipedia

  30. GPAI Terms of Reference - OECD

  31. Global Partnership on Artificial Intelligence - Dig.watch

  32. Global Partnership on AI (GPAI) - VerifyWise

  33. Global Partnership on AI (GPAI) - VerifyWise

  34. Global Partnership on AI (GPAI) - VerifyWise

  35. The Role of Data in AI - DCC

  36. Global AI Governance: Five Key Frameworks Explained - Bradley

  37. Global Partnership on Artificial Intelligence - OECD

  38. GPAI Terms of Reference - OECD

  39. A new institution for governing AI: Lessons from GPAI - Brookings

  40. Global Partnership on AI (GPAI) - VerifyWise

  41. Global Partnership on AI (GPAI) - VerifyWise

  42. Global Partnership on AI (GPAI) - VerifyWise

  43. Highest Ranked AI Priorities Around the Globe - CEIMIA

  44. Global Partnership on AI (GPAI) - VerifyWise

  45. GPAI 2021 - Inria

  46. Global Partnership on Artificial Intelligence - Dig.watch

  47. A new institution for governing AI: Lessons from GPAI - Brookings

  48. A new institution for governing AI: Lessons from GPAI - Brookings

  49. A new institution for governing AI: Lessons from GPAI - Brookings

  50. GPAI Terms of Reference - OECD

  51. Global Partnership on AI (GPAI) - VerifyWise

  52. Global Partnership on AI (GPAI) - VerifyWise

  53. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal

  54. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal

  55. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal

  56. A new institution for governing AI: Lessons from GPAI - Brookings

  57. GPAI Terms of Reference - OECD

  58. Global AI Governance: Five Key Frameworks Explained - Bradley