Global Partnership on Artificial Intelligence (GPAI)
GPAI represents the first major multilateral AI governance initiative, but it operates as a non-binding policy laboratory with limited enforcement power and structural coordination challenges. While it provides a valuable framework for international cooperation, its voluntary nature and its exclusion of key AI-developing nations limit its practical impact on global AI safety.
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | International multistakeholder governance initiative |
| Founded | June 2020 (proposed 2018) |
| Members | 29 members (28 countries + the European Union) |
| Structure | Council, Steering Committee, 4 Working Groups, 2 Centres of Expertise |
| Host | Organisation for Economic Co-operation and Development (OECD) |
| Focus | Responsible AI, Data Governance, Future of Work, Innovation |
| Authority | Non-binding policy guidance and recommendations |
| Key Feature | First major multilateral AI governance cooperation effort |
Key Links
| Source | Link |
|---|---|
| Official Website | gpai.ai |
| Wikipedia | en.wikipedia.org |
Overview
The Global Partnership on Artificial Intelligence (GPAI) is an international initiative launched in June 2020 as the world's first major multilateral effort for AI governance cooperation.[1] First proposed by Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron in 2018, GPAI brings together 29 members, including the European Union, alongside experts from governments, academia, civil society, industry, and international organizations.[2][3]
GPAI operates as a policy laboratory that fosters practical, values-based approaches to AI governance rather than binding regulation, bridging theory and practice through collaboration on shared challenges that transcend borders.[4] The partnership's mission is to support the responsible adoption of AI grounded in human rights, inclusion, diversity, gender equality, innovation, economic growth, and environmental and societal benefit, while contributing to the UN 2030 Agenda and the Sustainable Development Goals.[5]
Hosted by the OECD in Paris and supported by centres of expertise in Montreal and Paris, GPAI facilitates multistakeholder, multidisciplinary projects; shares analysis of AI's impacts; coordinates work to reduce duplication; and prioritizes perspectives from emerging and developing countries.[6] The organization produces practical resources such as reports, policy guidance, case studies, and toolkits rather than legally binding regulations.[7]
History and Development
Founding (2018-2020)
GPAI emerged from a joint proposal by Prime Minister Trudeau and President Macron to create an international body to guide responsible AI development, formally put to G7 leaders at the 2019 summit in Biarritz, France.[8] The initiative garnered support from all G7 members except the United States, which initially withheld its backing.[9]
The partnership was officially launched on June 15, 2020, with 15 founding members: Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States, and the European Union.[10][11] The launch reflected growing recognition that AI governance challenges transcend national borders and require coordinated international responses.
Expansion and Growth
GPAI has expanded significantly since its founding:
- November 2021: Added 10 new members, including the Czech Republic, Israel, and several EU countries, bringing total membership to 25.[12]
- November 2022: Grew to 29 members. Countries admitted since founding include Argentina, Belgium, Brazil, Denmark, Ireland, the Netherlands, Poland, Senegal, Serbia, Spain, Sweden, and Türkiye.[13]
- Invited members: Austria, Chile, Finland, Malaysia, Norway, Slovakia, and Switzerland have been invited but had not completed accession as of recent reports.[14]
The near-doubling of membership, from 15 founding members to 29 by 2024, represented a diplomatic achievement amid geopolitical challenges, bringing together diverse nations across continents to collaborate on AI governance.[15]
Rotating Leadership
GPAI operates with a rotating presidency structure similar to the G7's:[16]
- 2020: Canada (founding presidency)
- 2021: France
- 2022: Japan
- 2023: India
- 2024: Republic of Serbia (incoming chair)
- 2025: Republic of Serbia (lead chair)[17]
The Multistakeholder Experts Group (MEG) has had the following co-chairs:[18]
- 2020-2021: Jordan Zed and Baroness Joanna Shields (Shields as MEG chair)
- 2021-2022: Joanna Shields and Renaud Vedel (Shields as MEG chair)
- 2023-2024: Yoichi Iida and Inma Martinez (Martinez as MEG chair)
Integration with OECD (2024)
In a significant development announced in July 2024, GPAI formally merged with the OECD's AI policy work while retaining the GPAI brand.[19] The integration incorporated the OECD's Working Party on AI Governance and Committee on Digital Policy, combining GPAI's multistakeholder approach with the OECD's institutional stability. The merger also gave GPAI access to the OECD's mixed budget of stable member dues and voluntary project contributions, addressing earlier funding complications.[20]
The GPAI Summit, held on December 3-4, 2024, at the Palace of Serbia in Belgrade, marked a transition point under the new integrated structure, gathering experts, member-country representatives, industry leaders, and academics to focus on responsible AI for the benefit of society as a whole.[21]
Organizational Structure
Governance Bodies
GPAI's governance is organized hierarchically:[22]
GPAI Council: The ultimate decision-making body, providing strategic direction and making major decisions on membership and participation. The Council meets annually at ministerial level and is led by three member countries in staggered one-year terms: a Lead Chair, an Outgoing Support Chair, and an Incoming Support Chair. Chairs are elected annually by simple majority.[23]
Steering Committee: Composed of five government members and six non-government representatives, the Steering Committee develops work plans, establishes working groups, and provides guidance to the Secretariat.[24]
Secretariat: Hosted by the OECD in Paris, the Secretariat provides administrative support to GPAI's governing bodies and facilitates coordination.[25]
Centres of Expertise
GPAI operates through two specialized centres that support its working groups:[26]
Montreal Centre (CEIMIA): The International Centre of Expertise in Montreal supports the Responsible AI and Data Governance working groups. As of 2025, CEIMIA has conducted a thematic mapping of GPAI members' national AI strategies, analyzing 44 governments' public documents on strategies, applications, and enforcement agencies, positioning itself as a partner for applied projects and cross-country synergies.[27]
Paris Centre (Inria): Located in Paris and supported by the French national institute Inria, this centre supports the Future of Work and Innovation & Commercialization working groups. Inria led the 2021 GPAI Summit, held on November 11-12, 2021.[28]
Expert Working Groups
GPAI organizes its substantive work through four specialized expert working groups:[29]
- Responsible AI: Addresses ethical considerations, accountability, and trustworthy AI development. Includes an ad-hoc subgroup on AI and Pandemic Response.
- Data Governance: Focuses on data management, sharing, and governance frameworks for AI systems.
- Future of Work: Examines AI's impact on employment, labor markets, and workforce development.
- Innovation & Commercialization: Supports AI innovation while addressing commercialization challenges.
Each working group selects three or more GPAI Experts as Project Leads per project, with decisions made by two-thirds majority. Co-Chairs serve one-year terms and can be re-elected twice.[30]
Approach and Outputs
Multistakeholder Model
GPAI distinguishes itself through its emphasis on multistakeholder engagement, bringing together diverse perspectives from governments, academia, civil society, industry, labor unions, and international organizations.[31] This model contrasts with traditional intergovernmental initiatives and reflects the recognition that AI governance requires input from multiple sectors due to the technology's complexity and cross-cutting impacts.
The multistakeholder approach aims to create shared vocabulary and frameworks to enable coordination and avoid fragmented governance, while ensuring that policies reflect diverse societal perspectives rather than solely government or industry interests.[32]
Policy Laboratory Approach
GPAI explicitly positions itself as a "policy laboratory" rather than a regulatory body.[33] The organization produces practical, adaptable guidance that member countries can voluntarily implement according to their own legal frameworks and contexts. This approach prioritizes flexibility and experimentation over binding rules, allowing members to learn from each other's experiences.
Key output types include:[34]
- Policy guidance documents on cross-cutting AI governance challenges
- Case studies examining specific implementations and lessons learned
- Toolkits providing practical resources for policymakers
- Research reports assessing scientific, technical, and socio-economic AI impacts
A notable example is the 2020 report "The Role of Data in AI", the Digital Curation Centre's (DCC) contribution to the Data Governance Working Group, produced in collaboration with the University of Edinburgh School of Informatics and Trilateral Research; it examined the importance of data to AI, data governance, international data sharing, and safeguards against unfair or discriminatory use of data about individuals.[35]
Alignment with International Frameworks
GPAI's work aligns closely with the OECD AI Principles adopted in 2019, which emphasize inclusive growth, human rights, transparency, robustness, safety, security, and accountability.[36] These principles have been adopted by more than 40 countries and underpin the G20 AI Principles as well as GPAI's own framework.
The partnership also supports the UN 2030 Agenda and Sustainable Development Goals, positioning responsible AI development as a tool for advancing sustainable development objectives.[37]
Funding and Resources
GPAI operates on a funding model in which members contribute in equal shares by default, with exceptions approved by the GPAI Council's Lead Chair.[38] However, no specific funding amounts or detailed sources are publicly disclosed in available documentation.
Following the 2024 merger with the OECD, GPAI gained access to the OECD's mixed budget of stable dues from member countries plus voluntary contributions for specific projects.[39] This structure aims to provide both stability for core operations and flexibility for rapid implementation of new initiatives.
Impact and Achievements
Governance Contributions
GPAI represents the first major multilateral effort specifically focused on AI governance cooperation, establishing a model for international collaboration on emerging-technology governance.[40] The partnership has contributed to the international AI governance landscape by:
- Bridging theory and practice: Connecting academic research, policy development, and practical implementation through expert collaboration across sectors.[41]
- Facilitating knowledge exchange: Creating forums for peer-to-peer learning and sharing of best practices among member countries at different stages of AI development.
- Creating shared frameworks: Developing common vocabulary and conceptual frameworks that enable coordination and reduce fragmentation in global AI governance approaches.[42]
- Informing national strategies: GPAI outputs have been incorporated into national AI strategies, as documented in CEIMIA's 2025 analysis of 44 governments' AI strategy documents.[43]
Practical Outputs
As of 2025, GPAI's working groups have produced numerous practical resources:[44]
- Policy guidance documents on responsible AI development
- Toolkits for implementing data governance frameworks
- Case studies examining AI impacts on labor markets
- Research reports on innovation and commercialization challenges
The organization's annual summits, such as the 2021 Paris Summit led by Inria and the 2024 Belgrade Summit, have gathered international experts to advance discussions on AI governance priorities.[45]
Membership Growth
The expansion from 15 founding members to 29 full members demonstrates growing international interest in collaborative AI governance, with representation expanding beyond traditional Western democracies to include countries like Argentina, Brazil, India, Senegal, and Serbia.[46] This geographic diversity enhances the legitimacy and relevance of GPAI's work for different regional contexts.
Criticisms and Limitations
Structural Challenges
Critics have identified several structural issues with GPAI's organization:[47]
Complicated funding and coordination: The dual centres in Paris and Montreal have "vastly complicated" funding processes for working groups, according to analysis from the Brookings Institution. Substantive work has been overseen primarily by the two founding countries, France and Canada, reducing agency and ownership for other members.[48]
Limited participation channels: The structure leaves few meaningful contribution paths for entities beyond the founding nations, limiting the ability of other members, or even the OECD itself, to substantially shape the agenda. This has created perceptions of founder control despite the formally equal membership structure.[49]
Decision-making inefficiencies: Reliance on rotating chairs and presidencies can cause incoherence during transitions, while the two-thirds majority requirement in expert groups and simple majorities for Council decisions may slow decision-making.[50]
Scope and Authority Limitations
Non-binding nature: GPAI explicitly lacks regulatory power or enforcement mechanisms. Outputs like policy guidance and toolkits are optional for members to adapt nationally, limiting the partnership's influence to the quality of its work rather than any coercive authority.[51] This voluntary approach allows flexibility but reduces accountability for implementation.
Limited global representation: While membership has grown to 29, GPAI still excludes some major AI-developing nations, undermining claims to represent global consensus on AI governance approaches.[52] For example, Southeast Asia is represented only by Singapore, excluding Indonesia, Malaysia, Thailand, Vietnam, and other countries with growing AI ecosystems.[53]
Lack of enforcement: As a policy laboratory producing non-binding recommendations, GPAI cannot compel members to adopt its guidance or sanction non-compliance. The partnership's impact depends entirely on the persuasiveness of its outputs and members' voluntary implementation.
Geographic and Systemic Biases
The partnership's membership process has been criticized as restrictive, prioritizing applicants' political systems over their AI development.[54] Suggested improvements include recruiting the G20 members that are not yet participants, amending the Terms of Reference to allow intergovernmental organizations beyond the EU to join, and emphasizing human capital development over ethics frameworks to attract participation from regions such as Southeast Asia.[55]
Integration Challenges
Prior to the 2024 OECD merger, GPAI lacked a stable secretariat and institutional continuity, with rotating leadership potentially causing strategic discontinuities.[56] While the OECD integration aims to address this issue by providing institutional stability, it also raises questions about maintaining GPAI's distinctive multistakeholder character within a traditional intergovernmental organization.
Relationship to AI Safety
GPAI's work does not explicitly address advanced AI safety, alignment, or existential risks as understood in the technical AI safety community.[57] The partnership's focus areas—responsible AI, data governance, labor market impacts, and innovation—emphasize near-term governance challenges related to fairness, transparency, accountability, and societal benefit rather than long-term catastrophic risks.
The organization's alignment with the OECD AI Principles emphasizes "robustness, safety, and security," but this primarily refers to conventional cybersecurity and system reliability rather than concerns about misaligned superintelligent systems or existential risks from advanced AI.[58] GPAI's working groups have not produced outputs specifically addressing risks from transformative AI, scheming in AI systems, or challenges in aligning advanced AI systems with human values.
This focus reflects GPAI's origins in broader technology policy communities and its emphasis on multilateral consensus-building, which may favor near-term, widely-acknowledged challenges over speculative long-term risks. The partnership's policy laboratory model and multistakeholder structure may also make it difficult to develop strong positions on controversial or technically complex topics like existential risk from AI.
Key Uncertainties
Several important questions remain about GPAI's role and effectiveness:
Influence and adoption: To what extent have GPAI's policy recommendations and toolkits actually influenced national AI strategies and regulatory frameworks? While the 2025 CEIMIA analysis examines national strategies, the causal impact of GPAI outputs remains unclear.
Post-merger structure: How will GPAI's distinctive multistakeholder character and informal policy laboratory approach be maintained within the more traditional OECD institutional structure following the 2024 merger? Will the integration enhance or dilute GPAI's unique contributions?
Global representation: Can GPAI expand membership to include more countries from underrepresented regions (especially Southeast Asia, Africa, and Latin America) while maintaining effective decision-making? How can the partnership balance inclusivity with operational efficiency?
Relationship to other governance initiatives: How does GPAI coordinate with other international AI governance efforts, such as the UN's AI Scientific Panel and Global Dialogue established in 2025, the G7 Hiroshima Process, and regional initiatives like the EU AI Act? Is there effective division of labor or problematic duplication?
Long-term sustainability: Will GPAI maintain relevance as AI capabilities advance and governance challenges evolve? Can the partnership adapt to address emerging issues like advanced AI systems, autonomous weapons, or transformative AI scenarios?
Implementation gap: What mechanisms exist to ensure member countries actually implement GPAI recommendations domestically, given the non-binding nature of outputs? How can the partnership measure and improve the real-world impact of its work?
Sources
Footnotes
1. Global Partnership on AI (GPAI) - VerifyWise
2. Global Partnership on Artificial Intelligence - Wikipedia
3. Global Partnership on Artificial Intelligence - Dig.watch
4. Global Partnership on AI (GPAI) - VerifyWise
5. Global Partnership on Artificial Intelligence - OECD
6. GPAI Terms of Reference - OECD
7. Global Partnership on AI (GPAI) - VerifyWise
8. Global Partnership on Artificial Intelligence - Wikipedia
9. A new institution for governing AI: Lessons from GPAI - Brookings
10. Global Partnership on Artificial Intelligence - Wikipedia
11. Joint Statement from Founding Members - Government of Canada
12. Global Partnership on Artificial Intelligence - Wikipedia
13. Global Partnership on Artificial Intelligence - Dig.watch
14. Global Partnership on Artificial Intelligence - Wikipedia
15. A new institution for governing AI: Lessons from GPAI - Brookings
16. Global Partnership on Artificial Intelligence - Wikipedia
17. GPAI Summit 2024 - Government of Serbia
18. Global Partnership on Artificial Intelligence - Wikipedia
19. A new institution for governing AI: Lessons from GPAI - Brookings
20. A new institution for governing AI: Lessons from GPAI - Brookings
21. GPAI Summit 2024 - Government of Serbia
22. GPAI Terms of Reference - OECD
23. GPAI Terms of Reference - OECD
24. Global Partnership on Artificial Intelligence - Dig.watch
25. Global Partnership on Artificial Intelligence - Wikipedia
26. Global Partnership on Artificial Intelligence - Wikipedia
27. Highest Ranked AI Priorities Around the Globe - CEIMIA
28. Global Partnership on Artificial Intelligence - Wikipedia
29. GPAI Terms of Reference - OECD
30. Global Partnership on Artificial Intelligence - Dig.watch
31. Global Partnership on AI (GPAI) - VerifyWise
32. Global Partnership on AI (GPAI) - VerifyWise
33. Global Partnership on AI (GPAI) - VerifyWise
34. The Role of Data in AI - DCC
35. Global AI Governance: Five Key Frameworks Explained - Bradley
36. Global Partnership on Artificial Intelligence - OECD
37. GPAI Terms of Reference - OECD
38. A new institution for governing AI: Lessons from GPAI - Brookings
39. Global Partnership on AI (GPAI) - VerifyWise
40. Global Partnership on AI (GPAI) - VerifyWise
41. Global Partnership on AI (GPAI) - VerifyWise
42. Highest Ranked AI Priorities Around the Globe - CEIMIA
43. Global Partnership on AI (GPAI) - VerifyWise
44. Global Partnership on Artificial Intelligence - Dig.watch
45. A new institution for governing AI: Lessons from GPAI - Brookings
46. A new institution for governing AI: Lessons from GPAI - Brookings
47. A new institution for governing AI: Lessons from GPAI - Brookings
48. Citation rc-192b (data unavailable)
49. Global Partnership on AI (GPAI) - VerifyWise
50. Global Partnership on AI (GPAI) - VerifyWise
51. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal
52. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal
53. Governance of Artificial Intelligence in Southeast Asia - Global Policy Journal
54. A new institution for governing AI: Lessons from GPAI - Brookings
55. Citation rc-be0d (data unavailable)
56. Global AI Governance: Five Key Frameworks Explained - Bradley