Paris AI Action Summit (February 2025)
The Paris AI Action Summit (February 2025) marked a significant reorientation of international AI governance discourse away from safety-focused frameworks toward innovation and economic opportunity. The US and UK declined to sign the final declaration, and critics noted the weakening of safety norms established at Bletchley and Seoul.
Quick Assessment
| Property | Value |
|---|---|
| Event type | International AI governance summit |
| Dates | February 10–11, 2025 |
| Location | Grand Palais, Paris, France |
| Co-chairs | Emmanuel Macron (France), Narendra Modi (India) |
| Signatories | ≈60 countries (US and UK declined) |
| Current AI foundation endowment | $400 million (French government) |
| Private investment commitments | €109 billion |
| Predecessor events | AI Safety Summit (Bletchley Park) (November 2023), Seoul AI Summit (May 2024) |
Key Links
| Source | Link |
|---|---|
| Official Website | onu.delegfrance.org |
| Wikipedia | en.wikipedia.org |
Overview
The Paris AI Action Summit (also known as the AI Action Summit or Sommet pour l'action sur l'intelligence artificielle) was an international conference held on February 10–11, 2025 at the Grand Palais in Paris, France. Co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, the summit brought together heads of state, international organizations, companies, researchers, NGOs, artists, and civil society representatives from over 100 countries. It was part of a broader AI Action Week running from February 6–11, 2025, which included Science Days (February 6–7 at Institut Polytechnique de Paris), a Cultural Weekend (February 8–9), and nearly 100 parallel events worldwide.
The summit positioned itself as the third installment in an emerging series of high-level AI governance events, following the AI Safety Summit (Bletchley Park) in November 2023 and the Seoul AI Summit in May 2024. However, the Paris event marked a deliberate reorientation: where its predecessors emphasized AI safety risks and precautionary governance, the Paris summit centered on action, adoption, innovation, and economic opportunity. Macron articulated a vision of a distinctly European AI model oriented around intellectual property protection, cultural creativity, and child safety, rather than the safety-and-fundamental-rights framing that characterized earlier iterations.
The summit concluded with the release of the "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet," signed by approximately 60 countries including France, China, and India. The United States and the United Kingdom notably declined to sign, citing concerns about vague governance mechanisms and national security implications. Major announced outcomes included a $400 million French government endowment for the Current AI foundation and €109 billion in private-sector investment commitments for AI research and infrastructure in France.
History and Background
Predecessor Summits
The Paris summit was the third in a series of international AI governance events initiated by the UK government's convening at Bletchley Park in November 2023. That inaugural summit, attended by representatives from 28 countries including the United States and China, focused primarily on frontier AI risks and produced the Bletchley Declaration acknowledging shared concerns about potentially catastrophic AI harms. The Seoul AI Summit in May 2024 continued this safety-oriented framing while broadening participation. Both summits emphasized precautionary governance and elicited voluntary safety commitments from major AI developers.
France hosted the third summit with an avowed intent to shift the conversation. Rather than risk mitigation, the Paris summit's stated mission was to "establish scientific foundations, solutions and standards for sustainable AI working for collective progress and in the public interest." Preparation began in earnest in the summer of 2024, when a steering committee of approximately 30 countries and international institutions established contact groups involving over 800 participants from the public and private sectors, academia, and NGOs.
AI Action Week
The summit was embedded within AI Action Week (February 6–11, 2025), a broader program of events intended to democratize participation. Science Days at Institut Polytechnique de Paris on February 6–7 preceded a Cultural Weekend on February 8–9 before the high-level governmental sessions on February 10–11. This structure reflected the summit's emphasis on broadening AI governance beyond technical and regulatory specialists.
Key Outcomes and Announcements
The Paris Statement
The summit's primary diplomatic output was the "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet," released on February 11, 2025. The statement, signed by approximately 60 countries, addressed themes including bridging digital divides, AI safety and security, trustworthiness, transparency, environmental sustainability, labor market impacts, and avoiding market concentration. China and members of the European Union were among the signatories.
The statement gave only passing acknowledgment to AI safety concerns, with language that merely "noted" prior voluntary safety commitments rather than explicitly endorsing or strengthening them. This rhetorical step back from the Bletchley and Seoul frameworks drew criticism from observers who viewed it as a weakening of international safety norms. The United States and United Kingdom both declined to sign, with US officials citing concerns over governance clarity and national security implications.
Current AI Foundation
One of the summit's headline announcements was the creation of Current AI, a public interest foundation intended to develop AI "public goods" — including high-quality open-source datasets, tools, and shared infrastructure. France committed a $400 million endowment to the foundation. The initiative was backed by ten governments (Chile, Finland, France, Germany, India, Kenya, Morocco, Nigeria, Slovenia, and Switzerland), philanthropic organizations including the Omidyar Group and McGovern Foundation, and private companies including Google and Salesforce.
ROOST
The summit also saw the announcement of ROOST (Repository of Robust Open Online Safety Tools), a nonprofit initiative focused on developing scalable tools for detecting harmful content including child sexual abuse material. ROOST was led by Camille François and backed by major technology companies including OpenAI and Google.
Investment Commitments
Macron announced €109 billion in private-sector commitments to advance AI research and infrastructure in France. A significant portion — between €30 and €50 billion — was pledged by the United Arab Emirates for the development of a 1-gigawatt data center intended to support large-scale AI model training in Europe.
First International AI Safety Report
The summit's discussions under the "Trust in AI" theme incorporated the First International AI Safety Report, which had been published on January 29, 2025, and addressed risks from general-purpose AI systems.
Thematic Focus
Summit discussions were organized around five strategic themes:
- Public Service AI — AI applications in government and public sector services
- Future of Work — AI's role in employment and workplace transformation
- Innovation and Culture — AI applications in creative sectors
- Trust in AI — Building reliable, secure, and accountable AI systems
- Global Governance of AI — Frameworks for international AI oversight
This thematic architecture reflected the summit's broad agenda. Critics noted that despite the stated focus on trust and governance, the substantive discussions skewed toward economic opportunity and adoption, with limited time devoted to discrimination, sustainability, and rights-based concerns.
Policy Significance
The Shift from Safety to Action
The Paris summit represented a meaningful inflection point in international AI governance discourse. The AI Safety Summit (Bletchley Park) and its Seoul successor had established a norm of framing high-level AI diplomacy primarily around existential and catastrophic risks, producing declarations that acknowledged the possibility of serious harms from frontier AI systems. The Paris summit deliberately reframed this agenda.
Macron's opening address characterized AI as a force for human progress and positioned France and Europe as prospective AI hubs — an economic and geopolitical aspiration that shaped the summit's overall tone. The shift was widely noted by analysts as reflecting broader tensions between AI governance paradigms: precautionary safety-focused approaches associated with the UK and the EU's AI Act, and innovation-oriented approaches emphasizing competitiveness and adoption.
US Positioning
US Vice President JD Vance, who attended the summit, articulated a position critical of what he characterized as heavy-handed European AI regulation, arguing that rules like the EU AI Act and Digital Services Act risked entrenching incumbent technology companies and stifling beneficial innovation. US officials declined to sign the final declaration. This positioning reflected the early-2025 US administration's broader skepticism of multilateral AI governance mechanisms and preference for unilateral competitive strategy.
Global South Inclusion
The summit placed notable emphasis on North-South equity, with discussions addressing digital divides, access to computing resources, and trade structures affecting smaller nations. India's co-chairmanship, alongside the participation of governments from Kenya, Nigeria, Morocco, and Chile in the Current AI foundation, reflected an intention to position the event as genuinely global rather than a forum for wealthy technologically advanced states.
Criticisms and Concerns
The summit attracted substantial criticism from AI safety researchers, civil society organizations, and some industry figures.
Inadequate Safety Coverage
David Leslie of the Alan Turing Institute argued that the summit's communiqué failed to adequately address real-world AI risks and harms, including bias, cybersecurity vulnerabilities, and data privacy. Dario Amodei of Anthropic characterized the summit as a missed opportunity for not more seriously addressing the risks of artificial general intelligence amid rapid technological progress.
The Future of Life Institute described the declaration as "extremely vague and lacking ambition." Max Tegmark of MIT and the Future of Life Institute characterized the outcome as undermining the progress made at Bletchley and Seoul. Critics pointed to the declaration's diplomatic language — merely "noting" prior voluntary safety commitments rather than endorsing or building on them — as a concrete sign of backsliding.
Industry Capture and Civil Society Exclusion
Civil society representatives noted that the summit's structure marginalized their participation. Governments did not interact directly with non-state stakeholders during the first day of high-level sessions; side meetings occurred behind closed doors on the second day, and the final communiqué reflected government input only. Amnesty International and other human rights organizations called for binding regulatory frameworks rather than voluntary commitments, and rejected what they characterized as a false dichotomy between innovation and regulation.
Narrow and Growth-Oriented Agenda
Critics argued that the summit's main program lacked nuanced treatment of different AI system types and their distinct risk profiles, conflating large language models with automated decision-making systems used in high-stakes domains like migration adjudication and welfare. Observers noted the absence of substantive panels addressing discrimination or ecological sustainability, despite both being named as summit themes.
Lack of Binding Commitments
The summit's primary output — the Paris Statement — was non-binding, with no specified implementation timelines, accountability mechanisms, or metrics for evaluating progress. This structural limitation meant that ambitious-sounding commitments around digital inclusion, environmental sustainability, and trustworthiness had no enforcement pathway.
Geopolitical Fragmentation
The US and UK's refusal to sign the declaration was interpreted by multiple analysts as demonstrating the absence of a unified democratic consensus on AI governance. Rather than narrowing the gap between different regulatory approaches, the summit arguably made the divergences more visible, with the US positioning itself in explicit opposition to multilateral governance frameworks and the European approach.
Counterarguments
Defenders of the summit, including French government officials and some industry voices, argued that the declaration implicitly built on prior safety frameworks and that the shift toward "action" reflected pragmatic recognition that governance must address AI's actual deployment rather than speculative future risks. Some industry representatives argued that the emphasis on open tools, public goods infrastructure, and Global South inclusion represented genuine progress on equity dimensions that prior safety-focused summits had neglected.
Key Uncertainties
- Whether the Current AI foundation and ROOST will achieve sustained institutional impact beyond their initial announcement at the summit remains to be seen.
- The long-term trajectory of the international AI summit series — and whether a coherent governance framework can emerge despite US-EU divergence — is unclear.
- The extent to which the $400 million endowment and €109 billion investment commitments translate into substantive outcomes has not been independently verified.
- Whether the Paris summit's de-emphasis on safety norms will prove to be a durable reorientation of international AI governance or a temporary inflection is contested.
Sources
Wikipedia overview of the 2025 AI Action Summit held in Paris, an international AI governance conference co-chaired by France and India that drew over 1,000 participants from more than 100 countries. It was the third in a series of global AI summits, following the 2023 Bletchley Park AI Safety Summit and the 2024 Seoul AI Summit, and shifted the series' focus toward AI action, adoption, and international coordination rather than the frontier-risk framing of its predecessors.