Back
Infosecurity Magazine: Seoul Summit Coverage
infosecurity-magazine.com · infosecurity-magazine.com/news/ai-seoul-summit-safety-com...
News coverage of the May 2024 Seoul AI Safety Summit, a key international governance milestone where governments and AI labs made safety commitments; useful for tracking the evolution of global AI governance and voluntary safety frameworks.
Metadata
Importance: 45/100 · news article · news
Summary
News coverage of the 2024 Seoul AI Safety Summit, focusing on commitments made by governments and AI companies regarding AI safety standards and governance frameworks. The summit built on the Bletchley Park AI Safety Summit to advance international coordination on frontier AI risks and safety testing.
Key Points
- •The Seoul Summit continued momentum from the 2023 Bletchley Declaration, with nations reaffirming commitments to AI safety collaboration
- •Major AI companies made voluntary safety commitments including pre-deployment testing and information sharing on dangerous capabilities
- •Governments agreed to work toward internationally comparable AI safety evaluations and red-teaming standards
- •The summit advanced discussions on establishing AI Safety Institutes and cross-border cooperation frameworks
- •Commitments addressed both near-term deployment risks and longer-term existential and catastrophic AI risk scenarios
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Seoul Declaration on AI Safety | Policy | 60.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 9 KB
AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments - Infosecurity Magazine
Infosecurity Magazine Home » News » AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments
AI Seoul Summit: 16 AI Companies Sign Frontier AI Safety Commitments
News
21 May 2024
Written by
Kevin Poireault
Reporter, Infosecurity Magazine
In a “historic first,” 16 global AI companies have signed new commitments to safely develop AI models.
The announcement was made during the AI Seoul Summit, the second global summit on AI safety, co-hosted virtually by the UK and South Korea on May 21-22.
The Frontier AI Safety Commitments’ signatories include some of the biggest US tech giants, such as Amazon, Anthropic, Google, IBM, Microsoft and OpenAI.
They also include AI organizations from Europe (Cohere and Mistral AI), the Middle East (G42 and the Technology Innovation Institute) and Asia (Naver, Samsung and Zhipu.ai).
AI Risk Thresholds to Be Decided in France
These organizations vowed to publish safety frameworks on how they will measure the risks of their frontier AI models, such as examining the risk of misuse of technology by bad actors.
The frameworks will also outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.
In the most extreme circumstances, the companies have also committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below specific agreed-upon thresholds.
The 16 organizations have agreed to coordinate with multiple stakeholders, including governments, to define those thresholds ahead of the AI Action Summit in France in early 2025.
Professor Yoshua Bengio, a world-leading AI researcher, Turing Award winner and the lead author of the International Scientific Report on the Safety of Advanced AI, said he was pleased to see leading AI companies from around the world sign up to the Frontier AI Safety Commitments.
“In particular, I welcome companies’ commitments to halt their models where they present extreme risks until they can make them safe as well as the steps they are taking to boost transparency around their risk management practices,” he said.
An Emerging Global AI Safety Governance Regime
These commitments build on a previous agreement made with leadin
... (truncated, 9 KB total)
Resource ID:
949e4dabcaaff50f | Stable ID: sid_6hCXCLZuze