Longterm Wiki

FLI Grant Program: Call for Proposed Designs for Global Institutions Governing AI


Credibility Rating: Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Future of Life Institute

This FLI grant program page documents six funded proposals for designing global institutions to govern advanced AI and AGI, representing concrete policy design efforts relevant to AI safety governance.

Metadata

Importance: 58/100 · organizational report · reference

Summary

The Future of Life Institute's 2024 grant program funded six $15,000 proposals for designing global AI governance institutions. Proposals ranged from a Global AGI Agency (a US-EU-China coalition) and a Fair Trade AI certification model to an IAEA/CERN-for-AI hybrid and an international treaty prohibiting misaligned AGI. These represent diverse institutional approaches to stabilizing a future with advanced AI development.

Key Points

  • FLI funded six $15,000 grants for proposals designing trustworthy global governance mechanisms for AI/AGI in 2024.
  • Proposals include a Global AGI Agency as a public-private partnership housed by IEEE and the UN.
  • A 'CERN for AI' concept would centralize frontier model training in one international facility under monitoring by an International AI Agency.
  • A 'Fair Trade AI' certification model draws on ethical commodity market principles to promote trustworthy AGI deployment.
  • An international treaty proposal focuses on the legal framework for prohibiting misaligned AGI development.

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 19 KB
All Grant Programs › Call for proposed designs for global institutions governing AI

In 2024, FLI called for research proposals with the aim of designing trustworthy global governance mechanisms or institutions that can help stabilise a future with 0, 1, or more AGI projects.

Status: Funds allocated

Alongside the AI for SDGs grants track in 2024, FLI launched a request for design proposals for global institutions to govern advanced AI, or artificial general intelligence (AGI). Out of a diverse set of proposals, six were selected to receive grants of $15,000. The resulting papers, hailing from diverse backgrounds in Europe, the US and South America, laid out a range of institutions and mechanisms designed to safely harness the powers of AI, from a 'CERN for AI' to 'Fair Trade AI' and a Global AGI Agency.

 Grants archive

An archive of all grants provided within this grant program:

Project title: A Global AGI Agency

Amount recommended: $15,000.00
Primary investigator: Justin Bullock, University of Washington, USA

Project Summary

Justin Bullock at the University of Washington proposes a Global AGI Agency: a public-private partnership led by a US-EU-China coalition alongside major private national R&D labs within leading technology firms. The partnership would be institutionally housed jointly by the IEEE and the UN. In this approach, national governments would partner more closely with leading technology firms, subsidise projects, provide evaluation and feedback on models, and generate public-good uses alongside profit-making uses.

Project title: Fair Trade AI

Amount recommended: $15,000.00
Primary investigator: Katharina Zuegel, Forum on Information and Democracy, France

Project Summary

Katharina Zuegel at the Forum on Information and Democracy in France proposes a "Fair Trade AI" mechanism, inspired by the success of the Fair Trade certification model in promoting ethical practices within commodity markets. By leveraging Fair Trade principles, the project aims to foster the creation, deployment, and utilization of AGI systems that are ethical, trustworthy, and beneficial to society. It draws on documented Fair Trade outcomes, including improved living and working conditions, economic empowerment of labor, cultural preservation, and environmental sustainability.

Project title: IAEA and CERN for AI

Amount recommended: $15,000.00
Primary investigator: Haydn Belfield, University of Cambridge, UK

Project Summary

 Haydn Belfield, a researcher at the University of Cambridge’s Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, proposes two reinforcing institutions: an International AI Agency (IAIA) and CERN for AI. The IAIA would primarily serve as a monitoring and verification body, enforced by chip import restrictions: only countries that sign a verifiable commitment to certain safe compute practices

... (truncated, 19 KB total)
Resource ID: 5fc7b2aaf22c4cde