Longterm Wiki

The Foundation Layer: A Philanthropic Guide to AI Safety

web · foundation-layer.ai

This guide is aimed at major donors and philanthropists rather than researchers; it serves as a field-level overview and fundraising resource for the AI safety ecosystem, useful for understanding how AI safety is communicated to funding audiences.

Metadata

Importance: 58/100 · organizational report · educational

Summary

A comprehensive guide by Tyler John (Effective Institutions Project) designed to persuade major philanthropists to fund AI safety work. It outlines AGI timelines, three categories of existential risk (loss of control, malicious use, power concentration), and proposes a five-pillar philanthropic strategy covering alignment science, nonproliferation, defensive technology, power distribution, and talent mobilization.

Key Points

  • Argues that AI poses catastrophic and existential risks across three axes: loss of human control over AI, malicious use by bad actors, and dangerous concentration of power.
  • Proposes a five-pillar philanthropic strategy: alignment science, nonproliferation, defensive technology, power distribution, and talent mobilization.
  • Provides a practical getting-started guide for donors, including specific recommended funds, organizations, and philanthropic advisors in AI safety.
  • Frames AI safety funding as one of the most important and neglected philanthropic opportunities, targeting audiences with significant capital to deploy.
  • Covers AGI timeline considerations and translates technical AI safety concerns into accessible language for a non-technical donor audience.

Cited by 2 pages

| Page | Type | Quality |
|------|------|---------|
| Longtermist Funders (Overview) | - | 3.0 |
| The Foundation Layer | Organization | 3.0 |

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
 The Foundation Layer 

 A philanthropic strategy for the AGI transition

 by TYLER JOHN

 Overview

 Executive Summary 

 About the Author 

 Introduction 

 II. The Exponential Trend 

 III. Civilization-scale Threats 

 IV. The Philanthropic Solution 

 V. The Case for Philanthropy 

 VI. Political Giving and Impact Investing 

 VII. Why the Problem Remains Neglected — For Now 

 Appendix A: How to Get Started 

 Appendix B: AI Consciousness 

 Appendix C: How AI Works 

 "An extremely useful report for any philanthropist interested in funding AI safety and preparedness."

 — Geoffrey Hinton, Nobel Prize winner in Physics, 2024

 About the Foundation Layer

 In the Cold War, philanthropists became the glue that held the world together. We’re poised to do it again today in the age of AI. By being laser-focused on the problem, philanthropists have so far done as much for AGI safety and preparedness as governments and AI companies, with a tiny fraction of their resources, creating a more secure foundation layer on which society can build. But with a problem of this magnitude we’re going to need everyone, and more resources, approaches, and talent than ever before.

 In this report I make the case that there is a meaningful chance of AI that can do everything that humans can do in just a few years. This leads to civilization-scale threats: loss of control, the development of powerful new dual-use technologies like novel bioweapons, and the radical concentration of power. These problems have mostly clear, tractable solutions: machine learning research, defensive technologies, and governance approaches. But we have limited time to get them in place. 

 With ordinary technologies, we can iterate gradually over decades to create a society resilient to their impacts. But AI is not an ordinary technology. It is achieving faster progress and faster uptake than any technology before it, with a much higher ceiling on what is possible, under weak institutions with limited technological expertise, and backed by 7 companies that account for 24% of global GDP. It is a technology that we interface with in natural language, that increasingly designs itself, and that has a real chance of automating all human decision-making in mere years.

 We need all hands on deck: Republican donors and Democrat donors, EU donors and Chinese donors, risk-taking donors and cautious donors, tech-friendly donors and tech-adversarial donors, empiricist donors and theorist donors, patient donors and impatient donors, private donors and public donors.

 If you are interested in philanthropy for the AGI transition, this is your guidebook — synthesized from five years of advice I've provided to dozens of philanthropists.

 Included is everything you need to get started. In Appendix A, you'll find numerous examples of funds and philanthropic advisors who can help you begin t

... (truncated, 4 KB total)