The Foundation Layer

Funder


Overview

The Foundation Layer is a philanthropic guide and donor-advised fund created by Tyler M. John, who leads AI work at the Effective Institutions Project (EIP). Launched in early 2026, the site synthesizes five years of AI safety philanthropic advising into a comprehensive guidebook aimed at persuading major donors to fund AI safety work.

The Foundation Layer Fund, managed by Tyler John, has facilitated over 100 grants exceeding $70 million across five intervention areas: alignment science, nonproliferation of dangerous capabilities, defensive technology, power distribution, and talent mobilization. The fund is available through Every.org.

The site argues that "philanthropists became the glue that held the world together" during the Cold War and can play a similar role in addressing AI risks. It appeals for diverse participation across political, geographic, and ideological lines, stating "we need all hands on deck."

Website: foundation-layer.ai
Type: Philanthropic guide and donor-advised fund
Author: Tyler M. John (Effective Institutions Project)
Fund size: $70M+ (100+ grants, cumulative)
Focus areas: Alignment, nonproliferation, defensive tech, power distribution, talent
Launch: Early 2026

History

Tyler M. John holds a PhD in philosophy from Rutgers University and was a Global Priorities Fellow at the Forethought Foundation for Global Priorities Research. He previously built Longview Philanthropy's AI advisory team and co-edited The Long View: Essays on Policy, Philanthropy, and the Long-term Future with Natalie Cargill (Longview's founder). He later moved to the Effective Institutions Project (EIP), where he leads its work on AI safety, geopolitics, and power concentration.

EIP organized what Tyler John describes as "the first major town hall style meetings with Schmidt's P150 in 2023," with over 60 funders attending. The Foundation Layer site represents the culmination of this advisory work, consolidating its recommendations into a publicly accessible guide.

Five-Pillar Strategy

The Foundation Layer proposes a five-pillar philanthropic strategy for addressing AI risk:

Pillar 1: Alignment Science

Ensuring AI systems reliably follow human instructions. Key focus areas include:

  • Mechanistic interpretability: Understanding how AI models think by identifying and modifying their internal representations. Recent breakthroughs include sparse autoencoders (SAEs), which can identify and modify specific concepts within models (a minimal sketch follows this list).
  • AI Control: Developed by Redwood Research, this approach uses multiple AI systems to supervise and control potentially misaligned models.
  • Additional research needs: resisting jailbreaking, preventing adversarial fine-tuning, ensuring faithful chain-of-thought reasoning.
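In outline, a sparse autoencoder is an overcomplete autoencoder trained to reconstruct a model's internal activations under a sparsity penalty, so that individual learned features tend to track individual concepts. The sketch below is a minimal illustration in PyTorch; the structure follows the standard published recipe, but every dimension and coefficient here is an assumption chosen for illustration, not a detail from the Foundation Layer guide.

```python
# Minimal sparse autoencoder (SAE) sketch -- illustrative only, not code
# from the Foundation Layer guide or any specific interpretability project.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Overcomplete dictionary: d_features >> d_model, so each activation
        # is explained by a small number of candidate features.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU zeroes most feature activations, giving a sparse code.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(reconstruction, activations, features, l1_coeff: float = 1e-3):
    # Reconstruction error keeps the code faithful; the L1 penalty keeps it sparse.
    mse = (reconstruction - activations).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

# Hypothetical usage: decompose 4096-dimensional residual-stream activations
# into 65,536 candidate features (sizes chosen arbitrarily for illustration).
sae = SparseAutoencoder(d_model=4096, d_features=65536)
acts = torch.randn(8, 4096)  # stand-in for activations captured from a model
recon, feats = sae(acts)
loss = sae_loss(recon, acts, feats)
```

"Modifying" a concept then amounts to clamping or scaling one learned feature before decoding, which is how steering experiments with SAEs are typically described.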

The guide highlights that the UK AI Security Institute identified $50 million worth of research projects it judged valuable but had only $10 million available; in total, it received 832 applications requesting $320 million.

Pillar 2: Nonproliferation of Dangerous Capabilities

Three approaches to preventing dangerous AI capabilities from spreading:

  1. Model Evaluations: Organizations like METR, Apollo Research, RAND Corporation, and AVERI systematically test AI systems for dangerous capabilities before deployment. The UK AI Security Institute (AISI, formerly the AI Safety Institute) conducts pre-deployment testing through formal agreements with major labs.
  2. AI Company Security: Current AI company security is described as comparable to "normal start-up level." Model weights fit on a 5TB hard drive, making theft feasible (see the back-of-the-envelope check after this list).
  3. Compute Governance: Leveraging the extreme concentration of AI chip supply (Nvidia designs, TSMC manufactures, ASML makes the lithography lasers) for tracking, verification, and compliance mechanisms.
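The 5TB figure is easy to sanity-check. Assuming a hypothetical frontier model with one trillion parameters stored at 16-bit precision (an illustrative assumption, not a figure from the guide):

$$
10^{12}\ \text{parameters} \times 2\ \tfrac{\text{bytes}}{\text{parameter}} = 2 \times 10^{12}\ \text{bytes} = 2\ \text{TB},
$$

which fits on a single 5TB drive with room to spare, even before any compression.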

Pillar 3: Defensive Technology

Building societal resilience against AI-enabled harms, with biodefense highlighted as the most neglected area:

  • Far-UVC light can eliminate 90% of coronaviruses in 8 minutes (a worked decay calculation follows this list)
  • Glycol vapors can achieve massive reductions in airborne pathogens within an hour
  • Blueprint Biosecurity is positioned to deploy tens of millions toward these solutions
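Assuming first-order inactivation kinetics, a standard model in disinfection studies (an assumption of this sketch, not a claim the guide makes explicit), the 90%-in-8-minutes figure fixes a decay constant and extrapolates to deeper reductions:

$$
N(t) = N_0 e^{-kt}, \qquad \frac{N(8\ \text{min})}{N_0} = 0.1 \;\Rightarrow\; k = \frac{\ln 10}{8\ \text{min}} \approx 0.29\ \text{min}^{-1},
$$

so each further 90% reduction takes another 8 minutes: roughly 16 minutes to reach 99% and 24 minutes to reach 99.9%.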

Pillar 4: Distributing Power

Ensuring AGI benefits are broadly shared:

  • AI for social coordination: Google DeepMind's "Habermas machine" AI mediator helped British groups find common ground on divisive issues
  • Auditing AI decision-making: Requiring companies to publish and be audited against model specifications
  • Humanity in the loop: Proof-of-human-personhood systems and standards for when AI can substitute for human judgment
  • Economy after AGI: Research on post-AGI economics; the guide notes that only a handful of economists (Anton Korinek, Erik Brynjolfsson, Chad Jones, Daron Acemoglu, Philip Trammell) study these impacts

Pillar 5: Talent and Infrastructure

Mobilizing talent across sectors:

  • Research organizations: METR, Transluce, Goodfire, and Apollo Research for capability evaluation and model-internals research
  • Talent pipelines: BlueDot Impact identified 1,000+ new AI safety/governance roles; Impact Academy built a million-person database of global technical researchers
  • State capacity: Horizon Institute for Public Service, Foundation for American Innovation, and TechCongress build government technical capacity
  • Information sharing: Civic AI Security Project and California's SB 53 for public disclosure requirements

Funding Landscape

Funds

  • The Foundation Layer Fund (managed by Tyler John): $70M+ across 100+ grants; five intervention areas
  • AI Safety Tactical Opportunities Fund (AISTOF) (managed by JueYan Zhang): $30M+ across 150+ grants; emerging opportunities
  • Longview Frontier AI Fund (managed by Longview Philanthropy): $13M raised, $11.1M disbursed; research, engineering, and advocacy

Advisors

  • Effective Institutions Project (philanthropic advisory): Tyler John's organization; AI safety, geopolitics
  • Longview Philanthropy (philanthropic advisory): $85M+ directed toward AI risk; free advisory services
  • Coefficient Giving (foundation): largest AI safety funder since 2015; $300M+ since 2017

Investment and Political Giving

The guide also covers investment vehicles (Juniper Ventures, Halcyon Ventures, Safe AI Fund, Entrepreneurs First, Seldon Lab) and political giving advisors (AI Policy Network, Americans for Responsible Innovation, Center for AI Safety Action Fund, Public First).

Key Claims and Statistics

All of the following claims are sourced to the Foundation Layer site itself:

  • $500M+ in AI safety philanthropy has shaped legislation and governance
  • AI companies invested $100M+ in a super PAC to oppose regulation
  • Nearly all White House AI positions are held by advisors from Palantir, Scale AI, A16Z, and Mithril VC
  • The CAIS Action Fund spent $270K on federal lobbying in 2024
  • Seldon Lab portfolio companies raised $10M+ and sold services to xAI and Anthropic
  • The UK AISI received 832 applications requesting $320M but had only $10M available

Criticism

Limitations

  • Single-author perspective: The guide represents the views of one advisor, not an institutional publication or peer-reviewed analysis.
  • Not yet indexed or peer-reviewed: As of February 2026, the site is not indexed by search engines and has no third-party reviews or commentary.
  • Verification challenges: Many statistics cited lack primary source links, making independent verification difficult.
  • Potential conflicts of interest: Tyler John manages The Foundation Layer Fund and advises at EIP, which could create incentives to frame the funding landscape in ways that favor his advisory services.

Key Uncertainties

  • Whether The Foundation Layer Fund's $70M+ figure represents grants facilitated, recommended, or directly disbursed
  • The degree of overlap between the Foundation Layer Fund and Longview Philanthropy's grantmaking (given Tyler John's prior role at Longview)
  • Whether the guide's AGI timeline estimates and risk framing are representative of expert consensus or reflect a particular perspective within the AI safety community

References

foundation-layer.ai: "The Foundation Layer." A comprehensive philanthropic guide by Tyler John (Effective Institutions Project) aimed at persuading major donors to fund AI safety. Covers AGI timelines, existential risks (loss of control, malicious use, power concentration), and proposes a five-pillar philanthropic strategy: alignment science, nonproliferation, defensive technology, power distribution, and talent mobilization. Includes a getting-started guide for donors with specific funds and advisors.


Related Pages

Analysis: AI Safety Intervention Effectiveness Matrix
Organizations: Blueprint Biosecurity
Concepts: Funders Overview