Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety – Future of Life Institute
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Future of Life Institute
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety, run by the Future of Life Institute, funds researchers working on technical AI safety with an explicit focus on existential risk reduction, and includes notable conflict-of-interest provisions regarding employment at major AI labs.
Metadata
Summary
The Future of Life Institute offers postdoctoral fellowships funded by Vitalik Buterin to support researchers working on AI existential safety, providing $80,000 annual stipends plus research funds. Fellows must acknowledge FLI's assessment that moving to work for a major AGI-racing company (Anthropic, Google DeepMind, Meta, OpenAI, xAI), even on a safety team, is a net negative for humanity, and agree to donate half their compensation if they take such a job within two years of completing the fellowship. The fellowship defines AI existential safety research broadly, covering interpretability, alignment, formal verification, and cybersecurity of advanced AI systems.
Key Points
- Provides an $80,000 annual stipend plus a $10,000 research fund for postdocs at US, UK, and Canadian universities working on AI existential safety.
- Includes a strong conflict-of-interest clause: fellows who join a major AGI lab within 2 years of completing the fellowship must donate 50% of their gross compensation to charity.
- FLI explicitly names Anthropic, Google DeepMind, Meta, OpenAI, and xAI as companies racing to build AGI/ASI without pushing for strong binding regulation; it assesses that taking a job at any of them, even on a safety team, is a net negative for humanity.
- Defines AI existential safety research to include interpretability, alignment, formal verification, and cybersecurity relevant to catastrophic risk reduction.
- Run in partnership with the Beneficial AI Foundation (BAIF); past fellows include researchers from MIT, UC Berkeley CHAI, and Oxford.
Cached Content Preview
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety is designed to support promising researchers for postdoctoral appointments who plan to work on AI existential safety research.
Status: Closed for submissions
Deadline: 5 January 2026
Fellows receive:
An annual $80,000 stipend at universities in the US, UK and Canada.
A $10,000 fund that can be used for research-related expenses such as travel and computing.
Invitations to virtual and in-person events where they will be able to interact with other researchers in the field.
See below for a definition of 'AI Existential Safety research' and additional eligibility criteria.
Questions about the fellowship or application process not answered on this page should be directed to grants@futureoflife.org
The Vitalik Buterin Fellowships in AI Existential Safety are run in partnership with the Beneficial AI Foundation (BAIF).
FLI offers Buterin Fellowships in pursuit of a vibrant AI existential safety research community free from financial conflicts of interest.
Anyone awarded a fellowship will need to confirm the following: "I am aware of FLI’s assessment that moving from a Buterin Fellowship to working (even on a safety team) for a company that is
a) racing to build AGI/ASI, and
b) not pushing for strong binding AI regulation
is a net negative for humanity. I therefore agree that, if I accept a Buterin Fellowship and take a job at any such company (including Anthropic, Google DeepMind, Meta, OpenAI, or xAI) within 2 years of completing my Buterin Fellowship, I will donate half of my gross compensation each month to a charity mutually agreeable to me and FLI, including half of any stock options or bonuses."
Grant winners
People who have been awarded grants within this grant program:
Ekdeep Singh Lubana (Class of 2024)
Nandi Schoots, Oxford University (Class of 2024)
Dr. Peter S. Park, Massachusetts Institute of Technology (Class of 2023)
Nisan Stiennon, UC Berkeley - Center for Human-Compatible AI (CHAI) (Class of 2022)
Results
No results to show yet.
Request for Proposal
AI Existential Safety Research Definition
FLI defines AI existential safety research as:
Research that analyzes the most probable ways in which AI technology could cause an existential catastrophe (that is: a catastrophe that permanently and drastically curtails humanity's potential, such as by causing human extinction), and which types of research could minimize existential risk (the risk of such catastrophes). Examples include:
Outlining a set of technical problems and arguments that their solutions would reduce existential risk from AI, or arguing that existing such sets are misguided.
Concretely specifying properties of AI systems that significantly increase or decrease their probability of causing an existential catastrophe, and providing ways to measure such properties.
... (truncated, 7 KB total)