Longterm Wiki

Long-Term Future Fund Grant Rounds

Type: Grant Round
Status: Open
Funder Organization / Division: Long-Term Future Fund
Source:
Description: Recurring grant rounds supporting organizations and individuals working on reducing existential risks, especially from advanced AI.
Notes: Multiple rounds per year; managed by a committee of fund managers.

Grants Awarded

545 grants, totaling $25M
| Grant | Recipient | Amount | Date |
| --- | --- | --- | --- |
| Funding to start a longtermist org and support research | Transformative Futures Foresight Institute | $495K | Oct 2022 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | | $489K | Jul 2018 |
| 2 years of GovAI salary and overheads for Robert Trager | | $402K | Jul 2022 |
| Funding for Computer Science PhD | David Reber | $349K | Jan 2022 |
| 8-week scholars program to pair promising alignment researchers with renowned mentors | AI Safety Support | $316K | Oct 2022 |
| A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research | Principles of Intelligent Behavior in Biological and Social Systems | $305K | Jan 2022 |
| Funding towards a 2-year postdoctoral stint to work on safety in AI, with a focus on developing value-aligned systems | Kush Bhatia | $275K | Jan 2022 |
| 6 months of funding (salaries & ops costs) for AI safety talent incubation through research sprints and fellowships | Ashgro Inc. (fiscal sponsor of Apart) | $273K | Oct 2023 |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | Kristaps Zilgalvis | $250K | Oct 2021 |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | Berkeley Existential Risk Initiative | $250K | Oct 2021 |
| PhD in Computer Science working on AI safety | Amon Elders | $250K | Jan 2021 |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | Jessica Cooper | $234K | Jan 2022 |
| 2-year funding to run public and expert surveys on AI governance and forecasting | Noemi Dreksler | $232K | Oct 2021 |
| Year-long salary for shard theory and RL mechanistic interpretability research | Alexander Turner | $220K | Jan 2023 |
| Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models | John Burden | $210K | Oct 2022 |
| Support to build a forecasting platform based on user-created play-money prediction markets | Stephen Grugett, James Grugett, Austin Chen | $200K | Jan 2022 |
| Persuasion Tournament for Existential Risk | Philip Tetlock, Ezra Karger, Pavel Atanasov | $200K | Jul 2021 |
| Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge | David Krueger | $200K | Jan 2021 |
| 1-year stipend to continue research on agency, focused on natural abstraction | John Wentworth | $200K | Jul 2023 |
| One year of seed funding for a new AI interpretability research organisation | Jessica Rumbelow | $195K | Jan 2023 |
| Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency | Effective Altruism Geneva | $180K | Jul 2022 |
| 1-year salary for research in applications of natural abstraction | John Wentworth | $180K | Oct 2022 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | Center for Applied Rationality | $174K | Jul 2018 |
| European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers | Effective Altruism Geneva | $170K | Jan 2022 |
| 6-month salary for 4 people to continue their SERI MATS project on expanding the "Discovering Latent Knowledge" paper | Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos | $167K | Jan 2023 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | Centre for Effective Altruism | $163K | Jul 2018 |
| Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants | Epistea, z.s. | $160K | Jan 2024 |
| 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks | Alex Cloud, Jacob Goldman Wetzler, Evžen Wybitul, Joseph Miller | $160K | Jul 2024 |
| PhD at Cambridge | Richard Ngo | $150K | Jul 2020 |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | Nick Hay | $150K | Oct 2021 |
| Unrestricted donation | Center for Applied Rationality | $150K | Apr 2019 |
| 1-year stipend (plus travel and equipment expenses) for work on 2 AI safety projects: 1) penalising neural networks for learning polysemantic neurons; 2) crowdsourcing alignment research from volunteers | Darryl Wright | $150K | Jul 2022 |
| One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work | Macrostrategy Research Initiative | $150K | Jan 2025 |
| 6-month salary to improve the US regulatory environment for prediction markets | Solomon Sia | $138K | Jul 2022 |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization | Legal Priorities Project | $135K | Jan 2021 |
| Help InterACT when university systems cannot, supporting InterACT's work enabling human-compatible robots and AI agents | Berkeley Existential Risk Initiative | $135K | Jan 2022 |
| Increased stipends for living-expense coverage and a higher travel allowance for students of CHERI's 2022 summer residency | Effective Altruism Geneva | $135K | Jul 2022 |
| Twelve-month salary to work as a global rationality organizer | Skyler Crossman | $130K | Oct 2022 |
| 8-month salary for three people to investigate the origins of modularity in neural networks | Lucius Bushnaq, Callum McDougall, Avery Griffin | $125K | Jul 2022 |
| A private online platform for research-sharing amongst the AI governance community | The AI Governance Archive (TAIGA) | $125K | Jul 2024 |
| 6-month incubation program for technical AI safety research organizations | Catalyze Impact | $123K | Oct 2023 |
| Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety | Caroline Jeanmaire | $122K | Jan 2021 |
| 1-year stipend to make videos and podcasts about AI safety/alignment, and build a community to help new people get involved | Robert Miles | $122K | Jul 2023 |
| 12-month salary to continue developing a research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities | Nicholas Kees Dupuis | $120K | Jan 2023 |
| A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech | Geneva Centre for Security Policy | $120K | Apr 2024 |
| General support for Alexander Turner and team's research project: writing new motivations into a policy network by understanding and controlling its internal decision-influences | Alexander Turner | $115K | Jan 2023 |
| Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants | Epistea, z.s. | $115K | Apr 2025 |
| Experimentally testing generative AI's ability to persuade humans about hazardous topics | Thomas Costello | $115K | Jan 2024 |
| Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 | Czech Association for Effective Altruism (CZEA) | $110K | Jul 2022 |
| 1 year of funding for PIBBSS (incl. several programs, e.g. the 2024 fellowship, affiliate program, reading group) | Nora Ammann | $103K | Oct 2023 |
| 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods | Lennart Stern | $102K | Jan 2022 |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to the mass public | Expii, Inc. | $100K | Jan 2021 |
| Supporting Vanessa with her AI alignment research | Vanessa Kosoy | $100K | Oct 2020 |
| Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD | Aryeh Englander | $100K | Oct 2021 |
| Funding to host additional fellows for the PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) | Nora Ammann | $100K | Jan 2023 |
| Develop and market a video game to explain the Stop Button Problem to the public & STEM individuals | Lone Pine Games, LLC | $100K | Jul 2022 |
| 2-year salary for work on the learning-theoretic AI alignment research agenda | Vanessa Kosoy | $100K | Jan 2023 |
| One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS | David Udell | $100K | Oct 2022 |
| 1.5-year stipend for thorough investigation and analysis of AI lab scaling policies | Aysja Johnson | $100K | Jan 2025 |
| Salary top-up for Timaeus' employees & contractors | Timaeus (fiscally sponsored by Ashgro, Inc.) | $100K | Jan 2024 |
| 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem | Alignment Ecosystem Development | $99K | Oct 2023 |
| 6-month salary & operational expenses to start a cybersecurity & alignment risk assessment org | Jeffrey Ladish | $98K | Jan 2023 |
| 12-month salary to work on alignment research | Garrett Baker | $96K | Oct 2022 |
| 1-year salary for upskilling in technical AI alignment research | Chu Chen | $96K | Oct 2022 |
| 6-month salary for independent alignment research in interpretability or control | Thomas Kwa | $95K | Jul 2023 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | 80,000 Hours | $91K | Jul 2018 |
| Support for an AI-safety-related CS PhD thesis on enabling AI agents to accurately report their actions | Columbia University | $90K | Jan 2022 |
| 1-year salary to research a new alignment strategy to analyze and enhance collective human intelligence in 7 pilot studies | Shoshannah Tekofsky | $90K | Jan 2023 |
| Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base | Bryce Meyer | $90K | Apr 2024 |
| 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada | Wyatt Tessari | $87K | Jul 2022 |
| Compute costs for experiments to evaluate different scalable oversight protocols | Lewis Hammond | $87K | Jan 2024 |
| ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of its AI Interpretability Fellowship in Manila | Brian Tan | $86K | Apr 2024 |
| PhD in machine learning with a focus on AI alignment | Dmitrii Krasheninnikov | $86K | Jul 2021 |
| For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit | Remmelt Ellen | $85K | Jan 2021 |
| 12-month salary to work on ML models for detecting genetic engineering in pathogens | Jade Zaslavsky | $85K | Oct 2022 |
| 6-month career transition and independent research in AI safety and risk mitigation | Jose Groh | $85K | Jul 2024 |
| Support to make videos and podcasts about AI safety/alignment, and build a community to help new people get involved | Robert Miles | $82K | Jan 2022 |
| 12-month salary to research AI alignment, with a focus on technical approaches to value lock-in and minimal paternalism | Samuel Brown | $81K | 2022 |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | Alex Flint | $80K | Jul 2021 |
| Building understanding of the structure of risks from AI to inform prioritization | David Manheim | $80K | Oct 2021 |
| Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety | AI Safety Support | $80K | Jan 2022 |
| One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields | Logan Strohl | $80K | Jan 2021 |
| Support to further develop a branch of rationality focused on patient and direct observation | Logan Strohl | $80K | Jul 2022 |
| Developing weight-based decomposition methods for interpretability (MATS extension); 6-month stipend for 2 people | Michael Pearce, Alice Riggs, Thomas Dooms | $80K | Jul 2024 |
| 1-year stipend to make accessible-yet-rigorous explainers on AI alignment/security, in the form of games/videos/articles | Nicky Case | $80K | Jan 2025 |
| 3-month stipend for a MATS extension establishing a benchmark for LLMs' tendency to influence human preferences | Constantin Weisser | $80K | Jul 2024 |
| One-year stipend and compute budget for full-time technical AI alignment research | David Udell | $80K | Jul 2023 |
| Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment | Claire Short | $80K | Jul 2024 |
| 12-month stipend and expenses for research in AI safety (unlearning; modularity; probing long-term behaviour) | Nicky Pochinkov | $80K | Apr 2024 |
| 1-year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality | Logan Strohl | $80K | Apr 2023 |
| Support for Jay Bailey for work in ML for AI safety | Jay Bailey | $79K | Jul 2022 |
| 6-month stipend to work on a research project on AI liability insurance as an additional lever for AI safety | Aishwarya Saxena | $78K | Apr 2024 |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | University of Oxford, Department of Experimental Psychology | $78K | Oct 2021 |
| Supporting Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI safety movement building in Australasia | AI Safety Australia and New Zealand | $77K | Jan 2024 |
| Research on the links between short- and long-term AI policy while skilling up in technical ML | Jess Whittlestone | $75K | Jul 2019 |
| 11-month stipend for 1.5 FTE and funding for other costs for TUTKE, an AI safety field-building organization in Finland | Santeri Tani | $73K | Jul 2024 |
| A 2-day workshop to connect alignment researchers from the US and UK with AI researchers and entrepreneurs from Japan | | $73K | 2022 |
| 6-month salary for two people to find formalisms for modularity in neural networks | Lucius Bushnaq | $73K | 2022 |
| Covering participant stipends for AI Safety Camp Virtual 2023 | Remmelt Ellen | $73K | 2022 |
| A research & networking retreat for winners of the Eliciting Latent Knowledge contest | | $72K | Oct 2022 |
| Open online course on "The Economics of AI" for Anton Korinek | University of Virginia | $72K | Jan 2021 |
| Building infrastructure for the future of effective forecasting efforts | Ozzie Gooen | $70K | Apr 2019 |
| Funding 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message-testing research | Rethink Priorities | $70K | Jan 2021 |
| A major expansion of the Metaculus prediction platform and its community | Anthony Aguirre | $70K | Apr 2019 |
| Research on EA and longtermism | Aaron Bergman | $70K | Jul 2022 |
| 12-month salary and compute expenses to do AI safety research with LLMs | Nicky Pochinkov | $70K | Jan 2023 |
| 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing | Gergely Szucs | $70K | Jul 2022 |
| 6 months of study for Nathaniel Monson to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity | Nathaniel Monson | $70K | Apr 2023 |
| 4-month stipend: research on agent scaling laws (relationships between training compute and the agent capabilities of LLMs) | Axel Højmark | $70K | Jul 2024 |
| 1-year stipend for independent research primarily on high-level interpretability | Arun Jose | $70K | Apr 2024 |
| 6-month salary for independent work centered on distillation and coordination in the AI governance & strategy space | Alexander Lintz | $70K | 2022 |
| EU Tech Policy Fellowship with ~10 trainees | Training for Good | $69K | Jul 2022 |
| Support to conduct a research project collaboration on compute governance | Lennart Heim | $68K | Jan 2022 |
| 4-month salary for two people to find formalisms for modularity in neural networks | Lucius Bushnaq | $67K | Jan 2023 |
| 4-month stipend for a MATS extension on a mechanistic interpretability benchmark + 2-month stipend for a career switch | Iván Arcuschin Moreno | $67K | Jan 2024 |
| Increasing the usefulness and availability of Metaculus, a fully functional quantitative forecasting/prediction platform with >170,000 predictions and >1,500 questions to date | Anthony Aguirre | $65K | Jan 2020 |
| Funding the last year of my PhD on embedded agency, to free up my time from teaching | Daniel Herrmann | $64K | Oct 2022 |
| 1 year of tuition fees and living expenses for a CS PhD at the University of Oxford; accelerating alignment research by building alignment research tools using expert-iteration-based amplification from human-AI collaboration | Hunar Batra | $63K | Jan 2023 |
| One year part-time spent on AI safety upskilling and concrete research projects | Ross Nordby | $63K | Oct 2022 |
| Meta-level adversarial evaluation of debate (a scalable oversight technique) on simple math problems (MATS 5.0 project) | Yoav Tzfati | $62K | Jan 2024 |
| 9-month part-time salary for Magdalena Wache to self-study AI safety and test fit for theoretical research | Magdalena Wache | $62K | Oct 2022 |
| 4-month salary and office for a MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for a career switch | Niels uit de Bos | $62K | Jan 2024 |
| WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on mechanistic interpretability | Brian Tan | $61K | Jul 2023 |
| Six-month study grant to speed up a career pivot into AI safety and alignment research, with specific deliverables | Philip Quirke | $61K | Oct 2023 |
| Creating AI safety videos, and offering communication and media support to AI safety orgs | Robert Miles | $60K | Jul 2020 |
| 1-year salary for Adam Shimi to conduct independent research in AI alignment | Adam Shimi | $60K | Jan 2021 |
| BERI support for SERI when university systems are unable to help | Berkeley Existential Risk Initiative | $60K | Jan 2021 |
| Paid internships for promising Oxford students to try out supervised AI safety research projects | AI Safety Hub Ltd | $60K | Jul 2022 |
| 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk | Ross Graham | $60K | Jul 2022 |
| Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research | Brian Porter | $60K | Oct 2022 |
| Developing and maintaining projects/resources used by the EA and rationality communities | Said Achmiz | $60K | Jan 2023 |
| 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law | Thomas Kwa | $60K | Apr 2024 |
| 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models | Rauno Arike, Elizabeth Donoway | $60K | Jul 2024 |
| 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem | Alex Altair | $60K | Jan 2024 |
| 4-6 month salary to do circuit-based mechanistic interpretability on Mamba, as part of the MATS extension program | Danielle Ensign | $60K | Jan 2024 |
| 4-month stipend to apply mechanistic interpretability to a real-world application, hallucinations | Javier Ferrando Monsonís and Oscar Balcells Obeso | $60K | Jul 2024 |
| 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation | Theodore Chapman | $56K | Jan 2024 |
| Creating a value-learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | | $55K | Jan 2020 |
| Testing how the accuracy of impact forecasting varies with the timeframe of prediction | David Rhys Bernard | $55K | Oct 2020 |
| 12-month salary to provide runway after finishing RSP | The Future of Humanity Institute | $55K | Jan 2021 |
| Exploring debate as a tool to verify the output of agents with more domain knowledge than their human counterparts | Akbir Khan | $55K | Apr 2023 |
| Research on how much language models can infer about their current user, and interpretability work on such inferences | Egg Syntax (legal name: Jesse Davis) | $55K | Jan 2024 |
| 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception | Sara Price | $55K | Jan 2024 |
| Funding for additional fellows for the AISafety.info Distillation Fellowship, improving its single point of access to AI safety | Robert Miles | $55K | Jan 2023 |
| 6-month salary to upskill for AI safety | Daniel O'Connell | $54K | 2022 |
| 6 months of funding for a MATS 5.0 extension, with projects on latent adversarial training and persona explainability | Aengus Lynch | $52K | Jan 2024 |
| 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors | Andrew Gritsevskiy | $50K | Jan 2024 |
| 1-year stipend and compute for a research project on AI safety via debate in the context of LLMs | Paul Bricman | $50K | 2022 |
| 12-month salary for researching value learning | Charlie Steiner | $50K | Jan 2022 |
| Buy-out of teaching assistant duties for the remaining two years of my PhD program | Michael Zlatin | $50K | Jan 2022 |
| Unleashing the problem-solving potential of democracy with a simple electoral reform, approval voting | The Center for Election Science | $50K | Oct 2021 |
| Unrestricted donation | Elicit (AI research tool) | $50K | Apr 2019 |
| An offline community hub for rationalists and EAs | Vyacheslav Matyuhin | $50K | Apr 2019 |
| 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment | Thomas Woodside | $50K | Jan 2022 |
| Conducting postdoctoral research at Harvard on the psychology of EA/longtermism | Lucius Caviola | $50K | Apr 2019 |
| Unrestricted donation | | $50K | Apr 2019 |
| Compensation for a non-fiction book on the threat of AGI for a general audience | Darren McKee | $50K | Jul 2022 |
| Support for Marius Hobbhahn to pilot a program that approaches and nudges promising people to get into AI safety faster | Marius Hobbhahn | $50K | Jul 2022 |
| 6-month salary for a SERI MATS scholar to continue theoretical AI alignment research, aiming to better understand how ML models work in order to reduce x-risk from future AGI | Nicky Pochinkov | $50K | Oct 2022 |
| 6-month salary to conduct AI alignment research on circuits in decision transformers | Joseph Bloom | $50K | 2022 |
| One year of funding to improve an established community hub for EA in London | Newspeak House | $50K | Jul 2022 |
| 6 months' salary to upskill on technical AI safety through project work and studying | Rusheb Shah | $50K | Jan 2023 |
| 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's decision transformer interpretability project | Jay Bailey | $50K | Jan 2023 |
| Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing | University of Massachusetts Amherst | $50K | Jan 2024 |
| 6-month part-time stipend to launch a new science journalism outlet focused on AI safety | Mordechai Rorvig | $50K | Jan 2025 |
| Mentored independent research and upskilling to transition from a theoretical physics PhD to AI safety | Einar Urdshals | $50K | Jul 2024 |
| Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI | Cole Wyeth | $50K | Apr 2023 |
| 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers | Bryce Meyer | $50K | Apr 2023 |
| Cataloging the history of U.S. high-consequence pathogen regulations, evaluating their performance, and charting a way forward | Michael Parker | $50K | Jan 2024 |
| 3-6 month stipend for a first full year as a research professor of CS at UT Austin, researching technical AI alignment | The University of Texas at Austin | $50K | Apr 2024 |
| 8-month stipend during a job transition, to finish current projects (AI Goodharting, cooperative AI) and find a suitable next topic | Vojtěch Kovařík | $49K | Jul 2024 |
| 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project | Bilal Chughtai | $48K | Jan 2023 |
| 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation | Samuel Brown | $47K | Oct 2022 |
| 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others | Max Kaufmann | $45K | 2022 |
| 12-month support for independent AI alignment research | Aryeh Brill | $45K | Apr 2024 |
| $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research | Yuxiao Li | $45K | Apr 2023 |
| Producing 18 episodes of AXRP, the AI X-risk Research Podcast, to increase in-depth understanding of potential risks from artificial intelligence | Daniel Filan | $45K | Apr 2024 |
| Researching methods to continuously monitor and analyse artificial agents for the purpose of control | Lee Sharkey | $45K | Oct 2020 |
| Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities | Adam Rutkowski | $44K | Oct 2022 |
| Research to enable a transition to AI safety | Vojtěch Kovařík | $43K | Oct 2019 |
| Replacement salary for teaching during an economics PhD, freeing time to conduct research into forecasting and pandemics | Joel Becker | $42K | Jan 2021 |
| 6-12 months of funding to continue working on model psychology and evaluation | P.H.I | $42K | Jul 2023 |
| 6-month 1 FTE funding to train multi-objective RLAIF models and compare their safety performance to standard RLAIF | Marcus Williams | $42K | Oct 2023 |
| Independent research on forecasting and optimal paths to improve the long term (LTF Fund) | | $41K | Oct 2020 |
| Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) | Lucy Farnik | $41K | Jan 2024 |
| 6-month salary to work on mechanistic interpretability research with mentorship from Prof. David Bau | Bilal Chughtai | $41K | Jul 2023 |
| 6-month salary to: 1) carry out independent research into risks from nuclear weapons; 2) upskill in AI strategy | Will Aldred | $40K | Oct 2022 |
| Working on long-term macrostrategy and AI alignment, and upskilling and career transition towards that goal | Tushant Jha | $40K | Jan 2020 |
| Conducting independent research into AI forecasting and strategy questions | Tegan McCaslin | $40K | Oct 2019 |
| Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers | Shahar Avin | $40K | Jan 2019 |
| Support to create language model (LM) tools to aid alignment research through feedback and content generation | Logan Smith | $40K | Jan 2022 |
| Subsidized therapy/coaching/mediation for rationalists, EAs, and startups working on things like x-risks | Damon Pourtahmaseb-Sasi | $40K | Oct 2019 |
| 6-month salary to interpret neurons in language models & build tools to accelerate this process, with the aim of understanding all features and circuits in a model and using this understanding to predict out-of-distribution performance in high-stakes situations | Logan Smith | $40K | Jan 2023 |
| Cataloging the history of U.S. high-consequence pathogen regulations, evaluating their performance, and charting a way forward | Michael Parker | $40K | 2022 |
| 6-month salary to work on research started during SERI MATS, solving alignment problems in model-based RL | Jeremy Gillen | $40K | Oct 2022 |
| 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules | Logan Smith | $40K | Oct 2022 |
| 6-month stipend for transitioning to independent research on AI safety | Glauber De Bona | $40K | Apr 2024 |
| 6-month stipend to work on an ML safety project, with the aim of joining an ML safety team full-time afterwards | Joe Kwon | $40K | Jan 2024 |
| 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection | Andis Draguns | $40K | Jul 2024 |
| Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment | Adelin Kassler | $40K | Jul 2024 |
| Part-time salary for independent AI safety research | Ross Nordby | $40K | Jul 2023 |
| 6-month salary for further pursuing sparse autoencoders for automatic feature finding | Logan Smith | $40K | Jul 2023 |
| MentaLeap: cybersecurity experts, AI researchers, and neuroscientists collaborating to reverse-engineer neural networks | MentaLeap | $40K | Jul 2023 |
| 6-month stipend to work on technical alignment research as part of the MATS 5.0 extension program | Cindy Wu | $40K | Jan 2024 |
| 4-month stipend to research the mechanisms of refusal in chat LLMs | Oscar Balcells Obeso | $40K | Jan 2024 |
| Developing proposals for off-switch designs for AI, including policy games, rigorously evaluated for effectiveness, technical feasibility and political viability | David Abecassis | $40K | Jul 2024 |
| 6-month stipend to continue independent interpretability research | Sviatoslav Chalnev | $40K | Jan 2024 |
| Conference publication of interpretability and LM-steering results | Alexander Turner | $40K | Apr 2023 |
| 6-month salary to build experience in AI interpretability research before PhD applications | Zach Furman | $40K | Apr 2023 |
| 6-month stipend for sparse autoencoder mechanistic interpretability projects | Logan Smith | $40K | Jan 2024 |
| 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent-space intervention | Eric Easley | $40K | Jul 2024 |
| 6-month stipend for SAE-circuits work | Logan Smith | $40K | Jul 2024 |
| Producing video content on AI alignment | Robert Miles | $39K | Apr 2019 |
| 6-month stipend for conducting AI safety research during the MATS 5.0 extension program and beyond | Felix Hofstätter | $39K | Jan 2024 |
| Funding for a one-year machine learning and computational statistics master's at UCL | Shavindra Jayasekera | $38K | Oct 2022 |
| 50% of a 9-month salary for a bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data | BugSeq Bioinformatics Inc. | $38K | Jan 2021 |
| 6-month stipend for an unpaid internship focused on using theory/interpretability to increase the safety of AI systems | Lukas Fluri | $37K | Jan 2024 |
| 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers | Keith Wynroe | $37K | Jul 2024 |
| Additional funding for an AI strategy PhD at Oxford / FHI | Sören Mindermann | $37K | Jul 2019 |
| 6 months of funding for supervised research on the probability of humanity becoming interstellar given a non-existential catastrophe | Sasha Cooper | $36K | Jul 2022 |
| A megaproject proposal: building a longtermist industrial conglomerate aligned via a reputation-based economy | Alexander Mann | $36K | Jul 2023 |
| 6-month stipend for evaluating the robustness of AI agents' safety guardrails and running an AI spear-phishing study | Simon Lermen | $36K | Apr 2024 |
| Funding for 3 months of independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights | Simon Skade | $36K | Oct 2022 |
| 6-month SERI MATS London extension phase for continuing and scaling up the sparse coding project | Hoagy Cunningham | $35K | Jul 2023 |
| Investigating humans' lack of robust task alignment in amplification, and the implications for acceptability predicates | Joe Collman | $35K | Jul 2021 |
| 6-month salary to develop tools to test the natural abstractions hypothesis | John Wentworth | $35K | Jan 2021 |
| ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building | Charlie Griffin | $35K | 2022 |
| 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research | Sviatoslav Chalnev | $35K | Apr 2023 |
| 6-month salary to verify neural networks scalably for RL and produce a human-to-superhuman scalable oversight benchmark | Roman Soletskyi | $35K | Jan 2024 |
| Cataloging the history of U.S. high-consequence pathogen regulations, evaluating their performance, and charting a way forward | Michael Parker | $35K | Jan 2022 |
| 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition | Arjun Panickssery | $34K | Jan 2024 |
| Summer research program on global catastrophic risks for Swiss (under)graduate students | Effective Altruism Geneva | $34K | Jan 2021 |
| 2 months of costs for setting up a research company in AI alignment, including buying out the time of the two co-founders | Aligned AI | $34K | Jan 2022 |
| 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects | José Jaime Villalobos Ruiz | $34K | Apr 2024 |
| Conducting independent research on cause prioritization | Michael Dickens | $33K | Jan 2020 |
| Undergrad teaching buyout to teach AI safety in Hong Kong's new MA program on AI; China-West AI safety workshop | Nathaniel Sharadin | $33K | Jul 2023 |
| Organising immersive workshops on meta-skills and x-risk for STEM students at top universities | Tamara Borine | $33K | Oct 2020 |
| 6-month salary to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper | Jonathan Ng | $33K | Jan 2023 |
| Identifying and resolving tensions between competition law and long-term AI strategy | Shin-Shin Hua and Haydn Belfield | $32K | Jan 2020 |
| 4-month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis, to work on cyborgism | Quentin Feuillade-Montixi | $32K | Jan 2023 |
| Summer stipends plus a research budget for Josh Clymer and collaborators to execute technical safety standards projects | Dioptra (informal research group working on evals) | $32K | Jan 2024 |
| Funding for a 6-month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel | Coleman Snell | $32K | Apr 2024 |
| 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs | Morgan Simpson | $32K | Apr 2023 |
| Research on AI safety | Marius Hobbhahn | $30K | Jan 2022 |
| 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program | Teun van der Weij | $30K | Jan 2024 |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral | Gavin Taylor | $30K | Jul 2020 |
| A study of safe exploration and robustness to distributional shift in biological complex systems | Nikhil Kunapuli | $30K | Apr 2019 |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | Alex Turner | $30K | Apr 2019 |
| Building a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | John Wentworth | $30K | Oct 2019 |
| Strategic research and studying programming | Eli Tyre | $30K | Apr 2019 |
| Identifying white-space opportunities for technical projects to improve biosecurity and pandemic preparedness | Kyle Fish | $30K | Oct 2019 |
| Research on formalizing the side-effect avoidance problem | Alex Turner | $30K | Jan 2020 |
| Funding for full-time, independent research on agent foundations | Daniel Demski | $30K | Oct 2019 |
| Conducting independent research into AI forecasting and strategy questions | Tegan McCaslin | $30K | Apr 2019 |
| Formalizing perceptual complexity with application to safe intelligence amplification | Anand Srinivasan | $30K | Apr 2019 |
| Multi-model approach to corporate and state actors relevant to existential risk mitigation | David Manheim | $30K | Jul 2019 |
| A research agenda rigorously connecting the internal and external views of value synthesis | David Girardo | $30K | Apr 2019 |
| 6 months of independent alignment research and upskilling | Zhengbo Xiang (Alana) | $30K | 2022 |
| 4-month salary for conceptual/theoretical research towards perfect world-model interpretability | Andrey Tumas | $30K | 2022 |
| 6-month salary for independent AI alignment research focused on formal alignment and agent foundations | Tamsin Leake | $30K | 2022 |
| 6-month stipend to work on AI alignment research (automated red-teaming, interpretability) | Alex Infanger | $30K | Apr 2024 |
| 6-month salary to transition to a career in AI safety while working on AI safety projects | Dillon Bowen | $30K | Jan 2024 |
4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension programAaquib Syed$30KJan 2024
Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms)Sahil Kulshrestha$30KApr 2024
4-month salary to continue work on AI Control as a MATS extensionVasil Georgiev$30KJul 2024
4-month stipend to continue work on AI Control as a MATS extensionCody Rushing$30KJul 2024
This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment researchAlexander Turner$30KApr 2023
6-month stipend to work on safe and robust reasoning via mechanistically interpreting representationsSatvik Golechha$30KApr 2024
4-month stipend to continue work on AI Control as a MATS extensionTyler Tracy$30KJul 2024
Retroactive grant to study Goodhart effects on heavy-tailed distributionsThomas Kwa$30KJul 2023
6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictabilityFabian Schimpf$29K2022
8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHIJames Bernardi$28KJul 2021
Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019.Mikhail Yagudin$28KApr 2019
6-month salary for an AISC project and continuing independent mechanistic interpretability projectsChristopher Mathwin$28KApr 2023
6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arrangedThomas Moynihan$28KOct 2021
6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigationTheo Knopfer$28KJul 2022
Grant to cover fees for a master's program in machine learningAndrei Alexandru$28KOct 2021
3-month salary for SERI MATS extension to work on internal concept extractionAnn Kathrin Dombrowski$27KJul 2023
5-month salary plus expenses to support civilizational resilience projects arising from SHELTER WeekendJoel Becker$27KOct 2022
12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agendaJacques Thibodeau$27KApr 2023
Building infrastructure to give existential risk researchers superforecasting ability with minimal overheadJacob Lagerros$27KApr 2019
Retroactive grant for managing the MATS program, 1.0 and 2.0SERI ML Alignment & Theory Scholars$27KOct 2022
Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival.University Of Otago, Wellington, New Zealand$27KJan 2022
9 months support for an in-depth YouTube channel about AI safety and how AI will impact us allDavid Williams King$27KJul 2024
4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategiesKai Fronsdal$27KJul 2024
6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computationDavid Hahnemann, Luan Ademi$26KOct 2022
A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchersTessa Alexanian$26KApr 2019
6-month salary to accelerate my plans of upskilling in order to work on AI safetyKane Nicholson$26K2022
Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer ScienceMax Clarke$26KOct 2022
Research and a report/paper on the role of emergency powers in the governance of X-RiskDaniel Skeffington$26KJul 2022
3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendasSam Marks$26K2022
Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attentionMatthias Dellago$25KApr 2023
6-month salary for an AI alignment research project on the manipulation of humans by AIFelix Hofstätter$25K2022
4-month stipend to continue AI safety projectsHannah Erlebach$25KJan 2024
6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISSAI Safety Support$25KJul 2021
Exploring crucial considerations for decision-making around information hazardsWill Bradshaw$25KJan 2020
Supporting aspiring researchers of AI alignment to boost themselves into productivityJohannes Heidecke$25KApr 2019
Human Progress for Beginners children's bookJason Crawford$25KOct 2019
Developing algorithms, environments and tests for AI safety via debate.Joe Collman$25KJul 2020
Support for alignment theory agenda evaluationJack Ryan$25KJul 2022
Enabling prosaic alignment research with a multi-modal model on natural language and chessPhilipp Bongartz$25KJul 2022
6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordinationChloe Lee$25KJul 2022
12-month salary to transition career into technical alignment researchDan Valentine$25KOct 2022
Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-riskSuzy Shepherd$25KJan 2025
Support for AI alignment outreach in France (video/audio/text/events) & field-buildingJérémy Perret$25KOct 2022
Organize AI x-risk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymakingExistential Risk Observatory$24KOct 2023
Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciaryNick Hollman$24KOct 2020
12-month salary for independent research, upskilling, and finding a stable position in AI-SafetyRobert Kralisch$24KJan 2022
6-month salary for researching “framing computational systems such that we can find meaningful concepts” and upskillingMatthias Georg Mayer$24KOct 2022
3-month salary for SERI-MATS extensionMatt MacDermott$24KJan 2023
6-month salary to turn intuitions like goals, wanting, and abilities into concepts applicable to computational systemsJohannes C. Mayer$24KOct 2022
Make 12 more AXRP episodesDaniel Filan$24K2022
5-month salary to continue work on evaluating agent self-improvement capabilitiesCodruta Lugoj$23KApr 2024
Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial trainingAidan Ewart$23KJan 2024
Productivity coaching for effective altruists to increase their impactLynette Bye$23KJul 2019
To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safetyStag Lynn$23KJul 2019
6-month part-time (20h/week) salary to further develop and refine the feature visualization library LucentTom Lieberum$23KJan 2022
Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022William D'Alessandro$23KJan 2022
3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentorAleksandar Makelov$23KJan 2024
4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform coStanford University$22KJul 2024
Educational Scholarship in AI AlignmentJaeson Booker$22KJan 2022
Pass on funds for Astral Codex Ten Everywhere meetupsSkyler Crossman$22KJan 2023
3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignmentJacques Thibodeau$22KJul 2022
5-month funding to continue upskilling in mechanistic interpretability post-SERI MATs, and to continue open projectsKeith Wynroe$22KJul 2023
Funds for a 6-month project contributing to the clarification of goal-directednessMorgan Rogers$22KJan 2022
One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safetySteve Petersen$21KOct 2022
A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025Caleb Rak$21KOct 2024
Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety researchAlexander Siegenfeld$20KJul 2019
Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John SalvatierConnor Flexman$20KApr 2019
Writing fiction to convey EA and rationality-related topicsMiranda Dixon-Luinenburg$20KJul 2019
12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goalsLauren Lee$20KApr 2019
Support for David Reber: 9.5 months of strategic outsourcing to read up on AI safety and find mentorsDavid Reber$20KOct 2021
6-month salary for research into preventing steganography in interpretable representations using multiple agentsHoagy Cunningham$20KOct 2022
4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fitMax Kaufmann$20K2022
3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobsPeter Ruschhaupt$20K2022
Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detectionNoga Aharony$20KJul 2022
3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel fundingHannah Erlebach$20KApr 2024
Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theoryWilson Wu$20KJul 2024
This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards the solutions that seem most important. Viktor will start working on this project as part of the SERI MATS program.Viktor Rehnberg$19KApr 2023
3-month salary for upskilling in PyTorch and AI safety research.Alex Infanger$19KJan 2023
10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretabilityBenedikt Hoeltgen$19KOct 2021
Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual booksElizabeth Van Nostrand$19KOct 2019
1-month part-time stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evalsSumeet Motwani$19KJan 2024
Surveying the neglectedness of broad-spectrum antiviral developmentJaspreet Pannu (Jassi)$18KOct 2019
7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemicsToby Bonvoisin$18KJan 2021
Three months of blogging and movement building at the intersection of EA/longtermism and progress studiesNicholas (Nick) Whitaker$18KOct 2021
A two-day, career-focused workshop to inform and connect European EAs interested in AI governanceAlex Lintz$18KJan 2019
MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systemsGarrett Baker$18KJan 2024
Slack money for increased productivity in AI Alignment researchAdam Shimi$17KJan 2022
4 month grant to upskill for AI governance work before starting Science and Technology Policy PhDConor McGlynn$17KJul 2022
Writing preliminary content for an encyclopedia of effective altruismPablo Stafforini$17KJan 2020
Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in CanadaWyatt Tessari$17K2022
5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistanceFor Collaborative Work With AI:FAR$17KJan 2025
One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGIGunnar Zarncke$17KOct 2022
Funding to cover a visit to Boston for biosecurity workWill Bradshaw$16KOct 2021
Support to work on AI alignment researchMatt MacDermott$16KJan 2022
7-month stipend for organising AI Alignment Irvine (AIAI)Neil Crawford$16KJul 2024
4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual streamJoshua Reiners$16KJan 2023
2-6 months' stipend to financially cover my self-development in Machine Learning for alignment workJonathan Ng$16KOct 2022
Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing fully specified and verified instructions for recreating one critical technology in a post-disaster scenario.Wikiciv Foundation$16K2022
4-month funding for independent alignment research and studyArun Jose$15KOct 2022
1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension programAbhay Sheshadri$15KJan 2024
Write a SF/F novel based on the EA community.Timothy Underwood$15KJan 2022
Support multiple SPARC project operations during 2021SPARC$15KJan 2021
Financial support for work on a biosecurity research project and workshop, and travel expensesSimon Grimm$15KJan 2022
3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurityCaleb Withers$15KJan 2022
6-month salary for self-study to be more effective at AI alignment researchThomas Kehrenberg$15KJul 2022
2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document-sharing hubMax Räuker$15K2022
4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentialsKurt Brown$15KApr 2023
Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support)Berkeley Existential Risk Initiative (BERI)$15KJan 2017
3-month salary to set up a distillation course helping new AI safety theory researchers to distill papersJonas Hallgren$15KJul 2022
5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayalNikiforos Pittaras$14KJul 2022
6-month salary to skill up and gain experience to start working on AI safety full-timeMateusz Bagiński$14K2022
12-month salary to study and get into AI Safety Research and work on related EA projectsLuca De Leo$14KOct 2022
6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into RussianMaksim Vymenets$13KJan 2022
Educational scholarship in AI safetyPaul Colognese$13KJan 2022
4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer modelsTilman Räuker$13KApr 2024
3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignmentSimon Lermen$13KApr 2023
Neural network interpretability researchNicholas Greig$13KJul 2022
3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganographyMikhail Baranchuk$13KApr 2024
4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systemsAlan Chan$12K2022
4 month salary to upskill in biosecurity and explore possible career paths in biosecurity.Finan Adamson$12KOct 2021
Buying out one year of my academic teaching so that I can spend time on AI alignment research insteadDavid Udell$12KJan 2022
4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research.Benjamin Sturgeon$12KJan 2023
3-month stipend to support research on the state of AI safety in China and implications for AI existential riskAndrew Zeng$12KApr 2024
SERI MATS 3-month extension to study knowledge removal in Language ModelsShashwat Goel$12KJul 2023
4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effectsDavid Rhys Bernard$12KOct 2021
Support funding during 2 years of an AI safety PhD at OxfordOndrej Bajgar$12KJul 2022
Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decadesZach Freitas-Groff$11KJul 2021
Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & softwareGeorge Green$11KOct 2021
Support to work on biosecuritySculpting Evolution Group, MIT$11KJan 2022
Fine-tuning large language models for an interpretability challenge (compute costs)Andrei Alexandru$11K2022
12 week 0.6FT upskilling stipend for technical governance research managementMorgan Simpson$11KApr 2024
Stipends, work hours, and retreat costs for four extra students of CHERI’s summer research programEffective Altruism Geneva$11KJul 2021
Financial support to work part time on an academic project evaluating factors relevant to digital consciousnessDerek Shiller$11KOct 2022
6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AIAlfred Harwood$11K2022
Travel funding for participants in a workshop on the science of consciousness and current and near-term AI systemsRobert Long$11KJan 2023
4-month fund for full time AI safety technical and/or governance researchHarrison Gietz$11KApr 2023
7 months of coworking-space funding continuation, during interpretability research projectDavid Udell$11KJan 2024
$10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestrictedVaidehi Agarwalla$11KApr 2023
$10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability FellowshipBrian Tan$10KApr 2024
4-month stipend for a career transition period to explore roles in AI safety communicationsSarah Hastings-Woodhouse$10KApr 2024
Aiming to implement AI alignment concepts in real-world applicationsElicit (AI Research Tool)$10KOct 2018
Funding for building agents with causal models of the world and using those models for impact minimization.Vincent Luczkow$10KJan 2020
Upskilling in ML in order to be able to do productive AI safety research sooner than otherwiseJoar Skalse$10KJul 2019
Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhDOrpheus Lummis$10KApr 2019
Upskilling investigation of AI Safety via debate and ML trainingJoe Collman$10KOct 2019
AI safety dinnersNeil Crawford$10KJul 2022
Funding to perform human evaluations for evaluating different machine learning methods for aligning language modelsRobert Kirk$10K2022
4-month salary to set up 2 AI safety groups covering 3 universities in Sweden, with an eventual retreatJonas Hallgren$10KOct 2022
Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp.Jan Kirchner$10KJul 2022
Support for research into applied technical AI alignment workPhilippe Rivet$10KJul 2022
Retrospective funding for research retreat on a decision-theory / cause-prioritization topic.Daniel Kokotajlo$10K2022
4-month part-time salary to work on interpretability projects with David Bau and Logan RiggsJannik Brinkmann$10KJul 2024
1 year PhD funding and compute funding to research a novel method for training prosociality into large language modelsScott Viteri$10KApr 2023
Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safetyOrpheus Lummis$10KJan 2024
12-month salary to set up a new org doing research and creating interventions to minimise lock-in riskFormation Research$10KOct 2024
6 month project - pending descriptionKristy Loke$10KApr 2023
6 month AI alignment internship stipend top-upMatt MacDermott$10KApr 2024
Payment for AI researchers when I interview / survey them about their perceptions of safetyVael Gates$10KJan 2022
Understanding the Impact of Lifting Government Interventions against COVID-19 TransmissionMrinank Sharma$10KOct 2020
Retroactive funding for GameBench paperDioptra (Josh Clymer's AIS Research Community)$9KApr 2024
Support to work towards developing an early-warning system for future biological risksMichael McLaren$9KJan 2022
The Alignable Structures workshop in PhiladelphiaQuinn Dougherty$9KOct 2022
7 month salary to study a Graduate Diploma of International Affairs at The Australian National UniversityMatthew MacInnes$9KJan 2023
2-month salary to test suitability for technical AI alignment research and identify a research directionBart Bussmann$9KApr 2023
This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy.Carson Ezell$9KApr 2023
3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo ResearchSienka Dounia$9KJan 2024
Time costs over six months to publish a paper on the interaction of open science practices and bio-riskJames Smith$8KOct 2021
Funding for project transitioning from AI capabilities to AI Safety research.Gerold Csendes$8K2022
Funds to support travel for academic research projects relating to pandemic preparedness and biosecurityCharles Whittaker$8KOct 2022
6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audienceLiam Carroll$8K2022
Funding for salary and living expenses while continuing to develop a framework of optimisation.Alex Altair$8K2022
Scholarship for PhD student working on research related to AI SafetyJosiah Lopez Wild$8K2022
6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum.Amritanshu Prasad$8KApr 2023
Funding for (academic/technical) AI safety community events in LondonFrancis Rhys Ward$8KApr 2023
Supporting 3-month research periodCharlie Rogers Smith$8KJul 2020
Develop a research project on how to infer humans' internal mental models from their behaviour using cognitive science modelingSofia Jativa Vega$8KJan 2020
6-month salary for part-time independent research on LM interpretability for AI alignmentAidan Ewart$8KJul 2023
This grant will support Naoya Okamoto in upskilling in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign.Naoya Okamoto$8KJan 2023
9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’Julian Guidote$7KJul 2024
4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendasCodruta Lugoj$7KApr 2023
Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025$7KApr 2025
Financial support to help productivity and increase time of early career alignment researcherMax Kaufmann$7KJul 2022
4-month stipend for upskilling within the field of economic governance of AIRafael Andersson Lipcsey$7KOct 2023
PhD Stipend Top Up for CHAI PhD Student.Alex Turner$7KJan 2022
Funding to do research on understanding search in transformers at the AI safety camp during 14 weeksGuillaume Corlouer$7KApr 2023
Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicantJeffrey Ohl$7KJul 2022
A relocation grant to help me to move and settle into a PhD program and cover initial expensesEgor Zverev$7KOct 2022
6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursuing collaborative projects on collective decision-making systems.Roman Leventov$7KApr 2023
Funding to support PhD in AI Safety at Imperial College London, technical research and community buildingFrancis Rhys Ward$6KJul 2022
8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AILuise Woehlke$6KApr 2024
6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundationsIván Godoy$6KJan 2023
Funding to increase my impact as an early-career biosecurity researcherLennart Justen$6KOct 2022
Monthly seminar series on Guaranteed Safe AI, from July to December 2024Horizon Events$6KApr 2024
12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher BrassYashvardhan Sharma$6KApr 2024
General support for a forecasting teamSamotsvety Forecasting$6KOct 2023
1 month long literature review on in-context learning and its relevance to AI alignmentAlfie Lamerton$6KJan 2024
3-month part-time salary in order to work on AI governance projects and activitiesArran McCutcheon$6KJul 2023
Two workshops on strategic communications around AI safety, focused on the AI safety communityPhilip Trippenbach$6KJul 2024
Longtermist lessons from COVIDGavin Leech$6KJan 2022
Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety groupDavid Quarel$6K2022
3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AITomislav Kurtovic$6K2022
Funding for the AI Safety Nudge CompetitionAI Safety Nudge Competition$5KOct 2022
6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on basHebrew University$5KApr 2025
Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRsImperial College London$5KJul 2023
Organizing a workshop aimed at highlighting recent successes in the development of verified software.Gopal Sarma$5KJan 2020
Surveying experts on AI risk scenarios and working on other projects related to AI safety.Alexis Carlier$5KJul 2020
3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance"Chelsea Liang$5KOct 2021
Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strikeIsabel Johnson$5KJul 2022
Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeopleChris Patrick$5KJul 2022
One-year funding of Astral Codex Ten meetup in PhiladelphiaWesley Fenza$5KJan 2023
Reconstruction attacks in federated learningUniversity Of Cambridge$5KJul 2022
Support to conduct work in AI safetyBenjamin Anderson$5K2022
3-month stipend for upskilling in AI Safety and potentially transition to a career in AlignmentAmrita A. Nair$5K2022
Spend 3 months (part time) assessing plausible pathways to slowing AIGideon Futerman$5KApr 2024
A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safetyChris Lakin$5KOct 2023
2 Months of living expenses while I try to establish a broad-spectrum antiviral research organizationHayden Peacock$5KJan 2024
9-month programme to help language and cognition scientists repurpose their existing skills for long-termist researchNikola Moore$5KJul 2024
A podcast mainly themed around AI x-risk, aimed at a non-technical audienceSarah Hastings-Woodhouse$5KJan 2024
This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research.Cindy Wu$5KApr 2023
One year funding of ACX meetup in Atlanta GeorgiaACX Atlanta$5KApr 2023
2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fieldsKrzysztof Gwiazda$5KJul 2024
Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governanceMichel Justen$5KOct 2024
Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAOJacob Mendel$5KJan 2023
~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safetyAnson Ho$5KJan 2022
Stanford Artificial Intelligence Professional Program tuitionMario Peng Lee$5KJul 2022
Funding a nordic conference for senior X-risk researchers and junior talents interested in entering the fieldEffektiv Altruism Sverige (EA Sweden)$5KOct 2021
6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI SafetySamuel Nellessen$5KOct 2022
300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics.Leah Pierson$5KOct 2022
Funding for having written AI safety distillation posts on the topic of membranes/boundariesChris Lakin$5KOct 2023
Support Sam's participation in ‘Mid-term AI impacts’ research projectSam Clarke$4KOct 2020
Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS programViktoria Malyasova$4KOct 2022
Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood ResearchEffective Altruism Geneva$4K2022
New laptop for technical AI safety researchPeter Barnett$4KJul 2022
Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition)Fabienne Sandkühler$4KOct 2022
Support to cover the costs of leaving employment in order to pursue AI safety research.Kajetan Janiak$4K2022
Payment for part-time rationality community buildingBoston Astral Codex Ten$4KOct 2022
Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positionsKai Sandbrink$4KJul 2022
Equipment to improve productivity while doing AI Safety researchTim Farrelly$4KJul 2022
Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at OxfordBálint Pataki$4KJul 2022
I am looking for a career transition grant to give me more time for job hunting & networkingAlexander Large$4KJan 2023
Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agricultureALLFED$4KJul 2021
Retroactive funding for running an alignment theory mentorship program with Evan HubingerOliver Zhang$4KJan 2022
Research project on the longevity and decay of universities, philanthropic foundations, and Catholic ordersMaximilian Negele$4KOct 2020
6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulationsRutgers University, Department Of Philosophy$4KJul 2021
Travel Support to BWC RevCon & Side EventsTheo Knopfer$4KOct 2022
Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday ClockConor Barnes$4KJul 2022
4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agentDavid Reber$3KJan 2021
Funding to pay participants to test a forecasting training programLogan McNichols$3KOct 2021
3-month funding for part-time research into US ability to maintain food supply in an extreme pandemicAdin Richards$3KJan 2022
3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge.Benjamin Stewart$3KApr 2023
Research into the international viability of FHI's Windfall ClauseJohn Bridge$3KJul 2022
Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 yearsDavid Staley$3KJan 2023
5-month part time salary for collaborating on a research paper analyzing the implications of compute accessSage Bergerson$3K2022
(professional development grant) New laptop for technical AI safety researchMax Lamparth$3K2022
Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML codingJosiah Lopez Wild$3KJul 2022
A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity SummitHamza Tariq Chaudhry$3KOct 2022
Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurateKunvar Thaman$3KApr 2024
1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignmentMarc Everin Carauleanu$2KJan 2021
Support for living expenses while doing PhD in AI safety - technical research and community building workFrancis Rhys Ward$2K2022
Weekend organised as a part of the co-founder matching process of a group to found a human data collection orgPatrick Gruban$2KOct 2022
Support for self-study in data science and forecasting, to upskill within a GCBR research careerBenjamin Stewart$2KOct 2021
Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc.Jingyi Wang$2KJan 2023
A research paper on the history of philanthropy-driven, national-scale movement-building strategy, to inform how EA funders might build movements for goodRuth Grace Wong$2KJan 2022
Support to work on Aisafety.camp project, impact of human dogmatism on trainingKevin Wang$2KJul 2022
Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance researchRory Gillis$2KJul 2022
Stipend for an MLSS scholar to set up a proper working environment in order to do technical AI researchAntonio Franca$2KOct 2022
This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction.Tristan Williams$2KJan 2024
Compute for experiment about how steganography in large language models might arise as a result of benign optimizationFelix Binder$2KOct 2023
Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate studentSumeet Motwani$2KApr 2024
I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the FallZach Peck$2KOct 2022
Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI SafetyDhruvin Patel$2KJul 2024
6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp.Artem Karpov$2KApr 2023
Funding to attend BWC meeting to discuss transparency with country representatives & work on research projectRiya Sharma$2KJul 2023
Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interfaceFabio Haenel$2KJul 2021
AI safety researchLukas Berglund$2KOct 2022
Travel support to attend the Symposium on AGI Safety in Oxford in MaySmitha Milli$2KJan 2023
Travel support to attend the Biological Weapons Convention in Geneva between 28 Nov and 16 Dec 2022Kadri Reis$2K2022
I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute fundsJoshua Clymer$2KApr 2023
Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on TwitterAlex Turner$1KJan 2022
4-month stipend to study AI Alignment, apply for ML Safety courses, and implement it on RL modelsAbhijit Narayan S$1K2022
3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignmentAmrita A. Nair$1KOct 2022
2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make timeArdysatrio Haroen$745Oct 2022
10-month funding to study ML at university and AIS independentlyPatricio Vercesi$500Jan 2023