| 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian | Maksim Vymenets | — | $13,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian |
| Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal | Tushant Jha | — | $40,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal |
| Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research | Alexander Siegenfeld | — | $20,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research |
| 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged | Thomas Moynihan | — | $27,819 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged |
| 12-month salary for researching value learning | Charlie Steiner | — | $50,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary for researching value learning |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral. | Gavin Taylor | — | $30,000 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral. |
| Support Sam's participation in ‘Mid-term AI impacts’ research project | Sam Clarke | — | $4,455 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support Sam's participation in ‘Mid-term AI impacts’ research project |
| PhD at Cambridge | Richard Ngo | — | $150,000 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] PhD at Cambridge |
| Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field | Effektiv Altruism Sverige (EA Sweden) | — | $4,562 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | Kristaps Zilgalvis | — | $250,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) |
| I would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good. | Ruth Grace Wong | — | $2,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] I would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good. |
| Research on AI safety | Marius Hobbhahn | — | $30,103 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research on AI safety |
| Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software | George Green | — | $11,400 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | Nick Hay | — | $150,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment |
| Buy out of teaching assistant duties for the remaining two years of my PhD program | Michael Zlatin | — | $50,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Buy out of teaching assistant duties for the remaining two years of my PhD program |
| Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | Robert Miles | — | $82,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved |
| Support to work on biosecurity | Sculpting Evolution Group, MIT | — | $11,400 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to work on biosecurity |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | Jessica Cooper | — | $234,121 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to trial a new London organization aiming to 10x the number of AI safety researchers |
| Time costs over six months to publish a paper on the interaction of open science practices and bio-risk | James Smith | — | $8,324 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Time costs over six months to publish a paper on the interaction of open science practices and bio-risk |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | Alex Flint | — | $80,000 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment |
| Producing video content on AI alignment | Robert Miles | — | $39,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Producing video content on AI alignment |
| Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interface | Fabio Haenel | — | $1,571 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interface |
| Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary | Nick Hollman | — | $24,000 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary |
| Open Online Course on “The Economics of AI” for Anton Korinek | University of Virginia | — | $71,500 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Open Online Course on “The Economics of AI” for Anton Korinek |
| Organizing a workshop aimed at highlighting recent successes in the development of verified software. | Gopal Sarma | — | $5,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organizing a workshop aimed at highlighting recent successes in the development of verified software. |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. | Legal Priorities Project | — | $135,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. |
| 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects | David Rhys Bernard | — | $11,700 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects |
| A study of safe exploration and robustness to distributional shift in biological complex systems | Nikhil Kunapuli | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A study of safe exploration and robustness to distributional shift in biological complex systems |
| Conducting independent research into AI forecasting and strategy questions | Tegan McCaslin | — | $40,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conducting independent research into AI forecasting and strategy questions |
| Conducting independent research on cause prioritization | Michael Dickens | — | $33,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conducting independent research on cause prioritization |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | Alex Turner | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility |
| 6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISS | AI Safety Support | — | $25,000 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISS |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | University of Oxford, Department of Experimental Psychology | — | $77,500 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations |
| Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | John Wentworth | — | $30,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop |
| Surveying the neglectedness of broad-spectrum antiviral development | Jaspreet Pannu (Jassi) | — | $18,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Surveying the neglectedness of broad-spectrum antiviral development |
| Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books | Elizabeth Van Nostrand | — | $19,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | Berkeley Existential Risk Initiative | — | $250,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms |
| Exploring crucial considerations for decision-making around information hazards | Will Bradshaw | — | $25,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Exploring crucial considerations for decision-making around information hazards |
| Help InterACT when university systems cannot, supporting InterACT’s work enabling human-compatible robots and AI agents | Berkeley Existential Risk Initiative | — | $135,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Help InterACT when university systems cannot, supporting InterACT’s work enabling human-compatible robots and AI agents |
| Aiming to implement AI alignment concepts in real-world applications | 2VexoROapg | — | $10,000 | — | Oct 2018 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Aiming to implement AI alignment concepts in real-world applications |
| Funding for building agents with causal models of the world and using those models for impact minimization. | Vincent Luczkow | — | $10,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for building agents with causal models of the world and using those models for impact minimization. |
| Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise | Joar Skalse | — | $10,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise |
| Identifying and resolving tensions between competition law and long-term AI strategy | Shin-Shin Hua and Haydn Belfield | — | $32,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Identifying and resolving tensions between competition law and long-term AI strategy |
| Stipends, work hours, and retreat costs for four extra students of CHERI’s summer research program | Effective Altruism Geneva | — | $11,094 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Stipends, work hours, and retreat costs for four extra students of CHERI’s summer research program |
| Supporting 3-month research period | Charlie Rogers-Smith | — | $7,900 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Supporting 3-month research period |
| PhD in Computer Science working on AI-safety | Amon Elders | — | $250,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] PhD in Computer Science working on AI-safety |
| 4-month salary to upskill in biosecurity and explore possible career paths in biosecurity. | Finan Adamson | — | $12,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary to upskill in biosecurity and explore possible career paths in biosecurity. |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public | Expii, Inc. | — | $100,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public |
| 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic | Adin Richards | — | $3,150 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic |
| Grant to cover fees for a master's program in machine learning | Andrei Alexandru | — | $27,645 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Grant to cover fees for a master's program in machine learning |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | hvg9ecR3nA | — | $91,450 | — | Jul 2018 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) |
| Supporting Vanessa with her AI alignment research | Vanessa Kosoy | — | $100,000 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Supporting Vanessa with her AI alignment research |
| Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | 106 | — | $55,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing |
| Building understanding of the structure of risks from AI to inform prioritization | David Manheim | — | $80,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Building understanding of the structure of risks from AI to inform prioritization |
| Write a SF/F novel based on the EA community. | Timothy Underwood | — | $15,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Write a SF/F novel based on the EA community. |
| Educational scholarship in AI safety | Paul Colognese | — | $13,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Educational scholarship in AI safety |
| Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers | Shahar Avin | — | $40,000 | — | Jan 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers |
| Support to build a forecasting platform based on user-created play-money prediction markets | Stephen Grugett, James Grugett, Austin Chen | — | $200,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to build a forecasting platform based on user-created play-money prediction markets |
| Summer research program on global catastrophic risks for Swiss (under)graduate students | Effective Altruism Geneva | — | $34,064 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Summer research program on global catastrophic risks for Swiss (under)graduate students |
| Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead | Jacob Lagerros | — | $27,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead |
| Strategic research and studying programming | Eli Tyre | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Strategic research and studying programming |
| Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety | AI Safety Support | — | $80,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety |
| 1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment | Marc-Everin Carauleanu | — | $2,491 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment |
| 4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent | David Reber | — | $3,273 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent |
| 7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics | Toby Bonvoisin | — | $18,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics |
| Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier | Connor Flexman | — | $20,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier |
| Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates | Joe Collman | — | $35,000 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates |
| Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture | ALLFED | — | $3,600 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture |
| Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD | Aryeh Englander | — | $100,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD |
| Independent research on forecasting and optimal paths to improve the long-term - LTF fund | 248 | — | $41,337 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Independent research on forecasting and optimal paths to improve the long-term - LTF fund |
| Payment for AI researchers when I interview / survey them about their perceptions of safety | Vael Gates | — | $9,900 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Payment for AI researchers when I interview / survey them about their perceptions of safety |
| Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward | Michael Parker | — | $34,500 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward |
| Unrestricted donation | l5K9ZdbXww | — | $150,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Unrestricted donation |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | 231 | — | $488,994 | — | Jul 2018 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) |
| Researching methods to continuously monitor and analyse artificial agents for the purpose of control. | Lee Sharkey | — | $44,668 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Researching methods to continuously monitor and analyse artificial agents for the purpose of control. |
| Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness | Kyle Fish | — | $30,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness |
| 2-year funding to run public and expert surveys on AI governance and forecasting | Noemi Dreksler | — | $231,608 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-year funding to run public and expert surveys on AI governance and forecasting |
| Persuasion Tournament for Existential Risk | Philip Tetlock, Ezra Karger, Pavel Atanasov | — | $200,000 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Persuasion Tournament for Existential Risk |
| Support to work towards developing an early-warning system for future biological risks | Michael McLaren | — | $9,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to work towards developing an early-warning system for future biological risks |
| Develop a research project on how to infer humans' internal mental models from their behaviour using cognitive science modeling | Sofia Jativa Vega | — | $7,700 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Develop a research project on how to infer humans' internal mental models from their behaviour using cognitive science modeling |
| Testing how the accuracy of impact forecasting varies with the timeframe of prediction. | David Rhys Bernard | — | $55,000 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Testing how the accuracy of impact forecasting varies with the timeframe of prediction. |
| Surveying experts on AI risk scenarios and working on other projects related to AI safety. | Alexis Carlier | — | $5,000 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Surveying experts on AI risk scenarios and working on other projects related to AI safety. |
| Funds for a 6-month project contributing to the clarification of goal-directedness | Morgan Rogers | — | $21,950 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funds for a 6-month project contributing to the clarification of goal-directedness |
| Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety | Caroline Jeanmaire | — | $121,672 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety |
| Funding to cover a visit to Boston for biosecurity work | Will Bradshaw | — | $16,456 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to cover a visit to Boston for biosecurity work |
| Retroactive funding for running an alignment theory mentorship program with Evan Hubinger | Oliver Zhang | — | $3,600 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retroactive funding for running an alignment theory mentorship program with Evan Hubinger |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | l5K9ZdbXww | — | $174,021 | — | Jul 2018 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) |
| Supporting aspiring researchers of AI alignment to boost themselves into productivity | Johannes Heidecke | — | $25,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Supporting aspiring researchers of AI alignment to boost themselves into productivity |
| Human Progress for Beginners children's book | Jason Crawford | — | $25,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Human Progress for Beginners children's book |
| Replacement salary for teaching during economics Ph.D., freeing time to conduct research into forecasting and pandemics | Joel Becker | — | $42,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Replacement salary for teaching during economics Ph.D., freeing time to conduct research into forecasting and pandemics |
| Research to enable transition to AI Safety | Vojtěch Kovařík | — | $43,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research to enable transition to AI Safety |
| Formalizing the side effect avoidance problem research | Alex Turner | — | $30,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Formalizing the side effect avoidance problem research |
| Productivity coaching for effective altruists to increase their impact | Lynette Bye | — | $23,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Productivity coaching for effective altruists to increase their impact |
| 50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data | BugSeq Bioinformatics Inc. | — | $37,500 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data |
| 6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations | Rutgers University, Department of Philosophy | — | $3,500 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations |
| Support for self-study in data science and forecasting, to upskill within a GCBR research career | Benjamin Stewart | — | $2,230 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for self-study in data science and forecasting, to upskill within a GCBR research career |
| Create AI safety videos, and offer communication and media support to AI safety orgs. | Robert Miles | — | $60,000 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Create AI safety videos, and offer communication and media support to AI safety orgs. |
| We’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting. | The Center for Election Science | — | $50,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] We’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting. |
| Developing algorithms, environments and tests for AI safety via debate. | Joe Collman | — | $25,000 | — | Jul 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Developing algorithms, environments and tests for AI safety via debate. |
| 2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders | Aligned AI | — | $33,762 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders |
| Writing fiction to convey EA and rationality-related topics | Miranda Dixon-Luinenburg | — | $20,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Writing fiction to convey EA and rationality-related topics |
| Research on the links between short- and long-term AI policy while skilling up in technical ML | Jess Whittlestone | — | $75,080 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research on the links between short- and long-term AI policy while skilling up in technical ML |
| 3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance" | Chelsea Liang | — | $5,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance" |
| Funding for full-time, independent research on agent foundations | Daniel Demski | — | $30,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for full-time, independent research on agent foundations |
| PhD in machine learning with a focus on AI alignment | Dmitrii Krasheninnikov | — | $85,530 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] PhD in machine learning with a focus on AI alignment |
| Buying out one year of my academic teaching so that I can spend time on AI alignment research instead | David Udell | — | $12,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Buying out one year of my academic teaching so that I can spend time on AI alignment research instead |
| Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019. | Mikhail Yagudin | — | $28,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019. |
| For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit | Remmelt Ellen | — | $85,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit |
| Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support) | Berkeley Existential Risk Initiative (BERI) | — | $14,838 | — | Jan 2017 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support) |
| Additional funding for AI strategy PhD at Oxford / FHI | Sören Mindermann | — | $36,982 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Additional funding for AI strategy PhD at Oxford / FHI |
| 6-month salary to develop tools to test the natural abstractions hypothesis | John Wentworth | — | $35,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to develop tools to test the natural abstractions hypothesis |
| A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers | Tessa Alexanian | — | $26,250 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers |
| Conducting independent research into AI forecasting and strategy questions | Tegan McCaslin | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conducting independent research into AI forecasting and strategy questions |
| One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields. | Logan Strohl | — | $80,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields. |
| Formalizing perceptual complexity with application to safe intelligence amplification | Anand Srinivasan | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Formalizing perceptual complexity with application to safe intelligence amplification |
| Three months of blogging and movement building at the intersection of EA/longtermism and progress studies | Nicholas (Nick) Whitaker | — | $18,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Three months of blogging and movement building at the intersection of EA/longtermism and progress studies |
| Support multiple SPARC project operations during 2021 | SPARC | — | $15,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support multiple SPARC project operations during 2021 |
| Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades | Zach Freitas-Groff | — | $11,440 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades |
| A two-day, career-focused workshop to inform and connect European EAs interested in AI governance | Alex Lintz | — | $17,900 | — | Jan 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A two-day, career-focused workshop to inform and connect European EAs interested in AI governance |
| To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety | Stag Lynn | — | $23,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety |
| Funding towards a 2 year postdoctoral stint to work on Safety in AI, with a focus on developing value aligned systems | Kush Bhatia | — | $275,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding towards a 2 year postdoctoral stint to work on Safety in AI, with a focus on developing value aligned systems |
| 10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability | Benedikt Hoeltgen | — | $19,020 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability |
| Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date. | Anthony Aguirre | — | $65,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date. |
| Multi-model approach to corporate and state actors relevant to existential risk mitigation | David Manheim | — | $30,000 | — | Jul 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Multi-model approach to corporate and state actors relevant to existential risk mitigation |
| 1-year salary for Adam Shimi to conduct independent research in AI Alignment | Adam Shimi | — | $60,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year salary for Adam Shimi to conduct independent research in AI Alignment |
| A research agenda rigorously connecting the internal and external views of value synthesis | David Girardo | — | $30,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A research agenda rigorously connecting the internal and external views of value synthesis |
| BERI will support SERI when university systems are unable to help | Berkeley Existential Risk Initiative (BERI) | — | $60,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] BERI will support SERI when university systems are unable to help |
| Financial support for work on a biosecurity research project and workshop, and travel expenses | Simon Grimm | — | $15,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Financial support for work on a biosecurity research project and workshop, and travel expenses |
| 3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity | Caleb Withers | — | $15,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity |
| Support to create language model (LM) tools to aid alignment research through feedback and content generation | Logan Smith | — | $40,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to create language model (LM) tools to aid alignment research through feedback and content generation |
| Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD | Orpheus Lummis | — | $10,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD |
| Longtermist lessons from COVID | Gavin Leech | — | $5,625 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Longtermist lessons from COVID |
| Writing preliminary content for an encyclopedia of effective altruism | Pablo Stafforini | — | $17,000 | — | Jan 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Writing preliminary content for an encyclopedia of effective altruism |
| Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission | Mrinank Sharma | — | $9,798 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission |
| Unrestricted donation | 2VexoROapg | — | $50,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Unrestricted donation |
| An offline community hub for rationalists and EAs | Vyacheslav Matyuhin | — | $50,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] An offline community hub for rationalists and EAs |
| Upskilling investigation of AI Safety via debate and ML training | Joe Collman | — | $10,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Upskilling investigation of AI Safety via debate and ML training |
| Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge | David Krueger | — | $200,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge |
| Funding to pay participants to test a forecasting training program | Logan McNichols | — | $3,200 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to pay participants to test a forecasting training program |
| Building infrastructure for the future of effective forecasting efforts | Ozzie Gooen | — | $70,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Building infrastructure for the future of effective forecasting efforts |
| Subsidized therapy/coaching/mediation for rationalists, EAs, and startups that are working on things like x-risks. | Damon Pourtahmaseb-Sasi | — | $40,000 | — | Oct 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Subsidized therapy/coaching/mediation for rationalists, EAs, and startups that are working on things like x-risks. |
| 8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI | James Bernardi | — | $28,320 | — | Jul 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI |
| 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment | Thomas Woodside | — | $50,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment |
| 12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals | Lauren Lee | — | $20,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals |
| Conducting postdoctoral research at Harvard on the psychology of EA/long-termism | Lucius Caviola | — | $50,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conducting postdoctoral research at Harvard on the psychology of EA/long-termism |
| 12-month salary to provide runway after finishing RSP | The Future of Humanity Institute | — | $55,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to provide runway after finishing RSP |
| Educational Scholarship in AI Alignment | Jaeson Booker | — | $22,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Educational Scholarship in AI Alignment |
| Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research | t0p43V5oLA | — | $70,000 | — | Jan 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | gNsqAes7Dw | — | $162,537 | — | Jul 2018 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) |
| Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter | Alex Turner | — | $1,050 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter |
| Unrestricted donation | 231 | — | $50,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Unrestricted donation |
| Support David Reber - 9.5 months of strategic outsourcing to read up on AI Safety and find mentors | David Reber | — | $20,000 | — | Oct 2021 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support David Reber - 9.5 months of strategic outsourcing to read up on AI Safety and find mentors |
| 12-month salary for independent research, upskilling, and finding a stable position in AI-Safety | Robert Kralisch | — | $24,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary for independent research, upskilling, and finding a stable position in AI-Safety |
| A major expansion of the Metaculus prediction platform and its community | Anthony Aguirre | — | $70,000 | — | Apr 2019 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A major expansion of the Metaculus prediction platform and its community |
| Research project on the longevity and decay of universities, philanthropic foundations, and catholic orders | Maximilian Negele | — | $3,579 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research project on the longevity and decay of universities, philanthropic foundations, and catholic orders |
| Organising immersive workshops on meta skills and x-risk for STEM students at top universities. | Tamara Borine | — | $32,660 | — | Oct 2020 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organising immersive workshops on meta skills and x-risk for STEM students at top universities. |
| Support for alignment theory agenda evaluation | Jack Ryan | — | $25,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for alignment theory agenda evaluation |
| AI safety dinners | Neil Crawford | — | $10,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] AI safety dinners |
| AI safety research | Lukas Berglund | — | $1,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] AI safety research |
| Compensation for a non-fiction book on threat of AGI for a general audience | Darren McKee | — | $50,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Compensation for a non-fiction book on threat of AGI for a general audience |
| Funding to perform human evaluations for evaluating different machine learning methods for aligning language models | Robert Kirk | — | $10,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to perform human evaluations for evaluating different machine learning methods for aligning language models |
| Travel Support to BWC RevCon & Side Events | Theo Knopfer | — | $3,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Travel Support to BWC RevCon & Side Events |
| travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems | Robert Long | — | $10,840 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems |
| Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) | Nora Ammann | — | $100,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) |
| Neural network interpretability research | Nicholas Greig | — | $12,990 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Neural network interpretability research |
| Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAO | Jacob Mendel | — | $4,910 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAO |
| 6 months of independent alignment research and upskilling | Zhengbo Xiang (Alana) | — | $30,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months of independent alignment research and upskilling |
| Research into the international viability of FHI's Windfall Clause | John Bridge | — | $3,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research into the international viability of FHI's Windfall Clause |
| 6-month salary for research into preventing steganography in interpretable representations using multiple agents | Hoagy Cunningham | — | $20,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for research into preventing steganography in interpretable representations using multiple agents |
| Research on EA and longtermism | Aaron Bergman | — | $70,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research on EA and longtermism |
| 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out-of-distribution performance in high-stakes situations. | Logan Smith | — | $40,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out-of-distribution performance in high-stakes situations. |
| 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs. | Paul Bricman | — | $50,182 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs. |
| 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent | Tom Lieberum | — | $23,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent |
| This grant will support Naoya Okamoto in upskilling for AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign. | Naoya Okamoto | — | $7,500 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Naoya Okamoto in upskilling for AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign. |
| Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years | David Staley | — | $3,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years |
| Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster | Marius Hobbhahn | — | $50,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster |
| 12-month salary to study and get into AI Safety Research and work on related EA projects | Luca De Leo | — | $14,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to study and get into AI Safety Research and work on related EA projects |
| 4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit | Max Kaufmann | — | $20,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit |
| Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike | Isabel Johnson | — | $5,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike |
| 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe | Sasha Cooper | — | $36,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe |
| 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper | Jonathan Ng | — | $32,650 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper |
| Financial support to help productivity and increase time of early career alignment researcher | Max Kaufmann | — | $7,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Financial support to help productivity and increase time of early career alignment researcher |
| 5-month part-time salary for collaborating on a research paper analyzing the implications of compute access | Sage Bergerson | — | $2,500 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month part-time salary for collaborating on a research paper analyzing the implications of compute access |
| Support for living expenses while doing PhD in AI safety - technical research and community building work | Francis Rhys Ward | — | $2,305 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for living expenses while doing PhD in AI safety - technical research and community building work |
| 6-month salary for self-study to be more effective at AI alignment research | Thomas Kehrenberg | — | $15,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for self-study to be more effective at AI alignment research |
| The Alignable Structures workshop in Philadelphia | Quinn Dougherty | — | $9,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] The Alignable Structures workshop in Philadelphia |
| New laptop for technical AI safety research | Peter Barnett | — | $4,099 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] New laptop for technical AI safety research |
| 10-month funding to study ML at university and AIS independently | Patricio Vercesi | — | $500 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 10-month funding to study ML at university and AIS independently |
| 6 month salary to improve the US regulatory environment for prediction markets | Solomon Sia | — | $138,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month salary to improve the US regulatory environment for prediction markets |
| Develop and market video game to explain the Stop Button Problem to the public & STEM individuals | Lone Pine Games, LLC | — | $100,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Develop and market video game to explain the Stop Button Problem to the public & STEM individuals |
| A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan | 91 | — | $72,827 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan |
| Paid internships for promising Oxford students to try out supervised AI Safety research projects | AI Safety Hub Ltd | — | $60,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Paid internships for promising Oxford students to try out supervised AI Safety research projects |
| Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions | Kai Sandbrink | — | $3,950 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions |
| Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022 | William D'Alessandro | — | $22,570 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022 |
| Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock | Conor Barnes | — | $3,500 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock |
| 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub | Max Räuker | — | $15,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub |
| Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 | Czech Association for Effective Altruism (CZEA) | — | $110,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 |
| 8 weeks scholars program to pair promising alignment researchers with renowned mentors | AI Safety Support | — | $316,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 8 weeks scholars program to pair promising alignment researchers with renowned mentors |
| Stanford Artificial Intelligence Professional Program tuition | Mario Peng Lee | — | $4,785 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Stanford Artificial Intelligence Professional Program tuition |
| (professional development grant) New laptop for technical AI safety research | Max Lamparth | — | $2,500 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] (professional development grant) New laptop for technical AI safety research |
| Year-long salary for shard theory and RL mechanistic interpretability research | Alexander Turner | — | $220,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Year-long salary for shard theory and RL mechanistic interpretability research |
| Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople | Chris Patrick | — | $5,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople |
| Support to further develop a branch of rationality focused on patient and direct observation | Logan Strohl | — | $80,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to further develop a branch of rationality focused on patient and direct observation |
| 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada | Wyatt Tessari | — | $87,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada |
| 3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI | Tomislav Kurtovic | — | $5,500 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI |
| 6-month salary for two people to find formalisms for modularity in neural networks | Lucius Bushnaq | — | $72,560 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for two people to find formalisms for modularity in neural networks |
| One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safety | Steve Petersen | — | $20,815.20 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safety |
| 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper | Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos | — | $167,480 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper |
| European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers | Effective Altruism Geneva | — | $169,947 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers |
| 4-month salary to set up 2 AI safety groups covering 3 universities in Sweden, with an eventual retreat | Jonas Hallgren | — | $10,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary to set up 2 AI safety groups covering 3 universities in Sweden, with an eventual retreat |
| Make 12 more AXRP episodes | Daniel Filan | — | $23,544 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Make 12 more AXRP episodes |
| 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk | Ross Graham | — | $60,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk |
| 1-year salary for research in applications of natural abstraction | John Wentworth | — | $180,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year salary for research in applications of natural abstraction |
| Financial support to work part time on an academic project evaluating factors relevant to digital consciousness | Derek Shiller | — | $11,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Financial support to work part time on an academic project evaluating factors relevant to digital consciousness |
| 6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org | Jeffrey Ladish | — | $98,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org |
| 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations | Iván Godoy | — | $6,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations |
| 3-month salary for upskilling in PyTorch and AI safety research. | Alex Infanger | — | $19,200 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary for upskilling in PyTorch and AI safety research. |
| 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI | Nicky Pochinkov | — | $50,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI |
| Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition) | Fabienne Sandkühler | — | $4,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition) |
| Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group | David Quarel | — | $5,613 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group |
| 6-month salary to conduct AI alignment research on circuits in decision transformers | Joseph Bloom | — | $50,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to conduct AI alignment research on circuits in decision transformers |
| 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience | Liam Carroll | — | $8,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience |
| Funding for a one year machine learning and computational statistics master’s at UCL | Shavindra Jayasekera | — | $38,101 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for a one year machine learning and computational statistics master’s at UCL |
| Funding for project transitioning from AI capabilities to AI Safety research. | Gerold Csendes | — | $8,200 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for project transitioning from AI capabilities to AI Safety research. |
| Twelve month salary to work as a global rationality organizer | Skyler Crossman | — | $130,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Twelve month salary to work as a global rationality organizer |
| Support to work on Aisafety.camp project, impact of human dogmatism on training | Kevin Wang | — | $2,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to work on Aisafety.camp project, impact of human dogmatism on training |
| Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety | Robert Miles | — | $54,962 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety |
| 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation | Samuel Brown | — | $47,074 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation |
| 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend | Joel Becker | — | $27,248 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend |
| One year of funding to improve an established community hub for EA in London | Newspeak House | — | $50,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year of funding to improve an established community hub for EA in London |
| Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions | Columbia University | — | $90,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions |
| Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science | Max Clarke | — | $26,077 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science |
| 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy | Will Aldred | — | $40,250 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy |
| 6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space | Alexander Lintz | — | $69,940 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space |
| Support to cover the costs of leaving employment in order to pursue AI safety research. | Kajetan Janiak | — | $4,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to cover the costs of leaving employment in order to pursue AI safety research. |
| 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability | Fabian Schimpf | — | $28,875 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability |
| PhD Stipend Top Up for CHAI PhD Student. | Alex Turner | — | $6,675 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] PhD Stipend Top Up for CHAI PhD Student. |
| Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford | Bálint Pataki | — | $3,640 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford |
| One year part time spent on AI safety upskilling and concrete research projects | Ross Nordby | — | $62,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year part time spent on AI safety upskilling and concrete research projects |
| Pass on funds for Astral Codex Ten Everywhere meetups | Skyler Crossman | — | $22,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Pass on funds for Astral Codex Ten Everywhere meetups |
| Payment for part-time rationality community building | Boston Astral Codex Ten | — | $4,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Payment for part-time rationality community building |
| 4-month salary for two people to find formalisms for modularity in neural networks | Lucius Bushnaq | — | $67,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary for two people to find formalisms for modularity in neural networks |
| Travel support to attend the Symposium on AGI Safety in Oxford in May | Smitha Milli | — | $1,500 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Travel support to attend the Symposium on AGI Safety in Oxford in May |
| Funding the last year of my PhD on embedded agency, to free up my time from teaching | Daniel Herrmann | — | $64,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding the last year of my PhD on embedded agency, to free up my time from teaching |
| Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity | Charles Whittaker | — | $8,150 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity |
| Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights. | Simon Skade | — | $35,625 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights. |
| 2 years of GovAI salary and overheads for Robert Trager | — | — | $401,537 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2 years of GovAI salary and overheads for Robert Trager |
| Support for Jay Bailey for work in ML for AI Safety | Jay Bailey | — | $79,120 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for Jay Bailey for work in ML for AI Safety |
| 4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research. | Benjamin Sturgeon | — | $12,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research. |
| Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp. | Jan Kirchner | — | $10,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp. |
| 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream | Joshua Reiners | — | $16,300 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream |
| Fine-tuning large language models for an interpretability challenge (compute costs) | Andrei Alexandru | — | $11,300 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Fine-tuning large language models for an interpretability challenge (compute costs) |
| Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward | Michael Parker | — | $40,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward |
| 12-month salary to work on alignment research! | Garrett Baker | — | $96,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to work on alignment research! |
| Funding for Computer Science PhD | David Reber | — | $348,773 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for Computer Science PhD |
| 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL | Jeremy Gillen | — | $40,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL |
| 4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models | Abhijit Narayan S | — | $1,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models |
| 12-month salary to work on ML models for detecting genetic engineering in pathogens | Jade Zaslavsky | — | $85,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to work on ML models for detecting genetic engineering in pathogens |
| 2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time | Ardysatrio Haroen | — | $745 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time |
| Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities | Adam Rutkowski | — | $44,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities |
| Retroactive grant for managing the MATS program, 1.0 and 2.0 | SERI ML Alignment & Theory Scholars | — | $27,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retroactive grant for managing the MATS program, 1.0 and 2.0 |
| Enabling prosaic alignment research with a multi-modal model on natural language and chess | Philipp Bongartz | — | $25,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Enabling prosaic alignment research with a multi-modal model on natural language and chess |
| 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work | Jonathan Ng | — | $16,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work |
| 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment | Amrita A. Nair | — | $1,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment |
| Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency | Effective Altruism Geneva | — | $180,200 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency |
| 6-month salary for researching “Framing computational systems such that we can find meaningful concepts” & upskilling | Matthias Georg Mayer | — | $24,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for researching “Framing computational systems such that we can find meaningful concepts” & upskilling |
| 6 months’ salary to upskill on technical AI safety through project work and studying | Rusheb Shah | — | $50,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months’ salary to upskill on technical AI safety through project work and studying |
| 6-month salary for an AI alignment research project on the manipulation of humans by AI | Felix Hofstätter | — | $25,383 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for an AI alignment research project on the manipulation of humans by AI |
| 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation | David Hahnemann, Luan Ademi | — | $26,342 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation |
| Support for research into applied technical AI alignment work | Philippe Rivet | — | $10,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for research into applied technical AI alignment work |
| A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research | Principles of Intelligent Behavior in Biological and Social Systems | — | $305,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research |
| Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence | Effective Altruism Geneva | — | $134,532 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence |
| 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal | Nikiforos Pittaras | — | $14,300 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal |
| 12-Month Salary and Compute Expenses to do AI Safety Research with LLMs | Nicky Pochinkov | — | $70,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-Month Salary and Compute Expenses to do AI Safety Research with LLMs |
| I am looking for a career transition grant to give me more time for job hunting & networking | Alexander Large | — | $3,618 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] I am looking for a career transition grant to give me more time for job hunting & networking |
| Research and a report/paper on the role of emergency powers in the governance of X-Risk | Daniel Skeffington | — | $26,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research and a report/paper on the role of emergency powers in the governance of X-Risk |
| Equipment to improve productivity while doing AI Safety research | Tim Farrelly | — | $3,900 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Equipment to improve productivity while doing AI Safety research |
| 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs | Peter Ruschhaupt | — | $20,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs |
| One-year funding of Astral Codex Ten meetup in Philadelphia | Wesley Fenza | — | $5,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One-year funding of Astral Codex Ten meetup in Philadelphia |
| Reconstruction attacks in federated learning | University of Cambridge | — | $5,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Reconstruction attacks in federated learning |
| This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project | Bilal Chughtai | — | $47,500 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project |
| Retrospective funding for research retreat on a decision-theory / cause-prioritization topic. | Daniel Kokotajlo | — | $10,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retrospective funding for research retreat on a decision-theory / cause-prioritization topic. |
| Funding for the AI Safety Nudge Competition | AI Safety Nudge Competition | — | $5,200 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for the AI Safety Nudge Competition |
| Support to work on AI alignment research | Matt MacDermott | — | $16,341 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to work on AI alignment research |
| 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others. | Max Kaufmann | — | $45,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others. |
| Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research | Effective Altruism Geneva | — | $4,300 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research |
| One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI | Gunnar Zarncke | — | $16,600 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI |
| I am seeking funding to attend a Center for Applied Rationality (CFAR) workshop in Prague during the Fall | Zach Peck | — | $1,800 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] I am seeking funding to attend a Center for Applied Rationality (CFAR) workshop in Prague during the Fall |
| Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models | John Burden | — | $209,501 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models |
| Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research | Brian Porter | — | $60,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research |
| Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection | Noga Aharony | — | $20,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection |
| 6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety | Kane Nicholson | — | $26,150 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety |
| Support funding during 2 years of an AI safety PhD at Oxford | Ondrej Bajgar | — | $11,579 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support funding during 2 years of an AI safety PhD at Oxford |
| 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research. | Darryl Wright | — | $150,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research. |
| Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc. | Jingyi Wang | — | $2,100 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc. |
| Developing and maintaining projects/resources used by the EA and rationality communities | Said Achmiz | — | $60,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Developing and maintaining projects/resources used by the EA and rationality communities |
| General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences | Alexander Turner | — | $115,411 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences |
| Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding | Josiah Lopez-Wild | — | $2,500 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding |
| 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation | Theo Knopfer | — | $27,800 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation |
| 4-month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism | Quentin Feuillade--Montixi | — | $32,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism |
| Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada | Wyatt Tessari | — | $17,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada |
| 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD | Conor McGlynn | — | $17,220 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD |
| 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research | Magdalena Wache | — | $62,040 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research |
| 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics. | Leah Pierson | — | $4,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics. |
| ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building | Charlie Griffin | — | $35,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building |
| Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant | Jeffrey Ohl | — | $6,557 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant |
| 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination | Chloe Lee | — | $25,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination |
| Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research | Rory Gillis | — | $2,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research |
| Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival. | University of Otago, Wellington, New Zealand | — | $27,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival. |
| 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing | Gergely Szucs | — | $70,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing |
| Grant to cover 1 year of tuition fees and living expenses to pursue a CS PhD at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration. | Hunar Batra | — | $63,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Grant to cover 1 year of tuition fees and living expenses to pursue a CS PhD at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration. |
| 7 month salary to study a Graduate Diploma of International Affairs at The Australian National University | Matthew MacInnes | — | $9,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 7 month salary to study a Graduate Diploma of International Affairs at The Australian National University |
| Funding to start a longtermist org and support research | Transformative Futures Foresight Institute | — | $494,510 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to start a longtermist org and support research |
| Slack money for increased productivity in AI Alignment research | Adam Shimi | — | $17,355 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Slack money for increased productivity in AI Alignment research |
| 2-year salary for work on the learning-theoretic AI alignment research agenda | Vanessa Kosoy | — | $100,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-year salary for work on the learning-theoretic AI alignment research agenda |
| Support to conduct work in AI safety | Benjamin Anderson | — | $5,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to conduct work in AI safety |
| Funding to support PhD in AI Safety at Imperial College London, technical research and community building | Francis Rhys Ward | — | $6,350 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to support PhD in AI Safety at Imperial College London, technical research and community building |
| 3-month salary for SERI-MATS extension | Matt MacDermott | — | $24,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary for SERI-MATS extension |
| A relocation grant to help me to move and settle into a PhD program and cover initial expenses | Egor Zverev | — | $6,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A relocation grant to help me to move and settle into a PhD program and cover initial expenses |
| Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified. | Wikiciv Foundation | — | $16,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified. |
| 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project. | Jay Bailey | — | $50,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project. |
| 1-year salary for upskilling in technical AI alignment research | Chu Chen | — | $96,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year salary for upskilling in technical AI alignment research |
| 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety | Samuel Nellessen | — | $4,524 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety |
| 4-month salary for conceptual/theoretical research towards perfect world-model interpretability | Andrey Tumas | — | $30,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary for conceptual/theoretical research towards perfect world-model interpretability |
| 6-month salary to skill up and gain experience to start working on AI safety full-time | Mateusz Bagiński | — | $14,136 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to skill up and gain experience to start working on AI safety full-time |
| 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas | Sam Marks | — | $26,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas |
| 6 months salary to do independent AI alignment research focused on formal alignment and agent foundations | Tamsin Leake | — | $30,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months salary to do independent AI alignment research focused on formal alignment and agent foundations |
| Funding for salary and living expenses while continuing to develop a framework of optimisation. | Alex Altair | — | $8,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for salary and living expenses while continuing to develop a framework of optimisation. |
| Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program | Viktoria Malyasova | — | $4,400 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program |
| Weekend organised as a part of the co-founder matching process of a group to found a human data collection org | Patrick Gruban | — | $2,300 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Weekend organised as a part of the co-founder matching process of a group to found a human data collection org |
| 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies | Shoshannah Tekofsky | — | $90,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies |
| 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers | Jonas Hallgren | — | $14,600 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers |
| 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods | Lennart Stern | — | $102,000 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods |
| 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI | Alfred Harwood | — | $11,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI |
| Support for AI alignment outreach in France (video/audio/text/events) & field-building | Jérémy Perret | — | $24,800 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support for AI alignment outreach in France (video/audio/text/events) & field-building |
| 3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment | Amrita A. Nair | — | $5,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment |
| 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems | Alan Chan | — | $12,321 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems |
| Scholarship for PhD student working on research related to AI Safety | Josiah Lopez-Wild | — | $8,000 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Scholarship for PhD student working on research related to AI Safety |
| 12-month salary to transition career into technical alignment research | Dan Valentine | — | $25,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to transition career into technical alignment research |
| 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules | Logan Smith | — | $40,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules |
| A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit | Hamza Tariq Chaudhry | — | $2,500 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit |
| 8-month salary for three people to investigate the origins of modularity in neural networks | Lucius Bushnaq, Callum McDougall, Avery Griffin | — | $125,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 8-month salary for three people to investigate the origins of modularity in neural networks |
| 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism | Samuel Brown | — | $81,402.42 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism |
| A research & networking retreat for winners of the Eliciting Latent Knowledge contest | 36 | — | $72,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A research & networking retreat for winners of the Eliciting Latent Knowledge contest |
| 6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems | Johannes C. Mayer | — | $24,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems |
| Support to conduct a research project collaboration on Compute Governance | Lennart Heim | — | $67,800 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Support to conduct a research project collaboration on Compute Governance |
| 4-month funding for independent alignment research and study | Arun Jose | — | $15,478 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month funding for independent alignment research and study |
| EU Tech Policy Fellowship with ~10 trainees | Training For Good | — | $68,750 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] EU Tech Policy Fellowship with ~10 trainees |
| Funding to increase my impact as an early-career biosecurity researcher | Lennart Justen | — | $6,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to increase my impact as an early-career biosecurity researcher |
| ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety | Anson Ho | — | $4,800 | — | Jan 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety |
| Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical research | Antonio Franca | — | $2,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical research |
| One year of seed funding for a new AI interpretability research organisation | Jessica Rumbelow | — | $195,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year of seed funding for a new AI interpretability research organisation |
| Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022 | Kadri Reis | — | $1,500 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022 |
| One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS | David Udell | — | $100,000 | — | Oct 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS |
| 6-month salary to upskill for AI safety | Daniel O'Connell | — | $54,250 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to upskill for AI safety |
| 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities | Nicholas Kees Dupuis | — | $120,000 | — | Jan 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities |
| 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment | Jacques Thibodeau | — | $22,000 | — | Jul 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment |
| Cover participant stipends for AI Safety Camp Virtual 2023 | Remmelt Ellen | — | $72,500 | — | 2022 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Cover participant stipends for AI Safety Camp Virtual 2023 |
| Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people | Michael Pearce, Alice Riggs, Thomas Dooms | — | $80,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people |
| 6-month stipend for transitioning to independent research on AI Safety | Glauber De Bona | — | $40,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend for transitioning to independent research on AI Safety |
| Spend 3 months (part time) assessing plausible pathways to slowing AI | Gideon Futerman | — | $5,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Spend 3 months (part time) assessing plausible pathways to slowing AI |
| 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs | Jannik Brinkmann | — | $10,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs |
| 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships | Ashgro Inc. (fiscal sponsor of Apart) | — | $272,800 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships |
| 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles | Nicky Case | — | $80,000 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles |
| A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety | Chris Lakin | — | $5,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety |
| 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk | Andrew Zeng | — | $12,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk |
| 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences | Constantin Weisser | — | $80,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences |
| $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship | Brian Tan | — | $10,120 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship |
| 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) | Nora Ammann | — | $102,500 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) |
| This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity. | Nathaniel Monson | — | $70,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity. |
| 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability | Aengus Lynch | — | $52,118.50 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability |
| 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after | Joe Kwon | — | $40,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after |
| Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing | University of Massachusetts Amherst | — | $50,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing |
| 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection | Andis Draguns | — | $40,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection |
| 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor | Aleksandar Makelov | — | $22,500 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor |
| This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia. | AI Safety Australia and New Zealand | — | $77,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia. |
| Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) | Lucy Farnik | — | $41,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) |
| 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum. | Amritanshu Prasad | — | $8,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum. |
| 4-month stipend for a career transition period to explore roles in AI safety communications | Sarah Hastings-Woodhouse | — | $10,120 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend for a career transition period to explore roles in AI safety communications |
| 12 week 0.6FT upskilling stipend for technical governance research management | Morgan Simpson | — | $11,244 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12 week 0.6FT upskilling stipend for technical governance research management |
| 3-month salary for SERI MATS extension to work on internal concept extraction | Ann-Kathrin Dombrowski | — | $27,260 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary for SERI MATS extension to work on internal concept extraction |
| 6-month part-time stipend to launch a new science journalism outlet focused on AI Safety | Mordechai Rorvig | — | $50,000 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month part-time stipend to launch a new science journalism outlet focused on AI Safety |
| 6-to-12-month funding to continue working on model psychology and evaluation | P.H.I | — | $42,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-to-12-month funding to continue working on model psychology and evaluation |
| 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch | Niels uit de Bos | — | $62,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch |
| This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts. | Akbir Khan | — | $55,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts. |
| Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 | 301 | — | $7,118 | — | Apr 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 |
| A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy | Alexander Mann | — | $36,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy |
| Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment | Adelin Kassler | — | $40,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment |
| 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers | Keith Wynroe | — | $37,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers |
| 12-month support for independent AI alignment research | Aryeh Brill | — | $45,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month support for independent AI alignment research |
| 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs | Axel Højmark | — | $70,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs |
| This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects. | Dioptra (informal research group working on evals) | — | $32,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects. |
| 4-month fund for full time AI safety technical and/or governance research | Harrison Gietz | — | $10,750 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month fund for full time AI safety technical and/or governance research |
| This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy. | Carson Ezell | — | $8,673 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy. |
| 4-month stipend to continue AI safety projects | Hannah Erlebach | — | $25,216 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to continue AI safety projects |
| Part-time salary for independent AI safety research | Ross Nordby | — | $40,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Part-time salary for independent AI safety research |
| Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student | Sumeet Motwani | — | $1,875 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student |
| Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety | Einar Urdshals | — | $50,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety |
| 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety | Aishwarya Saxena | — | $77,544 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety |
| 2-month salary to test suitability for technical AI alignment research and identify a research direction | Bart Bussmann | — | $8,800 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-month salary to test suitability for technical AI alignment research and identify a research direction |
| Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project) | Yoav Tzfati | — | $62,150 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project) |
| Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants | Epistea, z.s | — | $160,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants |
| 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program | Abhay Sheshadri | — | $15,075 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program |
| 1 year PhD funding and compute funding to research a novel method for training prosociality into large language models | Scott Viteri | — | $10,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1 year PhD funding and compute funding to research a novel method for training prosociality into large language models |
| 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem | Alignment Ecosystem Development | — | $99,330 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem |
| 6-month salary for independent alignment research in interpretability or control | Thomas Kwa | — | $95,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for independent alignment research in interpretability or control |
| Funding to do research on understanding search in transformers at the AI safety camp during 14 weeks | Guillaume Corlouer | — | $6,636 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to do research on understanding search in transformers at the AI safety camp during 14 weeks |
| One year stipend and compute budget, for full-time technical AI alignment research | David Udell | — | $80,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year stipend and compute budget, for full-time technical AI alignment research |
| 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law | Thomas Kwa | — | $60,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law |
| 6 month salary for further pursuing sparse autoencoders for automatic feature finding | Logan Smith | — | $40,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month salary for further pursuing sparse autoencoders for automatic feature finding |
| 5-month funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance | For Collaborative Work with AI:FAR | — | $16,698 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance |
| 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography | Mikhail Baranchuk | — | $12,600 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography |
| 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study | Simon Lermen | — | $36,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study |
| In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks | MentaLeap | — | $40,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks |
| Funding to attend BWC meeting to discuss transparency with country representatives & work on research project | Riya Sharma | — | $1,700 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding to attend BWC meeting to discuss transparency with country representatives & work on research project |
| 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization | Hayden Peacock | — | $5,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization |
| 6-month stipend to work on AI alignment research (automated redteaming, interpretability) | Alex Infanger | — | $30,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to work on AI alignment research (automated redteaming, interpretability) |
| 12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda | Jacques Thibodeau | — | $27,108 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda |
| 1-year stipend to continue research on agency, focused on natural abstraction | John Wentworth | — | $200,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend to continue research on agency, focused on natural abstraction |
| This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research. | Yuxiao Li | — | $45,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research. |
| A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025 | Caleb Rak | — | $20,700 | — | Oct 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025 |
| Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop | Nathaniel Sharadin | — | $33,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop |
| Monthly seminar series on Guaranteed Safe AI, from July to December 2024 | Horizon Events | — | $6,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Monthly seminar series on Guaranteed Safe AI, from July to December 2024 |
| This grant funds a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research. | Sviatoslav Chalnev | — | $35,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant funds a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research. |
| 5-month salary to continue work on evaluating agent self-improvement capabilities | Codruta Lugoj | — | $23,360 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month salary to continue work on evaluating agent self-improvement capabilities |
| 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass | Yashvardhan Sharma | — | $6,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass |
| 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co | Stanford University | — | $22,324.50 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co |
| Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate | Kunvar Thaman | — | $2,500 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate |
| 1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals | Sumeet Motwani | — | $19,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals |
| 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding | Hannah Erlebach | — | $20,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding |
| Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables | Philip Quirke | — | $61,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables |
| 6-month salary for part-time independent research on LM interpretability for AI alignment | Aidan Ewart | — | $7,700 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for part-time independent research on LM interpretability for AI alignment |
| 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs | Morgan Simpson | — | $31,600 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs |
| SERI MATS 3-month extension to study knowledge removal in Language Models | Shashwat Goel | — | $12,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] SERI MATS 3-month extension to study knowledge removal in Language Models |
| 6-month salary to transition to a career in AI safety while working on AI safety projects | Dillon Bowen | — | $30,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to transition to a career in AI safety while working on AI safety projects |
| I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds | Joshua Clymer | — | $1,500 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds |
| 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research | Nikola Moore | — | $5,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research |
| 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland | Santeri Tani | — | $73,333.33 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland |
| Compute costs for experiments to evaluate different scalable oversight protocols | Lewis Hammond | — | $86,600 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Compute costs for experiments to evaluate different scalable oversight protocols |
| 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects | José Jaime Villalobos Ruiz | — | $33,700 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects |
| This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction. | Tristan Williams | — | $2,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction. |
| 6-month salary for an AISC project and continuing independent mechanistic interpretability projects | Christopher Mathwin | — | $28,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary for an AISC project and continuing independent mechanistic interpretability projects |
| 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge. | Benjamin Stewart | — | $3,138 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge. |
| 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program | Aaquib Syed | — | $30,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program |
| Retroactive funding for GameBench paper | Dioptra (Josh Clymer's AIS research community) | — | $9,072 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retroactive funding for GameBench paper |
| A podcast mainly themed around AI x-risk, aimed at a non-technical audience | Sarah Hastings-Woodhouse | — | $5,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A podcast mainly themed around AI x-risk, aimed at a non-technical audience |
| ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila | Brian Tan | — | $86,400 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila |
| 4-month stipend for upskilling within the field of economic governance of AI | Rafael Andersson Lipcsey | — | $7,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend for upskilling within the field of economic governance of AI |
| 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials | Kurt Brown | — | $15,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials |
| 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond | Felix Hofstätter | — | $38,688 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond |
| 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects | Keith Wynroe | — | $21,989 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects |
| 6-month stipend to work on technical alignment research as part of MATS 5.0 extension program | Cindy Wu | — | $40,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to work on technical alignment research as part of MATS 5.0 extension program |
| Retroactive grant to study Goodhart effects on heavy-tailed distributions | Thomas Kwa | — | $29,760 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Retroactive grant to study Goodhart effects on heavy-tailed distributions |
| 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems | Lukas Fluri | — | $37,120 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems |
| 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all | David Williams-King | — | $27,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all |
| Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel | Coleman Snell | — | $31,650 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel |
| 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models | Rauno Arike, Elizabeth Donoway | — | $60,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models |
| 6-month career transition and independent research in AI safety and risk mitigation | Jose Groh | — | $85,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month career transition and independent research in AI safety and risk mitigation |
| This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research. | Cindy Wu | — | $5,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research. |
| Two workshops on strategic communications around AI safety, focused on the AI safety community | Philip Trippenbach | — | $5,720 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Two workshops on strategic communications around AI safety, focused on the AI safety community |
| 6 month salary to work on mech interp research with mentorship from Prof David Bau | Bilal Chughtai | — | $41,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month salary to work on mech interp research with mentorship from Prof David Bau |
| 6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmark | Roman Soletskyi | — | $35,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmark |
| Research on how much language models can infer about their current user, and interpretability work on such inferences | Egg Syntax (legal: Jesse Davis) | — | $55,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Research on how much language models can infer about their current user, and interpretability work on such inferences |
| 4-month stipend to research the mechanisms of refusal in chat LLMs | Oscar Balcells Obeso | — | $40,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to research the mechanisms of refusal in chat LLMs |
| Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety | Orpheus Lummis | — | $10,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety |
| 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies | Kai Fronsdal | — | $27,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies |
| Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability | David Abecassis | — | $40,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability |
| A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech | Geneva Centre for Security Policy | — | $120,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech |
| One year funding of ACX meetup in Atlanta Georgia | ACX Atlanta | — | $5,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year funding of ACX meetup in Atlanta Georgia |
| 7 months of coworking-space funding continuation, during interpretability research project | David Udell | — | $10,500 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 7 months of coworking-space funding continuation, during interpretability research project |
| Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention | Matthias Dellago | — | $25,491 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention |
| Organize AI x-risk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking | Existential Risk Observatory | — | $24,339 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organize AI x-risk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking |
| 7-month stipend for organising AI Alignment Irvine (AIAI) | Neil Crawford | — | $16,337 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 7-month stipend for organising AI Alignment Irvine (AIAI) |
| 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks | Alex Cloud, Jacob Goldman-Wetzler, Evžen Wybitul, Joseph Miller | — | $160,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks |
| 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’ | Julian Guidote | — | $7,200 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’ |
| 6-month stipend to continue independent interpretability research | Sviatoslav Chalnev | — | $40,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to continue independent interpretability research |
| 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch | Iván Arcuschin Moreno | — | $67,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch |
| WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability | Brian Tan | — | $61,460 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability |
| 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI | Luise Woehlke | — | $6,230 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI |
| 1-year stipend for independent research primarily on high-level interpretability | Arun Jose | — | $70,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1-year stipend for independent research primarily on high-level interpretability |
| Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment | Claire Short | — | $80,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment |
| Conference publication of interpretability and LM-steering results | Alexander Turner | — | $40,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Conference publication of interpretability and LM-steering results |
| 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | Robert Miles | — | $121,575 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved |
| 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk | Formation Research | — | $10,000 | — | Oct 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk |
| 1.5 year stipend for thorough investigation and analysis of AI lab scaling policies | Aysja Johnson | — | $100,000 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1.5 year stipend for thorough investigation and analysis of AI lab scaling policies |
| 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project | Hoagy Cunningham | — | $35,300 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project |
| 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition | Arjun Panickssery | — | $34,100 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition |
| Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI | Cole Wyeth | — | $50,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI |
| Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants | Epistea, z.s | — | $115,000 | — | Apr 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants |
| MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems | Garrett Baker | — | $17,500 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems |
| 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation | Theodore Chapman | — | $55,660 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation |
| One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work | Macrostrategy Research Initiative | — | $150,000 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work |
| 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem | Alex Altair | — | $60,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem |
| 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors | Andrew Gritsevskiy | — | $50,336 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors |
| Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) | Sahil Kulshrestha | — | $30,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) |
| Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory | Wilson Wu | — | $20,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory |
| 4-month salary to continue work on AI Control as a MATS extension | Vasil Georgiev | — | $30,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month salary to continue work on AI Control as a MATS extension |
| 6-month salary to build experience in AI interpretability research before PhD applications | Zach Furman | — | $40,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to build experience in AI interpretability research before PhD applications |
| 2-month funding to get into mechanistic interpretability and to do 2-3 projects, then briefly learn related fields | Krzysztof Gwiazda | — | $5,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 2-month funding to get into mechanistic interpretability and to do 2-3 projects, then briefly learn related fields |
| Salary Top-Up for Timaeus' Employees & Contractors | Timaeus (Fiscally Sponsored by Ashgro, Inc.) | — | $100,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Salary Top-Up for Timaeus' Employees & Contractors |
| 6 month project - pending description | Kristy Loke | — | $10,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month project - pending description |
| 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research | Sienka Dounia | — | $8,500 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research |
| 6-month stipend for Sparse Autoencoder Mech Interp projects | Logan Smith | — | $40,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend for Sparse Autoencoder Mech Interp projects |
| 4-month stipend to continue work on AI Control as a MATS extension | Cody Rushing | — | $30,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extension |
| 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour) | Nicky Pochinkov | — | $80,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour) |
| 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. | Artem Karpov | — | $1,739 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. |
| 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas | Hebrew University | — | $5,200 | — | Apr 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas |
| 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality | Logan Strohl | — | $80,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality |
| Funding for having written AI safety distillation posts on the topic of membranes/boundaries | Chris Lakin | — | $4,500 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for having written AI safety distillation posts on the topic of membranes/boundaries |
| 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program | Danielle Ensign | — | $60,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program |
| 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program | Teun van der Weij | — | $30,087 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program |
| General support for a forecasting team | Samotsvety Forecasting | — | $6,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] General support for a forecasting team |
| This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence. | Daniel Filan | — | $44,802 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence. |
| Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base. | Bryce Meyer | — | $90,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base. |
| This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research | Alexander Turner | — | $30,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research |
| Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs | Imperial College London | — | $5,090 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs |
| 4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas | Codruta Lugoj | — | $7,200 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas |
| 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception | Sara Price | — | $55,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception |
| 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems. | Roman Leventov | — | $6,500 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems. |
| 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations | Satvik Golechha | — | $30,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations |
| Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk | Suzy Shepherd | — | $25,000 | — | Jan 2025 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk |
| 4-month stipend to continue work on AI Control as a MATS extension | Tyler Tracy | — | $30,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extension |
| $10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted | Vaidehi Agarwalla | — | $10,500 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] $10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted |
| 8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topic | Vojtech Kovarik | — | $49,333.33 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topic |
| 1 month long literature review on in-context learning and its relevance to AI alignment | Alfie Lamerton | — | $6,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 1 month long literature review on in-context learning and its relevance to AI alignment |
| 4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models | Tilman Räuker | — | $13,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models |
| 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention | Eric Easley | — | $40,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention |
| Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance | Michel Justen | — | $5,000 | — | Oct 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance |
| A private online platform for research-sharing amongst the AI governance community | The AI Governance Archive (TAIGA) | — | $125,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] A private online platform for research-sharing amongst the AI governance community |
| 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers | Bryce Meyer | — | $50,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers |
| This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program. | Viktor Rehnberg | — | $19,248 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program. |
| Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training | Aidan Ewart | — | $23,100 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training |
| 6-month incubation program for technical AI safety research organizations | Catalyze Impact | — | $122,507 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month incubation program for technical AI safety research organizations |
| 4-months stipend to apply mechanistic interpretability to a real-world application, hallucinations | Javier Ferrando Monsonís and Oscar Balcells Obeso | — | $60,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 4-months stipend to apply mechanistic interpretability to a real-world application, hallucinations |
| 3-month part-time salary in order to work on AI governance projects and activities | Arran McCutcheon | — | $6,000 | — | Jul 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month part-time salary in order to work on AI governance projects and activities |
| Funding for (academic/technical) AI safety community events in London | Francis Rhys Ward | — | $8,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Funding for (academic/technical) AI safety community events in London |
| Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward | Michael Parker | — | $50,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward |
| 3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment | The University of Texas at Austin | — | $50,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment |
| 6 month AI alignment internship stipend top-up | Matt MacDermott | — | $10,000 | — | Apr 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month AI alignment internship stipend top-up |
| Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety | Dhruvin Patel | — | $1,800 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety |
| Experimentally testing generative AI's ability to persuade humans about hazardous topics | Thomas Costello | — | $115,000 | — | Jan 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Experimentally testing generative AI's ability to persuade humans about hazardous topics |
| 6 month stipend for SAE-circuits | Logan Smith | — | $40,000 | — | Jul 2024 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6 month stipend for SAE-circuits |
| 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF | Marcus Williams | — | $42,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF |
| 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment | Simon Lermen | — | $13,000 | — | Apr 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment |
| Compute for experiment about how steganography in large language models might arise as a result of benign optimization | Felix Binder | — | $2,000 | — | Oct 2023 | — | funds.effectivealtruism.org | [Long-Term Future Fund] Compute for experiment about how steganography in large language models might arise as a result of benign optimization |