Longterm Wiki

Long-Term Future Fund (LTFF)

Funder
Structured Facts
Database Records

Funding Programs (1 record)
Name: Long-Term Future Fund Grant Rounds
Program type: grant-round
Description: Recurring grant rounds supporting organizations and individuals working on reducing existential risks, especially from advanced AI
Division ID: UQWeFEzUpn
Currency: USD
Application URL: funds.effectivealtruism.org
Deadline: Rolling
Status: open
Source: funds.effectivealtruism.org
Notes: Multiple rounds per year; managed by a committee of fund managers

Grants (545 records; 122 shown below)
Every grant below is recorded with Source funds.effectivealtruism.org and ProgramId xng_1vsce_; each record's Notes field repeats the grant name prefixed with "[Long-Term Future Fund]".

| Name | Amount | Recipient | Date |
|---|---|---|---|
| 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian | $13K | Maksim Vymenets | 2022-01 |
| Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal | $40K | Tushant Jha | 2020-01 |
| Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research | $20K | Alexander Siegenfeld | 2019-07 |
| 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged | $28K | Thomas Moynihan | 2021-10 |
| 12-month salary for researching value learning | $50K | Charlie Steiner | 2022-01 |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral. | $30K | Gavin Taylor | 2020-07 |
| Support Sam's participation in ‘Mid-term AI impacts’ research project | $4.5K | Sam Clarke | 2020-10 |
| PhD at Cambridge | $150K | Richard Ngo | 2020-07 |
| Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field | $4.6K | Effektiv Altruism Sverige (EA Sweden) | 2021-10 |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | $250K | Kristaps Zilgalvis | 2021-10 |
| I would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good. | $2K | Ruth Grace Wong | 2022-01 |
| Research on AI safety | $30K | Marius Hobbhahn | 2022-01 |
| Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software | $11K | George Green | 2021-10 |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | $150K | Nick Hay | 2021-10 |
| Buy out of teaching assistant duties for the remaining two years of my PhD program | $50K | Michael Zlatin | 2022-01 |
| Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | $82K | Robert Miles | 2022-01 |
| Support to work on biosecurity | $11K | Sculpting Evolution Group, MIT | 2022-01 |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | $234K | Jessica Cooper | 2022-01 |
| Time costs over six months to publish a paper on the interaction of open science practices and bio-risk | $8.3K | James Smith | 2021-10 |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | $80K | Alex Flint | 2021-07 |
| Producing video content on AI alignment | $39K | Robert Miles | 2019-04 |
| Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interface | $1.6K | Fabio Haenel | 2021-07 |
| Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary | $24K | Nick Hollman | 2020-10 |
| Open Online Course on “The Economics of AI” for Anton Korinek | $72K | University of Virginia | 2021-01 |
| Organizing a workshop aimed at highlighting recent successes in the development of verified software. | $5K | Gopal Sarma | 2020-01 |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. | $135K | Legal Priorities Project | 2021-01 |
| 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects | $12K | David Rhys Bernard | 2021-10 |
| A study of safe exploration and robustness to distributional shift in biological complex systems | $30K | Nikhil Kunapuli | 2019-04 |
| Conducting independent research into AI forecasting and strategy questions | $40K | Tegan McCaslin | 2019-10 |
| Conducting independent research on cause prioritization | $33K | Michael Dickens | 2020-01 |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | $30K | Alex Turner | 2019-04 |
| 6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISS | $25K | AI Safety Support | 2021-07 |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | $78K | University of Oxford, Department of Experimental Psychology | 2021-10 |
| Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | $30K | John Wentworth | 2019-10 |
| Surveying the neglectedness of broad-spectrum antiviral development | $18K | Jaspreet Pannu (Jassi) | 2019-10 |
| Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books | $19K | Elizabeth Van Nostrand | 2019-10 |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | $250K | Berkeley Existential Risk Initiative | 2021-10 |
| Exploring crucial considerations for decision-making around information hazards | $25K | Will Bradshaw | 2020-01 |
| Help InterACT when university systems cannot, supporting InterACT’s work enabling human-compatible robots and AI agents | $135K | Berkeley Existential Risk Initiative | 2022-01 |
| Aiming to implement AI alignment concepts in real-world applications | $10K | Elicit (AI Research Tool) | 2018-10 |
| Funding for building agents with causal models of the world and using those models for impact minimization. | $10K | Vincent Luczkow | 2020-01 |
| Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise | $10K | Joar Skalse | 2019-07 |
| Identifying and resolving tensions between competition law and long-term AI strategy | $32K | Shin-Shin Hua and Haydn Belfield | 2020-01 |
| Stipends, work hours, and retreat costs for four extra students of CHERI’s summer research program | $11K | Effective Altruism Geneva | 2021-07 |
| Supporting 3-month research period | $7.9K | Charlie Rogers-Smith | 2020-07 |
| PhD in Computer Science working on AI safety | $250K | Amon Elders | 2021-01 |
| 4-month salary to upskill in biosecurity and explore possible career paths in biosecurity. | $12K | Finan Adamson | 2021-10 |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public | $100K | Expii, Inc. | 2021-01 |
| 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic | $3.1K | Adin Richards | 2022-01 |
| Grant to cover fees for a master's program in machine learning | $28K | Andrei Alexandru | 2021-10 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $91K | 80,000 Hours | 2018-07 |
| Supporting Vanessa with her AI alignment research | $100K | Vanessa Kosoy | 2020-10 |
| Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | $55K | | 2020-01 |
| Building understanding of the structure of risks from AI to inform prioritization | $80K | David Manheim | 2021-10 |
| Write a SF/F novel based on the EA community. | $15K | Timothy Underwood | 2022-01 |
| Educational scholarship in AI safety | $13K | Paul Colognese | 2022-01 |
| Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers | $40K | Shahar Avin | 2019-01 |
| Support to build a forecasting platform based on user-created play-money prediction markets | $200K | Stephen Grugett, James Grugett, Austin Chen | 2022-01 |
| Summer research program on global catastrophic risks for Swiss (under)graduate students | $34K | Effective Altruism Geneva | 2021-01 |
| Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead | $27K | Jacob Lagerros | 2019-04 |
| Strategic research and studying programming | $30K | Eli Tyre | 2019-04 |
| Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety | $80K | AI Safety Support | 2022-01 |
| 1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment | $2.5K | Marc-Everin Carauleanu | 2021-01 |
| 4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent | $3.3K | David Reber | 2021-01 |
| 7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics | $18K | Toby Bonvoisin | 2021-01 |
| Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier | $20K | Connor Flexman | 2019-04 |
| Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates | $35K | Joe Collman | 2021-07 |
| Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture | $3.6K | Alliance to Feed the Earth in Disasters | 2021-07 |
| Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD | $100K | Aryeh Englander | 2021-10 |
| Independent research on forecasting and optimal paths to improve the long-term - LTF fund | $41K | | 2020-10 |
| Payment for AI researchers when I interview / survey them about their perceptions of safety | $9.9K | Vael Gates | 2022-01 |
| Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward | $35K | Michael Parker | 2022-01 |
| Unrestricted donation | $150K | Center for Applied Rationality | 2019-04 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $489K | | 2018-07 |
| Researching methods to continuously monitor and analyse artificial agents for the purpose of control. | $45K | Lee Sharkey | 2020-10 |
| Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness | $30K | Kyle Fish | 2019-10 |
| 2-year funding to run public and expert surveys on AI governance and forecasting | $232K | Noemi Dreksler | 2021-10 |
| Persuasion Tournament for Existential Risk | $200K | Philip Tetlock, Ezra Karger, Pavel Atanasov | 2021-07 |
| Support to work towards developing an early-warning system for future biological risks | $9K | Michael McLaren | 2022-01 |
| Develop a research project on how to infer humans’ internal mental models from their behaviour using cognitive science modeling | $7.7K | Sofia Jativa Vega | 2020-01 |
| Testing how the accuracy of impact forecasting varies with the timeframe of prediction. | $55K | David Rhys Bernard | 2020-10 |
| Surveying experts on AI risk scenarios and working on other projects related to AI safety. | $5K | Alexis Carlier | 2020-07 |
| Funds for a 6-month project contributing to the clarification of goal-directedness | $22K | Morgan Rogers | 2022-01 |
| Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety | $122K | Caroline Jeanmaire | 2021-01 |
| Funding to cover a visit to Boston for biosecurity work | $16K | Will Bradshaw | 2021-10 |
| Retroactive funding for running an alignment theory mentorship program with Evan Hubinger | $3.6K | Oliver Zhang | 2022-01 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $174K | Center for Applied Rationality | 2018-07 |
| Supporting aspiring researchers of AI alignment to boost themselves into productivity | $25K | Johannes Heidecke | 2019-04 |
| Human Progress for Beginners children's book | $25K | Jason Crawford | 2019-10 |
| Replacement salary for teaching during economics Ph.D., freeing time to conduct research into forecasting and pandemics | $42K | Joel Becker | 2021-01 |
| Research to enable transition to AI Safety | $43K | Vojtěch Kovařík | 2019-10 |
| Formalizing the side effect avoidance problem research | $30K | Alex Turner | 2020-01 |
| Productivity coaching for effective altruists to increase their impact | $23K | Lynette Bye | 2019-07 |
| 50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data | $38K | BugSeq Bioinformatics Inc. | 2021-01 |
| 6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations | $3.5K | Rutgers University, Department of Philosophy | 2021-07 |
| Support for self-study in data science and forecasting, to upskill within a GCBR research career | $2.2K | Benjamin Stewart | 2021-10 |
| Create AI safety videos, and offer communication and media support to AI safety orgs. | $60K | Robert Miles | 2020-07 |
| We’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting. | $50K | The Center for Election Science | 2021-10 |
| Developing algorithms, environments and tests for AI safety via debate. | $25K | Joe Collman | 2020-07 |
| 2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders | $34K | Aligned AI | 2022-01 |
| Writing fiction to convey EA and rationality-related topics | $20K | Miranda Dixon-Luinenburg | 2019-07 |
| Research on the links between short- and long-term AI policy while skilling up in technical ML | $75K | Jess Whittlestone | 2019-07 |
| 3-month compensation to drive time-sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance" | $5K | Chelsea Liang | 2021-10 |
| Funding for full-time, independent research on agent foundations | $30K | Daniel Demski | 2019-10 |
| PhD in machine learning with a focus on AI alignment | $86K | Dmitrii Krasheninnikov | 2021-07 |
| Buying out one year of my academic teaching so that I can spend time on AI alignment research instead | $12K | David Udell | 2022-01 |
| Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019. | $28K | Mikhail Yagudin | 2019-04 |
| For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit | $85K | Remmelt Ellen | 2021-01 |
| Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support) | $15K | Berkeley Existential Risk Initiative | 2017-01 |
| Additional funding for AI strategy PhD at Oxford / FHI | $37K | Sören Mindermann | 2019-07 |
| 6-month salary to develop tools to test the natural abstractions hypothesis | $35K | John Wentworth | 2021-01 |
| A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers | $26K | Tessa Alexanian | 2019-04 |
| Conducting independent research into AI forecasting and strategy questions | $30K | Tegan McCaslin | 2019-04 |
| One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields. | $80K | Logan Strohl | 2021-01 |
| Formalizing perceptual complexity with application to safe intelligence amplification | $30K | Anand Srinivasan | 2019-04 |
| Three months of blogging and movement building at the intersection of EA/longtermism and progress studies | $18K | Nicholas (Nick) Whitaker | 2021-10 |
| Support multiple SPARC project operations during 2021 | $15K | SPARC | 2021-01 |
| Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades | $11K | Zach Freitas-Groff | 2021-07 |
| A two-day, career-focused workshop to inform and connect European EAs interested in AI governance | $18K | Alex Lintz | 2019-01 |
| To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety | $23K | Stag Lynn | 2019-07 |
| Funding towards a 2-year postdoctoral stint to work on Safety in AI, with a focus on developing value-aligned systems | $275K | Kush Bhatia | 2022-01 |
| 10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability | $19K | Benedikt Hoeltgen | 2021-10 |
Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date.$65KAnthony Aguirre2020-01funds.effectivealtruism.org[Long-Term Future Fund] Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date.xng_1vsce_
Multi-model approach to corporate and state actors relevant to existential risk mitigation$30KDavid Manheim2019-07funds.effectivealtruism.org[Long-Term Future Fund] Multi-model approach to corporate and state actors relevant to existential risk mitigationxng_1vsce_
1-year salary for Adam Shimi to conduct independent research in AI Alignment$60KAdam Shimi2021-01funds.effectivealtruism.org[Long-Term Future Fund] 1-year salary for Adam Shimi to conduct independent research in AI Alignmentxng_1vsce_
A research agenda rigorously connecting the internal and external views of value synthesis$30KDavid Girardo2019-04funds.effectivealtruism.org[Long-Term Future Fund] A research agenda rigorously connecting the internal and external views of value synthesisxng_1vsce_
BERI will support SERI when university systems are unable to help$60KBerkeley Existential Risk Initiative2021-01funds.effectivealtruism.org[Long-Term Future Fund] BERI will support SERI when university systems are unable to helpxng_1vsce_
Financial support for work on a biosecurity research project and workshop, and travel expenses$15KSimon Grimm2022-01funds.effectivealtruism.org[Long-Term Future Fund] Financial support for work on a biosecurity research project and workshop, and travel expensesxng_1vsce_
3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity$15KCaleb Withers2022-01funds.effectivealtruism.org[Long-Term Future Fund] 3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurityxng_1vsce_
Support to create language model (LM) tools to aid alignment research through feedback and content generation$40KLogan Smith2022-01funds.effectivealtruism.org[Long-Term Future Fund] Support to create language model (LM) tools to aid alignment research through feedback and content generationxng_1vsce_
Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD$10KOrpheus Lummis2019-04funds.effectivealtruism.org[Long-Term Future Fund] Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhDxng_1vsce_
Longtermist lessons from COVID$5.6KGavin Leech2022-01funds.effectivealtruism.org[Long-Term Future Fund] Longtermist lessons from COVIDxng_1vsce_
Writing preliminary content for an encyclopedia of effective altruism$17KPablo Stafforini2020-01funds.effectivealtruism.org[Long-Term Future Fund] Writing preliminary content for an encyclopedia of effective altruismxng_1vsce_
Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission$9.8KMrinank Sharma2020-10funds.effectivealtruism.org[Long-Term Future Fund] Understanding the Impact of Lifting Government Interventions against COVID-19 Transmissionxng_1vsce_
Unrestricted donation$50KElicit (AI Research Tool)2019-04funds.effectivealtruism.org[Long-Term Future Fund] Unrestricted donationxng_1vsce_
An offline community hub for rationalists and EAs$50KVyacheslav Matyuhin2019-04funds.effectivealtruism.org[Long-Term Future Fund] An offline community hub for rationalists and EAsxng_1vsce_
Upskilling investigation of AI Safety via debate and ML training$10KJoe Collman2019-10funds.effectivealtruism.org[Long-Term Future Fund] Upskilling investigation of AI Safety via debate and ML trainingxng_1vsce_
Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge$200KDavid Krueger2021-01funds.effectivealtruism.org[Long-Term Future Fund] Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridgexng_1vsce_
Funding to pay participants to test a forecasting training program$3.2KLogan McNichols2021-10funds.effectivealtruism.org[Long-Term Future Fund] Funding to pay participants to test a forecasting training programxng_1vsce_
Building infrastructure for the future of effective forecasting efforts$70KOzzie Gooen2019-04funds.effectivealtruism.org[Long-Term Future Fund] Building infrastructure for the future of effective forecasting effortsxng_1vsce_
Subsidized therapy/coaching/mediation for rationalists, EA, and startups that are working on things like x-risks.$40KDamon Pourtahmaseb-Sasi2019-10funds.effectivealtruism.org[Long-Term Future Fund] Subsidized therapy/coaching/mediation for rationalists, EA, and startups that are working on things like x-risks.xng_1vsce_
8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI$28KJames Bernardi2021-07funds.effectivealtruism.org[Long-Term Future Fund] 8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHIxng_1vsce_
6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment$50KThomas Woodside2022-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignmentxng_1vsce_
12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals$20KLauren Lee2019-04funds.effectivealtruism.org[Long-Term Future Fund] 12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goalsxng_1vsce_
Conducting postdoctoral research at Harvard on the psychology of EA/long-termism$50KLucius Caviola2019-04funds.effectivealtruism.org[Long-Term Future Fund] Conducting postdoctoral research at Harvard on the psychology of EA/long-termismxng_1vsce_
12-month salary to provide runway after finishing RSP$55KThe Future of Humanity Institute2021-01funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to provide runway after finishing RSPxng_1vsce_
Educational Scholarship in AI Alignment$22KJaeson Booker2022-01funds.effectivealtruism.org[Long-Term Future Fund] Educational Scholarship in AI Alignmentxng_1vsce_
Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research$70KRethink Priorities2021-01funds.effectivealtruism.org[Long-Term Future Fund] Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing researchxng_1vsce_
Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$163KCentre for Effective Altruism2018-07funds.effectivealtruism.org[Long-Term Future Fund] Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)xng_1vsce_
Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter$1.1KAlex Turner2022-01funds.effectivealtruism.org[Long-Term Future Fund] Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitterxng_1vsce_
Unrestricted donation$50K2019-04funds.effectivealtruism.org[Long-Term Future Fund] Unrestricted donationxng_1vsce_
Support David Reber - 9.5 months of strategic outsourcing to read up on AI Safety and find mentors$20KDavid Reber2021-10funds.effectivealtruism.org[Long-Term Future Fund] Support David Reber - 9.5 months of strategic outsourcing to read up on AI Safety and find mentorsxng_1vsce_
12-month salary for independent research, upskilling, and finding a stable position in AI-Safety$24KRobert Kralisch2022-01funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary for independent research, upskilling, and finding a stable position in AI-Safetyxng_1vsce_
A major expansion of the Metaculus prediction platform and its community$70KAnthony Aguirre2019-04funds.effectivealtruism.org[Long-Term Future Fund] A major expansion of the Metaculus prediction platform and its communityxng_1vsce_
Research project on the longevity and decay of universities, philanthropic foundations, and catholic orders$3.6KMaximilian Negele2020-10funds.effectivealtruism.org[Long-Term Future Fund] Research project on the longevity and decay of universities, philanthropic foundations, and catholic ordersxng_1vsce_
Organising immersive workshops on meta skills and x-risk for STEM students at top universities.$33KTamara Borine2020-10funds.effectivealtruism.org[Long-Term Future Fund] Organising immersive workshops on meta skills and x-risk for STEM students at top universities.xng_1vsce_
Support for alignment theory agenda evaluation$25KJack Ryan2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support for alignment theory agenda evaluationxng_1vsce_
AI safety dinners$10KNeil Crawford2022-07funds.effectivealtruism.org[Long-Term Future Fund] AI safety dinnersxng_1vsce_
AI safety research$1.5KLukas Berglund2022-10funds.effectivealtruism.org[Long-Term Future Fund] AI safety researchxng_1vsce_
Compensation for a non-fiction book on threat of AGI for a general audience$50KDarren McKee2022-07funds.effectivealtruism.org[Long-Term Future Fund] Compensation for a non-fiction book on threat of AGI for a general audiencexng_1vsce_
Funding to perform human evaluations for evaluating different machine learning methods for aligning language models$10KRobert Kirk2022funds.effectivealtruism.org[Long-Term Future Fund] Funding to perform human evaluations for evaluating different machine learning methods for aligning language modelsxng_1vsce_
Travel Support to BWC RevCon & Side Events$3.5KTheo Knopfer2022-10funds.effectivealtruism.org[Long-Term Future Fund] Travel Support to BWC RevCon & Side Eventsxng_1vsce_
travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems$11KRobert Long2023-01funds.effectivealtruism.org[Long-Term Future Fund] travel funding for participants in a workshop on the science of consciousness and current and near-term AI systemsxng_1vsce_
Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows)$100KNora Ammann2023-01funds.effectivealtruism.org[Long-Term Future Fund] Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows)xng_1vsce_
Neural network interpretability research$13KNicholas Greig2022-07funds.effectivealtruism.org[Long-Term Future Fund] Neural network interpretability researchxng_1vsce_
Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAO$4.9KJacob Mendel2023-01funds.effectivealtruism.org[Long-Term Future Fund] Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAOxng_1vsce_
6 months of independent alignment research and upskilling$30KZhengbo Xiang (Alana)2022funds.effectivealtruism.org[Long-Term Future Fund] 6 months of independent alignment research and upskillingxng_1vsce_
Research into the international viability of FHI's Windfall Clause$3KJohn Bridge2022-07funds.effectivealtruism.org[Long-Term Future Fund] Research into the international viability of FHI's Windfall Clausexng_1vsce_
6-month salary for research into preventing steganography in interpretable representations using multiple agents$20KHoagy Cunningham2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for research into preventing steganography in interpretable representations using multiple agentsxng_1vsce_
Research on EA and longtermism$70KAaron Bergman2022-07funds.effectivealtruism.org[Long-Term Future Fund] Research on EA and longtermismxng_1vsce_
6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations.$40KLogan Smith2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations.xng_1vsce_
1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs.$50KPaul Bricman2022funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs.xng_1vsce_
6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent$23KTom Lieberum2022-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucentxng_1vsce_
This grant will support Naoya Okamoto in upskilling in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign.$7.5KNaoya Okamoto2023-01funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Naoya Okamoto in upskilling in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign.xng_1vsce_
Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years$3KDavid Staley2023-01funds.effectivealtruism.org[Long-Term Future Fund] Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 yearsxng_1vsce_
Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster$50KMarius Hobbhahn2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety fasterxng_1vsce_
12-month salary to study and get into AI Safety Research and work on related EA projects$14KLuca De Leo2022-10funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to study and get into AI Safety Research and work on related EA projectsxng_1vsce_
4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit$20KMax Kaufmann2022funds.effectivealtruism.org[Long-Term Future Fund] 4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fitxng_1vsce_
Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike$5KIsabel Johnson2022-07funds.effectivealtruism.org[Long-Term Future Fund] Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strikexng_1vsce_
6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe$36KSasha Cooper2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophexng_1vsce_
6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper$33KJonathan Ng2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paperxng_1vsce_
Financial support to help productivity and increase time of early career alignment researcher$7KMax Kaufmann2022-07funds.effectivealtruism.org[Long-Term Future Fund] Financial support to help productivity and increase time of early career alignment researcherxng_1vsce_
5-month part time salary for collaborating on a research paper analyzing the implications of compute access$2.5KSage Bergerson2022funds.effectivealtruism.org[Long-Term Future Fund] 5-month part time salary for collaborating on a research paper analyzing the implications of compute accessxng_1vsce_
Support for living expenses while doing PhD in AI safety - technical research and community building work$2.3KFrancis Rhys Ward2022funds.effectivealtruism.org[Long-Term Future Fund] Support for living expenses while doing PhD in AI safety - technical research and community building workxng_1vsce_
6-month salary for self-study to be more effective at AI alignment research$15KThomas Kehrenberg2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for self-study to be more effective at AI alignment researchxng_1vsce_
The Alignable Structures workshop in Philadelphia$9KQuinn Dougherty2022-10funds.effectivealtruism.org[Long-Term Future Fund] The Alignable Structures workshop in Philadelphiaxng_1vsce_
New laptop for technical AI safety research$4.1KPeter Barnett2022-07funds.effectivealtruism.org[Long-Term Future Fund] New laptop for technical AI safety researchxng_1vsce_
10-month funding to study ML at university and AIS independently$500Patricio Vercesi2023-01funds.effectivealtruism.org[Long-Term Future Fund] 10-month funding to study ML at university and AIS independentlyxng_1vsce_
6 month salary to improve the US regulatory environment for prediction markets$138KSolomon Sia2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6 month salary to improve the US regulatory environment for prediction marketsxng_1vsce_
Develop and market video game to explain the Stop Button Problem to the public & STEM individuals$100KLone Pine Games, LLC2022-07funds.effectivealtruism.org[Long-Term Future Fund] Develop and market video game to explain the Stop Button Problem to the public & STEM individualsxng_1vsce_
A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan$73K2022funds.effectivealtruism.org[Long-Term Future Fund] A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japanxng_1vsce_
Paid internships for promising Oxford students to try out supervised AI Safety research projects$60KAI Safety Hub Ltd2022-07funds.effectivealtruism.org[Long-Term Future Fund] Paid internships for promising Oxford students to try out supervised AI Safety research projectsxng_1vsce_
Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions$4KKai Sandbrink2022-07funds.effectivealtruism.org[Long-Term Future Fund] Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positionsxng_1vsce_
Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022$23KWilliam D'Alessandro2022-01funds.effectivealtruism.org[Long-Term Future Fund] Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022xng_1vsce_
Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock$3.5KConor Barnes2022-07funds.effectivealtruism.org[Long-Term Future Fund] Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clockxng_1vsce_
2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub$15KMax Räuker2022funds.effectivealtruism.org[Long-Term Future Fund] 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hubxng_1vsce_
Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022$110KCzech Association for Effective Altruism2022-07funds.effectivealtruism.org[Long-Term Future Fund] Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022xng_1vsce_
8 weeks scholars program to pair promising alignment researchers with renowned mentors$316KAI Safety Support2022-10funds.effectivealtruism.org[Long-Term Future Fund] 8 weeks scholars program to pair promising alignment researchers with renowned mentorsxng_1vsce_
Stanford Artificial Intelligence Professional Program tuition$4.8KMario Peng Lee2022-07funds.effectivealtruism.org[Long-Term Future Fund] Stanford Artificial Intelligence Professional Program tuitionxng_1vsce_
(professional development grant) New laptop for technical AI safety research$2.5KMax Lamparth2022funds.effectivealtruism.org[Long-Term Future Fund] (professional development grant) New laptop for technical AI safety researchxng_1vsce_
Year-long salary for shard theory and RL mech int research$220KAlexander Turner2023-01funds.effectivealtruism.org[Long-Term Future Fund] Year-long salary for shard theory and RL mech int researchxng_1vsce_
Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople$5KChris Patrick2022-07funds.effectivealtruism.org[Long-Term Future Fund] Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeoplexng_1vsce_
Support to further develop a branch of rationality focused on patient and direct observation$80KLogan Strohl2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support to further develop a branch of rationality focused on patient and direct observationxng_1vsce_
1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada$87KWyatt Tessari2022-07funds.effectivealtruism.org[Long-Term Future Fund] 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canadaxng_1vsce_
3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI$5.5KTomislav Kurtovic2022funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AIxng_1vsce_
6-month salary for two people to find formalisms for modularity in neural networks$73KLucius Bushnaq2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for two people to find formalisms for modularity in neural networksxng_1vsce_
One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safety$21KSteve Petersen2022-10funds.effectivealtruism.org[Long-Term Future Fund] One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safetyxng_1vsce_
6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper$167KKaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paperxng_1vsce_
European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers$170KEffective Altruism Geneva2022-01funds.effectivealtruism.org[Long-Term Future Fund] European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchersxng_1vsce_
4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreat$10KJonas Hallgren2022-10funds.effectivealtruism.org[Long-Term Future Fund] 4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreatxng_1vsce_
Make 12 more AXRP episodes$24KDaniel Filan2022funds.effectivealtruism.org[Long-Term Future Fund] Make 12 more AXRP episodesxng_1vsce_
12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk$60KRoss Graham2022-07funds.effectivealtruism.org[Long-Term Future Fund] 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-riskxng_1vsce_
1-year salary for research in applications of natural abstraction$180KJohn Wentworth2022-10funds.effectivealtruism.org[Long-Term Future Fund] 1-year salary for research in applications of natural abstractionxng_1vsce_
Financial support to work part time on an academic project evaluating factors relevant to digital consciousness$11KDerek Shiller2022-10funds.effectivealtruism.org[Long-Term Future Fund] Financial support to work part time on an academic project evaluating factors relevant to digital consciousnessxng_1vsce_
6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org$98KJeffrey Ladish2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6 month salary & operational expenses to start a cybersecurity & alignment risk assessment orgxng_1vsce_
6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations$6KIván Godoy2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundationsxng_1vsce_
3-month salary for upskilling in PyTorch and AI safety research.$19KAlex Infanger2023-01funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary for upskilling in PyTorch and AI safety research.xng_1vsce_
6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI$50KNicky Pochinkov2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGIxng_1vsce_
Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition)$4KFabienne Sandkühler2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition)xng_1vsce_
Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group$5.6KDavid Quarel2022funds.effectivealtruism.org[Long-Term Future Fund] Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety groupxng_1vsce_
6-month salary to conduct AI alignment research on circuits in decision transformers$50KJoseph Bloom2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to conduct AI alignment research on circuits in decision transformersxng_1vsce_
6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience$8KLiam Carroll2022funds.effectivealtruism.org[Long-Term Future Fund] 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audiencexng_1vsce_
Funding for a one year machine learning and computational statistics master’s at UCL$38KShavindra Jayasekera2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding for a one year machine learning and computational statistics master’s at UCLxng_1vsce_
Funding for project transitioning from AI capabilities to AI Safety research.$8.2KGerold Csendes2022funds.effectivealtruism.org[Long-Term Future Fund] Funding for project transitioning from AI capabilities to AI Safety research.xng_1vsce_
Twelve month salary to work as a global rationality organizer$130KSkyler Crossman2022-10funds.effectivealtruism.org[Long-Term Future Fund] Twelve month salary to work as a global rationality organizerxng_1vsce_
Support to work on Aisafety.camp project, impact of human dogmatism on training$2KKevin Wang2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support to work on Aisafety.camp project, impact of human dogmatism on trainingxng_1vsce_
Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety$55KRobert Miles2023-01funds.effectivealtruism.org[Long-Term Future Fund] Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safetyxng_1vsce_
6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation$47KSamuel Brown2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisationxng_1vsce_
5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend$27KJoel Becker2022-10funds.effectivealtruism.org[Long-Term Future Fund] 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekendxng_1vsce_
One year of funding to improve an established community hub for EA in London$50KNewspeak House2022-07funds.effectivealtruism.org[Long-Term Future Fund] One year of funding to improve an established community hub for EA in Londonxng_1vsce_
Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions$90KColumbia University2022-01funds.effectivealtruism.org[Long-Term Future Fund] Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actionsxng_1vsce_
Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science$26KMax Clarke2022-10funds.effectivealtruism.org[Long-Term Future Fund] Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Sciencexng_1vsce_
6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy$40KWill Aldred2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategyxng_1vsce_
6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space$70KAlexander Lintz2022funds.effectivealtruism.org[Long-Term Future Fund] 6 months salary for independent work centered on distillation and coordination in the AI governance & strategy spacexng_1vsce_
Support to cover the costs of leaving employment in order to pursue AI safety research.$4KKajetan Janiak2022funds.effectivealtruism.org[Long-Term Future Fund] Support to cover the costs of leaving employment in order to pursue AI safety research.xng_1vsce_
6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability$29KFabian Schimpf2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictabilityxng_1vsce_
PhD Stipend Top Up for CHAI PhD Student.$6.7KAlex Turner2022-01funds.effectivealtruism.org[Long-Term Future Fund] PhD Stipend Top Up for CHAI PhD Student.xng_1vsce_
Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford$3.6KBálint Pataki2022-07funds.effectivealtruism.org[Long-Term Future Fund] Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxfordxng_1vsce_
One year part-time spent on AI safety upskilling and concrete research projects$63KRoss Nordby2022-10funds.effectivealtruism.org[Long-Term Future Fund] One year part-time spent on AI safety upskilling and concrete research projectsxng_1vsce_
Pass on funds for Astral Codex Ten Everywhere meetups$22KSkyler Crossman2023-01funds.effectivealtruism.org[Long-Term Future Fund] Pass on funds for Astral Codex Ten Everywhere meetupsxng_1vsce_
Payment for part-time rationality community building$4KBoston Astral Codex Ten2022-10funds.effectivealtruism.org[Long-Term Future Fund] Payment for part-time rationality community buildingxng_1vsce_
4-month salary for two people to find formalisms for modularity in neural networks$67KLucius Bushnaq2023-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary for two people to find formalisms for modularity in neural networksxng_1vsce_
Travel support to attend the Symposium on AGI Safety in Oxford in May$1.5KSmitha Milli2023-01funds.effectivealtruism.org[Long-Term Future Fund] Travel support to attend the Symposium on AGI Safety in Oxford in Mayxng_1vsce_
Funding the last year of my PhD on embedded agency, to free up my time from teaching$64KDaniel Herrmann2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding the last year of my PhD on embedded agency, to free up my time from teachingxng_1vsce_
Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity$8.2KCharles Whittaker2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funds to support travel for academic research projects relating to pandemic preparedness and biosecurityxng_1vsce_
Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights.$36KSimon Skade2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights.xng_1vsce_
2 years of GovAI salary and overheads for Robert Trager$402K2022-07funds.effectivealtruism.org[Long-Term Future Fund] 2 years of GovAI salary and overheads for Robert Tragerxng_1vsce_
Support for Jay Bailey for work in ML for AI Safety$79KJay Bailey2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support for Jay Bailey for work in ML for AI Safetyxng_1vsce_
4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research.$12KBenjamin Sturgeon2023-01funds.effectivealtruism.org[Long-Term Future Fund] 4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research.xng_1vsce_
Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp.$10KJan Kirchner2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp.xng_1vsce_
4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream$16KJoshua Reiners2023-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual streamxng_1vsce_
Fine-tuning large language models for an interpretability challenge (compute costs)$11KAndrei Alexandru2022funds.effectivealtruism.org[Long-Term Future Fund] Fine-tuning large language models for an interpretability challenge (compute costs)xng_1vsce_
Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward$40KMichael Parker2022funds.effectivealtruism.org[Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forwardxng_1vsce_
12-month salary to work on alignment research!$96KGarrett Baker2022-10funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to work on alignment research!xng_1vsce_
Funding for Computer Science PhD$349KDavid Reber2022-01funds.effectivealtruism.org[Long-Term Future Fund] Funding for Computer Science PhDxng_1vsce_
6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL$40KJeremy Gillen2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RLxng_1vsce_
4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models$1KAbhijit Narayan S2022funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL modelsxng_1vsce_
12-month salary to work on ML models for detecting genetic engineering in pathogens$85KJade Zaslavsky2022-10funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to work on ML models for detecting genetic engineering in pathogensxng_1vsce_
2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time$745Ardysatrio Haroen2022-10funds.effectivealtruism.org[Long-Term Future Fund] 2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make timexng_1vsce_
Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities$44KAdam Rutkowski2022-10funds.effectivealtruism.org[Long-Term Future Fund] Piloting an EA hardware lab for prototyping hardware relevant to longtermist prioritiesxng_1vsce_
Retroactive grant for managing the MATS program, 1.0 and 2.0$27KMATS ML Alignment Theory Scholars program2022-10funds.effectivealtruism.org[Long-Term Future Fund] Retroactive grant for managing the MATS program, 1.0 and 2.0xng_1vsce_
Enabling prosaic alignment research with a multi-modal model on natural language and chess$25KPhilipp Bongartz2022-07funds.effectivealtruism.org[Long-Term Future Fund] Enabling prosaic alignment research with a multi-modal model on natural language and chessxng_1vsce_
2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work$16KJonathan Ng2022-10funds.effectivealtruism.org[Long-Term Future Fund] 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment workxng_1vsce_
3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment$1KAmrita A. Nair2022-10funds.effectivealtruism.org[Long-Term Future Fund] 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignmentxng_1vsce_
Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency$180KEffective Altruism Geneva2022-07funds.effectivealtruism.org[Long-Term Future Fund] Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residencyxng_1vsce_
6-months salary for researching "Framing computational systems such that we can find meaningful concepts." & Upskilling$24KMatthias Georg Mayer2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-months salary for researching "Framing computational systems such that we can find meaningful concepts." & Upskillingxng_1vsce_
6 months’ salary to upskill on technical AI safety through project work and studying$50KRusheb Shah2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6 months’ salary to upskill on technical AI safety through project work and studyingxng_1vsce_
6-month salary for an AI alignment research project on the manipulation of humans by AI$25KFelix Hofstätter2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for an AI alignment research project on the manipulation of humans by AIxng_1vsce_
6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation$26KDavid Hahnemann, Luan Ademi2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computationxng_1vsce_
Support for research into applied technical AI alignment work$10KPhilippe Rivet2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support for research into applied technical AI alignment workxng_1vsce_
A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research$305KPrinciples of Intelligent Behavior in Biological and Social Systems2022-01funds.effectivealtruism.org[Long-Term Future Fund] A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment researchxng_1vsce_
Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence$135KEffective Altruism Geneva2022-07funds.effectivealtruism.org[Long-Term Future Fund] Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residencexng_1vsce_
5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal$14KNikiforos Pittaras2022-07funds.effectivealtruism.org[Long-Term Future Fund] 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayalxng_1vsce_
12-Month Salary and Compute Expenses to do AI Safety Research with LLMs$70KNicky Pochinkov2023-01funds.effectivealtruism.org[Long-Term Future Fund] 12-Month Salary and Compute Expenses to do AI Safety Research with LLMsxng_1vsce_
I am looking for a career transition grant to give me more time for job hunting & networking$3.6KAlexander Large2023-01funds.effectivealtruism.org[Long-Term Future Fund] I am looking for a career transition grant to give me more time for job hunting & networkingxng_1vsce_
Research and a report/paper on the role of emergency powers in the governance of X-Risk$26KDaniel Skeffington2022-07funds.effectivealtruism.org[Long-Term Future Fund] Research and a report/paper on the role of emergency powers in the governance of X-Riskxng_1vsce_
Equipment to improve productivity while doing AI Safety research$3.9KTim Farrelly2022-07funds.effectivealtruism.org[Long-Term Future Fund] Equipment to improve productivity while doing AI Safety researchxng_1vsce_
3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs$20KPeter Ruschhaupt2022funds.effectivealtruism.org[Long-Term Future Fund] 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobsxng_1vsce_
One-year funding of Astral Codex Ten meetup in Philadelphia$5KWesley Fenza2023-01funds.effectivealtruism.org[Long-Term Future Fund] One-year funding of Astral Codex Ten meetup in Philadelphiaxng_1vsce_
Reconstruction attacks in federated learning$5KUniversity of Cambridge/ None2022-07funds.effectivealtruism.org[Long-Term Future Fund] Reconstruction attacks in federated learningxng_1vsce_
This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project$48KBilal Chughtai2023-01funds.effectivealtruism.org[Long-Term Future Fund] This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability projectxng_1vsce_
Retrospective funding for research retreat on a decision-theory / cause-prioritization topic.$10KDaniel Kokotajlo2022funds.effectivealtruism.org[Long-Term Future Fund] Retrospective funding for research retreat on a decision-theory / cause-prioritization topic.xng_1vsce_
Funding for the AI Safety Nudge Competition$5.2KAI Safety Nudge Competition2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding for the AI Safety Nudge Competitionxng_1vsce_
Support to work on AI alignment research$16KMatt MacDermott2022-01funds.effectivealtruism.org[Long-Term Future Fund] Support to work on AI alignment researchxng_1vsce_
9 months of funding for an early-career alignment researcher, to work with Owain Evans and others.$45KMax Kaufmann2022funds.effectivealtruism.org[Long-Term Future Fund] 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others.xng_1vsce_
Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research$4.3KEffective Altruism Geneva2022funds.effectivealtruism.org[Long-Term Future Fund] Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Researchxng_1vsce_
One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI$17KGunnar Zarncke2022-10funds.effectivealtruism.org[Long-Term Future Fund] One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGIxng_1vsce_
I am seeking funding to attend a Center for Applied Rationality (CFAR) workshop in Prague during the Fall$1.8KZach Peck2022-10funds.effectivealtruism.org[Long-Term Future Fund] I am seeking funding to attend a Center for Applied Rationality (CFAR) workshop in Prague during the Fallxng_1vsce_
Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models$210KJohn Burden2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation modelsxng_1vsce_
Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research$60KBrian Porter2022-10funds.effectivealtruism.org[Long-Term Future Fund] Independent research and upskilling for one year, to transition from academic philosophy to AI alignment researchxng_1vsce_
Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection$20KNoga Aharony2022-07funds.effectivealtruism.org[Long-Term Future Fund] Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detectionxng_1vsce_
6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety$26KKane Nicholson2022funds.effectivealtruism.org[Long-Term Future Fund] 6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safetyxng_1vsce_
Support funding during 2 years of an AI safety PhD at Oxford$12KOndrej Bajgar2022-07funds.effectivealtruism.org[Long-Term Future Fund] Support funding during 2 years of an AI safety PhD at Oxfordxng_1vsce_
1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research.$150KDarryl Wright2022-07funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research.xng_1vsce_
Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc.$2.1KJingyi Wang2023-01funds.effectivealtruism.org[Long-Term Future Fund] Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc.xng_1vsce_
Developing and maintaining projects/resources used by the EA and rationality communities$60KSaid Achmiz2023-01funds.effectivealtruism.org[Long-Term Future Fund] Developing and maintaining projects/resources used by the EA and rationality communitiesxng_1vsce_
General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences$115KAlexander Turner2023-01funds.effectivealtruism.org[Long-Term Future Fund] General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influencesxng_1vsce_
Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding$2.5KJosiah Lopez-Wild2022-07funds.effectivealtruism.org[Long-Term Future Fund] Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML codingxng_1vsce_
6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation$28KTheo Knopfer2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigationxng_1vsce_
4 month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism$32KQuentin Feuillade--Montixi2023-01funds.effectivealtruism.org[Long-Term Future Fund] 4 month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgismxng_1vsce_
Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada$17KWyatt Tessari2022funds.effectivealtruism.org[Long-Term Future Fund] Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canadaxng_1vsce_
4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD$17KConor McGlynn2022-07funds.effectivealtruism.org[Long-Term Future Fund] 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhDxng_1vsce_
9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research$62KMagdalena Wache2022-10funds.effectivealtruism.org[Long-Term Future Fund] 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical researchxng_1vsce_
300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics.$4.5KLeah Pierson2022-10funds.effectivealtruism.org[Long-Term Future Fund] 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics.xng_1vsce_
≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building$35KCharlie Griffin2022funds.effectivealtruism.org[Long-Term Future Fund] ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community buildingxng_1vsce_
Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant$6.6KJeffrey Ohl2022-07funds.effectivealtruism.org[Long-Term Future Fund] Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicantxng_1vsce_
6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination$25KChloe Lee2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordinationxng_1vsce_
Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research$2KRory Gillis2022-07funds.effectivealtruism.org[Long-Term Future Fund] Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance researchxng_1vsce_
Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival.$27KUniversity of Otago, Wellington, New Zealand2022-01funds.effectivealtruism.org[Long-Term Future Fund] Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival.xng_1vsce_
6-month salary to develop an overview of the current state of AI alignment research, and begin contributing$70KGergely Szucs2022-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to develop an overview of the current state of AI alignment research, and begin contributingxng_1vsce_
Grant to cover 1 year of tuition fees and living expenses to pursue PhD CS at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration.$63KHunar Batra2023-01funds.effectivealtruism.org[Long-Term Future Fund] Grant to cover 1 year of tuition fees and living expenses to pursue PhD CS at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration.xng_1vsce_
7 month salary to study a Graduate Diploma of International Affairs at The Australian National University$9KMatthew MacInnes2023-01funds.effectivealtruism.org[Long-Term Future Fund] 7 month salary to study a Graduate Diploma of International Affairs at The Australian National Universityxng_1vsce_
Funding to start a longtermist org and support research$495KTransformative Futures Foresight Institute2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding to start a longtermist org and support researchxng_1vsce_
Slack money for increased productivity in AI Alignment research$17KAdam Shimi2022-01funds.effectivealtruism.org[Long-Term Future Fund] Slack money for increased productivity in AI Alignment researchxng_1vsce_
2-year salary for work on the learning-theoretic AI alignment research agenda$100KVanessa Kosoy2023-01funds.effectivealtruism.org[Long-Term Future Fund] 2-year salary for work on the learning-theoretic AI alignment research agendaxng_1vsce_
Support to conduct work in AI safety$5KBenjamin Anderson2022funds.effectivealtruism.org[Long-Term Future Fund] Support to conduct work in AI safetyxng_1vsce_
Funding to support PhD in AI Safety at Imperial College London, technical research and community building$6.3KFrancis Rhys Ward2022-07funds.effectivealtruism.org[Long-Term Future Fund] Funding to support PhD in AI Safety at Imperial College London, technical research and community buildingxng_1vsce_
3-month salary for SERI-MATS extension$24KMatt MacDermott2023-01funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary for SERI-MATS extensionxng_1vsce_
A relocation grant to help me to move and settle into a PhD program and cover initial expenses$6.5KEgor Zverev2022-10funds.effectivealtruism.org[Long-Term Future Fund] A relocation grant to help me to move and settle into a PhD program and cover initial expensesxng_1vsce_
Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified.$16KWikiciv Foundation2022funds.effectivealtruism.org[Long-Term Future Fund] Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified.xng_1vsce_
6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project.$50KJay Bailey2023-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project.xng_1vsce_
1-year salary for upskilling in technical AI alignment research$96KChu Chen2022-10funds.effectivealtruism.org[Long-Term Future Fund] 1-year salary for upskilling in technical AI alignment researchxng_1vsce_
6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety$4.5KSamuel Nellessen2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safetyxng_1vsce_
4-month salary for conceptual/theoretical research towards perfect world-model interpretability$30KAndrey Tumas2022funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary for conceptual/theoretical research towards perfect world-model interpretabilityxng_1vsce_
6-month salary to skill up and gain experience to start working on AI safety full-time$14KMateusz Bagiński2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to skill up and gain experience to start working on AI safety full-timexng_1vsce_
3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas$26KSam Marks2022funds.effectivealtruism.org[Long-Term Future Fund] 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendasxng_1vsce_
6 months salary to do independent AI alignment research focused on formal alignment and agent foundations$30KTamsin Leake2022funds.effectivealtruism.org[Long-Term Future Fund] 6 months salary to do independent AI alignment research focused on formal alignment and agent foundationsxng_1vsce_
Funding for salary and living expenses while continuing to develop a framework of optimisation.$8KAlex Altair2022funds.effectivealtruism.org[Long-Term Future Fund] Funding for salary and living expenses while continuing to develop a framework of optimisation.xng_1vsce_
Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program$4.4KViktoria Malyasova2022-10funds.effectivealtruism.org[Long-Term Future Fund] Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS programxng_1vsce_
Weekend organised as a part of the co-founder matching process of a group to found a human data collection org$2.3KPatrick Gruban2022-10funds.effectivealtruism.org[Long-Term Future Fund] Weekend organised as a part of the co-founder matching process of a group to found a human data collection orgxng_1vsce_
1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies$90KShoshannah Tekofsky2023-01funds.effectivealtruism.org[Long-Term Future Fund] 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studiesxng_1vsce_
3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers$15KJonas Hallgren2022-07funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papersxng_1vsce_
24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods$102KLennart Stern2022-01funds.effectivealtruism.org[Long-Term Future Fund] 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goodsxng_1vsce_
6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI$11KAlfred Harwood2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AIxng_1vsce_
Support for AI alignment outreach in France (video/audio/text/events) & field-building$25KJérémy Perret2022-10funds.effectivealtruism.org[Long-Term Future Fund] Support for AI alignment outreach in France (video/audio/text/events) & field-buildingxng_1vsce_
3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment$5KAmrita A. Nair2022funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignmentxng_1vsce_
4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems$12KAlan Chan2022funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systemsxng_1vsce_
Scholarship for PhD student working on research related to AI Safety$8KJosiah Lopez-Wild2022funds.effectivealtruism.org[Long-Term Future Fund] Scholarship for PhD student working on research related to AI Safetyxng_1vsce_
12-month salary to transition career into technical alignment research$25KDan Valentine2022-10funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to transition career into technical alignment researchxng_1vsce_
6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules$40KLogan Smith2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedulesxng_1vsce_
A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit$2.5KHamza Tariq Chaudhry2022-10funds.effectivealtruism.org[Long-Term Future Fund] A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summitxng_1vsce_
8-month salary for three people to investigate the origins of modularity in neural networks$125KLucius Bushnaq, Callum McDougall, Avery Griffin2022-07funds.effectivealtruism.org[Long-Term Future Fund] 8-month salary for three people to investigate the origins of modularity in neural networksxng_1vsce_
12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism$81KSamuel Brown2022funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalismxng_1vsce_
A research & networking retreat for winners of the Eliciting Latent Knowledge contest$72K2022-10funds.effectivealtruism.org[Long-Term Future Fund] A research & networking retreat for winners of the Eliciting Latent Knowledge contestxng_1vsce_
6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems$24KJohannes C. Mayer2022-10funds.effectivealtruism.org[Long-Term Future Fund] 6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systemsxng_1vsce_
Support to conduct a research project collaboration on Compute Governance$68KLennart Heim2022-01funds.effectivealtruism.org[Long-Term Future Fund] Support to conduct a research project collaboration on Compute Governancexng_1vsce_
4-month funding for independent alignment research and study$15KArun Jose2022-10funds.effectivealtruism.org[Long-Term Future Fund] 4-month funding for independent alignment research and studyxng_1vsce_
EU Tech Policy Fellowship with ~10 trainees$69KTraining for Good2022-07funds.effectivealtruism.org[Long-Term Future Fund] EU Tech Policy Fellowship with ~10 traineesxng_1vsce_
Funding to increase my impact as an early-career biosecurity researcher$6KLennart Justen2022-10funds.effectivealtruism.org[Long-Term Future Fund] Funding to increase my impact as an early-career biosecurity researcherxng_1vsce_
~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety$4.8KAnson Ho2022-01funds.effectivealtruism.org[Long-Term Future Fund] ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safetyxng_1vsce_
Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical research$2KAntonio Franca2022-10funds.effectivealtruism.org[Long-Term Future Fund] Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical researchxng_1vsce_
One year of seed funding for a new AI interpretability research organisation$195KJessica Rumbelow2023-01funds.effectivealtruism.org[Long-Term Future Fund] One year of seed funding for a new AI interpretability research organisationxng_1vsce_
Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022$1.5KKadri Reis2022funds.effectivealtruism.org[Long-Term Future Fund] Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022xng_1vsce_
One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS$100KDavid Udell2022-10funds.effectivealtruism.org[Long-Term Future Fund] One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATSxng_1vsce_
6-month salary to upskill for AI safety$54KDaniel O'Connell2022funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to upskill for AI safetyxng_1vsce_
12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities$120KNicholas Kees Dupuis2023-01funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilitiesxng_1vsce_
3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment$22KJacques Thibodeau2022-07funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignmentxng_1vsce_
Cover participant stipends for AI Safety Camp Virtual 2023$73KRemmelt Ellen2022funds.effectivealtruism.org[Long-Term Future Fund] Cover participant stipends for AI Safety Camp Virtual 2023xng_1vsce_
Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people$80KMichael Pearce, Alice Riggs, Thomas Dooms2024-07funds.effectivealtruism.org[Long-Term Future Fund] Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 peoplexng_1vsce_
6-months stipend for transitioning to independent research on AI Safety$40KGlauber De Bona2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-months stipend for transitioning to independent research on AI Safetyxng_1vsce_
Spend 3 months (part time) assessing plausible pathways to slowing AI$5KGideon Futerman2024-04funds.effectivealtruism.org[Long-Term Future Fund] Spend 3 months (part time) assessing plausible pathways to slowing AIxng_1vsce_
4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs$10KJannik Brinkmann2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggsxng_1vsce_
6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships$273KAshgro Inc. (fiscal sponsor of Apart)2023-10funds.effectivealtruism.org[Long-Term Future Fund] 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowshipsxng_1vsce_
1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles$80KNicky Case2025-01funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articlesxng_1vsce_
A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety$5KChris Lakin2023-10funds.effectivealtruism.org[Long-Term Future Fund] A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safetyxng_1vsce_
3-month stipend to support research on the state of AI safety in China and implications for AI existential risk$12KAndrew Zeng2024-04funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend to support research on the state of AI safety in China and implications for AI existential riskxng_1vsce_
3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences$80KConstantin Weisser2024-07funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferencesxng_1vsce_
$10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship$10KBrian Tan2024-04funds.effectivealtruism.org[Long-Term Future Fund] $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowshipxng_1vsce_
1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group)$103KNora Ammann2023-10funds.effectivealtruism.org[Long-Term Future Fund] 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group)xng_1vsce_
This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity.$70KNathaniel Monson2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity.xng_1vsce_
6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability$52KAengus Lynch2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainabilityxng_1vsce_
6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after$40KJoe Kwon2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time afterxng_1vsce_
Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing$50KUniversity of Massachusetts Amherst2024-01funds.effectivealtruism.org[Long-Term Future Fund] Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishingxng_1vsce_
4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection$40KAndis Draguns2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detectionxng_1vsce_
3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor$23KAleksandar Makelov2024-01funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentorxng_1vsce_
This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia.$77KAI Safety Australia and New Zealand2024-01funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia.xng_1vsce_
Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension)$41KLucy Farnik2024-01funds.effectivealtruism.org[Long-Term Future Fund] Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension)xng_1vsce_
6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum.$8KAmritanshu Prasad2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum.xng_1vsce_
4-month stipend for a career transition period to explore roles in AI safety communications$10KSarah Hastings-Woodhouse2024-04funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend for a career transition period to explore roles in AI safety communicationsxng_1vsce_
12 week 0.6FT upskilling stipend for technical governance research management$11KMorgan Simpson2024-04funds.effectivealtruism.org[Long-Term Future Fund] 12 week 0.6FT upskilling stipend for technical governance research managementxng_1vsce_
3-month salary for SERI MATS extension to work on internal concept extraction$27KAnn-Kathrin Dombrowski2023-07funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary for SERI MATS extension to work on internal concept extractionxng_1vsce_
6-months of part-time stipend to launch a new science journalism outlet focused on AI Safety$50KMordechai Rorvig2025-01funds.effectivealtruism.org[Long-Term Future Fund] 6-months of part-time stipend to launch a new science journalism outlet focused on AI Safetyxng_1vsce_
6 to 12 months of funding to continue working on model psychology and evaluation$42KP.H.I2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6 to 12 months of funding to continue working on model psychology and evaluationxng_1vsce_
4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch$62KNiels uit de Bos2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switchxng_1vsce_
This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts.$55KAkbir Khan2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts.xng_1vsce_
Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025$7.1K2025-04funds.effectivealtruism.org[Long-Term Future Fund] Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025xng_1vsce_
A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy$36KAlexander Mann2023-07funds.effectivealtruism.org[Long-Term Future Fund] A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economyxng_1vsce_
Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment$40KAdelin Kassler2024-07funds.effectivealtruism.org[Long-Term Future Fund] Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deploymentxng_1vsce_
6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers$37KKeith Wynroe2024-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layersxng_1vsce_
12-month support for independent AI alignment research$45KAryeh Brill2024-04funds.effectivealtruism.org[Long-Term Future Fund] 12-month support for independent AI alignment researchxng_1vsce_
4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs$70KAxel Højmark2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMsxng_1vsce_
This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects.$32KDioptra (informal research group working on evals)2024-01funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects.xng_1vsce_
4-month fund for full time AI safety technical and/or governance research$11KHarrison Gietz2023-04funds.effectivealtruism.org[Long-Term Future Fund] 4-month fund for full time AI safety technical and/or governance researchxng_1vsce_
This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy.$8.7KCarson Ezell2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy.xng_1vsce_
4-month stipend to continue AI safety projects$25KHannah Erlebach2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to continue AI safety projectsxng_1vsce_
Part-time salary for independent AI safety research$40KRoss Nordby2023-07funds.effectivealtruism.org[Long-Term Future Fund] Part-time salary for independent AI safety researchxng_1vsce_
Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student$1.9KSumeet Motwani2024-04funds.effectivealtruism.org[Long-Term Future Fund] Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate studentxng_1vsce_
Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety$50KEinar Urdshals2024-07funds.effectivealtruism.org[Long-Term Future Fund] Mentored independent research and upskilling to transition from theoretical physics PhD to AI safetyxng_1vsce_
6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety$78KAishwarya Saxena2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safetyxng_1vsce_
2-month salary to test suitability for technical AI alignment research and identify a research direction$8.8KBart Bussmann2023-04funds.effectivealtruism.org[Long-Term Future Fund] 2-month salary to test suitability for technical AI alignment research and identify a research directionxng_1vsce_
Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project)$62KYoav Tzfati2024-01funds.effectivealtruism.org[Long-Term Future Fund] Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project)xng_1vsce_
Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants$160KEpistea, z.s2024-01funds.effectivealtruism.org[Long-Term Future Fund] Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participantsxng_1vsce_
1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program$15KAbhay Sheshadri2024-01funds.effectivealtruism.org[Long-Term Future Fund] 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension programxng_1vsce_
1 year PhD funding and compute funding to research a novel method for training prosociality into large language models$10KScott Viteri2023-04funds.effectivealtruism.org[Long-Term Future Fund] 1 year PhD funding and compute funding to research a novel method for training prosociality into large language modelsxng_1vsce_
1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem$99KAlignment Ecosystem Development2023-10funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystemxng_1vsce_
6-month salary for independent alignment research in interpretability or control$95KThomas Kwa2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for independent alignment research in interpretability or controlxng_1vsce_
Funding to do research on understanding search in transformers at the AI safety camp during 14 weeks$6.6KGuillaume Corlouer2023-04funds.effectivealtruism.org[Long-Term Future Fund] Funding to do research on understanding search in transformers at the AI safety camp during 14 weeksxng_1vsce_
One year stipend and compute budget, for full-time technical AI alignment research$80KDavid Udell2023-07funds.effectivealtruism.org[Long-Term Future Fund] One year stipend and compute budget, for full-time technical AI alignment researchxng_1vsce_
6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law$60KThomas Kwa2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Lawxng_1vsce_
6 month salary for further pursuing sparse autoencoders for automatic feature finding$40KLogan Smith2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6 month salary for further pursuing sparse autoencoders for automatic feature findingxng_1vsce_
5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance$17KFor Collaborative Work with AI:FAR2025-01funds.effectivealtruism.org[Long-Term Future Fund] 5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistancexng_1vsce_
3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography$13KMikhail Baranchuk2024-04funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganographyxng_1vsce_
6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study$36KSimon Lermen2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing studyxng_1vsce_
In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks$40KMentaLeap2023-07funds.effectivealtruism.org[Long-Term Future Fund] In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networksxng_1vsce_
Funding to attend BWC meeting to discuss transparency with country representatives & work on research project$1.7KRiya Sharma2023-07funds.effectivealtruism.org[Long-Term Future Fund] Funding to attend BWC meeting to discuss transparency with country representatives & work on research projectxng_1vsce_
2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization$5KHayden Peacock2024-01funds.effectivealtruism.org[Long-Term Future Fund] 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organizationxng_1vsce_
6-month stipend to work on AI alignment research (automated redteaming, interpretability)$30KAlex Infanger2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to work on AI alignment research (automated redteaming, interpretability)xng_1vsce_
12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda$27KJacques Thibodeau2023-04funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agendaxng_1vsce_
1-year stipend to continue research on agency, focused on natural abstraction$200KJohn Wentworth2023-07funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend to continue research on agency, focused on natural abstractionxng_1vsce_
This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research.$45KYuxiao Li2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research.xng_1vsce_
A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025$21KCaleb Rak2024-10funds.effectivealtruism.org[Long-Term Future Fund] A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025xng_1vsce_
Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop$33KNathaniel Sharadin2023-07funds.effectivealtruism.org[Long-Term Future Fund] Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshopxng_1vsce_
Monthly seminar series on Guaranteed Safe AI, from July to December 2024$6KHorizon Events2024-04funds.effectivealtruism.org[Long-Term Future Fund] Monthly seminar series on Guaranteed Safe AI, from July to December 2024xng_1vsce_
This grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research.$35KSviatoslav Chalnev2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research.xng_1vsce_
5-month salary to continue work on evaluating agent self-improvement capabilities$23KCodruta Lugoj2024-04funds.effectivealtruism.org[Long-Term Future Fund] 5-month salary to continue work on evaluating agent self-improvement capabilitiesxng_1vsce_
12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass$6KYashvardhan Sharma2024-04funds.effectivealtruism.org[Long-Term Future Fund] 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brassxng_1vsce_
4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co$22KStanford University2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform coxng_1vsce_
Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate$2.5KKunvar Thaman2024-04funds.effectivealtruism.org[Long-Term Future Fund] Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accuratexng_1vsce_
1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals$19KSumeet Motwani2024-01funds.effectivealtruism.org[Long-Term Future Fund] 1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evalsxng_1vsce_
3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding$20KHannah Erlebach2024-04funds.effectivealtruism.org[Long-Term Future Fund] 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel fundingxng_1vsce_
Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables$61KPhilip Quirke2023-10funds.effectivealtruism.org[Long-Term Future Fund] Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverablesxng_1vsce_
6-month salary for part-time independent research on LM interpretability for AI alignment$7.7KAidan Ewart2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for part-time independent research on LM interpretability for AI alignmentxng_1vsce_
6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs$32KMorgan Simpson2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costsxng_1vsce_
SERI MATS 3-month extension to study knowledge removal in Language Models$12KShashwat Goel2023-07funds.effectivealtruism.org[Long-Term Future Fund] SERI MATS 3-month extension to study knowledge removal in Language Modelsxng_1vsce_
6-month salary to transition to a career in AI safety while working on AI safety projects$30KDillon Bowen2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to transition to a career in AI safety while working on AI safety projectsxng_1vsce_
I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds$1.5KJoshua Clymer2023-04funds.effectivealtruism.org[Long-Term Future Fund] I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute fundsxng_1vsce_
9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research$5KNikola Moore2024-07funds.effectivealtruism.org[Long-Term Future Fund] 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist researchxng_1vsce_
11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland$73KSanteri Tani2024-07funds.effectivealtruism.org[Long-Term Future Fund] 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finlandxng_1vsce_
Compute costs for experiments to evaluate different scalable oversight protocols$87KLewis Hammond2024-01funds.effectivealtruism.org[Long-Term Future Fund] Compute costs for experiments to evaluate different scalable oversight protocolsxng_1vsce_
6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects$34KJosé Jaime Villalobos Ruiz2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projectsxng_1vsce_
This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction.$2KTristan Williams2024-01funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction.xng_1vsce_
6-month salary for an AISC project and continuing independent mechanistic interpretability projects$28KChristopher Mathwin2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary for an AISC project and continuing independent mechanistic interpretability projectsxng_1vsce_
3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge.$3.1KBenjamin Stewart2023-04funds.effectivealtruism.org[Long-Term Future Fund] 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge.xng_1vsce_
4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program$30KAaquib Syed2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension programxng_1vsce_
Retroactive funding for GameBench paper$9.1KDioptra (Josh Clymer's AIS research community)2024-04funds.effectivealtruism.org[Long-Term Future Fund] Retroactive funding for GameBench paperxng_1vsce_
A podcast mainly themed around AI x-risk, aimed at a non-technical audience$5KSarah Hastings-Woodhouse2024-01funds.effectivealtruism.org[Long-Term Future Fund] A podcast mainly themed around AI x-risk, aimed at a non-technical audiencexng_1vsce_
~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila$86KBrian Tan2024-04funds.effectivealtruism.org[Long-Term Future Fund] ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manilaxng_1vsce_
4-month stipend for upskilling within the field of economic governance of AI$7KRafael Andersson Lipcsey2023-10funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend for upskilling within the field of economic governance of AIxng_1vsce_
4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials$15KKurt Brown2023-04funds.effectivealtruism.org[Long-Term Future Fund] 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentialsxng_1vsce_
6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond$39KFelix Hofstätter2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyondxng_1vsce_
5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects$22KKeith Wynroe2023-07funds.effectivealtruism.org[Long-Term Future Fund] 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projectsxng_1vsce_
6-month stipend to work on technical alignment research as part of MATS 5.0 extension program$40KCindy Wu2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to work on technical alignment research as part of MATS 5.0 extension programxng_1vsce_
Retroactive grant to study Goodhart effects on heavy-tailed distributions$30KThomas Kwa2023-07funds.effectivealtruism.org[Long-Term Future Fund] Retroactive grant to study Goodhart effects on heavy-tailed distributionsxng_1vsce_
6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems$37KLukas Fluri2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systemsxng_1vsce_
9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all$27KDavid Williams-King2024-07funds.effectivealtruism.org[Long-Term Future Fund] 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us allxng_1vsce_
Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel$32KColeman Snell2024-04funds.effectivealtruism.org[Long-Term Future Fund] Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzelxng_1vsce_
4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models$60KRauno Arike, Elizabeth Donoway2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language modelsxng_1vsce_
6-month career transition and independent research in AI safety and risk mitigation$85KJose Groh2024-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month career transition and independent research in AI safety and risk mitigationxng_1vsce_
This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research.$5KCindy Wu2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research.xng_1vsce_
Two workshops on strategic communications around AI safety, focused on the AI safety community$5.7KPhilip Trippenbach2024-07funds.effectivealtruism.org[Long-Term Future Fund] Two workshops on strategic communications around AI safety, focused on the AI safety communityxng_1vsce_
6 month salary to work on mech interp research with mentorship from Prof David Bau$41KBilal Chughtai2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6 month salary to work on mech interp research with mentorship from Prof David Bauxng_1vsce_
6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmark$35KRoman Soletskyi2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmarkxng_1vsce_
Research on how much language models can infer about their current user, and interpretability work on such inferences$55KEgg Syntax (legal: Jesse Davis)2024-01funds.effectivealtruism.org[Long-Term Future Fund] Research on how much language models can infer about their current user, and interpretability work on such inferencesxng_1vsce_
4-month stipend to research the mechanisms of refusal in chat LLMs$40KOscar Balcells Obeso2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to research the mechanisms of refusal in chat LLMsxng_1vsce_
Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety$10KOrpheus Lummis2024-01funds.effectivealtruism.org[Long-Term Future Fund] Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safetyxng_1vsce_
4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies$27KKai Fronsdal2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategiesxng_1vsce_
Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability$40KDavid Abecassis2024-07funds.effectivealtruism.org[Long-Term Future Fund] Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viabilityxng_1vsce_
A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech$120KGeneva Centre for Security Policy2024-04funds.effectivealtruism.org[Long-Term Future Fund] A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and techxng_1vsce_
One year funding of ACX meetup in Atlanta Georgia$5KACX Atlanta2023-04funds.effectivealtruism.org[Long-Term Future Fund] One year funding of ACX meetup in Atlanta Georgiaxng_1vsce_
7 months of coworking-space funding continuation, during interpretability research project$11KDavid Udell2024-01funds.effectivealtruism.org[Long-Term Future Fund] 7 months of coworking-space funding continuation, during interpretability research projectxng_1vsce_
Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention$25KMatthias Dellago2023-04funds.effectivealtruism.org[Long-Term Future Fund] Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attentionxng_1vsce_
Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking$24KExistential Risk Observatory2023-10funds.effectivealtruism.org[Long-Term Future Fund] Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymakingxng_1vsce_
7-month stipend for organising AI Alignment Irvine (AIAI)$16KNeil Crawford2024-07funds.effectivealtruism.org[Long-Term Future Fund] 7-month stipend for organising AI Alignment Irvine (AIAI)xng_1vsce_
6-month stipends to develop and apply a novel method for localizing information and computation in neural networks$160KAlex Cloud, Jacob Goldman-Wetzler, Evžen Wybitul, Joseph Miller2024-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipends to develop and apply a novel method for localizing information and computation in neural networksxng_1vsce_
9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’$7.2KJulian Guidote2024-07funds.effectivealtruism.org[Long-Term Future Fund] 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’xng_1vsce_
6-month stipend to continue independent interpretability research$40KSviatoslav Chalnev2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to continue independent interpretability researchxng_1vsce_
4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch$67KIván Arcuschin Moreno2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switchxng_1vsce_
WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability$61KBrian Tan2023-07funds.effectivealtruism.org[Long-Term Future Fund] WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretabilityxng_1vsce_
8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI$6.2KLuise Woehlke2024-04funds.effectivealtruism.org[Long-Term Future Fund] 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AIxng_1vsce_
1-year stipend for independent research primarily on high-level interpretability$70KArun Jose2024-04funds.effectivealtruism.org[Long-Term Future Fund] 1-year stipend for independent research primarily on high-level interpretabilityxng_1vsce_
Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment$80KClaire Short2024-07funds.effectivealtruism.org[Long-Term Future Fund] Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignmentxng_1vsce_
Conference publication of interpretability and LM-steering results$40KAlexander Turner2023-04funds.effectivealtruism.org[Long-Term Future Fund] Conference publication of interpretability and LM-steering resultsxng_1vsce_
1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved$122KRobert Miles2023-07funds.effectivealtruism.org[Long-Term Future Fund] 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involvedxng_1vsce_
12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk$10KFormation Research2024-10funds.effectivealtruism.org[Long-Term Future Fund] 12-month salary to set up a new org doing research and creating interventions to minimise lock-in riskxng_1vsce_
1.5 year stipend for thorough investigation and analysis of AI lab scaling policies$100KAysja Johnson2025-01funds.effectivealtruism.org[Long-Term Future Fund] 1.5 year stipend for thorough investigation and analysis of AI lab scaling policiesxng_1vsce_
6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project$35KHoagy Cunningham2023-07funds.effectivealtruism.org[Long-Term Future Fund] 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding projectxng_1vsce_
4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition$34KArjun Panickssery2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognitionxng_1vsce_
Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI$50KCole Wyeth2023-04funds.effectivealtruism.org[Long-Term Future Fund] Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGIxng_1vsce_
Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants$115KEpistea, z.s.2025-04funds.effectivealtruism.org[Long-Term Future Fund] Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participantsxng_1vsce_
MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems$18KGarrett Baker2024-01funds.effectivealtruism.org[Long-Term Future Fund] MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systemsxng_1vsce_
6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation$56KTheodore Chapman2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitationxng_1vsce_
One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work$150KMacrostrategy Research Initiative2025-01funds.effectivealtruism.org[Long-Term Future Fund] One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's workxng_1vsce_
6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem$60KAlex Altair2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problemxng_1vsce_
4-month stipend for 3 people to create demonstrations of provably undetectable backdoors$50KAndrew Gritsevskiy2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend for 3 people to create demonstrations of provably undetectable backdoorsxng_1vsce_
Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms)$30KSahil Kulshrestha2024-04funds.effectivealtruism.org[Long-Term Future Fund] Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms)xng_1vsce_
Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory$20KWilson Wu2024-07funds.effectivealtruism.org[Long-Term Future Fund] Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theoryxng_1vsce_
4-month salary to continue work on AI Control as a MATS extension$30KVasil Georgiev2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month salary to continue work on AI Control as a MATS extensionxng_1vsce_
6-month salary to build experience in AI interpretability research before PhD applications$40KZach Furman2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to build experience in AI interpretability research before PhD applicationsxng_1vsce_
2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fields$5KKrzysztof Gwiazda2024-07funds.effectivealtruism.org[Long-Term Future Fund] 2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fieldsxng_1vsce_
Salary Top-Up for Timaeus' Employees & Contractors$100KTimaeus (Fiscally Sponsored by Ashgro, Inc.)2024-01funds.effectivealtruism.org[Long-Term Future Fund] Salary Top-Up for Timaeus' Employees & Contractorsxng_1vsce_
6 month project - pending description$10KKristy Loke2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6 month project - pending descriptionxng_1vsce_
3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research$8.5KSienka Dounia2024-01funds.effectivealtruism.org[Long-Term Future Fund] 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Researchxng_1vsce_
6-month stipend for Sparse Autoencoder Mech Interp projects$40KLogan Smith2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend for Sparse Autoencoder Mech Interp projectsxng_1vsce_
4-month stipend to continue work on AI Control as a MATS extension$30KCody Rushing2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extensionxng_1vsce_
12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour)$80KNicky Pochinkov2024-04funds.effectivealtruism.org[Long-Term Future Fund] 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour)xng_1vsce_
6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp.$1.7KArtem Karpov2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp.xng_1vsce_
6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas$5.2KHebrew University2025-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on basxng_1vsce_
1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality$80KLogan Strohl2023-04funds.effectivealtruism.org[Long-Term Future Fund] 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationalityxng_1vsce_
Funding for having written AI safety distillation posts on the topic of membranes/boundaries$4.5KChris Lakin2023-10funds.effectivealtruism.org[Long-Term Future Fund] Funding for having written AI safety distillation posts on the topic of membranes/boundariesxng_1vsce_
4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program$60KDanielle Ensign2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension programxng_1vsce_
4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program$30KTeun van der Weij2024-01funds.effectivealtruism.org[Long-Term Future Fund] 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension programxng_1vsce_
General support for a forecasting team$6KSamotsvety Forecasting2023-10funds.effectivealtruism.org[Long-Term Future Fund] General support for a forecasting teamxng_1vsce_
This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence.$45KDaniel Filan2024-04funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence.xng_1vsce_
Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base.$90KBryce Meyer2024-04funds.effectivealtruism.org[Long-Term Future Fund] Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base.xng_1vsce_
This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research$30KAlexander Turner2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment researchxng_1vsce_
Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs$5.1KImperial College London2023-07funds.effectivealtruism.org[Long-Term Future Fund] Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRsxng_1vsce_
4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas$7.2KCodruta Lugoj2023-04funds.effectivealtruism.org[Long-Term Future Fund] 4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendasxng_1vsce_
6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception$55KSara Price2024-01funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deceptionxng_1vsce_
6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems.$6.5KRoman Leventov2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems.xng_1vsce_
6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations$30KSatvik Golechha2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representationsxng_1vsce_
Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk$25KSuzy Shepherd2025-01funds.effectivealtruism.org[Long-Term Future Fund] Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-riskxng_1vsce_
4-month stipend to continue work on AI Control as a MATS extension$30KTyler Tracy2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extensionxng_1vsce_
$10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted$11KVaidehi Agarwalla2023-04funds.effectivealtruism.org[Long-Term Future Fund] $10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestrictedxng_1vsce_
8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topic$49KVojtech Kovarik2024-07funds.effectivealtruism.org[Long-Term Future Fund] 8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topicxng_1vsce_
1 month long literature review on in-context learning and its relevance to AI alignment$6KAlfie Lamerton2024-01funds.effectivealtruism.org[Long-Term Future Fund] 1 month long literature review on in-context learning and its relevance to AI alignmentxng_1vsce_
4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models$13KTilman Räuker2024-04funds.effectivealtruism.org[Long-Term Future Fund] 4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer modelsxng_1vsce_
6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention$40KEric Easley2024-07funds.effectivealtruism.org[Long-Term Future Fund] 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space interventionxng_1vsce_
Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance$5KMichel Justen2024-10funds.effectivealtruism.org[Long-Term Future Fund] Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governancexng_1vsce_
A private online platform for research-sharing amongst the AI governance community$125KThe AI Governance Archive (TAIGA)2024-07funds.effectivealtruism.org[Long-Term Future Fund] A private online platform for research-sharing amongst the AI governance communityxng_1vsce_
6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers$50KBryce Meyer2023-04funds.effectivealtruism.org[Long-Term Future Fund] 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchersxng_1vsce_
This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program.$19KViktor Rehnberg2023-04funds.effectivealtruism.org[Long-Term Future Fund] This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program.xng_1vsce_
Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training$23KAidan Ewart2024-01funds.effectivealtruism.org[Long-Term Future Fund] Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial trainingxng_1vsce_
6-month incubation program for technical AI safety research organizations$123KCatalyze Impact2023-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month incubation program for technical AI safety research organizationsxng_1vsce_
4-month stipend to apply mechanistic interpretability to a real-world application, hallucinations$60KJavier Ferrando Monsonís and Oscar Balcells Obeso2024-07funds.effectivealtruism.org[Long-Term Future Fund] 4-month stipend to apply mechanistic interpretability to a real-world application, hallucinationsxng_1vsce_
3-month part-time salary in order to work on AI governance projects and activities$6KArran McCutcheon2023-07funds.effectivealtruism.org[Long-Term Future Fund] 3-month part-time salary in order to work on AI governance projects and activitiesxng_1vsce_
Funding for (academic/technical) AI safety community events in London$8KFrancis Rhys Ward2023-04funds.effectivealtruism.org[Long-Term Future Fund] Funding for (academic/technical) AI safety community events in Londonxng_1vsce_
Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward$50KMichael Parker2024-01funds.effectivealtruism.org[Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forwardxng_1vsce_
3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment$50KThe University of Texas at Austin2024-04funds.effectivealtruism.org[Long-Term Future Fund] 3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignmentxng_1vsce_
6 month AI alignment internship stipend top-up$10KMatt MacDermott2024-04funds.effectivealtruism.org[Long-Term Future Fund] 6 month AI alignment internship stipend top-upxng_1vsce_
Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety$1.8KDhruvin Patel2024-07funds.effectivealtruism.org[Long-Term Future Fund] Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safetyxng_1vsce_
Experimentally testing generative AI's ability to persuade humans about hazardous topics$115KThomas Costello2024-01funds.effectivealtruism.org[Long-Term Future Fund] Experimentally testing generative AI's ability to persuade humans about hazardous topicsxng_1vsce_
6 month stipend for SAE-circuits$40KLogan Smith2024-07funds.effectivealtruism.org[Long-Term Future Fund] 6 month stipend for SAE-circuitsxng_1vsce_
6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF$42KMarcus Williams2023-10funds.effectivealtruism.org[Long-Term Future Fund] 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIFxng_1vsce_
3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment$13KSimon Lermen2023-04funds.effectivealtruism.org[Long-Term Future Fund] 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignmentxng_1vsce_
Compute for experiment about how steganography in large language models might arise as a result of benign optimization$2KFelix Binder2023-10funds.effectivealtruism.org[Long-Term Future Fund] Compute for experiment about how steganography in large language models might arise as a result of benign optimizationxng_1vsce_
Internal Metadata
ID: sid_yA12C1KcjQ
Stable ID: sid_yA12C1KcjQ
Wiki ID: E543
Type: organization
YAML Source: packages/factbase/data/fb-entities/ltff.yaml
Facts: 0 structured (1 total)
Records: 546 in 2 collections