Grants from the Long-Term Future Fund (source: funds.effectivealtruism.org):

| Grant | Amount | Grantee | Date |
| --- | --- | --- | --- |
| 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian | $13K | Maksim Vymenets | 2022-01 |
| Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal | $40K | Tushant Jha | 2020-01 |
| Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research | $20K | Alexander Siegenfeld | 2019-07 |
| 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged | $28K | Thomas Moynihan | 2021-10 |
| 12-month salary for researching value learning | $50K | Charlie Steiner | 2022-01 |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral | $30K | Gavin Taylor | 2020-07 |
| Support Sam's participation in 'Mid-term AI impacts' research project | $4.5K | Sam Clarke | 2020-10 |
| PhD at Cambridge | $150K | Richard Ngo | 2020-07 |
| Funding a Nordic conference for senior x-risk researchers and junior talents interested in entering the field | $4.6K | Effektiv Altruism Sverige (EA Sweden) | 2021-10 |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | $250K | Kristaps Zilgalvis | 2021-10 |
| Research paper about the history of philanthropy-driven national-scale movement-building strategy, to inform how EA funders might go about building movements for good | $2K | Ruth Grace Wong | 2022-01 |
| Research on AI safety | $30K | Marius Hobbhahn | 2022-01 |
| Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software | $11K | George Green | 2021-10 |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | $150K | Nick Hay | 2021-10 |
| Buy out of teaching assistant duties for the remaining two years of my PhD program | $50K | Michael Zlatin | 2022-01 |
| Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | $82K | Robert Miles | 2022-01 |
| Support to work on biosecurity | $11K | Sculpting Evolution Group, MIT | 2022-01 |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | $234K | Jessica Cooper | 2022-01 |
| Time costs over six months to publish a paper on the interaction of open science practices and bio-risk | $8.3K | James Smith | 2021-10 |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | $80K | Alex Flint | 2021-07 |
| Producing video content on AI alignment | $39K | Robert Miles | 2019-04 |
| Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interface | $1.6K | Fabio Haenel | 2021-07 |
| Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary | $24K | Nick Hollman | 2020-10 |
| Open Online Course on "The Economics of AI" for Anton Korinek | $72K | University of Virginia | 2021-01 |
| Organizing a workshop aimed at highlighting recent successes in the development of verified software | $5K | Gopal Sarma | 2020-01 |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization | $135K | Legal Priorities Project | 2021-01 |
| 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects | $12K | David Rhys Bernard | 2021-10 |
| A study of safe exploration and robustness to distributional shift in biological complex systems | $30K | Nikhil Kunapuli | 2019-04 |
| Conducting independent research into AI forecasting and strategy questions | $40K | Tegan McCaslin | 2019-10 |
| Conducting independent research on cause prioritization | $33K | Michael Dickens | 2020-01 |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | $30K | Alex Turner | 2019-04 |
| 6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISS | $25K | AI Safety Support | 2021-07 |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | $78K | University of Oxford, Department of Experimental Psychology | 2021-10 |
| Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | $30K | John Wentworth | 2019-10 |
| Surveying the neglectedness of broad-spectrum antiviral development | $18K | Jaspreet Pannu (Jassi) | 2019-10 |
| Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books | $19K | Elizabeth Van Nostrand | 2019-10 |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | $250K | Berkeley Existential Risk Initiative | 2021-10 |
| Exploring crucial considerations for decision-making around information hazards | $25K | Will Bradshaw | 2020-01 |
| Help InterACT when university systems cannot, supporting InterACT's work enabling human-compatible robots and AI agents | $135K | Berkeley Existential Risk Initiative | 2022-01 |
| Aiming to implement AI alignment concepts in real-world applications | $10K | Elicit (AI Research Tool) | 2018-10 |
| Funding for building agents with causal models of the world and using those models for impact minimization | $10K | Vincent Luczkow | 2020-01 |
| Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise | $10K | Joar Skalse | 2019-07 |
| Identifying and resolving tensions between competition law and long-term AI strategy | $32K | Shin-Shin Hua and Haydn Belfield | 2020-01 |
| Stipends, work hours, and retreat costs for four extra students of CHERI's summer research program | $11K | Effective Altruism Geneva | 2021-07 |
| Supporting 3-month research period | $7.9K | Charlie Rogers-Smith | 2020-07 |
| PhD in Computer Science working on AI safety | $250K | Amon Elders | 2021-01 |
| 4-month salary to upskill in biosecurity and explore possible career paths in biosecurity | $12K | Finan Adamson | 2021-10 |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public | $100K | Expii, Inc. | 2021-01 |
| 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic | $3.1K | Adin Richards | 2022-01 |
| Grant to cover fees for a master's program in machine learning | $28K | Andrei Alexandru | 2021-10 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $91K | 80,000 Hours | 2018-07 |
| Supporting Vanessa with her AI alignment research | $100K | Vanessa Kosoy | 2020-10 |
| Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | $55K | — | 2020-01 |
| Building understanding of the structure of risks from AI to inform prioritization | $80K | David Manheim | 2021-10 |
| Write a SF/F novel based on the EA community | $15K | Timothy Underwood | 2022-01 |
| Educational scholarship in AI safety | $13K | Paul Colognese | 2022-01 |
| Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers | $40K | Shahar Avin | 2019-01 |
| Support to build a forecasting platform based on user-created play-money prediction markets | $200K | Stephen Grugett, James Grugett, Austin Chen | 2022-01 |
| Summer research program on global catastrophic risks for Swiss (under)graduate students | $34K | Effective Altruism Geneva | 2021-01 |
| Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead | $27K | Jacob Lagerros | 2019-04 |
| Strategic research and studying programming | $30K | Eli Tyre | 2019-04 |
| Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety | $80K | AI Safety Support | 2022-01 |
| 1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment | $2.5K | Marc-Everin Carauleanu | 2021-01 |
| 4-month salary to research empirical and theoretical extensions of Cohen & Hutter's pessimistic/conservative RL agent | $3.3K | David Reber | 2021-01 |
| 7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics | $18K | Toby Bonvoisin | 2021-01 |
| Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design, in collaboration with John Salvatier | $20K | Connor Flexman | 2019-04 |
| Investigate humans' lack of robust task alignment in amplification, and the implications for acceptability predicates | $35K | Joe Collman | 2021-07 |
| Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture | $3.6K | Alliance to Feed the Earth in Disasters | 2021-07 |
| Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD | $100K | Aryeh Englander | 2021-10 |
| Independent research on forecasting and optimal paths to improve the long-term future | $41K | — | 2020-10 |
| Payment for AI researchers when I interview / survey them about their perceptions of safety | $9.9K | Vael Gates | 2022-01 |
| Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward | $35K | Michael Parker | 2022-01 |
| Unrestricted donation | $150K | Center for Applied Rationality | 2019-04 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $489K | — | 2018-07 |
| Researching methods to continuously monitor and analyse artificial agents for the purpose of control | $45K | Lee Sharkey | 2020-10 |
| Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness | $30K | Kyle Fish | 2019-10 |
| 2-year funding to run public and expert surveys on AI governance and forecasting | $232K | Noemi Dreksler | 2021-10 |
| Persuasion Tournament for Existential Risk | $200K | Philip Tetlock, Ezra Karger, Pavel Atanasov | 2021-07 |
| Support to work towards developing an early-warning system for future biological risks | $9K | Michael McLaren | 2022-01 |
| Develop a research project on how to infer humans' internal mental models from their behaviour using cognitive science modeling | $7.7K | Sofia Jativa Vega | 2020-01 |
| Testing how the accuracy of impact forecasting varies with the timeframe of prediction | $55K | David Rhys Bernard | 2020-10 |
| Surveying experts on AI risk scenarios and working on other projects related to AI safety | $5K | Alexis Carlier | 2020-07 |
| Funds for a 6-month project contributing to the clarification of goal-directedness | $22K | Morgan Rogers | 2022-01 |
| Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety | $122K | Caroline Jeanmaire | 2021-01 |
| Funding to cover a visit to Boston for biosecurity work | $16K | Will Bradshaw | 2021-10 |
| Retroactive funding for running an alignment theory mentorship program with Evan Hubinger | $3.6K | Oliver Zhang | 2022-01 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $174K | Center for Applied Rationality | 2018-07 |
| Supporting aspiring researchers of AI alignment to boost themselves into productivity | $25K | Johannes Heidecke | 2019-04 |
| Human Progress for Beginners children's book | $25K | Jason Crawford | 2019-10 |
| Replacement salary for teaching during economics Ph.D., freeing time to conduct research into forecasting and pandemics | $42K | Joel Becker | 2021-01 |
| Research to enable transition to AI Safety | $43K | Vojtěch Kovařík | 2019-10 |
| Research on formalizing the side effect avoidance problem | $30K | Alex Turner | 2020-01 |
| Productivity coaching for effective altruists to increase their impact | $23K | Lynette Bye | 2019-07 |
| 50% of 9-month salary for a bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data | $38K | BugSeq Bioinformatics Inc. | 2021-01 |
| 6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations | $3.5K | Rutgers University, Department of Philosophy | 2021-07 |
| Support for self-study in data science and forecasting, to upskill within a GCBR research career | $2.2K | Benjamin Stewart | 2021-10 |
| Create AI safety videos, and offer communication and media support to AI safety orgs | $60K | Robert Miles | 2020-07 |
| We're unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting | $50K | The Center for Election Science | 2021-10 |
| Developing algorithms, environments, and tests for AI safety via debate | $25K | Joe Collman | 2020-07 |
| 2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders | $34K | Aligned AI | 2022-01 |
| Writing fiction to convey EA and rationality-related topics | $20K | Miranda Dixon-Luinenburg | 2019-07 |
| Research on the links between short- and long-term AI policy while skilling up in technical ML | $75K | Jess Whittlestone | 2019-07 |
| 3-month compensation to drive time-sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance" | $5K | Chelsea Liang | 2021-10 |
| Funding for full-time, independent research on agent foundations | $30K | Daniel Demski | 2019-10 |
| PhD in machine learning with a focus on AI alignment | $86K | Dmitrii Krasheninnikov | 2021-07 |
| Buying out one year of my academic teaching so that I can spend time on AI alignment research instead | $12K | David Udell | 2022-01 |
| Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019 | $28K | Mikhail Yagudin | 2019-04 |
| For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit | $85K | Remmelt Ellen | 2021-01 |
| Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support) | $15K | Berkeley Existential Risk Initiative | 2017-01 |
| Additional funding for AI strategy PhD at Oxford / FHI | $37K | Sören Mindermann | 2019-07 |
| 6-month salary to develop tools to test the natural abstractions hypothesis | $35K | John Wentworth | 2021-01 |
| A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers | $26K | Tessa Alexanian | 2019-04 |
| Conducting independent research into AI forecasting and strategy questions | $30K | Tegan McCaslin | 2019-04 |
| One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields | $80K | Logan Strohl | 2021-01 |
| Formalizing perceptual complexity with application to safe intelligence amplification | $30K | Anand Srinivasan | 2019-04 |
| Three months of blogging and movement building at the intersection of EA/longtermism and progress studies | $18K | Nicholas (Nick) Whitaker | 2021-10 |
| Support multiple SPARC project operations during 2021 | $15K | SPARC | 2021-01 |
| Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades | $11K | Zach Freitas-Groff | 2021-07 |
| A two-day, career-focused workshop to inform and connect European EAs interested in AI governance | $18K | Alex Lintz | 2019-01 |
| To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety | $23K | Stag Lynn | 2019-07 |
| Funding towards a 2-year postdoctoral stint to work on safety in AI, with a focus on developing value-aligned systems | $275K | Kush Bhatia | 2022-01 |
| 10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability | $19K | Benedikt Hoeltgen | 2021-10 |
| Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1,500 questions to date | $65K | Anthony Aguirre | 2020-01 |
| Multi-model approach to corporate and state actors relevant to existential risk mitigation | $30K | David Manheim | 2019-07 |
| 1-year salary for Adam Shimi to conduct independent research in AI Alignment | $60K | Adam Shimi | 2021-01 |
| A research agenda rigorously connecting the internal and external views of value synthesis | $30K | David Girardo | 2019-04 |
| BERI will support SERI when university systems are unable to help | $60K | Berkeley Existential Risk Initiative | 2021-01 |
| Financial support for work on a biosecurity research project and workshop, and travel expenses | $15K | Simon Grimm | 2022-01 |
| 3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity | $15K | Caleb Withers | 2022-01 |
| Support to create language model (LM) tools to aid alignment research through feedback and content generation | $40K | Logan Smith | 2022-01 |
| Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD | $10K | Orpheus Lummis | 2019-04 |
| Longtermist lessons from COVID | $5.6K | Gavin Leech | 2022-01 |
| Writing preliminary content for an encyclopedia of effective altruism | $17K | Pablo Stafforini | 2020-01 |
| Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission | $9.8K | Mrinank Sharma | 2020-10 |
| Unrestricted donation | $50K | Elicit (AI Research Tool) | 2019-04 |
| An offline community hub for rationalists and EAs | $50K | Vyacheslav Matyuhin | 2019-04 |
| Upskilling investigation of AI Safety via debate and ML training | $10K | Joe Collman | 2019-10 |
| Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge | $200K | David Krueger | 2021-01 |
| Funding to pay participants to test a forecasting training program | $3.2K | Logan McNichols | 2021-10 |
| Building infrastructure for the future of effective forecasting efforts | $70K | Ozzie Gooen | 2019-04 |
| Subsidized therapy/coaching/mediation for rationalists, EAs, and startups that are working on things like x-risks | $40K | Damon Pourtahmaseb-Sasi | 2019-10 |
| 8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI | $28K | James Bernardi | 2021-07 |
| 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment | $50K | Thomas Woodside | 2022-01 |
| 12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals | $20K | Lauren Lee | 2019-04 |
| Conducting postdoctoral research at Harvard on the psychology of EA/long-termism | $50K | Lucius Caviola | 2019-04 |
| 12-month salary to provide runway after finishing RSP | $55K | The Future of Humanity Institute | 2021-01 |
| Educational Scholarship in AI Alignment | $22K | Jaeson Booker | 2022-01 |
| Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research | $70K | Rethink Priorities | 2021-01 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $163K | Centre for Effective Altruism | 2018-07 |
| Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter | $1.1K | Alex Turner | 2022-01 |
| Unrestricted donation | $50K | — | 2019-04 |
| Support for David Reber: 9.5 months of strategic outsourcing to read up on AI Safety and find mentors | $20K | David Reber | 2021-10 |
| 12-month salary for independent research, upskilling, and finding a stable position in AI safety | $24K | Robert Kralisch | 2022-01 |
| A major expansion of the Metaculus prediction platform and its community | $70K | Anthony Aguirre | 2019-04 |
| Research project on the longevity and decay of universities, philanthropic foundations, and Catholic orders | $3.6K | Maximilian Negele | 2020-10 |
| Organising immersive workshops on meta skills and x-risk for STEM students at top universities | $33K | Tamara Borine | 2020-10 |
| Support for alignment theory agenda evaluation | $25K | Jack Ryan | 2022-07 |
| AI safety dinners | $10K | Neil Crawford | 2022-07 |
| AI safety research | $1.5K | Lukas Berglund | 2022-10 |
| Compensation for a non-fiction book on the threat of AGI for a general audience | $50K | Darren McKee | 2022-07 |
| Funding to perform human evaluations for evaluating different machine learning methods for aligning language models | $10K | Robert Kirk | 2022 |
| Travel Support to BWC RevCon & Side Events | $3.5K | Theo Knopfer | 2022-10 |
2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Travel Support to BWC RevCon & Side Events xng_1vsce_ travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems $11K Robert Long 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems xng_1vsce_ Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) $100K Nora Ammann Person Nora Ammann Guaranteed Safe AI researcher at PIBBSS. Foresight Fellow (2024). 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) xng_1vsce_ Neural network interpretability research $13K Nicholas Greig 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Neural network interpretability research xng_1vsce_ Flight and accomodation costs to spend a month working with Will Bradshaw's team at the NAO $4.9K Jacob Mendel 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Flight and accomodation costs to spend a month working with Will Bradshaw's team at the NAO xng_1vsce_ 6 months of independent alignment research and upskilling $30K Zhengbo Xiang (Alana) 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6 months of independent alignment research and upskilling xng_1vsce_ Research into the international viability of FHI's Windfall Clause $3K John Bridge 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Research into the international viability of FHI's Windfall Clause xng_1vsce_ 6-month salary for research into preventing steganography in interpretable representations using multiple agents $20K Hoagy Cunningham 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for research into preventing steganography in interpretable representations using multiple agents xng_1vsce_ Research on EA and longtermism $70K Aaron Bergman 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Research on EA and longtermism xng_1vsce_ 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations. $40K Logan Smith 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations. xng_1vsce_ 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs. $50K Paul Bricman 2022 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs. xng_1vsce_ 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent $23K Tom Lieberum 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent xng_1vsce_ This grant will support Naoya Okamoto upskill in AI Safety research. 
Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign. $7.5K Naoya Okamoto 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Naoya Okamoto upskill in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign. xng_1vsce_ Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years $3K David Staley 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years xng_1vsce_ Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster $50K Marius Hobbhahn 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster xng_1vsce_ 12-month salary to study and get into AI Safety Research and work on related EA projects $14K Luca De Leo 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to study and get into AI Safety Research and work on related EA projects xng_1vsce_ 4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit $20K Max Kaufmann 2022 funds.effectivealtruism.org [Long-Term Future Fund] 4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit xng_1vsce_ Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike $5K Isabel Johnson 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike xng_1vsce_ 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe $36K Sasha Cooper 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe xng_1vsce_ 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper $33K Jonathan Ng 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper xng_1vsce_ Financial support to help productivity and increase time of early career alignment researcher $7K Max Kaufmann 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Financial support to help productivity and increase time of early career alignment researcher xng_1vsce_ 5-month part time salary for collaborating on a research paper analyzing the implications of compute access $2.5K Sage Bergerson 2022 funds.effectivealtruism.org [Long-Term Future Fund] 5-month part time salary for collaborating on a research paper analyzing the implications of compute access xng_1vsce_ Support for living expenses while doing PhD in AI safety - technical research and community building work $2.3K Francis Rhys Ward 2022 funds.effectivealtruism.org [Long-Term Future Fund] Support for living expenses while doing PhD in AI safety - technical research and community building work xng_1vsce_ 6-month salary for self-study to be more effective at AI alignment research $15K Thomas 
Kehrenberg 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for self-study to be more effective at AI alignment research xng_1vsce_ The Alignable Structures workshop in Philadelphia $9K Quinn Dougherty Person Quinn Dougherty Software Engineer who worked at QURI (2021-2022). Focused on epistemic public goods with involvement in AGI alignment and community-building projects. 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] The Alignable Structures workshop in Philadelphia xng_1vsce_ New laptop for technical AI safety research $4.1K Peter Barnett 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] New laptop for technical AI safety research xng_1vsce_ 10-month funding to study ML at university and AIS independently $500 Patricio Vercesi 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 10-month funding to study ML at university and AIS independently xng_1vsce_ 6 month salary to improve the US regulatory environment for prediction markets $138K Solomon Sia 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 month salary to improve the US regulatory environment for prediction markets xng_1vsce_ Develop and market video game to explain the Stop Button Problem to the public & STEM individuals $100K Lone Pine Games, LLC 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Develop and market video game to explain the Stop Button Problem to the public & STEM individuals xng_1vsce_ A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan $73K — 2022 funds.effectivealtruism.org [Long-Term Future Fund] A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan xng_1vsce_ Paid internships for promising Oxford students to try out supervised AI Safety research projects $60K AI Safety Hub Ltd 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Paid internships for promising Oxford students to try out supervised AI Safety research projects xng_1vsce_ Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions $4K Kai Sandbrink 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions xng_1vsce_ Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022 $23K William D'Alessandro 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022 xng_1vsce_ Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock $3.5K Conor Barnes 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock xng_1vsce_ 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub $15K Max Räuker 2022 funds.effectivealtruism.org [Long-Term Future Fund] 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub xng_1vsce_ Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 $110K Czech Association for Effective Altruism Organization
Czech Association for Effective Altruism Czech chapter of the effective altruism movement (CZEA). 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 xng_1vsce_ 8 weeks scholars program to pair promising alignment researchers with renowned mentors $316K AI Safety Support Organization AI Safety Support Organization providing career guidance, mentoring, and community support for people transitioning into AI safety careers. 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 8 weeks scholars program to pair promising alignment researchers with renowned mentors xng_1vsce_ Stanford Artificial Intelligence Professional Program tuition $4.8K Mario Peng Lee 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Stanford Artificial Intelligence Professional Program tuition xng_1vsce_ (professional development grant) New laptop for technical AI safety research $2.5K Max Lamparth 2022 funds.effectivealtruism.org [Long-Term Future Fund] (professional development grant) New laptop for technical AI safety research xng_1vsce_ Year-long salary for shard theory and RL mech int research $220K Alexander Turner 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Year-long salary for shard theory and RL mech int research xng_1vsce_ Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople $5K Chris Patrick 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople xng_1vsce_ Support to further develop a branch of rationality focused on patient and direct observation $80K Logan Strohl 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support to further develop a branch of rationality focused on patient and direct observation xng_1vsce_ 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada $87K Wyatt Tessari 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada xng_1vsce_ 3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI $5.5K Tomislav Kurtovic 2022 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI xng_1vsce_ 6-month salary for two people to find formalisms for modularity in neural networks $73K Lucius Bushnaq 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for two people to find formalisms for modularity in neural networks xng_1vsce_ One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safety $21K Steve Petersen 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] One-course teaching buyout for Steve Petersen for two academic semesters to work on the foundational issue of *agency* for AI safety xng_1vsce_ 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper $167K Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper xng_1vsce_
European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers $170K Effective Altruism Geneva Organization Effective Altruism Geneva Geneva-based chapter of the effective altruism movement. 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers xng_1vsce_ 4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreat $10K Jonas Hallgren 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreat xng_1vsce_ Make 12 more AXRP episodes $24K Daniel Filan 2022 funds.effectivealtruism.org [Long-Term Future Fund] Make 12 more AXRP episodes xng_1vsce_ 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk $60K Ross Graham 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk xng_1vsce_ 1-year salary for research in applications of natural abstraction $180K John Wentworth Person John Wentworth Independent alignment researcher. Known for work on natural abstractions and agent foundations. Previously associated with MIRI. 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 1-year salary for research in applications of natural abstraction xng_1vsce_ Financial support to work part time on an academic project evaluating factors relevant to digital consciousness $11K Derek Shiller 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Financial support to work part time on an academic project evaluating factors relevant to digital consciousness xng_1vsce_ 6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org $98K Jeffrey Ladish Person Jeffrey Ladish Biosecurity researcher and advocate. Foresight Fellow (2020). Works on reducing catastrophic biological risks. 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org xng_1vsce_ 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations $6K Iván Godoy 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations xng_1vsce_ 3-month salary for upskilling in PyTorch and AI safety research. $19K Alex Infanger 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary for upskilling in PyTorch and AI safety research. 
xng_1vsce_ 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI $50K Nicky Pochinkov 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI xng_1vsce_ Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition) $4K Fabienne Sandkühler 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition) xng_1vsce_ Funding to cover 4 months of rent while attending a research group with the Cambridge AI Safety group $5.6K David Quarel 2022 funds.effectivealtruism.org [Long-Term Future Fund] Funding to cover 4 months of rent while attending a research group with the Cambridge AI Safety group xng_1vsce_ 6-month salary to conduct AI alignment research on circuits in decision transformers $50K Joseph Bloom 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to conduct AI alignment research on circuits in decision transformers xng_1vsce_ 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience $8K Liam Carroll 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience xng_1vsce_ Funding for a one year machine learning and computational statistics master’s at UCL $38K Shavindra Jayasekera 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding for a one year machine learning and computational statistics master’s at UCL xng_1vsce_ Funding for project transitioning from AI capabilities to AI Safety research. $8.2K Gerold Csendes 2022 funds.effectivealtruism.org [Long-Term Future Fund] Funding for project transitioning from AI capabilities to AI Safety research.
xng_1vsce_ Twelve month salary to work as a global rationality organizer $130K Skyler Crossman 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Twelve month salary to work as a global rationality organizer xng_1vsce_ Support to work on Aisafety.camp project, impact of human dogmatism on training $2K Kevin Wang 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support to work on Aisafety.camp project, impact of human dogmatism on training xng_1vsce_ Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety $55K Robert Miles 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety xng_1vsce_ 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation $47K Samuel Brown 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation xng_1vsce_ 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend $27K Joel Becker 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend xng_1vsce_ One year of funding to improve an established community hub for EA in London $50K Newspeak House 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] One year of funding to improve an established community hub for EA in London xng_1vsce_ Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions $90K Columbia University Organization Columbia University Private Ivy League research university in New York City. 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions xng_1vsce_ Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science $26K Max Clarke 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science xng_1vsce_ 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy $40K Will Aldred 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy xng_1vsce_ 6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space $70K Alexander Lintz 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space xng_1vsce_ Support to cover the costs of leaving employment in order to pursue AI safety research. $4K Kajetan Janiak 2022 funds.effectivealtruism.org [Long-Term Future Fund] Support to cover the costs of leaving employment in order to pursue AI safety research. 
xng_1vsce_ 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability $29K Fabian Schimpf 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability xng_1vsce_ PhD Stipend Top Up for CHAI PhD Student. $6.7K Alex Turner 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] PhD Stipend Top Up for CHAI PhD Student. xng_1vsce_ Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford $3.6K Bálint Pataki 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford xng_1vsce_ One year part time spent on AI safety upskilling and concrete research projects $63K Ross Nordby 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] One year part time spent on AI safety upskilling and concrete research projects xng_1vsce_ Pass on funds for Astral Codex Ten Everywhere meetups $22K Skyler Crossman 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Pass on funds for Astral Codex Ten Everywhere meetups xng_1vsce_ Payment for part-time rationality community building $4K Boston Astral Codex Ten 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Payment for part-time rationality community building xng_1vsce_ 4-month salary for two people to find formalisms for modularity in neural networks $67K Lucius Bushnaq 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary for two people to find formalisms for modularity in neural networks xng_1vsce_ Travel support to attend the Symposium on AGI Safety in Oxford in May $1.5K Smitha Milli 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Travel support to attend the Symposium on AGI Safety in Oxford in May xng_1vsce_ Funding the last year of my PhD on embedded agency, to free up my time from teaching $64K Daniel Herrmann 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding the last year of my PhD on embedded agency, to free up my time from teaching xng_1vsce_ Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity $8.2K Charles Whittaker 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity xng_1vsce_ Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights. $36K Simon Skade 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights. 
xng_1vsce_ 2 years of GovAI salary and overheads for Robert Trager $402K — 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 2 years of GovAI salary and overheads for Robert Trager xng_1vsce_ Support for Jay Bailey for work in ML for AI Safety $79K Jay Bailey 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support for Jay Bailey for work in ML for AI Safety xng_1vsce_ 4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research. $12K Benjamin Sturgeon 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research. xng_1vsce_ Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp. $10K Jan Kirchner 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp. xng_1vsce_ 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream $16K Joshua Reiners 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream xng_1vsce_ Fine-tuning large language models for an interpretability challenge (compute costs) $11K Andrei Alexandru 2022 funds.effectivealtruism.org [Long-Term Future Fund] Fine-tuning large language models for an interpretability challenge (compute costs) xng_1vsce_ Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward $40K Michael Parker 2022 funds.effectivealtruism.org [Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward xng_1vsce_ 12-month salary to work on alignment research! $96K Garrett Baker 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to work on alignment research! 
xng_1vsce_ Funding for Computer Science PhD $349K David Reber 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Funding for Computer Science PhD xng_1vsce_ 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL $40K Jeremy Gillen 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL xng_1vsce_ 4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models $1K Abhijit Narayan S 2022 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models xng_1vsce_ 12-month salary to work on ML models for detecting genetic engineering in pathogens $85K Jade Zaslavsky 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to work on ML models for detecting genetic engineering in pathogens xng_1vsce_ 2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time $745 Ardysatrio Haroen 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time xng_1vsce_ Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities $44K Adam Rutkowski 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities xng_1vsce_ Retroactive grant for managing the MATS program, 1.0 and 2.0 $27K MATS ML Alignment Theory Scholars program Organization MATS ML Alignment Theory Scholars program MATS is a well-documented 12-week fellowship program that has successfully trained 213 AI safety researchers with strong career outcomes (80% in alignment work) and research impact (160+ publicatio... Quality: 60/100 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Retroactive grant for managing the MATS program, 1.0 and 2.0 xng_1vsce_ Enabling prosaic alignment research with a multi-modal model on natural language and chess $25K Philipp Bongartz 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Enabling prosaic alignment research with a multi-modal model on natural language and chess xng_1vsce_ 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work $16K Jonathan Ng 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work xng_1vsce_ 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment $1K Amrita A. Nair 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment xng_1vsce_ Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency $180K Effective Altruism Geneva Organization Effective Altruism Geneva Geneva-based chapter of the effective altruism movement.
2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency xng_1vsce_ 6-months salary for researching “Framing computational systems such that we can find meaningful concepts." & Upskilling $24K Matthias Georg Mayer 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-months salary for researching “Framing computational systems such that we can find meaningful concepts." & Upskilling xng_1vsce_ 6 months’ salary to upskill on technical AI safety through project work and studying $50K Rusheb Shah 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6 months’ salary to upskill on technical AI safety through project work and studying xng_1vsce_ 6-month salary for an AI alignment research project on the manipulation of humans by AI $25K Felix Hofstätter 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for an AI alignment research project on the manipulation of humans by AI xng_1vsce_ 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation $26K David Hahnemann, Luan Ademi 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation xng_1vsce_ Support for research into applied technical AI alignment work $10K Philippe Rivet 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support for research into applied technical AI alignment work xng_1vsce_ A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research $305K Principles of Intelligent Behavior in Biological and Social Systems 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research xng_1vsce_ Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence $135K Effective Altruism Geneva Organization Effective Altruism Geneva Geneva-based chapter of the effective altruism movement. 
2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence xng_1vsce_ 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal $14K Nikiforos Pittaras 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal xng_1vsce_ 12-Month Salary and Compute Expenses to do AI Safety Research with LLMs $70K Nicky Pochinkov 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 12-Month Salary and Compute Expenses to do AI Safety Research with LLMs xng_1vsce_ I am looking for a career transition grant to give me more time for job hunting & networking $3.6K Alexander Large 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] I am looking for a career transition grant to give me more time for job hunting & networking xng_1vsce_ Research and a report/paper on the role of emergency powers in the governance of X-Risk $26K Daniel Skeffington 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Research and a report/paper on the role of emergency powers in the governance of X-Risk xng_1vsce_ Equipment to improve productivity while doing AI Safety research $3.9K Tim Farrelly 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Equipment to improve productivity while doing AI Safety research xng_1vsce_ 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs $20K Peter Ruschhaupt 2022 funds.effectivealtruism.org [Long-Term Future Fund] 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs xng_1vsce_ One-year funding of Astral Codex Ten meetup in Philadelphia $5K Wesley Fenza 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] One-year funding of Astral Codex Ten meetup in Philadelphia xng_1vsce_ Reconstruction attacks in federated learning $5K University of Cambridge/ None 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Reconstruction attacks in federated learning xng_1vsce_ This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project $48K Bilal Chughtai 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project xng_1vsce_ Retrospective funding for research retreat on a decision-theory / cause-prioritization topic. $10K Daniel Kokotajlo 2022 funds.effectivealtruism.org [Long-Term Future Fund] Retrospective funding for research retreat on a decision-theory / cause-prioritization topic. xng_1vsce_ Funding for the AI Safety Nudge Competition $5.2K AI Safety Nudge Competition 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding for the AI Safety Nudge Competition xng_1vsce_ Support to work on AI alignment research $16K Matt MacDermott 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Support to work on AI alignment research xng_1vsce_ 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others. $45K Max Kaufmann 2022 funds.effectivealtruism.org [Long-Term Future Fund] 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others.
xng_1vsce_ Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research $4.3K Effective Altruism Geneva Organization Effective Altruism Geneva Geneva-based chapter of the effective altruism movement. 2022 funds.effectivealtruism.org [Long-Term Future Fund] Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research xng_1vsce_ One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI $17K Gunnar Zarncke 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI xng_1vsce_ I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the Fall $1.8K Zach Peck 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the Fall xng_1vsce_ Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models $210K John Burden 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models xng_1vsce_ Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research $60K Brian Porter 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research xng_1vsce_ Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection $20K Noga Aharony 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection xng_1vsce_ 6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety $26K Kane Nicholson 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety xng_1vsce_ Support funding during 2 years of an AI safety PhD at Oxford $12K Ondrej Bajgar 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Support funding during 2 years of an AI safety PhD at Oxford xng_1vsce_ 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research. $150K Darryl Wright 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research. xng_1vsce_ Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc. $2.1K Jingyi Wang 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc. 
xng_1vsce_ Developing and maintaining projects/resources used by the EA and rationality communities $60K Said Achmiz 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Developing and maintaining projects/resources used by the EA and rationality communities xng_1vsce_ General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences $115K Alexander Turner 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences xng_1vsce_ Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding $2.5K Josiah Lopez-Wild 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding xng_1vsce_ 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation $28K Theo Knopfer Person Theo Knopfer Researcher on emerging technology terrorism risk. Foresight Fellow (2023). 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation xng_1vsce_ 4 month extension of SERIMats in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism $32K Quentin Feuillade--Montixi 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 4 month extension of SERIMats in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism xng_1vsce_ Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada $17K Wyatt Tessari 2022 funds.effectivealtruism.org [Long-Term Future Fund] Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada xng_1vsce_ 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD $17K Conor McGlynn 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD xng_1vsce_ 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research $62K Magdalena Wache 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research xng_1vsce_ 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics. $4.5K Leah Pierson 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics. 
xng_1vsce_ ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building $35K Charlie Griffin 2022 funds.effectivealtruism.org [Long-Term Future Fund] ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building xng_1vsce_ Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant $6.6K Jeffrey Ohl 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant xng_1vsce_ 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination $25K Chloe Lee 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination xng_1vsce_ Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research $2K Rory Gillis 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research xng_1vsce_ Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival. $27K University of Otago, Wellington, New Zealand 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival. xng_1vsce_ 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing $70K Gergely Szucs 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing xng_1vsce_ Grant to cover 1 year of tuition fees and living expenses to pursue a PhD in CS at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration. $63K Hunar Batra 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] Grant to cover 1 year of tuition fees and living expenses to pursue a PhD in CS at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration.
xng_1vsce_ 7 month salary to study a Graduate Diploma of International Affairs at The Australian National University $9K Matthew MacInnes 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 7 month salary to study a Graduate Diploma of International Affairs at The Australian National University xng_1vsce_ Funding to start a longtermist org and support research $495K Transformative Futures Foresight Institute 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding to start a longtermist org and support research xng_1vsce_ Slack money for increased productivity in AI Alignment research $17K Adam Shimi 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Slack money for increased productivity in AI Alignment research xng_1vsce_ 2-year salary for work on the learning-theoretic AI alignment research agenda $100K Vanessa Kosoy 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 2-year salary for work on the learning-theoretic AI alignment research agenda xng_1vsce_ Support to conduct work in AI safety $5K Benjamin Anderson 2022 funds.effectivealtruism.org [Long-Term Future Fund] Support to conduct work in AI safety xng_1vsce_ Funding to support PhD in AI Safety at Imperial College London, technical research and community building $6.3K Francis Rhys Ward 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] Funding to support PhD in AI Safety at Imperial College London, technical research and community building xng_1vsce_ 3-month salary for SERI-MATS extension $24K Matt MacDermott 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary for SERI-MATS extension xng_1vsce_ A relocation grant to help me to move and settle into a PhD program and cover initial expenses $6.5K Egor Zverev 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] A relocation grant to help me to move and settle into a PhD program and cover initial expenses xng_1vsce_ Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified. $16K Wikiciv Foundation 2022 funds.effectivealtruism.org [Long-Term Future Fund] Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified. xng_1vsce_ 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project. $50K Jay Bailey 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project. xng_1vsce_ 1-year salary for upskilling in technical AI alignment research $96K Chu Chen 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 1-year salary for upskilling in technical AI alignment research xng_1vsce_ 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety $4.5K Samuel Nellessen Person Samuel Nellessen AI researcher at Radboud University. Foresight Fellow (2024). 
2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety xng_1vsce_ 4-month salary for conceptual/theoretical research towards perfect world-model interpretability $30K Andrey Tumas 2022 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary for conceptual/theoretical research towards perfect world-model interpretability xng_1vsce_ 6-month salary to skill up and gain experience to start working on AI safety full-time $14K Mateusz Bagiński 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to skill up and gain experience to start working on AI safety full-time xng_1vsce_ 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas $26K Sam Marks 2022 funds.effectivealtruism.org [Long-Term Future Fund] 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas xng_1vsce_ 6 months salary to do independent AI alignment research focused on formal alignment and agent foundations $30K Tamsin Leake 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6 months salary to do independent AI alignment research focused on formal alignment and agent foundations xng_1vsce_ Funding for salary and living expenses while continuing to develop a framework of optimisation. $8K Alex Altair 2022 funds.effectivealtruism.org [Long-Term Future Fund] Funding for salary and living expenses while continuing to develop a framework of optimisation. xng_1vsce_ Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program $4.4K Viktoria Malyasova 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program xng_1vsce_ Weekend organised as a part of the co-founder matching process of a group to found a human data collection org $2.3K Patrick Gruban 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Weekend organised as a part of the co-founder matching process of a group to found a human data collection org xng_1vsce_ 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies $90K Shoshannah Tekofsky 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies xng_1vsce_ 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers $15K Jonas Hallgren 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers xng_1vsce_ 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods $102K Lennart Stern 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods xng_1vsce_ 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI $11K Alfred Harwood 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI xng_1vsce_ Support for AI alignment outreach in France 
(video/audio/text/events) & field-building $25K Jérémy Perret 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Support for AI alignment outreach in France (video/audio/text/events) & field-building xng_1vsce_ 3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment $5K Amrita A. Nair 2022 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment xng_1vsce_ 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems $12K Alan Chan 2022 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems xng_1vsce_ Scholarship for PhD student working on research related to AI Safety $8K Josiah Lopez-Wild 2022 funds.effectivealtruism.org [Long-Term Future Fund] Scholarship for PhD student working on research related to AI Safety xng_1vsce_ 12-month salary to transition career into technical alignment research $25K Dan Valentine 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to transition career into technical alignment research xng_1vsce_ 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules $40K Logan Smith 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules xng_1vsce_ A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit $2.5K Hamza Tariq Chaudhry 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit xng_1vsce_ 8-month salary for three people to investigate the origins of modularity in neural networks $125K Lucius Bushnaq, Callum McDougall, Avery Griffin 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 8-month salary for three people to investigate the origins of modularity in neural networks xng_1vsce_ 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism $81K Samuel Brown 2022 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism xng_1vsce_ A research & networking retreat for winners of the Eliciting Latent Knowledge contest $72K — 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] A research & networking retreat for winners of the Eliciting Latent Knowledge contest xng_1vsce_ 6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems $24K Johannes C. Mayer 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems xng_1vsce_ Support to conduct a research project collaboration on Compute Governance $68K Lennart Heim Person Lennart Heim Compute governance researcher. Affiliated with RAND, Epoch AI, and GovAI. Leading expert on AI compute supply chains and governance. Co-authored RAND working papers on hardware-enabled mechanisms (... 
2022-01 funds.effectivealtruism.org [Long-Term Future Fund] Support to conduct a research project collaboration on Compute Governance xng_1vsce_ 4-month funding for independent alignment research and study $15K Arun Jose 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] 4-month funding for independent alignment research and study xng_1vsce_ EU Tech Policy Fellowship with ~10 trainees $69K Training for Good Organization Training for Good Professional development and training programs for high-impact careers. Website no longer available. 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] EU Tech Policy Fellowship with ~10 trainees xng_1vsce_ Funding to increase my impact as an early-career biosecurity researcher $6K Lennart Justen 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding to increase my impact as an early-career biosecurity researcher xng_1vsce_ ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety $4.8K Anson Ho 2022-01 funds.effectivealtruism.org [Long-Term Future Fund] ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety xng_1vsce_ Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical research $2K Antonio Franca 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] Economic stipend for MLSS scholar to set up a proper working environment in order to do research in AI technical research xng_1vsce_ One year of seed funding for a new AI interpretability research organisation $195K Jessica Rumbelow 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] One year of seed funding for a new AI interpretability research organisation xng_1vsce_ Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022 $1.5K Kadri Reis 2022 funds.effectivealtruism.org [Long-Term Future Fund] Travel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022 xng_1vsce_ One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS $100K David Udell 2022-10 funds.effectivealtruism.org [Long-Term Future Fund] One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS xng_1vsce_ 6-month salary to upskill for AI safety $54K Daniel O'Connell 2022 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to upskill for AI safety xng_1vsce_ 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities $120K Nicholas Kees Dupuis 2023-01 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities xng_1vsce_ 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment $22K Jacques Thibodeau 2022-07 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment xng_1vsce_ Cover participant stipends for AI Safety Camp Virtual 2023 $73K Remmelt Ellen 2022 funds.effectivealtruism.org [Long-Term Future Fund] Cover participant stipends for AI Safety Camp Virtual 2023 xng_1vsce_ Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 
people $80K Michael Pearce, Alice Riggs, Thomas Dooms 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people xng_1vsce_ 6-months stipend for transitioning to independent research on AI Safety $40K Glauber De Bona 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-months stipend for transitioning to independent research on AI Safety xng_1vsce_ Spend 3 months (part time) assessing plausible pathways to slowing AI $5K Gideon Futerman 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Spend 3 months (part time) assessing plausible pathways to slowing AI xng_1vsce_ 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs $10K Jannik Brinkmann 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs xng_1vsce_ 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships $273K Ashgro Inc. (fiscal sponsor of Apart) 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships xng_1vsce_ 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles $80K Nicky Case 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles xng_1vsce_ A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety $5K Chris Lakin 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety xng_1vsce_ 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk $12K Andrew Zeng 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk xng_1vsce_ 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences $80K Constantin Weisser 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences xng_1vsce_ $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship $10K Brian Tan 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship xng_1vsce_ 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) $103K Nora Ammann Person Nora Ammann Guaranteed Safe AI researcher at PIBBSS. Foresight Fellow (2024). 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) xng_1vsce_ This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity. 
$70K Nathaniel Monson 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity. xng_1vsce_ 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability $52K Aengus Lynch 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability xng_1vsce_ 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after $40K Joe Kwon 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after xng_1vsce_ Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing $50K University of Massachusetts Amherst 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing xng_1vsce_ 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection $40K Andis Draguns 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection xng_1vsce_ 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor $23K Aleksandar Makelov 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor xng_1vsce_ This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia. $77K AI Safety Australia and New Zealand 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia. xng_1vsce_ Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) $41K Lucy Farnik 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) xng_1vsce_ 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum. $8K Amritanshu Prasad 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum. 
xng_1vsce_ 4-month stipend for a career transition period to explore roles in AI safety communications $10K Sarah Hastings-Woodhouse 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend for a career transition period to explore roles in AI safety communications xng_1vsce_ 12 week 0.6FT upskilling stipend for technical governance research management $11K Morgan Simpson 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 12 week 0.6FT upskilling stipend for technical governance research management xng_1vsce_ 3-month salary for SERI MATS extension to work on internal concept extraction $27K Ann-Kathrin Dombrowski 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary for SERI MATS extension to work on internal concept extraction xng_1vsce_ 6 months of part-time stipend to launch a new science journalism outlet focused on AI Safety $50K Mordechai Rorvig 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] 6 months of part-time stipend to launch a new science journalism outlet focused on AI Safety xng_1vsce_ 6 to 12 months of funding to continue working on model psychology and evaluation $42K P.H.I 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 to 12 months of funding to continue working on model psychology and evaluation xng_1vsce_ 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch $62K Niels uit de Bos 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch xng_1vsce_ This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts. $55K Akbir Khan 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts.
xng_1vsce_ Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 $7.1K — 2025-04 funds.effectivealtruism.org [Long-Term Future Fund] Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 xng_1vsce_ A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy $36K Alexander Mann 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy xng_1vsce_ Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment $40K Adelin Kassler 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment xng_1vsce_ 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers $37K Keith Wynroe 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers xng_1vsce_ 12-month support for independent AI alignment research $45K Aryeh Brill 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 12-month support for independent AI alignment research xng_1vsce_ 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs $70K Axel Højmark 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs xng_1vsce_ This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects. $32K Dioptra (informal research group working on evals) 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects. xng_1vsce_ 4-month fund for full time AI safety technical and/or governance research $11K Harrison Gietz 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 4-month fund for full time AI safety technical and/or governance research xng_1vsce_ This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy. $8.7K Carson Ezell 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy. 
xng_1vsce_ 4-month stipend to continue AI safety projects $25K Hannah Erlebach 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to continue AI safety projects xng_1vsce_ Part-time salary for independent AI safety research $40K Ross Nordby 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] Part-time salary for independent AI safety research xng_1vsce_ Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student $1.9K Sumeet Motwani 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student xng_1vsce_ Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety $50K Einar Urdshals 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety xng_1vsce_ 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety $78K Aishwarya Saxena 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety xng_1vsce_ 2-month salary to test suitability for technical AI alignment research and identify a research direction $8.8K Bart Bussmann 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 2-month salary to test suitability for technical AI alignment research and identify a research direction xng_1vsce_ Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project) $62K Yoav Tzfati 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project) xng_1vsce_ Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants $160K Epistea, z.s 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants xng_1vsce_ 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program $15K Abhay Sheshadri 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program xng_1vsce_ 1 year PhD funding and compute funding to research a novel method for training prosociality into large language models $10K Scott Viteri 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 1 year PhD funding and compute funding to research a novel method for training prosociality into large language models xng_1vsce_ 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem $99K Alignment Ecosystem Development 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem xng_1vsce_ 6-month salary for independent alignment research in interpretability or control $95K Thomas Kwa 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for independent alignment research in interpretability or control xng_1vsce_ Funding to do research on understanding search in transformers at the AI safety camp during 14 weeks 
$6.6K Guillaume Corlouer 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] Funding to do research on understanding search in transformers at the AI safety camp during 14 weeks xng_1vsce_ One year stipend and compute budget, for full-time technical AI alignment research $80K David Udell 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] One year stipend and compute budget, for full-time technical AI alignment research xng_1vsce_ 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law $60K Thomas Kwa 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law xng_1vsce_ 6 month salary for further pursuing sparse autoencoders for automatic feature finding $40K Logan Smith 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 month salary for further pursuing sparse autoencoders for automatic feature finding xng_1vsce_ 5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance $17K For Collaborative Work with AI:FAR 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] 5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance xng_1vsce_ 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography $13K Mikhail Baranchuk 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography xng_1vsce_ 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study $36K Simon Lermen 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study xng_1vsce_ In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks $40K MentaLeap 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks xng_1vsce_ Funding to attend BWC meeting to discuss transparency with country representatives & work on research project $1.7K Riya Sharma 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] Funding to attend BWC meeting to discuss transparency with country representatives & work on research project xng_1vsce_ 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization $5K Hayden Peacock 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization xng_1vsce_ 6-month stipend to work on AI alignment research (automated redteaming, interpretability) $30K Alex Infanger 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to work on AI alignment research (automated redteaming, interpretability) xng_1vsce_ 12-month salary to continue working tools for accelerating alignment and the Supervising AIs Improving AIs agenda $27K Jacques Thibodeau 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to continue working tools for accelerating alignment and the Supervising AIs Improving AIs agenda xng_1vsce_ 1-year stipend to continue 
research on agency, focused on natural abstraction $200K John Wentworth Person John Wentworth Independent alignment researcher. Known for work on natural abstractions and agent foundations. Previously associated with MIRI. 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend to continue research on agency, focused on natural abstraction xng_1vsce_ This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research. $45K Yuxiao Li 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research. xng_1vsce_ A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025 $21K Caleb Rak 2024-10 funds.effectivealtruism.org [Long-Term Future Fund] A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025 xng_1vsce_ Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop $33K Nathaniel Sharadin 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop xng_1vsce_ Monthly seminar series on Guaranteed Safe AI, from July to December 2024 $6K Horizon Events 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Monthly seminar series on Guaranteed Safe AI, from July to December 2024 xng_1vsce_ This grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research. $35K Sviatoslav Chalnev 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research. xng_1vsce_ 5-month salary to continue work on evaluating agent self-improvement capabilities $23K Codruta Lugoj 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 5-month salary to continue work on evaluating agent self-improvement capabilities xng_1vsce_ 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass $6K Yashvardhan Sharma 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass xng_1vsce_ 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co $22K Stanford University Organization Stanford University Private research university in Stanford, California. Home to the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC). 
2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co xng_1vsce_ Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate $2.5K Kunvar Thaman 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate xng_1vsce_ 1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals $19K Sumeet Motwani 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 1-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals xng_1vsce_ 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding $20K Hannah Erlebach 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding xng_1vsce_ Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables $61K Philip Quirke 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables xng_1vsce_ 6-month salary for part-time independent research on LM interpretability for AI alignment $7.7K Aidan Ewart 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for part-time independent research on LM interpretability for AI alignment xng_1vsce_ 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs $32K Morgan Simpson 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs xng_1vsce_ SERI MATS 3-month extension to study knowledge removal in Language Models $12K Shashwat Goel 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] SERI MATS 3-month extension to study knowledge removal in Language Models xng_1vsce_ 6-month salary to transition to a career in AI safety while working on AI safety projects $30K Dillon Bowen 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to transition to a career in AI safety while working on AI safety projects xng_1vsce_ I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds $1.5K Joshua Clymer 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds xng_1vsce_ 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research $5K Nikola Moore 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research xng_1vsce_ 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland $73K Santeri Tani 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland
xng_1vsce_ Compute costs for experiments to evaluate different scalable oversight protocols $87K Lewis Hammond 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Compute costs for experiments to evaluate different scalable oversight protocols xng_1vsce_ 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects $34K José Jaime Villalobos Ruiz 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects xng_1vsce_ This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction. $2K Tristan Williams 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction. xng_1vsce_ 6-month salary for an AISC project and continuing independent mechanistic interpretability projects $28K Christopher Mathwin 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary for an AISC project and continuing independent mechanistic interpretability projects xng_1vsce_ 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge. $3.1K Benjamin Stewart 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge. 
xng_1vsce_ 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program $30K Aaquib Syed 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program xng_1vsce_ Retroactive funding for GameBench paper $9.1K Dioptra (Josh Clymer's AIS research community) 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Retroactive funding for GameBench paper xng_1vsce_ A podcast mainly themed around AI x-risk, aimed at a non-technical audience $5K Sarah Hastings-Woodhouse 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] A podcast mainly themed around AI x-risk, aimed at a non-technical audience xng_1vsce_ ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila $86K Brian Tan 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila xng_1vsce_ 4-month stipend for upskilling within the field of economic governance of AI $7K Rafael Andersson Lipcsey 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend for upskilling within the field of economic governance of AI xng_1vsce_ 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials $15K Kurt Brown 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials xng_1vsce_ 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond $39K Felix Hofstätter 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond xng_1vsce_ 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects $22K Keith Wynroe 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects xng_1vsce_ 6-month stipend to work on technical alignment research as part of MATS 5.0 extension program $40K Cindy Wu 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to work on technical alignment research as part of MATS 5.0 extension program xng_1vsce_ Retroactive grant to study Goodhart effects on heavy-tailed distributions $30K Thomas Kwa 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] Retroactive grant to study Goodhart effects on heavy-tailed distributions xng_1vsce_ 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems $37K Lukas Fluri 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems xng_1vsce_ 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all $27K David Williams-King 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all xng_1vsce_ Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel $32K Coleman Snell 2024-04
funds.effectivealtruism.org [Long-Term Future Fund] Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel xng_1vsce_ 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models $60K Rauno Arike, Elizabeth Donoway 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models xng_1vsce_ 6-month career transition and independent research in AI safety and risk mitigation $85K Jose Groh 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month career transition and independent research in AI safety and risk mitigation xng_1vsce_ This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research. $5K Cindy Wu 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research. xng_1vsce_ Two workshops on strategic communications around AI safety, focused on the AI safety community $5.7K Philip Trippenbach 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Two workshops on strategic communications around AI safety, focused on the AI safety community xng_1vsce_ 6 month salary to work on mech interp research with mentorship from Prof David Bau $41K Bilal Chughtai 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 month salary to work on mech interp research with mentorship from Prof David Bau xng_1vsce_ 6-month salary to verify neural network scalably for RL and produce a human to super-human scalable oversight benchmark $35K Roman Soletskyi 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to verify neural network scalably for RL and produce a human to super-human scalable oversight benchmark xng_1vsce_ Research on how much language models can infer about their current user, and interpretability work on such inferences $55K Egg Syntax (legal: Jesse Davis) 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Research on how much language models can infer about their current user, and interpretability work on such inferences xng_1vsce_ 4-month stipend to research the mechanisms of refusal in chat LLMs $40K Oscar Balcells Obeso 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to research the mechanisms of refusal in chat LLMs xng_1vsce_ Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety $10K Orpheus Lummis 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety xng_1vsce_ 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies $27K Kai Fronsdal 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies xng_1vsce_ Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability $40K David Abecassis 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability xng_1vsce_ A fellowship for 3 fellows in synthetic 
biology, artificial intelligence and neurotechnology to bridge policy and tech $120K Geneva Centre for Security Policy 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech xng_1vsce_ One year funding of ACX meetup in Atlanta Georgia $5K ACX Atlanta 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] One year funding of ACX meetup in Atlanta Georgia xng_1vsce_ 7 months of coworking-space funding continuation, during interpretability research project $11K David Udell 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 7 months of coworking-space funding continuation, during interpretability research project xng_1vsce_ Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention $25K Matthias Dellago 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention xng_1vsce_ Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking $24K Existential Risk Observatory Organization Existential Risk Observatory Organization focused on raising public awareness of existential risks. 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking xng_1vsce_ 7-month stipend for organising AI Alignment Irvine (AIAI) $16K Neil Crawford 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 7-month stipend for organising AI Alignment Irvine (AIAI) xng_1vsce_ 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks $160K Alex Cloud, Jacob Goldman-Wetzler, Evžen Wybitul, Joseph Miller 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks xng_1vsce_ 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’ $7.2K Julian Guidote 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’ xng_1vsce_ 6-month stipend to continue independent interpretability research $40K Sviatoslav Chalnev 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to continue independent interpretability research xng_1vsce_ 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch $67K Iván Arcuschin Moreno 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch xng_1vsce_ WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability $61K Brian Tan 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability xng_1vsce_ 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI $6.2K Luise Woehlke 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 8-week stipend for a research project 
supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI xng_1vsce_ 1-year stipend for independent research primarily on high-level interpretability $70K Arun Jose 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 1-year stipend for independent research primarily on high-level interpretability xng_1vsce_ Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment $80K Claire Short Person Claire Short AI and neuroscience researcher. Foresight Fellow (2024). 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment xng_1vsce_ Conference publication of interpretability and LM-steering results $40K Alexander Turner 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] Conference publication of interpretability and LM-steering results xng_1vsce_ 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved $122K Robert Miles 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved xng_1vsce_ 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk $10K Formation Research 2024-10 funds.effectivealtruism.org [Long-Term Future Fund] 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk xng_1vsce_ 1.5 year stipend for thorough investigation and analysis of AI lab scaling policies $100K Aysja Johnson 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] 1.5 year stipend for thorough investigation and analysis of AI lab scaling policies xng_1vsce_ 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project $35K Hoagy Cunningham 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project xng_1vsce_ 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition $34K Arjun Panickssery 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition xng_1vsce_ Studying extensions of the AIXI model to reflective agents to understand the behavior of self modifying A.G.I $50K Cole Wyeth 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] Studying extensions of the AIXI model to reflective agents to understand the behavior of self modifying A.G.I xng_1vsce_ Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants $115K Epistea, z.s 2025-04 funds.effectivealtruism.org [Long-Term Future Fund] Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants xng_1vsce_ MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems $18K Garrett Baker 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems xng_1vsce_ 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation $56K Theodore Chapman 2024-01 
funds.effectivealtruism.org [Long-Term Future Fund] 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation xng_1vsce_ One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work $150K Macrostrategy Research Initiative Organization Macrostrategy Research Initiative Nonprofit research organization founded by Nick Bostrom in 2024 after leaving Oxford and the closure of the Future of Humanity Institute. Focuses on macrostrategy research examining how present-day... 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work xng_1vsce_ 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem $60K Alex Altair 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem xng_1vsce_ 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors $50K Andrew Gritsevskiy 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors xng_1vsce_ Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) $30K Sahil Kulshrestha 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) xng_1vsce_ Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory $20K Wilson Wu 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory xng_1vsce_ 4-month salary to continue work on AI Control as a MATS extension $30K Vasil Georgiev 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month salary to continue work on AI Control as a MATS extension xng_1vsce_ 6-month salary to build experience in AI interpretability research before PhD applications $40K Zach Furman 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to build experience in AI interpretability research before PhD applications xng_1vsce_ 2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fields $5K Krzysztof Gwiazda 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fields xng_1vsce_ Salary Top-Up for Timaeus' Employees & Contractors $100K Timaeus (Fiscally Sponsored by Ashgro, Inc.)
2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Salary Top-Up for Timaeus' Employees & Contractors xng_1vsce_ 6 month project - pending description $10K Kristy Loke 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6 month project - pending description xng_1vsce_ 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research $8.5K Sienka Dounia 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research xng_1vsce_ 6-month stipend for Sparse Autoencoder Mech Interp projects $40K Logan Smith 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend for Sparse Autoencoder Mech Interp projects xng_1vsce_ 4-month stipend to continue work on AI Control as a MATS extension $30K Cody Rushing 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extension xng_1vsce_ 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour) $80K Nicky Pochinkov 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour) xng_1vsce_ 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. $1.7K Artem Karpov 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. xng_1vsce_ 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas $5.2K Hebrew University 2025-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month research extending the landmark Betley et al.
emergent misalignment paper through fine-tuning experiments on bas xng_1vsce_ 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality $80K Logan Strohl 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality xng_1vsce_ Funding for having written AI safety distillation posts on the topic of membranes/boundaries $4.5K Chris Lakin 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] Funding for having written AI safety distillation posts on the topic of membranes/boundaries xng_1vsce_ 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program $60K Danielle Ensign 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program xng_1vsce_ 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program $30K Teun van der Weij 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program xng_1vsce_ General support for a forecasting team $6K Samotsvety Forecasting 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] General support for a forecasting team xng_1vsce_ This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence. $45K Daniel Filan 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence. xng_1vsce_ Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base. $90K Bryce Meyer 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base. 
xng_1vsce_ This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research $30K Alexander Turner 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research xng_1vsce_ Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs $5.1K Imperial College London 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs xng_1vsce_ 4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas $7.2K Codruta Lugoj 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas xng_1vsce_ 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception $55K Sara Price 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception xng_1vsce_ 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems. $6.5K Roman Leventov 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursue collaborative projects on collective decision-making systems. xng_1vsce_ 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations $30K Satvik Golechha 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations xng_1vsce_ Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk $25K Suzy Shepherd 2025-01 funds.effectivealtruism.org [Long-Term Future Fund] Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk xng_1vsce_ 4-month stipend to continue work on AI Control as a MATS extension $30K Tyler Tracy 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-month stipend to continue work on AI Control as a MATS extension xng_1vsce_ $10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted $11K Vaidehi Agarwalla 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] $10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted xng_1vsce_ 8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topic $49K Vojtech Kovarik 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 8 months stipend during job transition, to finish current projects (AI Goodharting, coop. 
AI) and find suitable next topic xng_1vsce_ 1 month long literature review on in-context learning and its relevance to AI alignment $6K Alfie Lamerton 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] 1 month long literature review on in-context learning and its relevance to AI alignment xng_1vsce_ 4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models $13K Tilman Räuker 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models xng_1vsce_ 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention $40K Eric Easley 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention xng_1vsce_ Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance $5K Michel Justen 2024-10 funds.effectivealtruism.org [Long-Term Future Fund] Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance xng_1vsce_ A private online platform for research-sharing amongst the AI governance community $125K The AI Governance Archive (TAIGA) 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] A private online platform for research-sharing amongst the AI governance community xng_1vsce_ 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers $50K Bryce Meyer 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers xng_1vsce_ This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program. $19K Viktor Rehnberg 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program. 
xng_1vsce_ Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training $23K Aidan Ewart 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training xng_1vsce_ 6-month incubation program for technical AI safety research organizations $123K Catalyze Impact 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month incubation program for technical AI safety research organizations xng_1vsce_ 4-months stipend to apply mechanistic interpretability to a real-world application, hallucinations $60K Javier Ferrando Monsonís and Oscar Balcells Obeso 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 4-months stipend to apply mechanistic interpretability to a real-world application, hallucinations xng_1vsce_ 3-month part-time salary in order to work on AI governance projects and activities $6K Arran McCutcheon 2023-07 funds.effectivealtruism.org [Long-Term Future Fund] 3-month part-time salary in order to work on AI governance projects and activities xng_1vsce_ Funding for (academic/technical) AI safety community events in London $8K Francis Rhys Ward 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] Funding for (academic/technical) AI safety community events in London xng_1vsce_ Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward $50K Michael Parker 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward xng_1vsce_ 3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment $50K The University of Texas at Austin 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment xng_1vsce_ 6 month AI alignment internship stipend top-up $10K Matt MacDermott 2024-04 funds.effectivealtruism.org [Long-Term Future Fund] 6 month AI alignment internship stipend top-up xng_1vsce_ Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety $1.8K Dhruvin Patel 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] Travel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety xng_1vsce_ Experimentally testing generative AI's ability to persuade humans about hazardous topics $115K Thomas Costello 2024-01 funds.effectivealtruism.org [Long-Term Future Fund] Experimentally testing generative AI's ability to persuade humans about hazardous topics xng_1vsce_ 6 month stipend for SAE-circuits $40K Logan Smith 2024-07 funds.effectivealtruism.org [Long-Term Future Fund] 6 month stipend for SAE-circuits xng_1vsce_ 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF $42K Marcus Williams 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF xng_1vsce_ 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment $13K Simon Lermen 2023-04 funds.effectivealtruism.org [Long-Term Future Fund] 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs 
and to use LLMs as tools for alignment xng_1vsce_ Compute for experiment about how steganography in large language models might arise as a result of benign optimization $2K Felix Binder 2023-10 funds.effectivealtruism.org [Long-Term Future Fund] Compute for experiment about how steganography in large language models might arise as a result of benign optimization xng_1vsce_