| 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian | $13,000 | Jan 2022 |
| Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal | $40,000 | Jan 2020 |
| Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research | $20,000 | Jul 2019 |
| 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged | $27,819 | Oct 2021 |
| 12-month salary for researching value learning | $50,000 | Jan 2022 |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral. | $30,000 | Jul 2020 |
| Support Sam's participation in the ‘Mid-term AI impacts’ research project | $4,455 | Oct 2020 |
| PhD at Cambridge | $150,000 | Jul 2020 |
| Funding a Nordic conference for senior X-risk researchers and junior talents interested in entering the field | $4,562 | Oct 2021 |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | $250,000 | Oct 2021 |
| I would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good. | $2,000 | Jan 2022 |
| Research on AI safety | $30,103 | Jan 2022 |
| Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software | $11,400 | Oct 2021 |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | $150,000 | Oct 2021 |
| Buy out of teaching assistant duties for the remaining two years of my PhD program | $50,000 | Jan 2022 |
| Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | $82,000 | Jan 2022 |
| Support to work on biosecurity | $11,400 | Jan 2022 |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | $234,121 | Jan 2022 |
| Time costs over six months to publish a paper on the interaction of open science practices and bio-risk | $8,324 | Oct 2021 |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | $80,000 | Jul 2021 |
| Producing video content on AI alignment | $39,000 | Apr 2019 |
| Participation in a 2-week summer school on science diplomacy to advance my profile in the science-policy interface | $1,571 | Jul 2021 |
| Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary | $24,000 | Oct 2020 |
| Open Online Course on “The Economics of AI” for Anton Korinek | $71,500 | Jan 2021 |
| Organizing a workshop aimed at highlighting recent successes in the development of verified software. | $5,000 | Jan 2020 |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. | $135,000 | Jan 2021 |
| 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects | $11,700 | Oct 2021 |
| A study of safe exploration and robustness to distributional shift in biological complex systems | $30,000 | Apr 2019 |
| Conducting independent research into AI forecasting and strategy questions | $40,000 | Oct 2019 |
| Conducting independent research on cause prioritization | $33,000 | Jan 2020 |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | $30,000 | Apr 2019 |
| 6-month salary for JJ to continue providing 1-on-1 support to early AI safety researchers and transition AISS | $25,000 | Jul 2021 |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | $77,500 | Oct 2021 |
| Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | $30,000 | Oct 2019 |
| Surveying the neglectedness of broad-spectrum antiviral development | $18,000 | Oct 2019 |
| Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books | $19,000 | Oct 2019 |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | $250,000 | Oct 2021 |
| Exploring crucial considerations for decision-making around information hazards | $25,000 | Jan 2020 |
| Help InterACT when university systems cannot, supporting InterACT’s work enabling human-compatible robots and AI agents | $135,000 | Jan 2022 |
| Aiming to implement AI alignment concepts in real-world applications | $10,000 | Oct 2018 |
| Funding for building agents with causal models of the world and using those models for impact minimization. | $10,000 | Jan 2020 |
| Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise | $10,000 | Jul 2019 |
| Identifying and resolving tensions between competition law and long-term AI strategy | $32,000 | Jan 2020 |
| Stipends, work hours, and retreat costs for four extra students of CHERI’s summer research program | $11,094 | Jul 2021 |
| Supporting 3-month research period | $7,900 | Jul 2020 |
| PhD in Computer Science working on AI-safety | $250,000 | Jan 2021 |
| 4-month salary to upskill in biosecurity and explore possible career paths in the field. | $12,000 | Oct 2021 |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to the mass public | $100,000 | Jan 2021 |
| 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic | $3,150 | Jan 2022 |
| Grant to cover fees for a master's program in machine learning | $27,645 | Oct 2021 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $91,450 | Jul 2018 |
| Supporting Vanessa with her AI alignment research | $100,000 | Oct 2020 |
| Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | $55,000 | Jan 2020 |
| Building understanding of the structure of risks from AI to inform prioritization | $80,000 | Oct 2021 |
| Write a SF/F novel based on the EA community. | $15,000 | Jan 2022 |
| Educational scholarship in AI safety | $13,000 | Jan 2022 |
| Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers | $40,000 | Jan 2019 |
| Support to build a forecasting platform based on user-created play-money prediction markets | $200,000 | Jan 2022 |
| Summer research program on global catastrophic risks for Swiss (under)graduate students | $34,064 | Jan 2021 |
| Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead | $27,000 | Apr 2019 |
| Strategic research and studying programming | $30,000 | Apr 2019 |
| Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety | $80,000 | Jan 2022 |
| 1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment | $2,491 | Jan 2021 |
| 4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent | $3,273 | Jan 2021 |
| 7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics | $18,000 | Jan 2021 |
| Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier | $20,000 | Apr 2019 |
| Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates | $35,000 | Jul 2021 |
| Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture | $3,600 | Jul 2021 |
| Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD | $100,000 | Oct 2021 |
| Independent research on forecasting and optimal paths to improve the long-term - LTF fund | $41,337 | Oct 2020 |
| Payment for AI researchers when I interview / survey them about their perceptions of safety | $9,900 | Jan 2022 |
| Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward | $34,500 | Jan 2022 |
| Unrestricted donation | $150,000 | Apr 2019 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $488,994 | Jul 2018 |
| Researching methods to continuously monitor and analyse artificial agents for the purpose of control. | $44,668 | Oct 2020 |
| Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness | $30,000 | Oct 2019 |
| 2-year funding to run public and expert surveys on AI governance and forecasting | $231,608 | Oct 2021 |
| Persuasion Tournament for Existential Risk | $200,000 | Jul 2021 |
| Support to work towards developing an early-warning system for future biological risks | $9,000 | Jan 2022 |
| Develop a research project on how to infer humans' internal mental models from their behaviour using cognitive science modeling | $7,700 | Jan 2020 |
| Testing how the accuracy of impact forecasting varies with the timeframe of prediction. | $55,000 | Oct 2020 |
| Surveying experts on AI risk scenarios and working on other projects related to AI safety. | $5,000 | Jul 2020 |
| Funds for a 6-month project contributing to the clarification of goal-directedness | $21,950 | Jan 2022 |
| Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety | $121,672 | Jan 2021 |
| Funding to cover a visit to Boston for biosecurity work | $16,456 | Oct 2021 |
| Retroactive funding for running an alignment theory mentorship program with Evan Hubinger | $3,600 | Jan 2022 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $174,021 | Jul 2018 |
| Supporting aspiring researchers of AI alignment to boost themselves into productivity | $25,000 | Apr 2019 |
| Human Progress for Beginners children's book | $25,000 | Oct 2019 |
| Replacement salary for teaching during economics Ph.D., freeing time for conduct research into forecasting and pandemics | $42,000 | Jan 2021 |
| Research to enable transition to AI Safety | $43,000 | Oct 2019 |
| Formalizing the side effect avoidance problem research | $30,000 | Jan 2020 |
| Productivity coaching for effective altruists to increase their impact | $23,000 | Jul 2019 |
| 50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data | $37,500 | Jan 2021 |
| 6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations | $3,500 | Jul 2021 |
| Support for self-study in data science and forecasting, to upskill within a GCBR research career | $2,230 | Oct 2021 |
| Create AI safety videos, and offer communication and media support to AI safety orgs. | $60,000 | Jul 2020 |
| We’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting. | $50,000 | Oct 2021 |
| Developing algorithms, environments and tests for AI safety via debate. | $25,000 | Jul 2020 |
| 2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders | $33,762 | Jan 2022 |
| Writing fiction to convey EA and rationality-related topics | $20,000 | Jul 2019 |
| Research on the links between short- and long-term AI policy while skilling up in technical ML | $75,080 | Jul 2019 |
| 3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance" | $5,000 | Oct 2021 |
| Funding for full-time, independent research on agent foundations | $30,000 | Oct 2019 |
| PhD in machine learning with a focus on AI alignment | $85,530 | Jul 2021 |
| Buying out one year of my academic teaching so that I can spend time on AI alignment research instead | $12,000 | Jan 2022 |
| Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019. | $28,000 | Apr 2019 |
| For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit | $85,000 | Jan 2021 |
| Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support) | $14,838 | Jan 2017 |
| Additional funding for AI strategy PhD at Oxford / FHI | $36,982 | Jul 2019 |
| 6-month salary to develop tools to test the natural abstractions hypothesis | $35,000 | Jan 2021 |
| A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers | $26,250 | Apr 2019 |
| Conducting independent research into AI forecasting and strategy questions | $30,000 | Apr 2019 |
| One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields. | $80,000 | Jan 2021 |
| Formalizing perceptual complexity with application to safe intelligence amplification | $30,000 | Apr 2019 |
| Three months of blogging and movement building at the intersection of EA/longtermism and progress studies | $18,000 | Oct 2021 |
| Support multiple SPARC project operations during 2021 | $15,000 | Jan 2021 |
| Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades | $11,440 | Jul 2021 |
| A two-day, career-focused workshop to inform and connect European EAs interested in AI governance | $17,900 | Jan 2019 |
| To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety | $23,000 | Jul 2019 |
| Funding towards a 2 year postdoctoral stint to work on Safety in AI, with a focus on developing value aligned systems | $275,000 | Jan 2022 |
| 10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability | $19,020 | Oct 2021 |
| Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date. | $65,000 | Jan 2020 |
| Multi-model approach to corporate and state actors relevant to existential risk mitigation | $30,000 | Jul 2019 |
| 1-year salary for Adam Shimi to conduct independent research in AI Alignment | $60,000 | Jan 2021 |
| A research agenda rigorously connecting the internal and external views of value synthesis | $30,000 | Apr 2019 |
| BERI will support SERI when university systems are unable to help | $60,000 | Jan 2021 |
| Financial support for work on a biosecurity research project and workshop, and travel expenses | $15,000 | Jan 2022 |
| 3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity | $15,000 | Jan 2022 |
| Support to create language model (LM) tools to aid alignment research through feedback and content generation | $40,000 | Jan 2022 |
| Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD | $10,000 | Apr 2019 |
| Longtermist lessons from COVID | $5,625 | Jan 2022 |
| Writing preliminary content for an encyclopedia of effective altruism | $17,000 | Jan 2020 |
| Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission | $9,798 | Oct 2020 |
| Unrestricted donation | $50,000 | Apr 2019 |
| An offline community hub for rationalists and EAs | $50,000 | Apr 2019 |
| Upskilling investigation of AI Safety via debate and ML training | $10,000 | Oct 2019 |
| Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge | $200,000 | Jan 2021 |
| Funding to pay participants to test a forecasting training program | $3,200 | Oct 2021 |
| Building infrastructure for the future of effective forecasting efforts | $70,000 | Apr 2019 |
| Subsidized therapy/coaching/mediation for rationalists, EAs, and startups that are working on things like x-risks. | $40,000 | Oct 2019 |
| 8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI | $28,320 | Jul 2021 |
| 6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment | $50,000 | Jan 2022 |
| 12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals | $20,000 | Apr 2019 |
| Conducting postdoctoral research at Harvard on the psychology of EA/long-termism | $50,000 | Apr 2019 |
| 12-month salary to provide runway after finishing RSP | $55,000 | Jan 2021 |
| Educational Scholarship in AI Alignment | $22,000 | Jan 2022 |
| Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research | $70,000 | Jan 2021 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $162,537 | Jul 2018 |
| Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter | $1,050 | Jan 2022 |
| Unrestricted donation | $50,000 | Apr 2019 |
| Support David Reber: 9.5 months of strategic outsourcing to read up on AI Safety and find mentors | $20,000 | Oct 2021 |
| 12-month salary for independent research, upskilling, and finding a stable position in AI-Safety | $24,000 | Jan 2022 |
| A major expansion of the Metaculus prediction platform and its community | $70,000 | Apr 2019 |
| Research project on the longevity and decay of universities, philanthropic foundations, and Catholic orders | $3,579 | Oct 2020 |
| Organising immersive workshops on meta skills and x-risk for STEM students at top universities. | $32,660 | Oct 2020 |
| Support for alignment theory agenda evaluation | $25,000 | Jul 2022 |
| AI safety dinners | $10,000 | Jul 2022 |
| AI safety research | $1,500 | Oct 2022 |
| Compensation for a non-fiction book on threat of AGI for a general audience | $50,000 | Jul 2022 |
| Funding to perform human evaluations for evaluating different machine learning methods for aligning language models | $10,000 | 2022 |
| Travel Support to BWC RevCon & Side Events | $3,500 | Oct 2022 |
| Travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems | $10,840 | Jan 2023 |
| Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows) | $100,000 | Jan 2023 |
| Neural network interpretability research | $12,990 | Jul 2022 |
| Flight and accommodation costs to spend a month working with Will Bradshaw's team at the NAO | $4,910 | Jan 2023 |
| 6 months of independent alignment research and upskilling | $30,000 | 2022 |
| Research into the international viability of FHI's Windfall Clause | $3,000 | Jul 2022 |
| 6-month salary for research into preventing steganography in interpretable representations using multiple agents | $20,000 | Oct 2022 |
| Research on EA and longtermism | $70,000 | Jul 2022 |
| 6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out-of-distribution performance in high-stakes situations. | $40,000 | Jan 2023 |
| 1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs. | $50,182 | 2022 |
| 6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent | $23,000 | Jan 2022 |
| This grant will support Naoya Okamoto in upskilling in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign. | $7,500 | Jan 2023 |
| Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years | $3,000 | Jan 2023 |
| Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster | $50,000 | Jul 2022 |
| 12-month salary to study and get into AI Safety Research and work on related EA projects | $14,000 | Oct 2022 |
| 4-month salary to support an early-career alignment researcher who is taking a year to pursue research and test fit | $20,000 | 2022 |
| Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike | $5,000 | Jul 2022 |
| 6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe | $36,000 | Jul 2022 |
| 6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper | $32,650 | Jan 2023 |
| Financial support to help productivity and increase time of early career alignment researcher | $7,000 | Jul 2022 |
| 5-month part time salary for collaborating on a research paper analyzing the implications of compute access | $2,500 | 2022 |
| Support for living expenses while doing PhD in AI safety - technical research and community building work | $2,305 | 2022 |
| 6-month salary for self-study to be more effective at AI alignment research | $15,000 | Jul 2022 |
| The Alignable Structures workshop in Philadelphia | $9,000 | Oct 2022 |
| New laptop for technical AI safety research | $4,099 | Jul 2022 |
| 10-month funding to study ML at university and AIS independently | $500 | Jan 2023 |
| 6-month salary to improve the US regulatory environment for prediction markets | $138,000 | Jul 2022 |
| Develop and market video game to explain the Stop Button Problem to the public & STEM individuals | $100,000 | Jul 2022 |
| A 2-day workshop to connect alignment researchers from the US and UK with AI researchers and entrepreneurs from Japan | $72,827 | 2022 |
| Paid internships for promising Oxford students to try out supervised AI Safety research projects | $60,000 | Jul 2022 |
| Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions | $3,950 | Jul 2022 |
| Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022 | $22,570 | Jan 2022 |
| Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock | $3,500 | Jul 2022 |
| 2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hub | $15,000 | 2022 |
| Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022 | $110,000 | Jul 2022 |
| 8-week scholars program to pair promising alignment researchers with renowned mentors | $316,000 | Oct 2022 |
| Stanford Artificial Intelligence Professional Program tuition | $4,785 | Jul 2022 |
| (professional development grant) New laptop for technical AI safety research | $2,500 | 2022 |
| Year-long salary for shard theory and RL mech int research | $220,000 | Jan 2023 |
| Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople | $5,000 | Jul 2022 |
| Support to further develop a branch of rationality focused on patient and direct observation | $80,000 | Jul 2022 |
| 1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada | $87,000 | Jul 2022 |
| 3-month salary to skill up in ML and Alignment with the goal of developing a streamlined course in Math/AI | $5,500 | 2022 |
| 6-month salary for two people to find formalisms for modularity in neural networks | $72,560 | 2022 |
| One-course teaching buyout for Steve Peterson for two academic semesters to work on the foundational issue of *agency* for AI safety | $20,815.20 | Oct 2022 |
| 6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper | $167,480 | Jan 2023 |
| European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers | $169,947 | Jan 2022 |
| 4-month salary to set up 2 AI safety groups covering 3 universities in Sweden, with an eventual retreat | $10,000 | Oct 2022 |
| Make 12 more AXRP episodes | $23,544 | 2022 |
| 12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk | $60,000 | Jul 2022 |
| 1-year salary for research in applications of natural abstraction | $180,000 | Oct 2022 |
| Financial support to work part time on an academic project evaluating factors relevant to digital consciousness | $11,000 | Oct 2022 |
| 6-month salary & operational expenses to start a cybersecurity & alignment risk assessment org | $98,000 | Jan 2023 |
| 6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations | $6,000 | Jan 2023 |
| 3-month salary for upskilling in PyTorch and AI safety research. | $19,200 | Jan 2023 |
| 6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI | $50,000 | Oct 2022 |
| Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition) | $4,000 | Oct 2022 |
| Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group | $5,613 | 2022 |
| 6-month salary to conduct AI alignment research on circuits in decision transformers | $50,000 | 2022 |
| 6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience | $8,000 | 2022 |
| Funding for a one year machine learning and computational statistics master’s at UCL | $38,101 | Oct 2022 |
| Funding for project transitioning from AI capabilities to AI Safety research. | $8,200 | 2022 |
| Twelve-month salary to work as a global rationality organizer | $130,000 | Oct 2022 |
| Support to work on an Aisafety.camp project: the impact of human dogmatism on training | $2,000 | Jul 2022 |
| Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety | $54,962 | Jan 2023 |
| 6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation | $47,074 | Oct 2022 |
| 5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend | $27,248 | Oct 2022 |
| One year of funding to improve an established community hub for EA in London | $50,000 | Jul 2022 |
| Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions | $90,000 | Jan 2022 |
| Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science | $26,077 | Oct 2022 |
| 6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy | $40,250 | Oct 2022 |
| 6 months' salary for independent work centered on distillation and coordination in the AI governance & strategy space | $69,940 | 2022 |
| Support to cover the costs of leaving employment in order to pursue AI safety research. | $4,000 | 2022 |
| 6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability | $28,875 | 2022 |
| PhD Stipend Top Up for CHAI PhD Student. | $6,675 | Jan 2022 |
| Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford | $3,640 | Jul 2022 |
| One year part time spent on AI safety upskilling and concrete research projects | $62,500 | Oct 2022 |
| Pass on funds for Astral Codex Ten Everywhere meetups | $22,000 | Jan 2023 |
| Payment for part-time rationality community building | $4,000 | Oct 2022 |
| 4-month salary for two people to find formalisms for modularity in neural networks | $67,000 | Jan 2023 |
| Travel support to attend the Symposium on AGI Safety in Oxford in May | $1,500 | Jan 2023 |
| Funding the last year of my PhD on embedded agency, to free up my time from teaching | $64,000 | Oct 2022 |
| Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity | $8,150 | Oct 2022 |
| Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights. | $35,625 | Oct 2022 |
| 2 years of GovAI salary and overheads for Robert Trager | $401,537 | Jul 2022 |
| Support for Jay Bailey for work in ML for AI Safety | $79,120 | Jul 2022 |
| 4-month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research. | $12,000 | Jan 2023 |
| Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp. | $10,000 | Jul 2022 |
| 4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream | $16,300 | Jan 2023 |
| Fine-tuning large language models for an interpretability challenge (compute costs) | $11,300 | 2022 |
| Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward | $40,000 | 2022 |
| 12-month salary to work on alignment research! | $96,000 | Oct 2022 |
| Funding for Computer Science PhD | $348,773 | Jan 2022 |
| 6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL | $40,000 | Oct 2022 |
| 4-month stipend to study AI Alignment, apply for ML Safety courses, and implement it on RL models | $1,000 | 2022 |
| 12-month salary to work on ML models for detecting genetic engineering in pathogens | $85,000 | Oct 2022 |
| 2 months' rent and living costs to attend MLSS in Indonesia, as I need to move closer to my workplace to make time | $745 | Oct 2022 |
| Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities | $44,000 | Oct 2022 |
| Retroactive grant for managing the MATS program, 1.0 and 2.0 | $27,000 | Oct 2022 |
| Enabling prosaic alignment research with a multi-modal model on natural language and chess | $25,000 | Jul 2022 |
| 2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work | $16,000 | Oct 2022 |
| 3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment | $1,000 | Oct 2022 |
| Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency | $180,200 | Jul 2022 |
| 6-month salary for researching “Framing computational systems such that we can find meaningful concepts” and upskilling | $24,000 | Oct 2022 |
| 6 months’ salary to upskill on technical AI safety through project work and studying | $50,000 | Jan 2023 |
| 6-month salary for an AI alignment research project on the manipulation of humans by AI | $25,383 | 2022 |
| 6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation | $26,342 | Oct 2022 |
| Support for research into applied technical AI alignment work | $10,000 | Jul 2022 |
| A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research | $305,000 | Jan 2022 |
| Increased stipends to cover living expenses and a higher travel allowance for students of CHERI’s 2022 summer residency | $134,532 | Jul 2022 |
| 5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal | $14,300 | Jul 2022 |
| 12-Month Salary and Compute Expenses to do AI Safety Research with LLMs | $70,000 | Jan 2023 |
| I am looking for a career transition grant to give me more time for job hunting & networking | $3,618 | Jan 2023 |
| Research and a report/paper on the role of emergency powers in the governance of X-Risk | $26,000 | Jul 2022 |
| Equipment to improve productivity while doing AI Safety research | $3,900 | Jul 2022 |
| 3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs | $20,000 | 2022 |
| One-year funding of Astral Codex Ten meetup in Philadelphia | $5,000 | Jan 2023 |
| Reconstruction attacks in federated learning | $5,000 | Jul 2022 |
| This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project | $47,500 | Jan 2023 |
| Retrospective funding for research retreat on a decision-theory / cause-prioritization topic. | $10,000 | 2022 |
| Funding for the AI Safety Nudge Competition | $5,200 | Oct 2022 |
| Support to work on AI alignment research | $16,341 | Jan 2022 |
| 9 months of funding for an early-career alignment researcher, to work with Owain Evans and others. | $45,000 | 2022 |
| Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research | $4,300 | 2022 |
| One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI | $16,600 | Oct 2022 |
| I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the Fall | $1,800 | Oct 2022 |
| Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models | $209,501 | Oct 2022 |
| Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research | $60,000 | Oct 2022 |
| Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection | $20,000 | Jul 2022 |
| 6-month salary to accelerate my plans of upskilling in order to work on the issue of AI safety | $26,150 | 2022 |
| Support funding during 2 years of an AI safety PhD at Oxford | $11,579 | Jul 2022 |
| 1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research. | $150,000 | Jul 2022 |
| Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc. | $2,100 | Jan 2023 |
| Developing and maintaining projects/resources used by the EA and rationality communities | $60,000 | Jan 2023 |
| General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences | $115,411 | Jan 2023 |
| Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding | $2,500 | Jul 2022 |
| 6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation | $27,800 | Jul 2022 |
| 4-month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis, to work on cyborgism | $32,000 | Jan 2023 |
| Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada | $17,000 | 2022 |
| 4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD | $17,220 | Jul 2022 |
| 9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research | $62,040 | Oct 2022 |
| 300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics. | $4,500 | Oct 2022 |
| ≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building | $35,000 | 2022 |
| Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant | $6,557 | Jul 2022 |
| 6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination | $25,000 | Jul 2022 |
| Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research | $2,000 | Jul 2022 |
| Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival. | $27,000 | Jan 2022 |
| 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing | $70,000 | Jul 2022 |
| Grant to cover 1 year of tuition fees and living expenses to pursue a CS PhD at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert-iteration-based amplification from Human-AI collaboration. | $63,000 | Jan 2023 |
| 7 month salary to study a Graduate Diploma of International Affairs at The Australian National University | $9,000 | Jan 2023 |
| Funding to start a longtermist org and support research | $494,510 | Oct 2022 |
| Slack money for increased productivity in AI Alignment research | $17,355 | Jan 2022 |
| 2-year salary for work on the learning-theoretic AI alignment research agenda | $100,000 | Jan 2023 |
| Support to conduct work in AI safety | $5,000 | 2022 |
| Funding to support PhD in AI Safety at Imperial College London, technical research and community building | $6,350 | Jul 2022 |
| 3-month salary for SERI-MATS extension | $24,000 | Jan 2023 |
| A relocation grant to help me to move and settle into a PhD program and cover initial expenses | $6,500 | Oct 2022 |
| Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing fully specified and verified instructions for recreating one critical technology in a post-disaster scenario. | $16,000 | 2022 |
| 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project. | $50,000 | Jan 2023 |
| 1-year salary for upskilling in technical AI alignment research | $96,000 | Oct 2022 |
| 6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety | $4,524 | Oct 2022 |
| 4-month salary for conceptual/theoretical research towards perfect world-model interpretability | $30,000 | 2022 |
| 6-month salary to skill up and gain experience to start working on AI safety full-time | $14,136 | 2022 |
| 3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas | $26,000 | 2022 |
| 6 months salary to do independent AI alignment research focused on formal alignment and agent foundations | $30,000 | 2022 |
| Funding for salary and living expenses while continuing to develop a framework of optimisation. | $8,000 | 2022 |
| Retrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program | $4,400 | Oct 2022 |
| Weekend organised as a part of the co-founder matching process of a group to found a human data collection org | $2,300 | Oct 2022 |
| 1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies | $90,000 | Jan 2023 |
| 3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers | $14,600 | Jul 2022 |
| 24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods | $102,000 | Jan 2022 |
| 6-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI | $11,000 | 2022 |
| Support for AI alignment outreach in France (video/audio/text/events) & field-building | $24,800 | Oct 2022 |
| 3-month stipend for upskilling in AI Safety and potentially transitioning to a career in Alignment | $5,000 | 2022 |
| 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems | $12,321 | 2022 |
| Scholarship for PhD student working on research related to AI Safety | $8,000 | 2022 |
| 12-month salary to transition career into technical alignment research | $25,000 | Oct 2022 |
| 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules | $40,000 | Oct 2022 |
| A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit | $2,500 | Oct 2022 |
| 8-month salary for three people to investigate the origins of modularity in neural networks | $125,000 | Jul 2022 |
| 12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism | $81,402.42 | 2022 |
| A research & networking retreat for winners of the Eliciting Latent Knowledge contest | $72,000 | Oct 2022 |
| 6 months' salary to turn intuitions, like goals, wanting, and abilities, into concepts applicable to computational systems | $24,000 | Oct 2022 |
| Support to conduct a research project collaboration on Compute Governance | $67,800 | Jan 2022 |
| 4-month funding for independent alignment research and study | $15,478 | Oct 2022 |
| EU Tech Policy Fellowship with ~10 trainees | $68,750 | Jul 2022 |
| Funding to increase my impact as an early-career biosecurity researcher | $6,000 | Oct 2022 |
| ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety | $4,800 | Jan 2022 |
| Economic stipend for an MLSS scholar to set up a proper working environment in order to do technical AI research | $2,000 | Oct 2022 |
| One year of seed funding for a new AI interpretability research organisation | $195,000 | Jan 2023 |
| Travel help to go to the Biological Weapons Convention in Geneva between 28 Nov and 16 Dec 2022 | $1,500 | 2022 |
| One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS | $100,000 | Oct 2022 |
| 6-month salary to upskill for AI safety | $54,250 | 2022 |
| 12-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities | $120,000 | Jan 2023 |
| 3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment | $22,000 | Jul 2022 |
| Cover participant stipends for AI Safety Camp Virtual 2023 | $72,500 | 2022 |
| Developing weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people | $80,000 | Jul 2024 |
| 6-month stipend for transitioning to independent research on AI Safety | $40,000 | Apr 2024 |
| Spend 3 months (part time) assessing plausible pathways to slowing AI | $5,000 | Apr 2024 |
| 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs | $10,000 | Jul 2024 |
| 6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships | $272,800 | Oct 2023 |
| 1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles | $80,000 | Jan 2025 |
| A small, short workshop focused on coordinating/planning/applying the “boundaries” idea to safety | $5,000 | Oct 2023 |
| 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk | $12,000 | Apr 2024 |
| 3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences | $80,000 | Jul 2024 |
| $10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship | $10,120 | Apr 2024 |
| 1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) | $102,500 | Oct 2023 |
| This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity. | $70,000 | Apr 2023 |
| 6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability | $52,118.50 | Jan 2024 |
| 6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after | $40,000 | Jan 2024 |
| Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing | $50,000 | Jan 2024 |
| 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection | $40,000 | Jul 2024 |
| 3-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor | $22,500 | Jan 2024 |
| This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia. | $77,000 | Jan 2024 |
| Exploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension) | $41,000 | Jan 2024 |
| 6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum. | $8,000 | Apr 2023 |
| 4-month stipend for a career transition period to explore roles in AI safety communications | $10,120 | Apr 2024 |
| 12 week 0.6FT upskilling stipend for technical governance research management | $11,244 | Apr 2024 |
| 3-month salary for SERI MATS extension to work on internal concept extraction | $27,260 | Jul 2023 |
| 6-months of part-time stipend to launch a new science journalism outlet focused on AI Safety | $50,000 | Jan 2025 |
| 6 to 12 months of funding to continue working on model psychology and evaluation | $42,000 | Jul 2023 |
| 4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch | $62,000 | Jan 2024 |
| This grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts. | $55,000 | Apr 2023 |
| Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 | $7,118 | Apr 2025 |
| A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy | $36,000 | Jul 2023 |
| Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment | $40,000 | Jul 2024 |
| 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers | $37,000 | Jul 2024 |
| 12-month support for independent AI alignment research | $45,000 | Apr 2024 |
| 4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs | $70,000 | Jul 2024 |
| This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects. | $32,000 | Jan 2024 |
| 4-month fund for full time AI safety technical and/or governance research | $10,750 | Apr 2023 |
| This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy. | $8,673 | Apr 2023 |
| 4-month stipend to continue AI safety projects | $25,216 | Jan 2024 |
| Part-time salary for independent AI safety research | $40,000 | Jul 2023 |
| Grant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student | $1,875 | Apr 2024 |
| Mentored independent research and upskilling to transition from theoretical physics PhD to AI safety | $50,000 | Jul 2024 |
| 6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety | $77,544 | Apr 2024 |
| 2-month salary to test suitability for technical AI alignment research and identify a research direction | $8,800 | Apr 2023 |
| Meta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project) | $62,150 | Jan 2024 |
| Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants | $160,000 | Jan 2024 |
| 1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program | $15,075 | Jan 2024 |
| 1 year PhD funding and compute funding to research a novel method for training prosociality into large language models | $10,000 | Apr 2023 |
| 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem | $99,330 | Oct 2023 |
| 6-month salary for independent alignment research in interpretability or control | $95,000 | Jul 2023 |
| Funding to do research on understanding search in transformers over 14 weeks at AI Safety Camp | $6,636 | Apr 2023 |
| One year stipend and compute budget, for full-time technical AI alignment research | $80,000 | Jul 2023 |
| 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law | $60,000 | Apr 2024 |
| 6 month salary for further pursuing sparse autoencoders for automatic feature finding | $40,000 | Jul 2023 |
| 5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance | $16,698 | Jan 2025 |
| 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography | $12,600 | Apr 2024 |
| 6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study | $36,000 | Apr 2024 |
| In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks | $40,000 | Jul 2023 |
| Funding to attend BWC meeting to discuss transparency with country representatives & work on research project | $1,700 | Jul 2023 |
| 2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization | $5,000 | Jan 2024 |
| 6-month stipend to work on AI alignment research (automated redteaming, interpretability) | $30,000 | Apr 2024 |
| 12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda | $27,108 | Apr 2023 |
| 1-year stipend to continue research on agency, focused on natural abstraction | $200,000 | Jul 2023 |
| This grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research. | $45,000 | Apr 2023 |
| A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025 | $20,700 | Oct 2024 |
| Undergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop | $33,000 | Jul 2023 |
| Monthly seminar series on Guaranteed Safe AI, from July to December 2024 | $6,000 | Apr 2024 |
| This grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research. | $35,000 | Apr 2023 |
| 5-month salary to continue work on evaluating agent self-improvement capabilities | $23,360 | Apr 2024 |
| 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass | $6,000 | Apr 2024 |
| 4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co | $22,324.5 | Jul 2024 |
| Seeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate | $2,500 | Apr 2024 |
| 1-month part-time stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals | $19,000 | Jan 2024 |
| 3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding | $20,000 | Apr 2024 |
| Six month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables | $61,000 | Oct 2023 |
| 6-month salary for part-time independent research on LM interpretability for AI alignment | $7,700 | Jul 2023 |
| 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs | $31,600 | Apr 2023 |
| SERI MATS 3-month extension to study knowledge removal in Language Models | $12,000 | Jul 2023 |
| 6-month salary to transition to a career in AI safety while working on AI safety projects | $30,000 | Jan 2024 |
| I'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds | $1,500 | Apr 2023 |
| 9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research | $5,000 | Jul 2024 |
| 11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland | $73,333.33 | Jul 2024 |
| Compute costs for experiments to evaluate different scalable oversight protocols | $86,600 | Jan 2024 |
| 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects | $33,700 | Apr 2024 |
| This grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction. | $2,000 | Jan 2024 |
| 6-month salary for an AISC project and continuing independent mechanistic interpretability projects | $28,000 | Apr 2023 |
| 3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge. | $3,138 | Apr 2023 |
| 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program | $30,000 | Jan 2024 |
| Retroactive funding for GameBench paper | $9,072 | Apr 2024 |
| A podcast mainly themed around AI x-risk, aimed at a non-technical audience | $5,000 | Jan 2024 |
| ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila | $86,400 | Apr 2024 |
| 4-month stipend for upskilling within the field of economic governance of AI | $7,000 | Oct 2023 |
| 4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials | $15,000 | Apr 2023 |
| 6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond | $38,688 | Jan 2024 |
| 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects | $21,989 | Jul 2023 |
| 6-month stipend to work on technical alignment research as part of the MATS 5.0 extension program | $40,000 | Jan 2024 |
| Retroactive grant to study Goodhart effects on heavy-tailed distributions | $29,760 | Jul 2023 |
| 6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems | $37,120 | Jan 2024 |
| 9 months support for an in-depth YouTube channel about AI safety and how AI will impact us all | $27,000 | Jul 2024 |
| Funding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel | $31,650 | Apr 2024 |
| 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models | $60,000 | Jul 2024 |
| 6-month career transition and independent research in AI safety and risk mitigation | $85,000 | Jul 2024 |
| This grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research. | $5,000 | Apr 2023 |
| Two workshops on strategic communications around AI safety, focused on the AI safety community | $5,720 | Jul 2024 |
| 6 month salary to work on mech interp research with mentorship from Prof David Bau | $41,000 | Jul 2023 |
| 6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmark | $35,000 | Jan 2024 |
| Research on how much language models can infer about their current user, and interpretability work on such inferences | $55,000 | Jan 2024 |
| 4-month stipend to research the mechanisms of refusal in chat LLMs | $40,000 | Jan 2024 |
| Virtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety | $10,000 | Jan 2024 |
| 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies | $27,000 | Jul 2024 |
| Develop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability | $40,000 | Jul 2024 |
| A fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech | $120,000 | Apr 2024 |
| One year funding of ACX meetup in Atlanta Georgia | $5,000 | Apr 2023 |
| 7 months of coworking-space funding continuation, during interpretability research project | $10,500 | Jan 2024 |
| Stipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention | $25,491 | Apr 2023 |
| Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking | $24,339 | Oct 2023 |
| 7-month stipend for organising AI Alignment Irvine (AIAI) | $16,337 | Jul 2024 |
| 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks | $160,000 | Jul 2024 |
| 9-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’ | $7,200 | Jul 2024 |
| 6-month stipend to continue independent interpretability research | $40,000 | Jan 2024 |
| 4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch | $67,000 | Jan 2024 |
| WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability | $61,460 | Jul 2023 |
| 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI | $6,230 | Apr 2024 |
| 1-year stipend for independent research primarily on high-level interpretability | $70,000 | Apr 2024 |
| Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment | $80,000 | Jul 2024 |
| Conference publication of interpretability and LM-steering results | $40,000 | Apr 2023 |
| 1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | $121,575 | Jul 2023 |
| 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk | $10,000 | Oct 2024 |
| 1.5 year stipend for thorough investigation and analysis of AI lab scaling policies | $100,000 | Jan 2025 |
| 6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project | $35,300 | Jul 2023 |
| 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition | $34,100 | Jan 2024 |
| Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI | $50,000 | Apr 2023 |
| Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants | $115,000 | Apr 2025 |
| MATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems | $17,500 | Jan 2024 |
| 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation | $55,660 | Jan 2024 |
| One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work | $150,000 | Jan 2025 |
| 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem | $60,000 | Jan 2024 |
| 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors | $50,336 | Jan 2024 |
| Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) | $30,000 | Apr 2024 |
| Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory | $20,000 | Jul 2024 |
| 4-month salary to continue work on AI Control as a MATS extension | $30,000 | Jul 2024 |
| 6-month salary to build experience in AI interpretability research before PhD applications | $40,000 | Apr 2023 |
| 2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fields | $5,000 | Jul 2024 |
| Salary Top-Up for Timaeus' Employees & Contractors | $100,000 | Jan 2024 |
| 6 month project - pending description | $10,000 | Apr 2023 |
| 3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research | $8,500 | Jan 2024 |
| 6-month stipend for Sparse Autoencoder Mech Interp projects | $40,000 | Jan 2024 |
| 4-month stipend to continue work on AI Control as a MATS extension | $30,000 | Jul 2024 |
| 12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour) | $80,000 | Apr 2024 |
| 6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp. | $1,739 | Apr 2023 |
| 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas | $5,200 | Apr 2025 |
| 1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality | $80,000 | Apr 2023 |
| Funding for having written AI safety distillation posts on the topic of membranes/boundaries | $4,500 | Oct 2023 |
| 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program | $60,000 | Jan 2024 |
| 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program | $30,087 | Jan 2024 |
| General support for a forecasting team | $6,000 | Oct 2023 |
| This grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence. | $44,802 | Apr 2024 |
| Year-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base. | $90,000 | Apr 2024 |
| This grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research | $30,000 | Apr 2023 |
| Funds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs | $5,090 | Jul 2023 |
| 4-month wage for alignment upskilling: gain research engineering skills (projects) + understand current alignment agendas | $7,200 | Apr 2023 |
| 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception | $55,000 | Jan 2024 |
| 6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursuing collaborative projects on collective decision-making systems. | $6,500 | Apr 2023 |
| 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations | $30,000 | Apr 2024 |
| Develop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk | $25,000 | Jan 2025 |
| 4-month stipend to continue work on AI Control as a MATS extension | $30,000 | Jul 2024 |
| Funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted | $10,500 | Apr 2023 |
| 8-month stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find a suitable next topic | $49,333.33 | Jul 2024 |
| 1-month literature review on in-context learning and its relevance to AI alignment | $6,000 | Jan 2024 |
| 4 weeks of expenses for a FAR Labs Residency for a research group focusing on goal-directedness in transformer models | $13,000 | Apr 2024 |
| 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention | $40,000 | Jul 2024 |
| Create an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance | $5,000 | Oct 2024 |
| A private online platform for research-sharing amongst the AI governance community | $125,000 | Jul 2024 |
| 6-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers | $50,000 | Apr 2023 |
| This grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program. | $19,248 | Apr 2023 |
| Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training | $23,100 | Jan 2024 |
| 6-month incubation program for technical AI safety research organizations | $122,507 | Oct 2023 |
| 4-month stipend to apply mechanistic interpretability to a real-world application: hallucinations | $60,000 | Jul 2024 |
| 3-month part-time salary to work on AI governance projects and activities | $6,000 | Jul 2023 |
| Funding for (academic/technical) AI safety community events in London | $8,000 | Apr 2023 |
| Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward | $50,000 | Jan 2024 |
| 3–6 month stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment | $50,000 | Apr 2024 |
| 6-month AI alignment internship stipend top-up | $10,000 | Apr 2024 |
| Travel funding for an early-career researcher to attend a workshop on biosecurity and AI safety | $1,800 | Jul 2024 |
| Experimentally testing generative AI's ability to persuade humans about hazardous topics | $115,000 | Jan 2024 |
| 6-month stipend for research on SAE circuits | $40,000 | Jul 2024 |
| 6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF | $42,000 | Oct 2023 |
| 3-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment | $13,000 | Apr 2023 |
| Compute for experiment about how steganography in large language models might arise as a result of benign optimization | $2,000 | Oct 2023 |