Longterm Wiki

2-month salary to test suitability for technical AI alignment research and identify a research direction


Record Metadata

Record Key: D50N3LW8ei
Entity: Long-Term Future Fund (LTFF)
Collection: Grants (545 records total)
Schema: Individual grant disbursement to a specific recipient.
YAML File: packages/kb/data/things/yA12C1KcjQ.yaml

Fields

Name: 2-month salary to test suitability for technical AI alignment research and identify a research direction
Amount: $8,800
Recipient: Bart Bussmann
Date: Apr 2023
Source: funds.effectivealtruism.org
Notes: [Long-Term Future Fund] 2-month salary to test suitability for technical AI alignment research and identify a research direction
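Given the record key and field list above, the backing YAML file plausibly has a shape like the sketch below. The field keys (`id`, `entity`, `name`, `amount`, and so on) are assumptions for illustration; only the values come from this record.

```yaml
# Hypothetical shape of packages/kb/data/things/yA12C1KcjQ.yaml
# (field keys are assumed; values are taken from the record above)
id: D50N3LW8ei
entity: Long-Term Future Fund (LTFF)
collection: grants
name: >-
  2-month salary to test suitability for technical AI alignment research
  and identify a research direction
amount: 8800
currency: USD
recipient: Bart Bussmann
date: 2023-04
source: funds.effectivealtruism.org
notes: >-
  [Long-Term Future Fund] 2-month salary to test suitability for technical
  AI alignment research and identify a research direction
```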

Other Records in Grants (544)

KeyNameAmountRecipient
0AZ33tQF7A6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian$13,000Maksim Vymenets
0eGRk20q4xWorking on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal$40,000Tushant Jha
_0RKOhZE6hCharacterizing the properties and constraints of complex systems and their external interactions to inform AI safety research$20,000Alexander Siegenfeld
1Aar3MVKHF6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged$27,819Thomas Moynihan
1aYm-eVR7H12-month salary for researching value learning$50,000Charlie Steiner
1Xf_Gj52BwConducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral.$30,000Gavin Taylor
24rxx414qmSupport Sam's participation in ‘Mid-term AI impacts’ research project$4,455Sam Clarke
2iHYLSpJdiPhD at Cambridge$150,000Richard Ngo
3gYJg_G0AYFunding a nordic conference for senior X-risk researchers and junior talents interested in entering the field$4,562Effektiv Altruism Sverige (EA Sweden)
3inwoVljFzFunding for a degree in the Biological Sciences at UCSD (University of California San Diego)$250,000Kristaps Zilgalvis
4S-5d0XkGbI would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good.$2,000Ruth Grace Wong
53rBRN_39YResearch on AI safety$30,103Marius Hobbhahn
59fne-uvEELiving costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software$11,400George Green
5cng_Yg1lcDesign and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment$150,000Nick Hay
5I7yCiC5pjBuy out of teaching assistant duties for the remaining two years of my PhD program$50,000Michael Zlatin
5Ny3nH_8D8Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved$82,000Robert Miles
5Vj-_jyQUwSupport to work on biosecurity$11,400Sculpting Evolution Group, MIT
6anU7JjbgdFunding to trial a new London organization aiming to 10x the number of AI safety researchers$234,121Jessica Cooper
7DP38CekFBTime costs over six months to publish a paper on the interaction of open science practices and bio-risk$8,324James Smith
7J0aKEIBzAResearch into the nature of optimization, knowledge, and agency, with relevance to AI alignment$80,000Alex Flint
81mfM4b2ELProducing video content on AI alignment$39,000Robert Miles
87apRjJJViParticipation in a 2-weeks summer school on science diplomacy to advane my profile in the science-policy interface$1,571Fabio Haenel
8AsXZQMre0Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary$24,000Nick Hollman
8j0HGkWpffOpen Online Course on “The Economics of AI” for Anton Korinek$71,500University of Virginia
8JOfbK6Za9Organizing a workshop aimed at highlighting recent successes in the development of verified software.$5,000Gopal Sarma
8o9cB6AgILHiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization.$135,000Legal Priorities Project
8tyT9Ogstz4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects$11,700David Rhys Bernard
9iPe5avIJ0A study of safe exploration and robustness to distributional shift in biological complex systems$30,000Nikhil Kunapuli
9nxwEJRciTConducting independent research into AI forecasting and strategy questions$40,000Tegan McCaslin
9Y55HEEvScConducting independent research on cause prioritization$33,000Michael Dickens
AbN8JE0M3CBuilding towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility$30,000Alex Turner
AdYoWzcaIF6 month salary for JJ to continue providing 1on1 support to early AI safety researchers and transition AISS$25,000AI Safety Support
an1yb0o9BvDPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations$77,500University of Oxford, Department of Experimental Psychology
B5cxypQlBCBuild a theory of abstraction for embedded agency using real-world systems for a tight feedback loop$30,000John Wentworth
bc7utnQEshSurveying the neglectedness of broad-spectrum antiviral development$18,000Jaspreet Pannu (Jassi)
besYSyJL5_Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books$19,000Elizabeth Van Nostrand
BnenW_3Njw12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms$250,000Berkeley Existential Risk Initiative
BQGdvYXh33Exploring crucial considerations for decision-making around information hazards$25,000Will Bradshaw
C39O3MDLmxHelp InterACT when university systems cannot, supporting InterACT’s work enabling human-compatible robots and AI agents$135,000Berkeley Existential Risk Initiative
CORqykyNTuAiming to implement AI alignment concepts in real-world applications$10,0002VexoROapg
CQMGGlGpATFunding for building agents with causal models of the world and using those models for impact minimization.$10,000Vincent Luczkow
D93EqVKScLUpskilling in ML in order to be able to do productive AI safety research sooner than otherwise$10,000Joar Skalse
dgJdgWUYgSIdentifying and resolving tensions between competition law and long-term AI strategy$32,000Shin-Shin Hua and Haydn Belfield
DSCsrA733KStipends, work hours, and retreat costs for four extra students of CHERI’s summer research program$11,094Effective Altruism Geneva
eCRsD_mJg6Supporting 3-month research period$7,900Charlie Rogers-Smith
EKGAvt0T-ZPhD in Computer Science working on AI-safety$250,000Amon Elders
eMiOJupuUo4 month salary to upskill in biosecurity and explore possible career paths in biosecurity.$12,000Finan Adamson
flGz3Hy9F6New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public$100,000Expii, Inc.
Fnja_garF_3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic$3,150Adin Richards
Fn-mhVcRYUGrant to cover fees for a master's program in machine learning$27,645Andrei Alexandru
froxsF0HhcFunding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$91,450hvg9ecR3nA
fX4nZ04yXiSupporting Vanessa with her AI alignment research$100,000Vanessa Kosoy
FYh5ALalcMCreate a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing$55,000106
FYQWivWarkBuilding understanding of the structure of risks from AI to inform prioritization$80,000David Manheim
FYzS8cRY6TWrite a SF/F novel based on the EA community.$15,000Timothy Underwood
g84AgdlHp3Educational scholarship in AI safety$13,000Paul Colognese
giSaNsL_LBScaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers$40,000Shahar Avin
gr6_LAtrdKSupport to build a forecasting platform based on user-created play-money prediction markets$200,000Stephen Grugett, James Grugett, Austin Chen
-Gs2dM9eWoSummer research program on global catastrophic risks for Swiss (under)graduate students$34,064Effective Altruism Geneva
gVnZ2psxImBuilding infrastructure to give existential risk researchers superforecasting ability with minimal overhead$27,000Jacob Lagerros
HC_xTZ2REJStrategic research and studying programming$30,000Eli Tyre
HDHoXrI4k0Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety$80,000AI Safety Support
hE9TYd61UN1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment$2,491Marc-Everin Carauleanu
i3j2jkploW4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent$3,273David Reber
IaxVD3CbAE7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics$18,000Toby Bonvoisin
IGKZeH3l1ePerforming independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier$20,000Connor Flexman
Iig4wilTrvInvestigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates$35,000Joe Collman
IoXc9BYI5jResearching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture$3,600ALLFED
IzML292SfHReplacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD$100,000Aryeh Englander
IzmOOWdc1sIndependent research on forecasting and optimal paths to improve the long-term - LTF fund$41,337248
j836vMIQvgPayment for AI researchers when I interview / survey them about their perceptions of safety$9,900Vael Gates
JXmpyeOEgNCataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward$34,500Michael Parker
kcVfpQP4xJUnrestricted donation$150,000l5K9ZdbXww
keB-_jfpwaFunding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$488,994231
kHH6KIQ3luresearching methods to continuously monitor and analyse artificial agents for the purpose of control.$44,668Lee Sharkey
kjV6oor7_pIdentifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness$30,000Kyle Fish
kKQPfNi96E2-year funding to run public and expert surveys on AI governance and forecasting$231,608Noemi Dreksler
KPBz1OZ0EAPersuasion Tournament for Existential Risk$200,000Philip Tetlock, Ezra Karger, Pavel Atanasov
kVUk1QSmfASupport to work towards developing an early-warning system for future biological risks$9,000Michael McLaren
L1_M9Gy-CrDevelop a research project on how to infer human's internal mental models from their behaviour using cognitive science modeling$7,700Sofia Jativa Vega
lCMyQHwJ8gTesting how the accuracy of impact forecasting varies with the timeframe of prediction.$55,000David Rhys Bernard
lGydfL645HSurveying experts on AI risk scenarios and working on other projects related to AI safety.$5,000Alexis Carlier
LgZOV7ucvTFunds for a 6-month project contributing to the clarification of goal-directedness$21,950Morgan Rogers
LIc7d3yUG8Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety$121,672Caroline Jeanmaire
lJ2W3CxUe5Funding to cover a visit to Boston for biosecurity work$16,456Will Bradshaw
LlYaBTHpgXRetroactive funding for running an alignment theory mentorship program with Evan Hubinger$3,600Oliver Zhang
LU6DZ-0NvAFunding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$174,021l5K9ZdbXww
lxu_0p53tuSupporting aspiring researchers of AI alignment to boost themselves into productivity$25,000Johannes Heidecke
mgxXED3iLPHuman Progress for Beginners children's book$25,000Jason Crawford
mLexy-8U9RReplacement salary for teaching during economics Ph.D., freeing time for conduct research into forecasting and pandemics$42,000Joel Becker
MPojLPWjGfResearch to enable transition to AI Safety$43,000Vojtěch Kovařík
NALe3Kpq0PFormalizing the side effect avoidance problem research$30,000Alex Turner
nfOLm1qJhYProductivity coaching for effective altruists to increase their impact$23,000Lynette Bye
ngzHVoyCLU50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data$37,500BugSeq Bioinformatics Inc.
NP-qaV9z9-6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations$3,500Rutgers University, Department of Philosophy
Ns640cdsMPSupport for self-study in data science and forecasting, to upskill within a GCBR research career$2,230Benjamin Stewart
nxBl7RrlnkCreate AI safety videos, and offer communication and media support to AI safety orgs.$60,000Robert Miles
-O92gdql4wWe’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting.$50,000The Center for Election Science
Oc7FXi-kiUDeveloping algorithms, environments and tests for AI safety via debate.$25,000Joe Collman
Ofm9C5CVcU2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders$33,762Aligned AI
OkLC_VLKsQWriting fiction to convey EA and rationality-related topics$20,000Miranda Dixon-Luinenburg
OU5DyFhjeQResearch on the links between short- and long-term AI policy while skilling up in technical ML$75,080Jess Whittlestone
oulsifNnkQ3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance"$5,000Chelsea Liang
P7lbkUUeOaFunding for full-time, independent research on agent foundations$30,000Daniel Demski
pdiIlZZpWyPhD in machine learning with a focus on AI alignment$85,530Dmitrii Krasheninnikov
PFFi-I2cT7Buying out one year of my academic teaching so that I can spend time on AI alignment research instead$12,000David Udell
pQlv4ZnyPwFunding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019.$28,000Mikhail Yagudin
pw3zN7ur00For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit$85,000Remmelt Ellen
QiXzaIKn9OProvides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support)$14,838Berkeley Existential Risk Initiative (BERI)
qlVJtsSCC3Additional funding for AI strategy PhD at Oxford / FHI$36,982Sören Mindermann
qnePPz-7Iy6-month salary to develop tools to test the natural abstractions hypothesis$35,000John Wentworth
QTHMmvOnMkA biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers$26,250Tessa Alexanian
qUljUyeTJmConducting independent research into AI forecasting and strategy questions$30,000Tegan McCaslin
quSg4y_gHtOne year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields.$80,000Logan Strohl
Qy9z7t9cX8Formalizing perceptual complexity with application to safe intelligence amplification$30,000Anand Srinivasan
QzvRqVqeeFThree months of blogging and movement building at the intersection of EA/longtermism and progress studies$18,000Nicholas (Nick) Whitaker
rH8A7UQ3rvSupport multiple SPARC project operations during 2021$15,000SPARC
rj9dUrarbDFunding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades$11,440Zach Freitas-Groff
_-roriUpiiA two-day, career-focused workshop to inform and connect European EAs interested in AI governance$17,900Alex Lintz
SahWMTva8uTo spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety$23,000Stag Lynn
smRk2JfN5xFunding towards a 2 year postdoctoral stint to work on Safety in AI, with a focus on developing value aligned systems$275,000Kush Bhatia
SPvDNeKqnC10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability$19,020Benedikt Hoeltgen
svKqNbljWFIncreasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date.$65,000Anthony Aguirre
sweKVvzZuNMulti-model approach to corporate and state actors relevant to existential risk mitigation$30,000David Manheim
tBLs_p3PKQ1-year salary for Adam Shimi to conduct independent research in AI Alignment$60,000Adam Shimi
TXkUXkGAUBA research agenda rigorously connecting the internal and external views of value synthesis$30,000David Girardo
U3ddWtAw1xBERI will support SERI when university systems are unable to help$60,000Berkeley Existential Risks Initiative
uELBUx1PG6Financial support for work on a biosecurity research project and workshop, and travel expenses$15,000Simon Grimm
utIKV4qiG93-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity$15,000Caleb Withers
uW3AUMQdQkSupport to create language model (LM) tools to aid alignment research through feedback and content generation$40,000Logan Smith
UxkBtpDkkOUpskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD$10,000Orpheus Lummis
UXRX--f_jILongtermist lessons from COVID$5,625Gavin Leech
V412WxjVvlWriting preliminary content for an encyclopedia of effective altruism$17,000Pablo Stafforini
VpWHefVp2UUnderstanding the Impact of Lifting Government Interventions against COVID-19 Transmission$9,798Mrinank Sharma
WAhWLNfsAZUnrestricted donation$50,0002VexoROapg
WAyW03YJ45An offline community hub for rationalists and EAs$50,000Vyacheslav Matyuhin
wn-5FO99upUpskilling investigation of AI Safety via debate and ML training$10,000Joe Collman
WrVqixhDOJComputing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge$200,000David Krueger
Wsq6Urn5LxFunding to pay participants to test a forecasting training program$3,200Logan McNichols
wtmWZQB5ViBuilding infrastructure for the future of effective forecasting efforts$70,000Ozzie Gooen
WvA1Bja5-0Subsidized therapy/coaching/mediation for rationalists, EA, and startups that are working on things like x-risks.$40,000Damon Pourtahmaseb-Sasi
wVSlIH3cWe8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI$28,320James Bernardi
xJ9Ks1m-8O6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment$50,000Thomas Woodside
xo9qI59Fo112-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals$20,000Lauren Lee
XvJuWd8IDSConducting postdoctoral research at Harvard on the psychology of EA/long-termism$50,000Lucius Caviola
xXcul_kHC712-month salary to provide runway after finishing RSP$55,000The Future of Humanity Institute
Y6l9vrvtL1Educational Scholarship in AI Alignment$22,000Jaeson Booker
YaFoVMM64dFund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research$70,000t0p43V5oLA
YFWwiarl-tFunding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$162,537gNsqAes7Dw
YN9Z786ZLEAd campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter$1,050Alex Turner
YQpfbn5JjpUnrestricted donation$50,000231
z7mYB3WNf1Support David Reber -9.5 months of strategic outsourcing to read up on AI Safety and find mentors$20,000David Reber
ZAnCI6ziEk12-month salary for independent research, upskilling, and finding a stable position in AI-Safety$24,000Robert Kralisch
zFVQuoBgOHA major expansion of the Metaculus prediction platform and its community$70,000Anthony Aguirre
ZsTflOnI9EResearch project on the longevity and decay of universities, philanthropic foundations, and catholic orders$3,579Maximilian Negele
ZxqZmBLQRiOrganising immersive workshops on meta skills and x-risk for STEM students at top universities.$32,660Tamara Borine
04uEt3ZQEBSupport for alignment theory agenda evaluation$25,000Jack Ryan
0DBOYM7nsOAI safety dinners$10,000Neil Crawford
0gM6YoQlp-AI safety research$1,500Lukas Berglund
0jbPRKAidOCompensation for a non-fiction book on threat of AGI for a general audience$50,000Darren McKee
0KJjy_rZ63Funding to perform human evaluations for evaluating different machine learning methods for aligning language models$10,000Robert Kirk
0mXvTS1eKDTravel Support to BWC RevCon & Side Events$3,500Theo Knopfer
0rx0JWiWgbtravel funding for participants in a workshop on the science of consciousness and current and near-term AI systems$10,840Robert Long
0zwk2Qg6CeFunding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows)$100,000Nora Ammann
16QE4EvJJDNeural network interpretability research$12,990Nicholas Greig
17OHPjOPg1Flight and accomodation costs to spend a month working with Will Bradshaw's team at the NAO$4,910Jacob Mendel
1jMVqsOXxL6 months of independent alignment research and upskilling$30,000Zhengbo Xiang (Alana)
1U_NFTE3Q7Research into the international viability of FHI's Windfall Clause$3,000John Bridge
21DEc9h_SV6-month salary for research into preventing steganography in interpretable representations using multiple agents$20,000Hoagy Cunningham
27S_g_DcLMResearch on EA and longtermism$70,000Aaron Bergman
-2amN_L8Sc6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations.$40,000Logan Smith
2lSxs38icW1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs.$50,182Paul Bricman
2p6UZTSKi66-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent$23,000Tom Lieberum
2ywJ3ShrK5This grant will support Naoya Okamoto upskill in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign.$7,500Naoya Okamoto
34lvqSAG5PSupport to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years$3,000David Staley
34W2_82XwPSupport for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster$50,000Marius Hobbhahn
3RoyLBlMHM12-month salary to study and get into AI Safety Research and work on related EA projects$14,000Luca De Leo
3RWX-5PgAY4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit$20,000Max Kaufmann
3Zxanp70lEExploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike$5,000Isabel Johnson
4KgBAq7AzV6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe$36,000Sasha Cooper
4tUbrTw5wg6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper$32,650Jonathan Ng
4vL9VFLvjFFinancial support to help productivity and increase time of early career alignment researcher$7,000Max Kaufmann
5-2lQHsFAt5-month part time salary for collaborating on a research paper analyzing the implications of compute access$2,500Sage Bergerson
5jhbO5mINFSupport for living expenses while doing PhD in AI safety - technical research and community building work$2,305Francis Rhys Ward
5jvTYe5u0g6-month salary for self-study to be more effective at AI alignment research$15,000Thomas Kehrenberg
65GhtmNdn1The Alignable Structures workshop in Philadelphia$9,000Quinn Dougherty
6j-WcEZBNSNew laptop for technical AI safety research$4,099Peter Barnett
6pE9cBHED510-month funding to study ML at university and AIS independently$500Patricio Vercesi
6VLGbp_QXB6 month salary to improve the US regulatory environment for prediction markets$138,000Solomon Sia
7c4m42R2K7Develop and market video game to explain the Stop Button Problem to the public & STEM individuals$100,000Lone Pine Games, LLC
7heZu-70OUA 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan$72,82791
7LaU4KGiv9Paid internships for promising Oxford students to try out supervised AI Safety research projects$60,000AI Safety Hub Ltd
802UKZJNOIStarting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions$3,950Kai Sandbrink
8fmxE2OhSjFunds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022$22,570William D'Alessandro
8L4_71xcnyWebsite visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock$3,500Conor Barnes
8NgY6AIZ4D2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hu$15,000Max Räuker
9IKi0O0BhkOrganize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022$110,000Czech Association for Effective Altruism (CZEA)
9sratQUNQn8 weeks scholars program to pair promising alignment researchers with renowned mentors$316,000AI Safety Support
9t3iB_n0D5Stanford Artificial Intelligence Professional Program tution$4,785Mario Peng Lee
9XxEh9DSq9(professional development grant) New laptop for technical AI safety research$2,500Max Lamparth
9z7JbhtZ_SYear-long salary for shard theory and RL mech int research$220,000Alexander Turner
A_4zP3u13FStipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople$5,000Chris Patrick
AjQKUZC3CpSupport to further develop a branch of rationality focused on patient and direct observation$80,000Logan Strohl
aVfDxJVvIG1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada$87,000Wyatt Tessari
B3mYmfUy9G3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI$5,500Tomislav Kurtovic
b97DqxXG2G6-month salary for two people to find formalisms for modularity in neural networks$72,560Lucius Bushnaq
bQ6nbc0LGbOne-course teaching buyout for Steve Peterson for two academic semesters to work on the foundational issue of *agency* for AI safety$20,815.2Steve Petersen
C0YnRlYHaQ6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper$167,480Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos
c2r_EnTxCrEuropean Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers$169,947Effective Altruism Geneva
c806PYZq6y4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreat$10,000Jonas Hallgren
C9HbDZxOxqMake 12 more AXRP episodes$23,544Daniel Filan
cJ0GiNTvGC12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk$60,000Ross Graham
coTtt3GwmJ1-year salary for research in applications of natural abstraction$180,000John Wentworth
cPuV6T-966Financial support to work part time on an academic project evaluating factors relevant to digital consciousness$11,000Derek Shiller
CrzXx2ol9n6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org$98,000Jeffrey Ladish
cUTxZj2TrC6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations$6,000Iván Godoy
cWHotG61L13-month salary for upskilling in PyTorch and AI safety research.$19,200Alex Infanger
-D2vbP-AdY6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI$50,000Nicky Pochinkov
Dbpgkb_133Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition)$4,000Fabienne Sandkühler
DeI9YSx5qCFunding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group$5,613David Quarel
dFTcfjHlHU6-month salary to conduct AI alignment research circuits in decision transformers$50,000Joseph Bloom
dHe_2StHJb6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience$8,000Liam Carroll
Dju3SD_n6FFunding for a one year machine learning and computational statistics master’s at UCL$38,101Shavindra Jayasekera
DSPua05eizFunding for project transitioning from AI capabilities to AI Safety research.$8,200Gerold Csendes
DxlMIA1hr2Twelve month salary to work as a global rationality organizer$130,000Skyler Crossman
EARO8L0y8sSupport to work on Aisafety.camp project, impact of human dogmatism on training$2,000Kevin Wang
eJ0DJLuSW8Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety$54,962Robert Miles
eTshEe8C_E6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation$47,074Samuel Brown
EU52-F_46G5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend$27,248Joel Becker
EV9tmWSjnEOne year of funding to improve an established community hub for EA in London$50,000Newspeak House
_fgPWY9SOtSupport for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions$90,000Columbia University
fobL05V5MiFinancial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science$26,077Max Clarke
FQVg0-CrUs6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy$40,250Will Aldred
fvqTLV4uwe6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space$69,940Alexander Lintz
Fw7d15_3K_Support to cover the costs of leaving employment in order to pursue AI safety research.$4,000Kajetan Janiak
G3lE_nTQph6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability$28,875Fabian Schimpf
Ga97nIh5LIPhD Stipend Top Up for CHAI PhD Student.$6,675Alex Turner
Gj_n9VOOiBPurchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford$3,640Bálint Pataki
gL2Brc6YbAOne year part time spent on AI safety upskilling and concrete research projects$62,500Ross Nordby
guUGefrD5rPass on funds for Astral Codex Ten Everywhere meetups$22,000Skyler Crossman
HBRDKy73BwPayment for part-time rationality community building$4,000Boston Astral Codex Ten
HHNFfSnSl84-month salary for two people to find formalisms for modularity in neural networks$67,000Lucius Bushnaq
HMtdOQh25PTravel support to attend the Symposium on AGI Safety in Oxford in May$1,500Smitha Milli
HV-FUOynejFunding the last year of my PhD on embedded agency, to free up my time from teaching$64,000Daniel Herrmann
hVvl93sUCCFunds to support travel for academic research projects relating to pandemic preparedness and biosecurity$8,150Charles Whittaker
HvZEpczBFHFunding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights.$35,625Simon Skade
iBw1TbxhJI2 years of GovAI salary and overheads for Robert Trager$401,537172
ih3Bex-ikPSupport for Jay Bailey for work in ML for AI Safety$79,120Jay Bailey
iP69HMoS0u4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research.$12,000Benjamin Sturgeon
iPCUlTLNMySupport for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp.$10,000Jan Kirchner
ixQoe-a1E-4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream$16,300Joshua Reiners
IzYTW2Sb17Fine-tuning large language models for an interpretability challenge (compute costs)$11,300Andrei Alexandru
j631rLRosYCatalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward$40,000Michael Parker
J78QLzvdD912-month salary to work on alignment research!$96,000Garrett Baker
JB_oLlbxGfFunding for Computer Science PhD$348,773David Reber
jDoNSNATwJ6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL$40,000Jeremy Gillen
jiduahr7dp4-month stipend to study AI Alignment, apply for ML Safety Courses and implement it on RL models$1,000Abhijit Narayan S
JQXPnolb8Q12-month salary to work on ML models for detecting genetic engineering in pathogens$85,000Jade Zaslavsky
jz69W5GQRv2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time$745Ardysatrio Haroen
k3CJ4DSrxLPiloting an EA hardware lab for prototyping hardware relevant to longtermist priorities$44,000Adam Rutkowski
kGpim5xZe_Retroactive grant for managing the MATS program, 1.0 and 2.0$27,000SERI ML Alignment & Theory Scholars
LB0zhvMPIiEnabling prosaic alignment research with a multi-modal model on natural language and chess$25,000Philipp Bongartz
LBmZVcrTMI2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work$16,000Jonathan Ng
LqnEVbGgRG3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment$1,000Amrita A. Nair
lS-INqWYiMTop-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency$180,200Effective Altruism Geneva
lVu9TjkgO16-months salary for researching “Framing computational systems such that we can find meaningful concepts." & Upskilling$24,000Matthias Georg Mayer
mAHfCY2cU36 months’ salary to upskill on technical AI safety through project work and studying$50,000Rusheb Shah
MFBRUlWWVb6-month salary for an AI alignment research project on the manipulation of humans by AI$25,383Felix Hofstätter
MKA02RK0vW6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation$26,342David Hahnemann, Luan Ademi
mtQGLZXszNSupport for research into applied technical AI alignment work$10,000Philippe Rivet
N0ewZko8hkA 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research$305,000Principles of Intelligent Behavior in Biological and Social Systems
N48j1v60SEIncrease of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence$134,532Effective Altruism Geneva
N5ObgMkeS65-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal$14,300Nikiforos Pittaras
ngtLCTqlXT12-Month Salary and Compute Expenses to do AI Safety Research with LLMs$70,000Nicky Pochinkov
Ni2Oyti8hPI am looking for a career transition grant to give me more time for job hunting & networking$3,618Alexander Large
nLvktsWoXbResearch and a report/paper on the role of emergency powers in the governance of X-Risk$26,000Daniel Skeffington
NRi1EHN2SSEquipment to improve productivity while doing AI Safety research$3,900Tim Farrelly
nrk6P-Mz__3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs$20,000Peter Ruschhaupt
NWcIsmcD01One-year funding of Astral Codex Ten meetup in Philadelphia$5,000Wesley Fenza
nz4iBRk6FYReconstruction attacks in federated learning$5,000University of Cambridge / None
Nzwvafz2aTThis grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project$47,500Bilal Chughtai
ocEhsE1JVWRetrospective funding for research retreat on a decision-theory / cause-prioritization topic.$10,000Daniel Kokotajlo
OeA9_KzvzpFunding for the AI Safety Nudge Competition$5,200AI Safety Nudge Competition
OgAZOa3V1YSupport to work on AI alignment research$16,341Matt MacDermott
oIhuOlBtcp9 months of funding for an early-career alignment researcher, to work with Owain Evans and others.$45,000Max Kaufmann
P2VO83GBtROffice rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research$4,300Effective Altruism Geneva
Pb3vMqu1ApOne year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI$16,600Gunnar Zarncke
pLDGHCtmP2I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the Fall$1,800Zach Peck
PTFMIyBZTEFunding 2 years of technical AI safety research to understand and mitigate risk from large foundation models$209,501John Burden
PTYynLVWTGIndependent research and upskilling for one year, to transition from academic philosophy to AI alignment research$60,000Brian Porter
_pVY9SVyXOPart-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection$20,000Noga Aharony
Pzn02CWqX46-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety$26,150Kane Nicholson
Q2XLunzrH4Support funding during 2 years of an AI safety PhD at Oxford$11,579Ondrej Bajgar
q-NQyFFfb31-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research.$150,000Darryl Wright
QQiRb-PiX1Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc.$2,100Jingyi Wang
qTJX7fR-HwDeveloping and maintaining projects/resources used by the EA and rationality communities$60,000Said Achmiz
QuD4_gmGcNGeneral support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences$115,411Alexander Turner
R6jIP9MVZ0Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding$2,500Josiah Lopez-Wild
R9yuxivxd76-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation$27,800Theo Knopfer
Rfmvr0vvFl4-month extension of SERI MATS in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism$32,000Quentin Feuillade--Montixi
rHy_tSWWkJTop-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada$17,000Wyatt Tessari
rieHcfDx9R4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD$17,220Conor McGlynn
RJryvTtPRI9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research$62,040Magdalena Wache
rTEHjriyJL300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics.$4,500Leah Pierson
rv8JBm7ZXg≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building$35,000Charlie Griffin
SAWHOhR1I3Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant$6,557Jeffrey Ohl
sdxXcKTs7l6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination$25,000Chloe Lee
sfaNiIIb0pResearch (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research$2,000Rory Gillis
sFlihzhOVAFund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival.$27,000University of Otago, Wellington, New Zealand
SGbPrAvh5t6-month salary to develop an overview of the current state of AI alignment research, and begin contributing$70,000Gergely Szucs
SqM929HceLGrant to cover 1 year of tuition fees and living expenses to pursue PhD CS at the University of Oxford. Accelerate alignment research by building Alignment Research tools using expert iteration based amplification from Human-AI collaboration.$63,000Hunar Batra
stuk8z6WfW7 month salary to study a Graduate Diploma of International Affairs at The Australian National University$9,000Matthew MacInnes
sz_fdPM1yIFunding to start a longtermist org and support research$494,510Transformative Futures Foresight Institute
T234r_pzg7Slack money for increased productivity in AI Alignment research$17,355Adam Shimi
t5N9wVBFl62-year salary for work on the learning-theoretic AI alignment research agenda$100,000Vanessa Kosoy
t700YyWsnMSupport to conduct work in AI safety$5,000Benjamin Anderson
TBCRgSaLM8Funding to support PhD in AI Safety at Imperial College London, technical research and community building$6,350Francis Rhys Ward
Tn6ed3x1o_3-month salary for SERI-MATS extension$24,000Matt MacDermott
TQFQPDRp0hA relocation grant to help me to move and settle into a PhD program and cover initial expenses$6,500Egor Zverev
TSHMTjsZ0xFunding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing instructions for recreating one critical technology in a post-disaster scenario fully specified and verified.$16,000Wikiciv Foundation
__TUKywltc6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project.$50,000Jay Bailey
tY26CnXa6b1-year salary for upskilling in technical AI alignment research$96,000Chu Chen
uATHtoq4Lz6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety$4,524Samuel Nellessen
uDdP84HL2X4-month salary for conceptual/theoretical research towards perfect world-model interpretability$30,000Andrey Tumas
UdOwFjYOXE6-month salary to skill up and gain experience to start working on AI safety full-time$14,136Mateusz Bagiński
UeZSs4U2vg3-week salaries for Sam, Eric, and Drake to work on reviewing various AI alignment agendas$26,000Sam Marks
ujzflBegrT6 months salary to do independent AI alignment research focused on formal alignment and agent foundations$30,000Tamsin Leake
uU-nxeuoW7Funding for salary and living expenses while continuing to develop a framework of optimisation.$8,000Alex Altair
UWp6dDQ_eWRetrospective funding of salary for up-skilling in infrabayesianism prior to start of SERI MATS program$4,400Viktoria Malyasova
v2LHnKaRbIWeekend organised as a part of the co-founder matching process of a group to found a human data collection org$2,300Patrick Gruban
v3IG4o9BPK1 year salary to research new alignment strategy to analyze and enhance Collective Human Intelligence in 7 pilot studies$90,000Shoshannah Tekofsky
w3vVT8Rm_A3-month salary to set up a distillation course helping new AI safety theory researchers to distill papers$14,600Jonas Hallgren
W6tKysJp8W24-month salary for a postdoc in economics to do research on mechanisms to improve the provision of global public goods$102,000Lennart Stern
w9bnQJPtt46-month salary to research geometric rationality, ergodicity economics and their applications to decision theory and AI$11,000Alfred Harwood
wbXGXV5kkhSupport for AI alignment outreach in France (video/audio/text/events) & field-building$24,800Jérémy Perret
WIJmnHWB8Z3-month stipend for upskilling in AI Safety and potentially transition to a career in Alignment$5,000Amrita A. Nair
wqQzIaUIme4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems$12,321Alan Chan
wUgQu6cSDMScholarship for PhD student working on research related to AI Safety$8,000Josiah Lopez-Wild
wulkEsCmS012-month salary to transition career into technical alignment research$25,000Dan Valentine
_WvYO5DXfA6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules$40,000Logan Smith
WXgK-zd4H6A new laptop to conduct remote work as a Summer Research Fellow at CERI and organize virtual Future of Humanity Summit$2,500Hamza Tariq Chaudhry
XAhIaWCxsJ8-month salary for three people to investigate the origins of modularity in neural networks$125,000Lucius Bushnaq, Callum McDougall, Avery Griffin
xHd_s5x7lz12-month salary to research AI alignment, with a focus on technical approaches to Value Lock-in and minimal Paternalism$81,402.42Samuel Brown
XO8iE2ZmylA research & networking retreat for winners of the Eliciting Latent Knowledge contest$72,00036
xOYY6CE_vN6 months salary. Turn intuitions, like goals, wanting, abilities, into concepts applicable to computational systems$24,000Johannes C. Mayer
xuTj05bx6lSupport to conduct a research project collaboration on Compute Governance$67,800Lennart Heim
xvpzQqEeko4-month funding for independent alignment research and study$15,478Arun Jose
Y8XdAqNMInEU Tech Policy Fellowship with ~10 trainees$68,750Training For Good
yfek7Xnl76Funding to increase my impact as an early-career biosecurity researcher$6,000Lennart Justen
yO00KhIVJN~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety$4,800Anson Ho
YV-GfV4DSCEconomic stipend for MLSS scholar to set up a proper working environment in order to do technical AI research$2,000Antonio Franca
yyq5dNyXvLOne year of seed funding for a new AI interpretability research organisation$195,000Jessica Rumbelow
ZCqhftwGIsTravel help to go to Biological Weapons Convention in Geneva between 28.11 and 16.12.2022$1,500Kadri Reis
zNXncdMsSnOne-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS$100,000David Udell
zO8mDZbkLt6-month salary to upskill for AI safety$54,250Daniel O'Connell
_Z-robaRy312-month salary to continue developing research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities$120,000Nicholas Kees Dupuis
ZUhmVwIwaV3-month salary to continue working on AISC project to build a dataset for alignment and a tool to accelerate alignment$22,000Jacques Thibodeau
zZdFag9HQtCover participant stipends for AI Safety Camp Virtual 2023$72,500Remmelt Ellen
0R8ImjflgQDeveloping weight-based decomposition methods for interpretability - MATS extension, 6 months stipend for 2 people$80,000Michael Pearce, Alice Riggs, Thomas Dooms
-0VSs4Wqw26-months stipend for transitioning to independent research on AI Safety$40,000Glauber De Bona
0xycpsgnWkSpend 3 months (part time) assessing plausible pathways to slowing AI$5,000Gideon Futerman
1ZSAZOaevy4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs$10,000Jannik Brinkmann
2E68JVqUgD6 months of funding (salaries & ops costs) for AI Safety talent incubation through research sprints and fellowships$272,800Ashgro Inc. (fiscal sponsor of Apart)
2Er7YLGzsg1-year stipend to make accessible-yet-rigorous explainers on AI Alignment/Security, in the form of games/videos/articles$80,000Nicky Case
2fDNj1bXp8A small, short workshop focused on coordinating/planning/applying «boundaries» idea to safety$5,000Chris Lakin
2l4CQdg4ZA3-month stipend to support research on the state of AI safety in China and implications for AI existential risk$12,000Andrew Zeng
2QC19m-4eK3-month stipend for MATS extension establishing a benchmark for LLMs’ tendency to influence human preferences$80,000Constantin Weisser
2wqQkcoDyy$10,120 for most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox’s AI Interpretability Fellowship$10,120Brian Tan
36HL24ufWp1 year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group)$102,500Nora Ammann
3fcVvrSYE5This grant is for Nathaniel Monson to spend 6 months studying to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity.$70,000Nathaniel Monson
3gy9ZojqgM6 months of funding for MATS 5.0 extension, with projects on latent adversarial training and persona explainability$52,118.5Aengus Lynch
3uu8kvGT1o6-month stipend to work on an ML safety project, with the aim of joining a ML safety team full-time after$40,000Joe Kwon
_3ZfUAGJkzData collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing$50,000University of Massachusetts Amherst
-4PN2URi2x4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection$40,000Andis Draguns
4S6TYqKEB23-month stipend during post-SERI MATS alignment job search and wrapping up paper w/ mentor$22,500Aleksandar Makelov
5-2BBUlHt4This grant will support Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI Safety movement building work in Australasia.$77,000AI Safety Australia and New Zealand
5etIR-Q_3OExploring the feasibility of circuit-style analysis on the level of SAE features (MATS extension)$41,000Lucy Farnik
5g7Y3pqm3F6-month Scholarship to support Amritanshu Prasad's upskilling in technical AI alignment. Amritanshu will study the AGI Safety Fundamentals Alignment Curriculum and create an accessible and informative summary of the curriculum.$8,000Amritanshu Prasad
6fsR5nJSSW4-month stipend for a career transition period to explore roles in AI safety communications$10,120Sarah Hastings-Woodhouse
7b_70hXNrX12 week 0.6FT upskilling stipend for technical governance research management$11,244Morgan Simpson
8hxOJrj9nd3-month salary for SERI MATS extension to work on internal concept extraction$27,260Ann-Kathrin Dombrowski
8PXWQrxbKs6-months of part-time stipend to launch a new science journalism outlet focused on AI Safety$50,000Mordechai Rorvig
9Ix4tlPasX6 to 12 month fundings to continue working on model psychology and evaluation$42,000P.H.I
9PHai2aJzT4-month salary and office for MATS 5.0 extension on adversarial circuit evaluation + 2-month runway for career switch$62,000Niels uit de Bos
ABhBa5vVNwThis grant provides funding for a project that explores debate as a tool that can verify the output of agents which have more domain knowledge than their human counterparts.$55,000Akbir Khan
aeDFZ0io6nTen field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025$7,118301
AlDcIG0Ld2A megaproject proposal: Building a longtermist industrial conglomerate aligned via a reputation based economy$36,000Alexander Mann
anT41CIghcDeveloping noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment$40,000Adelin Kassler
AUbVa6dwza6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers$37,000Keith Wynroe
AVw76FP5QZ12-month support for independent AI alignment research$45,000Aryeh Brill
BNCQ3Hbtbs4-month stipend: Research on agent scaling laws—relationships between training compute and agent capabilities of LLMs$70,000Axel Højmark
BUso9Fygb-This grant will support Josh Clymer and collaborators with summer stipends + research budget to execute technical safety standards projects.$32,000Dioptra (informal research group working on evals)
cBj86HUo204-month fund for full time AI safety technical and/or governance research$10,750Harrison Gietz
CboBVY_qz9This grant provides a 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy.$8,673Carson Ezell
CEus3vnMoe4-month stipend to continue AI safety projects$25,216Hannah Erlebach
C-G_zGbL8oPart-time salary for independent AI safety research$40,000Ross Nordby
CKdIkd7-DTGrant to attend ICLR 2024 for an accepted alignment related paper as an undergraduate student$1,875Sumeet Motwani
_CQl3GsLBhMentored independent research and upskilling to transition from theoretical physics PhD to AI safety$50,000Einar Urdshals
cVS88xX9xQ6-month stipend to work on a research project on AI Liability Insurance as an additional lever for AI Safety$77,544Aishwarya Saxena
D6SAWnscwpMeta level adversarial evaluation of debate (scalable oversight technique) on simple math problems (MATS 5.0 project)$62,150Yoav Tzfati
dE4-p3gMGHOrganizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants$160,000Epistea, z.s
DJFqaficoG1-month full-time + 3 months part-time salary to work on two research projects during the MATS 5.0 extension program$15,075Abhay Sheshadri
d-wRidb3qa1 year PhD funding and compute funding to research a novel method for training prosociality into large language models$10,000Scott Viteri
e7BoPTrtIZ1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem$99,330Alignment Ecosystem Development
ekOW3SAEFa6-month salary for independent alignment research in interpretability or control$95,000Thomas Kwa
EnLQohFjAWFunding to do research on understanding search in transformers at the AI safety camp during 14 weeks$6,636Guillaume Corlouer
EOJZySAhk9One year stipend and compute budget, for full-time technical AI alignment research$80,000David Udell
fhIhF_huFl6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law$60,000Thomas Kwa
Fz1Mg1LEte6 month salary for further pursuing sparse autoencoders for automatic feature finding$40,000Logan Smith
G1J-d_hmbl5-months funding for RA work with Sean Ó hÉigeartaigh on AI Governance research assistance / AI:FAR admin assistance$16,698For Collaborative Work with AI:FAR
gETBvi9IcA3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography$12,600Mikhail Baranchuk
gFD-bO8D4K6-month stipend on evaluating robustness of AI agents safety guardrails and for running an AI spear-phishing study$36,000Simon Lermen
GgDUFTwmrdIn MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks$40,000MentaLeap
giDQzKc9RgFunding to attend BWC meeting to discuss transparency with country representatives & work on research project$1,700Riya Sharma
HA0XCkcu4L2 Months of living expenses while I try to establish a broad-spectrum antiviral research organization$5,000Hayden Peacock
hfj15iDFZl6-month stipend to work on AI alignment research (automated redteaming, interpretability)$30,000Alex Infanger
HmPSaFpwBa12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda$27,108Jacques Thibodeau
HOQQ6sW67R1-year stipend to continue research on agency, focused on natural abstraction$200,000John Wentworth
hoTDteiRMdThis grant is funding a $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research.$45,000Yuxiao Li
hrCXj-JDHCA 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March, 2025$20,700Caleb Rak
I0qDo6oy6HUndergrad buyout to teach AI safety in Hong Kong’s new MA program on AI; China-West AI Safety workshop$33,000Nathaniel Sharadin
iMpZTH-vRlMonthly seminar series on Guaranteed Safe AI, from July to December 2024$6,000Horizon Events
i-NgiS55SIThis grant is funding for a 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research.$35,000Sviatoslav Chalnev
jbYGAj9YCg5-month salary to continue work on evaluating agent self-improvement capabilities$23,360Codruta Lugoj
jCjzuv3C5m12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass$6,000Yashvardhan Sharma
JFD6karW6p4-month stipend to bring to completion a mechanistic interpretability research project on how neural networks perform co$22,324.5Stanford University
jsKGgE3oWiSeeking funds to present “Benchmark Inflation” at ICML 2024, my paper on making AI progress measures more accurate$2,500Kunvar Thaman
_JsR1ubUC11-month pt. stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans + safety evals$19,000Sumeet Motwani
k0yodyMi-T3-month stipend to continue working on goal misgeneralisation project for ICLR deadline, plus travel funding$20,000Hannah Erlebach
kHLPISSUHHSix month study grant to speed up my career pivot into AI safety and alignment research, with specific deliverables$61,000Philip Quirke
KkC4nKofwy6-month salary for part-time independent research on LM interpretability for AI alignment$7,700Aidan Ewart
KPKgH0VJ546-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs$31,600Morgan Simpson
ksYzPHhMBOSERI MATS 3-month extension to study knowledge removal in Language Models$12,000Shashwat Goel
Kx3OqURhYw6-month salary to transition to a career in AI safety while working on AI safety projects$30,000Dillon Bowen
l1yNYzYCoEI'm writing a research paper that introduces an instruction-following generalization benchmark and I need compute funds$1,500Joshua Clymer
L6CZB-UecT9-month programme to help language and cognition scientists repurpose their existing skills for long-termist research$5,000Nikola Moore
lcxg6z9isi11 months stipend for 1.5 FTEs and funding for other costs for an AI Safety field-building organization TUTKE in Finland$73,333.33Santeri Tani
-LFObXJ3gTCompute costs for experiments to evaluate different scalable oversight protocols$86,600Lewis Hammond
lgwvop8hlO6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects$33,700José Jaime Villalobos Ruiz
LrqXgFhHomThis grant will support Tristan with funding to attend EAG and to apply for grad school, aiming for an impactful policy role targeting x-risk reduction.$2,000Tristan Williams
LXsLCGtDnJ6-month salary for an AISC project and continuing independent mechanistic interpretability projects$28,000Christopher Mathwin
m3ewb0jYZz3-month (+buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London. Projects include UChicago's Market Shaping Accelerator challenge.$3,138Benjamin Stewart
MAbuac0aRJ4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program$30,000Aaquib Syed
MgDNlSM0QJRetroactive funding for GameBench paper$9,072Dioptra (Josh Clymer's AIS research community)
MgHo2VitGXA podcast mainly themed around AI x-risk, aimed at a non-technical audience$5,000Sarah Hastings-Woodhouse
mMKcaiXEMa~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila$86,400Brian Tan
mtCaqwpFGe4-month stipend for upskilling within the field of economic governance of AI$7,000Rafael Andersson Lipcsey
MUu2vwST7S4 weeks dev time to make a cryptographic tool enabling anonymous whistleblowers to prove their credentials$15,000Kurt Brown
n81C0nkIfq6-month stipend for conducting AI-safety research during the MATS 5.0 extension program and beyond$38,688Felix Hofstätter
nasox678FQ5-month funding to continue upskilling in mechanistic interpretability post-SERI MATs, and to continue open projects$21,989Keith Wynroe
ND3pNYM0tg6-month stipend to work on technical alignment research as part of MATS 5.0 extension program$40,000Cindy Wu
Nfc716Ozy4Retroactive grant to study Goodhart effects on heavy-tailed distributions$29,760Thomas Kwa
nh8bijHv3O6-month stipend to do an unpaid internship focused on using theory/interpretability to increase the safety of AI systems$37,120Lukas Fluri
nhYZP-Vq769 months support for an in-depth YouTube channel about AI safety and how AI will impact us all$27,000David Williams-King
NJjalznNkzFunding for 6-Month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh & Matthew Gentzel$31,650Coleman Snell
NLsHSStFle4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models$60,000Rauno Arike, Elizabeth Donoway
nLSXSF549P6-month career transition and independent research in AI safety and risk mitigation$85,000Jose Groh
nq2WjknrrHThis grant provides a stipend for Cindy Wu to spend 4 months working on AI safety research.$5,000Cindy Wu
Nu1IWlcfFUTwo workshops on strategic communications around AI safety, focused on the AI safety community$5,720Philip Trippenbach
OYi4ZPGyeC6 month salary to work on mech interp research with mentorship from Prof David Bau$41,000Bilal Chughtai
PAQcc_C4rJ6-month salary to verify neural networks scalably for RL and produce a human-to-superhuman scalable oversight benchmark$35,000Roman Soletskyi
PgtjNbfMcgResearch on how much language models can infer about their current user, and interpretability work on such inferences$55,000Egg Syntax (legal: Jesse Davis)
P-kT-6zsfZ4-month stipend to research the mechanisms of refusal in chat LLMs$40,000Oscar Balcells Obeso
PPocAhnV6BVirtual AI Safety Unconference 2024 is a collaborative online event by and for researchers of AI safety$10,000Orpheus Lummis
PTVB29fhx54-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies$27,000Kai Fronsdal
PvACZhX7CoDevelop proposals for off-switch designs for AI, including policy games, that have been rigorously evaluated for their effectiveness, technical feasibility and political viability$40,000David Abecassis
pXO90SpwlwA fellowship for 3 fellows in synthetic biology, artificial intelligence and neurotechnology to bridge policy and tech$120,000Geneva Centre for Security Policy
p_zhwHc-q4One year funding of ACX meetup in Atlanta Georgia$5,000ACX Atlanta
pZ-SHL7cXO7 months of coworking-space funding continuation, during interpretability research project$10,500David Udell
qd5dxJG8XyStipend for a master’s thesis and paper on technical alignment research: mechanistic interpretability of attention$25,491Matthias Dellago
Q-HANYnpC7Organize AI xrisk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking$24,339Existential Risk Observatory
qNDlZgkTrQ7-month stipend for organising AI Alignment Irvine (AIAI)$16,337Neil Crawford
QNq0IRnrVl6-month stipends to develop and apply a novel method for localizing information and computation in neural networks$160,000Alex Cloud, Jacob Goldman-Wetzler, Evžen Wybitul, Joseph Miller
QO1vkWtAP19-week stipend for two part-time researchers to write and publish a policy proposal: Mandatory AI Safety ‘Red Bonds’$7,200Julian Guidote
QOUsIUII6W6-month stipend to continue independent interpretability research$40,000Sviatoslav Chalnev
R2a7mhwt8u4-month stipend for MATS extension on mechanistic interpretability benchmark + 2-month stipend for career switch$67,000Iván Arcuschin Moreno
r8apRwrK5zWhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on Mechanistic Interpretability$61,460Brian Tan
RaErh8NtAU8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and Frontier AI$6,230Luise Woehlke
Rd09Ax_M7K1-year stipend for independent research primarily on high-level interpretability$70,000Arun Jose
rEZyPV3pD8Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment$80,000Claire Short
RfX7ActU7dConference publication of interpretability and LM-steering results$40,000Alexander Turner
rjoom_xSBE1yr stipend to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved$121,575Robert Miles
rMwwKi6ydN12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk$10,000Formation Research
ROiwTvaTrf1.5 year stipend for thorough investigation and analysis of AI lab scaling policies$100,000Aysja Johnson
Ru76GDDauO6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project$35,300Hoagy Cunningham
SBqPRxOLM34 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition$34,100Arjun Panickssery
SG2-W8bVmUStudying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI$50,000Cole Wyeth
spr9OxtOydOrganizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants$115,000Epistea, z.s
ssrHBQDEEoMATS 3-month stipend to use singular learning theory to explain & control the development of values in ML systems$17,500Garrett Baker
T0QJ4JE-3Z6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation$55,660Theodore Chapman
T5YPg4GANYOne year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work$150,000Macrostrategy Research Initiative
t9rw4NOGgu6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem$60,000Alex Altair
TarBIrjSjB4-month stipend for 3 people to create demonstrations of provably undetectable backdoors$50,336Andrew Gritsevskiy
TDNQocTmYIDevelopment of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms)$30,000Sahil Kulshrestha
ti-4D8-Jp6Researching neural net generalization on algorithmic tasks, upskilling in math relevant to singular learning theory$20,000Wilson Wu
_tI5UUbzXu4-month salary to continue work on AI Control as a MATS extension$30,000Vasil Georgiev
TlzmG0KtHQ6-month salary to build experience in AI interpretability research before PhD applications$40,000Zach Furman
Tu4JdORAK02-month funding to get into mechanistic interpretability and to do 2-3 projects, then briefly learn related fields$5,000Krzysztof Gwiazda
tVZ_MHMtwQSalary Top-Up for Timaeus' Employees & Contractors$100,000Timaeus (Fiscally Sponsored by Ashgro, Inc.)
tWZcB0BcnY6 month project - pending description$10,000Kristy Loke
UHOmg4vRHb3 months relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research$8,500Sienka Dounia
uHsI9Hu0Jx6-month stipend for Sparse Autoencoder Mech Interp projects$40,000Logan Smith
UKq4YrKA-d4-month stipend to continue work on AI Control as a MATS extension$30,000Cody Rushing
uLy3CDgpFZ12 month stipend and expenses to research in AI Safety (Unlearning; Modularity; Probing Long-term behaviour)$80,000Nicky Pochinkov
VcbYEe__xK6-month support for self study and development in ML and AI Safety. Goals include producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp.$1,739Artem Karpov
VjVLSH_Ddm6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas$5,200Hebrew University
vLIZVub1eC1 year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality$80,000Logan Strohl
vMakkTFIGHFunding for having written AI safety distillation posts on the topic of membranes/boundaries$4,500Chris Lakin
vp-QrifCJA4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program$60,000Danielle Ensign
vv57hNSoDf4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program$30,087Teun van der Weij
vYKxZvyuLyGeneral support for a forecasting team$6,000Samotsvety Forecasting
WaNvUb1NjwThis grant will support Daniel Filan in producing 18 episodes of AXRP, the AI X-risk Research Podcast. The podcast aims to increase in-depth understanding of potential risks from artificial intelligence.$44,802Daniel Filan
WW7rZNLugSYear-long stipend to work as the primary maintainer of TransformerLens, and implement large changes to the code base.$90,000Bryce Meyer
wW_cJzdQuSThis grant provides 5 months of funding for office space for collaboration on interpretability/model-steering alignment research$30,000Alexander Turner
wyoTsls0DrFunds to support travel for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs$5,090Imperial College London
Wz2khPDtl_4-month wage for alignment upskilling: gain research eng skills (projects) + understand current alignment agendas$7,200Codruta Lugoj
x0rx6s9QrU6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception$55,000Sara Price
x8Bj7kBoZE6-month salary to continue developing as an AI safety researcher. Goals include writing a review paper about goal misgeneralisation from the perspective of Active Inference and pursuing collaborative projects on collective decision-making systems.$6,500Roman Leventov
XaEtkueSih6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations$30,000Satvik Golechha
xmigvvbvJCDevelop a short fiction film at a top film school, to spread accurate and emotive understanding of AI x-risk$25,000Suzy Shepherd
XRViam0yLM4-month stipend to continue work on AI Control as a MATS extension$30,000Tyler Tracy
xVThL43ECx$10,500 in funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted$10,500Vaidehi Agarwalla
xwkpC3Bq2s8 months stipend during job transition, to finish current projects (AI Goodharting, coop. AI) and find suitable next topic$49,333.33Vojtech Kovarik
y17NsvyT0q1 month long literature review on in-context learning and its relevance to AI alignment$6,000Alfie Lamerton
Y2RVzGYp1r4 weeks expenses for FAR Labs Residency for research group focusing on goal-directedness in transformer models$13,000Tilman Räuker
Y4RdOA4Oza6-month stipend to remove conditional bad behaviors from LLMs via a learned latent space intervention$40,000Eric Easley
Y5gprTQFFjCreate an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance$5,000Michel Justen
y7sWoPlQxUA private online platform for research-sharing amongst the AI governance community$125,000The AI Governance Archive (TAIGA)
yfTI7D_W696-month salary to build & enhance open-source mechanistic interpretability tooling for AI safety researchers$50,000Bryce Meyer
YJEJEC2qsCThis grant will support Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards solutions that seem the most important. Viktor will start working on this project as part of the SERI MATS program.$19,248Viktor Rehnberg
yJzGMJJyZVFour months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training$23,100Aidan Ewart
-ymMjjVnLl6-month incubation program for technical AI safety research organizations$122,507Catalyze Impact
yqWYFC1zR74-months stipend to apply mechanistic interpretability to a real-world application, hallucinations$60,000Javier Ferrando Monsonís and Oscar Balcells Obeso
yRwtymTvlq3-month part-time salary in order to work on AI governance projects and activities$6,000Arran McCutcheon
ySmEnWKT10Funding for (academic/technical) AI safety community events in London$8,000Francis Rhys Ward
yYADn1n6q2Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward$50,000Michael Parker
Z1X-hKDVmn3–6 months stipend for first full year as a research professor of CS at UT Austin, researching technical AI alignment$50,000The University of Texas at Austin
ZaMGYFwaAY6 month AI alignment internship stipend top-up$10,000Matt MacDermott
ZbCEfgGLXhTravel Funding Request for Early-Career Researcher to Attend Workshop on Biosecurity and AI Safety$1,800Dhruvin Patel
-zdTtA7rOoExperimentally testing generative AI's ability to persuade humans about hazardous topics$115,000Thomas Costello
zQiETP4OpA6 month stipend for SAE-circuits$40,000Logan Smith
ZsbWLYVJWa6-month 1 FTE funding to train Multi-Objective RLAIF models and compare their safety performance to standard RLAIF$42,000Marcus Williams
ZSouIsc6L43-month salary + compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment$13,000Simon Lermen
ZZWXxa7KdPCompute for experiment about how steganography in large language models might arise as a result of benign optimization$2,000Felix Binder