Longterm Wiki · Updated 2026-03-12
Summary

LTFF is a regranting program that has distributed $20M since 2017 (approximately $10M to AI safety) with median grants of $25K, filling a critical niche between personal savings and institutional funders like Coefficient Giving (median $257K). In 2023, LTFF granted $6.67M with a 19.3% acceptance rate, targeting 21-day decision turnarounds, and serves as an important pipeline for researchers before joining major labs or receiving larger grants.


Long-Term Future Fund (LTFF)

Type: Funder

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Cumulative Grantmaking | High | Over $20M since 2017, ≈$10M to AI safety |
| Annual Volume (2023) | Medium-High | $6.67M total grants |
| Grant Size | Small | Median $25K (vs Coefficient median $257K) |
| Acceptance Rate | Selective | 19.3% (May 2023-March 2024) |
| Decision Speed | Fast | Target 21 days, 42-day maximum |
| Focus | AI Safety Primary | ≈67% of grants AI-related |
| Niche | Individual Researchers | Fills gap between personal savings and institutional grants |
| Pipeline Role | Critical | Many grantees later join major labs or receive Coefficient funding |

Organization Details

| Attribute | Details |
|---|---|
| Full Name | Long-Term Future Fund |
| Parent | EA Funds (Centre for Effective Altruism) |
| Part of | Effective Ventures (fiscal umbrella) |
| Type | Regranting Program |
| Founded | February 2017 (EA Funds launch); restructured October 2018 |
| Annual Grantmaking | $5-8M (recent years) |
| Cumulative Grantmaking | Over $20M (2017-2024) |
| Website | funds.effectivealtruism.org/funds/far-future |
| Application Portal | funds.effectivealtruism.org/apply |
| Grant Reports | EA Forum LTFF Topic |

Overview

The Long-Term Future Fund (LTFF) is a regranting program that supports individuals and small organizations working on AI safety and existential risk reduction. As part of EA Funds (operated by the Centre for Effective Altruism), LTFF fills an important niche in the AI safety funding landscape: providing fast, flexible funding for projects too small for major foundations but too large for personal savings. Since its launch in 2017, LTFF has distributed over $20 million in grants, with approximately half going directly to AI safety work.

LTFF's focus on individuals distinguishes it from funders like Coefficient Giving (which primarily funds organizations with median grants of $257K) or the Survival and Flourishing Fund (which runs larger grant rounds with median grants of $100K). With a median grant size of just $25K, LTFF is uniquely positioned to fund career transitions, upskilling periods, and early-stage research that would be too small for other institutional funders. Many AI safety researchers receive their first external funding through LTFF before joining established organizations or receiving larger grants from Coefficient Giving.

The fund operates with a team of permanent fund managers plus rotating guest evaluators—researchers and practitioners with relevant expertise in AI safety, forecasting, and related fields. This distributed model allows faster decision-making (targeting 21-day turnaround) while maintaining quality control through collective judgment. The fund's philosophy leans toward funding when at least one manager is "very excited" about a grant, even if others are more neutral—a hits-based giving approach that has produced notable successes including early funding for Manifold Markets, David Krueger's AI safety lab at Cambridge, and numerous researchers who later joined frontier AI labs.

LTFF has weathered significant funding disruption following the FTX collapse in late 2022, which affected the broader EA funding ecosystem. The fund has since stabilized with approximately 40-50% of funding coming from Coefficient Giving and the remainder from direct donations and other institutional sources. Despite these challenges, LTFF has maintained steady grantmaking volume of approximately $5-8 million annually.

Historical Grantmaking

Cumulative Grant History

| Period | Total Grants | AI Safety Portion | Notable Developments |
|---|---|---|---|
| 2017 | ≈$500K | ≈30% | EA Funds launch; Nick Beckstead sole manager |
| 2018 | ≈$1.5M | ≈40% | New management team (Habryka et al.); more speculative grants |
| 2019 | ≈$1.4M | ≈50% | Expanded grant writeups; increased transparency |
| 2020 | ≈$1.4M | ≈55% | COVID impact; increased applications |
| 2021 | ≈$4.8M | ≈60% | Significant scaling; added guest managers |
| 2022 | ≈$4.5M | ≈65% | FTX disruption; funding uncertainty |
| 2023 | ≈$6.67M | ≈67% | Post-FTX stabilization; 19.3% acceptance rate |
| 2024 | ≈$8M (projected) | ≈70% | Continued growth; focus on technical safety |
| Cumulative | ≈$20M+ | ≈$10M | Over 1,000 grants since founding |
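As a rough sanity check on the table above, the yearly totals through 2023 should sum to the quoted cumulative figure. A minimal sketch using the approximate values from the table (all in $M):

```python
# Approximate annual grant totals from the table above, in $M.
yearly_grants = {
    2017: 0.5, 2018: 1.5, 2019: 1.4, 2020: 1.4,
    2021: 4.8, 2022: 4.5, 2023: 6.67,
}

# Sum through 2023 should land just over the quoted "$20M+" cumulative figure.
total_2017_2023 = sum(yearly_grants.values())  # ≈ 20.77
```

Adding the projected ≈$8M for 2024 would bring the running total to roughly $29M, so the "$20M+" figure reads as the 2017-2023 cumulative.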

The fund's evolution reflects broader trends in AI safety: early grants went primarily to established organizations under Nick Beckstead's management, while later rounds under the Habryka-led team shifted toward more speculative individual grants with detailed public writeups. This transition was analyzed in a 2021 retrospective that found a 30-40% success rate among 2018-2019 grantees—consistent with appropriate risk-taking for hits-based giving.

| Period | Monthly Applications | Acceptance Rate | Notes |
|---|---|---|---|
| H2 2021 | ≈35/month | ≈25% | Post-pandemic increase |
| 2022 | ≈69/month | ≈22% | Double previous year |
| Early 2023 | ≈90/month | ≈20% | Continued growth |
| 2023-2024 | ≈80/month | 19.3% | Stabilized volume |

The fund now processes approximately 80-90 applications monthly, reflecting both increased interest in AI safety careers and LTFF's reputation as an accessible entry point for funding.
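These volume figures can be cross-checked against the fund's annual grantmaking. A back-of-the-envelope sketch (the 85/month midpoint is an assumption; the ≈$34.6K average grant size is the figure this page reports for the 2022-23 period):

```python
# Illustrative consistency check on application volume vs annual grantmaking.
apps_per_month = 85        # assumed midpoint of the ~80-90 applications/month
acceptance_rate = 0.193    # 19.3% (May 2023 - March 2024)
avg_grant_k = 34.6         # average grant in $K, per the 2022-23 figures

grants_per_year = apps_per_month * 12 * acceptance_rate   # ≈ 197 grants
implied_volume_m = grants_per_year * avg_grant_k / 1000   # ≈ $6.8M
```

The implied ≈$6.8M is close to the reported $6.67M in 2023 grants, so the volume, acceptance-rate, and grant-size figures are mutually consistent.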

Grant Categories

Distribution by Category (2023-2024)

| Category | Percentage | Annual Amount | Description |
|---|---|---|---|
| Technical AI Safety Research | 35% | ≈$2.3M | Independent research, small teams, lab collaborations |
| Upskilling & Career Transitions | 25% | ≈$1.7M | MATS funding, course attendance, self-study periods |
| Field-Building | 20% | ≈$1.3M | Conferences, community events, infrastructure |
| AI Governance & Policy | 10% | ≈$670K | Policy research, advocacy, institutional engagement |
| Other X-Risk | 10% | ≈$670K | Biosecurity, nuclear risk, forecasting |
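The dollar column above follows directly from the percentage shares applied to the ≈$6.67M 2023 total; a minimal sketch of that arithmetic:

```python
# Category shares from the distribution table above.
annual_total_m = 6.67  # 2023 grantmaking, $M
categories = {
    "Technical AI Safety Research": 0.35,
    "Upskilling & Career Transitions": 0.25,
    "Field-Building": 0.20,
    "AI Governance & Policy": 0.10,
    "Other X-Risk": 0.10,
}

# Share x total reproduces the annual-amount column, e.g. 35% -> ≈$2.33M.
amounts_m = {name: round(share * annual_total_m, 2)
             for name, share in categories.items()}
```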

Grant Size Distribution

| Size Range | Frequency | Median | Typical Use Case |
|---|---|---|---|
| $1K-$10K | 20% | $5K | Conference travel, short projects, equipment |
| $10K-$25K | 35% | $18K | MATS supplements, 3-6 month projects |
| $25K-$50K | 25% | $35K | 6-12 month research, career transitions |
| $50K-$100K | 15% | $70K | Year-long independent research |
| $100K-$200K | 5% | $150K | Multi-year support, org incubation |
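The bucketed distribution above implies an average grant size that can be estimated by weighting each bucket's median by its frequency (a rough sketch; bucket medians stand in for bucket means):

```python
# (frequency, median grant in $K) for each size bucket in the table above.
buckets = [
    (0.20, 5), (0.35, 18), (0.25, 35), (0.15, 70), (0.05, 150),
]

# Frequency-weighted average grant size.
weighted_avg_k = sum(freq * median for freq, median in buckets)  # ≈ $34K
```

The result (≈$34K) is close to the ≈$34.6K average grant this page reports for the 2022-23 application cohort, while the median stays in the $10K-$25K bracket, consistent with the quoted $25K median.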

For comparison, LTFF estimates the cost of funding one year of researcher upskilling in AI safety at approximately $53K.

Application Process

(Diagram: application process flow — not rendered in this export.)

Timeline Commitments

| Stage | Target | Maximum | Notes |
|---|---|---|---|
| Application Submission | Anytime | Rolling | No deadlines; continuous acceptance |
| Initial Response | 21 days | 42 days | Formal target since 2023 |
| Time-Sensitive Applications | Faster | Varies | Mark as urgent in application |
| Grant Disbursement | 1-2 weeks | 4 weeks | After approval and terms |
| Total Time | 4-6 weeks | 8 weeks | Including disbursement |

The fund announced rolling applications in 2023, eliminating the previous round-based system. Applications marked as time-sensitive receive expedited review.

What Makes Strong Applications

Based on public grant reports, committee comments, and fund manager reflections:

| Factor | Weight | Description | Evidence Types |
|---|---|---|---|
| Relevant Track Record | Critical | Past experience in AI safety or related field | Papers, projects, employment history |
| Clear Theory of Change | Critical | How does this reduce x-risk specifically? | Logical chain from activity to impact |
| Person-Level Fit | High | Can this person execute effectively? | References, past output quality |
| Counterfactual Impact | High | Would this happen without LTFF? | Alternative funding sources, personal savings |
| Concrete Deliverables | Medium | What will you produce and when? | Milestones, measurable outputs |
| Cost-Effectiveness | Medium | Is this efficient use of funds? | Budget justification, alternatives |
| Portfolio Fit | Medium | Does this complement other grants? | Novelty, strategic gaps |

Application Tips from Fund Managers

From the 2023 Ask Us Anything session and grant reports:

Strong applications typically have:

  • Demonstrated interest in priority areas (AI safety, biosecurity) through previous work
  • Specific, time-bound project plans with clear milestones
  • Realistic budget with justification for each line item
  • Acknowledgment of key uncertainties and how they'll be addressed
  • Evidence that the applicant has thought carefully about the problem space

Common weaknesses:

  • Vague project descriptions without concrete outputs
  • Generic "I want to do AI safety research" without specifics
  • Overconfidence about impact without evidence
  • Poor fit between applicant background and proposed work
  • Lack of engagement with existing literature or community

What fund managers emphasize:

  • "Things like general competence, past track record, clear value proposition and neglectedness matter"
  • They use "model-driven granting"—assessing whether the proposed plan will actually work
  • A well-justified application can change a fund manager's mind even on skeptical topics
  • There's healthy disagreement within the fund; one excited manager can get a grant funded

What LTFF Is Less Interested In

From public statements by fund managers:

| Category | Reason |
|---|---|
| General technology/science improvement | Unless it differentially benefits x-risk reduction |
| Economic growth acceleration | Not clearly an x-risk intervention |
| AI capabilities research | Could accelerate rather than reduce risk |
| Mechanistic interpretability (2024+) | Now less neglected due to field growth |
| Large organizational grants | Better suited to Coefficient or SFF |

Notable Grants and Grantees

High-Profile Grants

| Grantee | Amount | Year | Purpose | Outcome |
|---|---|---|---|---|
| Manifold Markets (Grugett, Chen) | $200,000 | Feb 2022 | 4-month runway to build prediction market platform | Manifold grew to major EA forecasting tool; credited as "strong signal EA community cared" |
| David Krueger's Lab (Cambridge) | $200,000 | 2023 | PhD students and compute at new AI safety lab | Established academic AI safety presence at Cambridge |
| Gabriel Mukobi | $40,680 | 2023-24 | Stanford CS master's with AI governance focus | Accepted to 4/6 PhD programs; multiple publications |
| Lisa Thiergart & Monte MacDiarmid | $40,000 | 2024 | Activation addition interpretability paper | Conference publication on LM steering |
| Joshua Clymer | $1,500 | 2023 | A100 GPU compute for instruction-following experiments | First technical AI safety project |
| Einar Urdshals | ≈$50K | 2023 | Career transition from physics PhD to AI safety | Mentored independent research |
| SecureBio | Varies | 2025 | Field-building events for GCR in Boston | Spring/Summer 2025 program |
| Julian Guidote & Ben Chancey | ≈$15K | 2025 | 9-week stipend for Mandatory AI Safety "Red Bonds" policy paper | Policy proposal development |

Research Group Support

| Group/Lab | Amount | Period | Focus |
|---|---|---|---|
| SERI MATS Scholars | Multiple | 2021+ | Program supplements and independent research |
| AI Safety Camp | Multiple | 2019+ | Research mentorship programs |
| AXRP Podcast (Daniel Filan) | Multiple | 2020+ | AI safety podcast production |
| Robert Miles | Multiple | 2019+ | AI safety YouTube content |

Grant Type Details

Upskilling Grants: LTFF supports individuals seeking to build AI safety skills through various pathways.

| Grant Type | Typical Size | Purpose | Requirements |
|---|---|---|---|
| MATS Supplements | $10-25K | Living expenses during program | MATS acceptance |
| Course Attendance | $5-15K | Registration + living costs | Program acceptance |
| Self-Study Period | $20-50K | Independent learning runway | Study plan, mentorship |
| Research Visits | $10-30K | Collaboration at other orgs | Host organization agreement |
| PhD/Master's Support | $20-80K | Tuition and living expenses | University admission |

Independent Research Grants: The core of LTFF's portfolio.

| Grant Type | Typical Size | Duration | Requirements |
|---|---|---|---|
| Exploration Grant | $15-40K | 3-6 months | Research direction, preliminary work |
| Research Grant | $40-80K | 6-12 months | Track record, detailed proposal |
| Multi-Year Support | $80-200K | 1-2 years | Proven track record, clear milestones |
| Bridge Funding | $20-60K | 3-9 months | Gap between positions/grants |

LTFF is "pretty happy to offer 'bridge' funding for people who don't quite meet [major lab] hiring bars yet, but are likely to in the next few years."

Field-Building Grants: Supporting the AI safety ecosystem.

| Grant Type | Typical Size | Purpose | Examples |
|---|---|---|---|
| Conference Support | $10-50K | Event organization | Research retreats, workshops |
| Community Building | $15-40K | Local group support | University groups, city hubs |
| Infrastructure | $25-100K | Tools and platforms | Forecasting tools, research databases |
| Content Creation | $10-40K | Educational materials | YouTube, podcasts, writeups |

Fund Management

LTFF uses a distinctive governance model with permanent fund managers plus rotating guest managers who provide specialized expertise.

Current Committee (2024-2025)

Permanent Fund Managers:

| Manager | Role | Background | Focus Areas |
|---|---|---|---|
| Caleb Parikh | Interim Fund Chair, EA Funds Project Lead | ML research background; evaluated over $34M in applications | Technical safety, field-building |
| Oliver Habryka | Permanent Manager | CEO of Lightcone Infrastructure (LessWrong); cofounder of Lighthaven venue | Community infrastructure, epistemics |
| Linchuan Zhang | Permanent Manager, EA Funds Staff | Senior Researcher at Rethink Priorities; COVID forecasting background | Existential security research |

Recent Guest Fund Managers (2023-2024):

| Guest Manager | Affiliation | Expertise |
|---|---|---|
| Lawrence Chan | ARC Evals | AI evaluations, safety research |
| Clara Collier | Independent | AI governance |
| Daniel Eth | Independent | Technical AI safety |
| Lauro Langosco | DeepMind (previously) | ML research |
| Thomas Larsen | Independent | AI safety research |
| Eli Lifland | Forecasting | Quantitative analysis |

Committee Evolution

| Period | Chair | Notable Changes |
|---|---|---|
| 2017-2018 | Nick Beckstead (sole manager) | Initial fund; focus on established orgs |
| Oct 2018-2020 | Matt Fallshaw | New team: Habryka, Toner, Wage, Zhu |
| 2020-2023 | Asya Bergal | Expanded guest managers; transparency |
| 2023-present | Caleb Parikh (interim) | Asya stepped down to reduce Coefficient overlap |

Asya Bergal stepped down from the chair role in late 2023 to reduce overlap between Coefficient Giving (where she works as a Program Associate) and LTFF.

Evaluation Philosophy

The committee uses explicit criteria combined with significant judgment:

| Principle | Description |
|---|---|
| Hits-Based Giving | Willing to fund speculative grants with high potential upside |
| One Excited Manager Rule | Grants often funded when one manager is very excited, even if others are neutral |
| Model-Driven Granting | Assess whether proposed plans will actually work, not just stated intentions |
| Healthy Disagreement | Fund managers regularly disagree; diversity of views is a feature, not a bug |
| Part-Time Capacity | Most managers have demanding day jobs, limiting deep evaluation time |

From the May 2023-March 2024 payout report: "The fund's general policy has been to lean towards funding when one fund manager is very excited about a grant even if other fund managers are more neutral. The underlying model is that individual excitement is more likely to identify grants with significant impact potential in a hits-based giving framework."

Evaluation Process

(Diagram: internal evaluation workflow — not rendered in this export.)

Comparison with Other Funders

LTFF occupies a specific niche in the AI safety funding landscape. Understanding its position relative to other funders helps applicants choose the right source.

Major AI Safety Funders Comparison

| Funder | Annual AI Safety Funding | Median Grant | Focus | Application Style |
|---|---|---|---|---|
| Coefficient Giving | $70M+ (2022) | $257K | Organizations, large projects | Proactive research, RFPs |
| Survival and Flourishing Fund | $10-15M | $100K | Organizations, researchers | Annual S-Process + rolling |
| Long-Term Future Fund | $4-5M | $25K | Individuals, small projects | Open applications |
| Lightspeed Grants | $5-10M | Varies | Rapid response, individuals | Application rounds |
| Manifund | $1-2M | $10-50K | Individuals, experiments | Regranters + applications |
| AI Risk Mitigation Fund | $1-2M | Varies | AI safety specifically | Applications |

Source: Overview of the AI Safety Funding Situation

When to Apply to LTFF vs Other Funders

| Scenario | Best Funder | Why |
|---|---|---|
| Individual seeking $20-50K for 6-month research | LTFF | Sweet spot for LTFF's median grant size |
| Organization seeking $500K+ annual budget | Coefficient or SFF | Too large for LTFF; needs an institutional funder |
| Career transition/upskilling | LTFF | Explicitly welcomes these applications |
| MATS living expenses supplement | LTFF | Established pipeline for program participants |
| Policy/governance organization | Coefficient | Needs diverse funder base for credibility |
| Rapid response to opportunity (< 2 weeks) | Lightspeed Grants | Faster than LTFF's 21-day target |
| Experimental project needing community validation | Manifund | Regranter model tests interest |
| Large academic lab funding | Coefficient or SFF | $200K+ grants more common there |
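The routing logic in the scenario table above can be sketched as a small decision function. This is a hypothetical helper for illustration only — the thresholds are one reading of this page, not official guidance from any fund:

```python
def suggest_funder(amount_k, applicant, urgent_days=None):
    """Toy routing heuristic distilled from the scenario table above.

    amount_k: requested amount in $K.
    applicant: "individual" or "organization".
    urgent_days: days until the money is needed, if time-sensitive.
    """
    if urgent_days is not None and urgent_days < 14:
        return "Lightspeed Grants"          # faster than LTFF's 21-day target
    if applicant == "organization" and amount_k >= 500:
        return "Coefficient Giving or SFF"  # too large for LTFF
    if applicant == "individual" and amount_k <= 200:
        return "LTFF"                       # individuals; median grant ~$25K
    return "Manifund"                       # experiments needing community validation
```

For example, an individual seeking $30K for six months of research routes to LTFF, while a $600K organizational budget routes to Coefficient or SFF.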

Funding Relationships

| Relationship | Description |
|---|---|
| LTFF ← Coefficient | ≈40-50% of LTFF funding comes from Coefficient regranting |
| LTFF → Coefficient | Many LTFF grantees later receive Coefficient funding |
| LTFF ↔ SFF | Similar cause focus, complementary scale |
| LTFF ↔ EAIF | AI-focused projects go to LTFF; general EA meta to EAIF |
| LTFF → Major Labs | Many grantees eventually hired by Anthropic, DeepMind, etc. |

Common Funding Pathways

(Diagram: common funding pathways — not rendered in this export.)

LTFF vs EA Infrastructure Fund (EAIF)

Both LTFF and EAIF are part of EA Funds but serve different purposes:

| Dimension | LTFF | EAIF |
|---|---|---|
| Focus | Longtermism, AI safety, x-risk | EA community building, meta-work |
| Cause Area | Specific (AI safety, bio, etc.) | Cause-agnostic EA infrastructure |
| Application Volume | Higher (≈80/month) | Lower (fewer applications) |
| Institutional Funding | ≈40-50% from Coefficient | ≈80% from Coefficient |
| Strategic Direction | More stable | Higher rate of strategic changes |

When unsure: If your project focuses on AI safety specifically, apply to LTFF. If it's about EA community building broadly, apply to EAIF. The funds can transfer applications between them if you apply to the wrong one.

Historical Evolution

Founding and Early Years (2017-2018)

EA Funds launched on February 28, 2017, created by the Centre for Effective Altruism (CEA) while going through Y Combinator's accelerator program. The creation was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead and the donor lottery run by Paul Christiano and Carl Shulman in December 2016.

Initially, Nick Beckstead served as the sole manager of the Long-Term Future Fund. During this period, grants went mostly to established organizations like CSER, FLI, Charity Entrepreneurship, and Founder's Pledge, with minimal public writeups. Beckstead stepped down in August 2018.

Transition to Active Grantmaking (2018-2020)

In October 2018, a new management team was announced: Matt Fallshaw (chair), Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with Nick Beckstead and Jonas Vollmer as advisors. This transition marked a fundamental shift:

| Aspect | Beckstead Era | Post-2018 Era |
|---|---|---|
| Recipients | Primarily organizations | More individuals, speculative projects |
| Grant Size | Larger | Wider range, more small grants |
| Transparency | Minimal writeups | Detailed public justifications |
| Risk Profile | Conservative | More hits-based |

The new approach generated some controversy—certain grants were "scathingly criticized in the comments." However, a 2021 retrospective found a 30-40% success rate among 2018-2019 grantees, suggesting appropriate risk-taking for hits-based giving.

Scaling Period (2021-2022)

| Year | Total Grants | Key Developments |
|---|---|---|
| 2021 | ≈$4.8M | Added guest managers; expanded capacity |
| Q4 2021 | $2.1M | 34 grantees in single quarter |
| 2022 | ≈$4.5M | Peak pre-FTX; Manifold Markets grant |

The fund processed 878 applications from March 2022 to March 2023, funding 263 grants worth approximately $9.1M total (average $34.6K per grant).
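The derived figures in that sentence follow directly from the raw counts; a minimal sketch of the arithmetic:

```python
# Application statistics for March 2022 - March 2023, from the text above.
applications = 878
grants_funded = 263
total_granted_m = 9.1  # $M

acceptance = grants_funded / applications             # ≈ 0.30 (30%)
avg_grant_k = total_granted_m * 1000 / grants_funded  # ≈ $34.6K
```

Note the ≈30% acceptance rate for this period is noticeably higher than the 19.3% reported for May 2023-March 2024, consistent with the fund becoming more selective as application volume grew.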

FTX Impact and Recovery (2022-2023)

The November 2022 FTX collapse significantly impacted the EA funding ecosystem. While LTFF itself did not receive direct FTX funding, the downstream effects included:

  • Increased uncertainty among applicants and donors
  • Some grantees lost expected Future Fund grants
  • Broader EA funding contraction

LTFF remained relatively stable, with Coefficient Giving (then Open Philanthropy) providing approximately 40-50% of funding and the remainder from direct donations. The fund issued statements about funding constraints in early 2023.

Current Period (2023-2025)

| Metric | 2023 | 2024 (projected) |
|---|---|---|
| Total Grants | $6.67M | $8M+ |
| AI Safety Portion | ≈67% | ≈70% |
| Acceptance Rate | 19.3% | ≈18-20% |
| Monthly Applications | ≈80-90 | ≈80-90 |

Recent strategic shifts include:

  • Less funding for mechanistic interpretability due to field becoming less neglected
  • Continued focus on upskilling and career transitions
  • More stringent evaluation of claims about AI safety relevance

Impact and Outcomes

Researcher Pipeline

LTFF plays a critical role in the AI safety talent pipeline. Many researchers receive their first external funding through LTFF before achieving one of several outcomes:

| Outcome | Description | Evidence |
|---|---|---|
| Major Lab Hires | Researchers later hired by Anthropic, DeepMind, OpenAI | Multiple MATS scholars post-LTFF |
| Academic Positions | Faculty or postdoc roles in AI safety | David Krueger, Gabriel Mukobi PhD admissions |
| Research Output | Published papers, tools, analyses | Interpretability papers, safety tooling |
| Career Pivots | Successful transitions into AI safety | Physics PhDs, software engineers entering field |
| Capability Building | Skills developed through funded training | MATS completions, self-study periods |

LTFF grantees often exhibit extraordinary earning potential. As fund managers note, many "are excellent researchers (or have the potential to become one in a few years) and could easily take jobs in big tech or finance, and some could command high salaries (over $400k/year) while conducting similar research at AI labs."

Documented Success Stories

| Grantee | LTFF Support | Subsequent Achievement |
|---|---|---|
| Gabriel Mukobi | $40.7K for Stanford master's | Accepted to 4/6 PhD programs; multiple publications |
| Manifold Markets | $200K early-stage | Grew to major EA forecasting platform |
| AXRP Podcast | Multiple grants | Established AI safety podcast with consistent output |
| Robert Miles | Multiple grants | Major AI safety YouTube creator, mainstream reach |
| Mechanistic interpretability researchers | Multiple | Field growth attributed partly to early LTFF support |

Quantifying Impact

A 2021 retrospective analysis of 2018-2019 grantees found:

| Finding | Interpretation |
|---|---|
| 30-40% success rate | Appropriate risk level for hits-based giving |
| Track record correlation | Grantees with prior relevant experience performed better |
| Renewal patterns | Successful grantees often received follow-on funding |

The fund's impact extends beyond direct grantees. By funding early-stage researchers who later join major labs or receive larger grants, LTFF serves as a "farm team" for the AI safety field.

Strengths and Limitations

Organizational Strengths

| Strength | Description | Evidence |
|---|---|---|
| Speed | Much faster than major foundations | 21-day target vs months at Coefficient |
| Flexibility | Funds individuals, not just organizations | Median grant $25K; individuals welcome |
| Accessibility | Lower barriers to application | Rolling applications; quick online form |
| Risk Tolerance | Willing to fund early-stage ideas | Hits-based approach; one excited manager can approve |
| Transparency | Publishes detailed grant reasoning | Payout reports on EA Forum with justifications |
| Renewal Support | Happy to renew successful grants | "Would be happy to be primary funder for years" |
| Bridge Function | Supports people building toward larger opportunities | Explicit bridge funding category |

Organizational Limitations

| Limitation | Description | Mitigation |
|---|---|---|
| Scale | $5-8M is small relative to field needs | Complements rather than replaces major funders |
| Capacity | Part-time managers limit deep evaluation | Guest managers provide additional bandwidth |
| Individual Focus | Less suited for large organizations | Refer to Coefficient or SFF for large orgs |
| Concentration | Heavy AI safety focus; other x-risks less funded | Apply to specialized funds for non-AI work |
| Funding Dependency | ≈40-50% from Coefficient | Diversified direct donation base |
| Feedback Quality | Rejection feedback varies in depth | Explicit in AMAs that feedback is limited |

Strategic Challenges

| Challenge | Description |
|---|---|
| Grantmaker Bottleneck | As Coefficient staff noted, "a key bottleneck is that they currently don't have enough qualified AI Safety grantmakers to hand out money fast enough" |
| Neglectedness Shifts | Areas like mechanistic interpretability become less neglected, requiring strategic adjustment |
| Talent Retention | Fund managers have demanding day jobs; turnover creates institutional memory loss |
| Validation Without Dependency | Goal is to help researchers become self-sustaining, not LTFF-dependent |

How to Apply

Application Process

| Step | Details | Tips |
|---|---|---|
| 1. Review Guidelines | Check EA Funds website for current priorities | Note recent payout reports for examples |
| 2. Assess Fit | Confirm LTFF is the right fund (vs EAIF, Animal Welfare, Global Health) | AI safety → LTFF; general EA → EAIF |
| 3. Prepare Application | Describe project, background, budget, theory of change | Be specific; include milestones |
| 4. Submit Online | Via EA Funds application portal | Mark time-sensitive if urgent |
| 5. Wait for Response | Target 21 days, max 42 days | Check spam folder; follow up if over 6 weeks |
| 6. Respond to Questions | Committee may request clarification | Respond promptly; additional info often requested |
| 7. Negotiate Terms | Discuss grant structure, milestones, reporting | Standard terms; flexibility for edge cases |

Application Portal

The application form includes:

  • Project description: What you'll do and why it matters
  • Background: Relevant experience and qualifications
  • Budget: Itemized costs with justification
  • Timeline: Key milestones and deliverables
  • Theory of change: How this reduces existential risk
  • Counterfactual: What happens without LTFF funding
  • Other funding: Alternative sources, if any

After Approval

| Stage | Details |
|---|---|
| Grant Agreement | Review and sign terms (usually standard) |
| Disbursement | Funds sent within 1-2 weeks of signing |
| Reporting | Progress updates as agreed; typically light-touch |
| Renewal | Apply again if continuing work; track record helps |

Funding Sources and Donations

Revenue Composition

| Source | Percentage | Notes |
|---|---|---|
| Coefficient Giving | ≈40-50% | Regranting arrangement; largest single source |
| Direct Donations | ≈40-50% | Individual EA donors; recurring and one-time |
| Other Institutional | ≈10% | Other foundations, DAFs |

How to Donate

| Method | Details |
|---|---|
| Direct Donation | funds.effectivealtruism.org (credit card, bank transfer) |
| Every.org | Tax-deductible for US donors |
| Manifund | Regranting via LTFF project page |
| DAF Grants | Specify "Long-Term Future Fund" via CEA |

For Large Donors

Donors considering significant contributions ($50K+) can contact the fund directly. Linchuan Zhang has volunteered to speak with donors about LTFF's strategy and current funding needs.

Sources and Citations

Primary Sources

Grant Reports and Payout Announcements

AMA and Transparency Posts

Funding Landscape Analysis

EA Funds History

References

2. Overview of AI Safety Funding — Stephen McAleese, EA Forum, 2023 (blog post)

4. Survival and Flourishing Fund — survivalandflourishing.fund

   SFF is a virtual fund that organizes grant recommendations and philanthropic giving, primarily supporting organizations working on existential risk and AI safety. They use a unique S-Process and have distributed over $152 million in grants since 2019.

5. MATS Research Program — matsprogram.org

   MATS is an intensive training program that helps researchers transition into AI safety, providing mentorship, funding, and community support. Since 2021, over 446 researchers have participated, producing 150+ research papers and joining leading AI organizations.

Structured Data

Grants (545 records; sample below)
| Name | Amount | Date |
|---|---|---|
| 6-month salary to translate AGI safety-related texts, e.g. LessWrong and AI Alignment Forum, into Russian | $13,000 | Jan 2022 |
| Working on long-term macrostrategy and AI Alignment, and up-skilling and career transition towards that goal | $40,000 | Jan 2020 |
| Characterizing the properties and constraints of complex systems and their external interactions to inform AI safety research | $20,000 | Jul 2019 |
| 6-month salary to write a book on philosophy + history of longtermist thinking, while longer-term funding is arranged | $27,819 | Oct 2021 |
| 12-month salary for researching value learning | $50,000 | Jan 2022 |
| Conducting a computational study on using a light-to-vibrations mechanism as a targeted antiviral | $30,000 | Jul 2020 |
| Support Sam's participation in 'Mid-term AI impacts' research project | $4,455 | Oct 2020 |
| PhD at Cambridge | $150,000 | Jul 2020 |
| Funding a nordic conference for senior X-risk researchers and junior talents interested in entering the field | $4,562 | Oct 2021 |
| Funding for a degree in the Biological Sciences at UCSD (University of California San Diego) | $250,000 | Oct 2021 |
| I would like to produce a research paper about the history of philanthropy-driven national-scale movement-building strategy to inform how EA funders might go about building movements for good. | $2,000 | Jan 2022 |
| Research on AI safety | $30,103 | Jan 2022 |
| Living costs stipend for extra US semester + funding for open-source intelligence (OSINT) equipment & software | $11,400 | Oct 2021 |
| Design and implement simulations of human cultural acquisition as both an analog of and testbed for AI alignment | $150,000 | Oct 2021 |
| Buy out of teaching assistant duties for the remaining two years of my PhD program | $50,000 | Jan 2022 |
| Support to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved | $82,000 | Jan 2022 |
| Support to work on biosecurity | $11,400 | Jan 2022 |
| Funding to trial a new London organization aiming to 10x the number of AI safety researchers | $234,121 | Jan 2022 |
| Time costs over six months to publish a paper on the interaction of open science practices and bio-risk | $8,324 | Oct 2021 |
| Research into the nature of optimization, knowledge, and agency, with relevance to AI alignment | $80,000 | Jul 2021 |
| Producing video content on AI alignment | $39,000 | Apr 2019 |
| Participation in a 2-weeks summer school on science diplomacy to advance my profile in the science-policy interface | $1,571 | Jul 2021 |
| Research project through the Legal Priorities Project, to understand and advise legal practitioners on the long-term challenges of AI in the judiciary | $24,000 | Oct 2020 |
| Open Online Course on "The Economics of AI" for Anton Korinek | $71,500 | Jan 2021 |
| Organizing a workshop aimed at highlighting recent successes in the development of verified software | $5,000 | Jan 2020 |
| Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization | $135,000 | Jan 2021 |
| 4-month salary for a research assistant to help with a surrogate outcomes project on estimating long-term effects | $11,700 | Oct 2021 |
| A study of safe exploration and robustness to distributional shift in biological complex systems | $30,000 | Apr 2019 |
| Conducting independent research into AI forecasting and strategy questions | $40,000 | Oct 2019 |
| Conducting independent research on cause prioritization | $33,000 | Jan 2020 |
| Building towards a "Limited Agent Foundations" thesis on mild optimization and corrigibility | $30,000 | Apr 2019 |
| 6-month salary for JJ to continue providing 1on1 support to early AI safety researchers and transition AISS | $25,000 | Jul 2021 |
| DPhil project in AI that addresses safety concerns in ML algorithms and positions Kai to work on China-West AI relations | $77,500 | Oct 2021 |
| Build a theory of abstraction for embedded agency using real-world systems for a tight feedback loop | $30,000 | Oct 2019 |
| Surveying the neglectedness of broad-spectrum antiviral development | $18,000 | Oct 2019 |
| Create a toolkit that enables researchers to bootstrap from zero to competence in ambiguous fields, beginning with a review of individual books | $19,000 | Oct 2019 |
| 12-month salary for a software developer to create a library for Seldonian (safe and fair) machine learning algorithms | $250,000 | Oct 2021 |
| Exploring crucial considerations for decision-making around information hazards | $25,000 | Jan 2020 |
| Help InterACT when university systems cannot, supporting InterACT's work enabling human-compatible robots and AI agents | $135,000 | Jan 2022 |
| Aiming to implement AI alignment concepts in real-world applications | $10,000 | Oct 2018 |
| Funding for building agents with causal models of the world and using those models for impact minimization | $10,000 | Jan 2020 |
| Upskilling in ML in order to be able to do productive AI safety research sooner than otherwise | $10,000 | Jul 2019 |
| Identifying and resolving tensions between competition law and long-term AI strategy | $32,000 | Jan 2020 |
| Stipends, work hours, and retreat costs for four extra students of CHERI's summer research program | $11,094 | Jul 2021 |
| Supporting 3-month research period | $7,900 | Jul 2020 |
| PhD in Computer Science working on AI safety | $250,000 | Jan 2021 |
| 4-month salary to upskill in biosecurity and explore possible career paths in biosecurity | $12,000 | Oct 2021 |
| New way to fight pandemics: 1-3 months of salaries for app R&D and communications in pilots and to mass public | $100,000 | Jan 2021 |
| 3-month funding for part-time research into US ability to maintain food supply in an extreme pandemic | $3,150 | Jan 2022 |
| Grant to cover fees for a master's program in machine learning | $27,645 | Oct 2021 |
| Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare) | $91,450 | Jul 2018 |
| Supporting Vanessa with her AI alignment research | $100,000 | Oct 2020 |
| Create a value learning benchmark with contextualized scenarios by leveraging a recent breakthrough in natural language processing | $55,000 | Jan 2020 |
| Building understanding of the structure of risks from AI to inform prioritization | $80,000 | Oct 2021 |
| Write a SF/F novel based on the EA community | $15,000 | Jan 2022 |
| Educational scholarship in AI safety | $13,000 | Jan 2022 |
Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers$40,000Jan 2019
Support to build a forecasting platform based on user-created play-money prediction markets$200,000Jan 2022
Summer research program on global catastrophic risks for Swiss (under)graduate students$34,064Jan 2021
Building infrastructure to give existential risk researchers superforecasting ability with minimal overhead$27,000Apr 2019
Strategic research and studying programming$30,000Apr 2019
Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety$80,000Jan 2022
1.5-month salary to write a paper/blog post on cognitive and evolutionary insights for AI alignment$2,491Jan 2021
4-month salary to research empirical and theoretical extensions of Cohen & Hutter’s pessimistic/conservative RL agent$3,273Jan 2021
7-month salary & tuition to fund the first part of a DPhil at Oxford in modelling viral pandemics$18,000Jan 2021
Performing independent research on modern institutional incentive failures and their dependencies and vital factors for aligned institutional design in collaboration with John Salvatier$20,000Apr 2019
Investigate humans’ lack of robust task alignment in amplification, and the implications for acceptability predicates$35,000Jul 2021
Researching plans to allow humanity to meet nutritional needs after a nuclear war that limits conventional agriculture$3,600Jul 2021
Replacing reduction in income due to moving from full- to part-time work in order to pursue an AI safety-related PhD$100,000Oct 2021
Independent research on forecasting and optimal paths to improve the long-term - LTF fund$41,337Oct 2020
Payment for AI researchers when I interview / survey them about their perceptions of safety$9,900Jan 2022
Cataloging the History of U.S. High-Consequence Pathogen Regulations, Evaluating Their Performance, and Charting a Way Forward$34,500Jan 2022
Unrestricted donation$150,000Apr 2019
Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$488,994Jul 2018
researching methods to continuously monitor and analyse artificial agents for the purpose of control.$44,668Oct 2020
Identifying white space opportunities for technical projects to improve biosecurity and pandemic preparedness$30,000Oct 2019
2-year funding to run public and expert surveys on AI governance and forecasting$231,608Oct 2021
Persuasion Tournament for Existential Risk$200,000Jul 2021
Support to work towards developing an early-warning system for future biological risks$9,000Jan 2022
Develop a research project on how to infer human's internal mental models from their behaviour using cognitive science modeling$7,700Jan 2020
Testing how the accuracy of impact forecasting varies with the timeframe of prediction.$55,000Oct 2020
Surveying experts on AI risk scenarios and working on other projects related to AI safety.$5,000Jul 2020
Funds for a 6-month project contributing to the clarification of goal-directedness$21,950Jan 2022
Two-year funding for a top-tier PhD in public policy in Europe with a focus on promoting AI safety$121,672Jan 2021
Funding to cover a visit to Boston for biosecurity work$16,456Oct 2021
Retroactive funding for running an alignment theory mentorship program with Evan Hubinger$3,600Jan 2022
Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$174,021Jul 2018
Supporting aspiring researchers of AI alignment to boost themselves into productivity$25,000Apr 2019
Human Progress for Beginners children's book$25,000Oct 2019
Replacement salary for teaching during economics Ph.D., freeing time for conduct research into forecasting and pandemics$42,000Jan 2021
Research to enable transition to AI Safety$43,000Oct 2019
Formalizing the side effect avoidance problem research$30,000Jan 2020
Productivity coaching for effective altruists to increase their impact$23,000Jul 2019
50% of 9-month salary for bioinformatician at BugSeq to democratize analysis of nanopore metagenomic sequencing data$37,500Jan 2021
6-week grant (July 15-August 31, 2021) for full-time research on existential risks associated with running simulations$3,500Jul 2021
Support for self-study in data science and forecasting, to upskill within a GCBR research career$2,230Oct 2021
Create AI safety videos, and offer communication and media support to AI safety orgs.$60,000Jul 2020
We’re unleashing the problem-solving potential of our democracy with a simple electoral reform, approval voting.$50,000Oct 2021
Developing algorithms, environments and tests for AI safety via debate.$25,000Jul 2020
2-month costs of setting up a research company in AI alignment, including buying out the time of the two co-founders$33,762Jan 2022
Writing fiction to convey EA and rationality-related topics$20,000Jul 2019
Research on the links between short- and long-term AI policy while skilling up in technical ML$75,080Jul 2019
3-month compensation to drive time sensitive policy paper: "Managing the Transition to Universal Genomic Surveillance"$5,000Oct 2021
Funding for full-time, independent research on agent foundations$30,000Oct 2019
PhD in machine learning with a focus on AI alignment$85,530Jul 2021
Buying out one year of my academic teaching so that I can spend time on AI alignment research instead$12,000Jan 2022
Funding to promote rationality and AI safety to medallists of IMO 2020 and EGMO 2019.$28,000Apr 2019
For Remmelt Ellen to run a virtual and physical camp where selected applicants prioritise AIS research & test their fit$85,000Jan 2021
Provides various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support)$14,838Jan 2017
Additional funding for AI strategy PhD at Oxford / FHI$36,982Jul 2019
6-month salary to develop tools to test the natural abstractions hypothesis$35,000Jan 2021
A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers$26,250Apr 2019
Conducting independent research into AI forecasting and strategy questions$30,000Apr 2019
One year's salary for developing and sharing an investigative method to improve traction in pre-theoretic fields.$80,000Jan 2021
Formalizing perceptual complexity with application to safe intelligence amplification$30,000Apr 2019
Three months of blogging and movement building at the intersection of EA/longtermism and progress studies$18,000Oct 2021
Support multiple SPARC project operations during 2021$15,000Jan 2021
Funding for research assistance in gathering data on the persistence, expansion, and reversal of laws over 5+ decades$11,440Jul 2021
A two-day, career-focused workshop to inform and connect European EAs interested in AI governance$17,900Jan 2019
To spend the next year leveling up various technical skills with the goal of becoming more impactful in AI safety$23,000Jul 2019
Funding towards a 2 year postdoctoral stint to work on Safety in AI, with a focus on developing value aligned systems$275,000Jan 2022
10-month salary for research on AI safety/alignment, scaling laws, and potentially interpretability$19,020Oct 2021
Increasing usefulness and availability of Metaculus, a fully-functional quantitative forecasting/prediction platform with >170,000 predictions and >1500 questions to date.$65,000Jan 2020
Multi-model approach to corporate and state actors relevant to existential risk mitigation$30,000Jul 2019
1-year salary for Adam Shimi to conduct independent research in AI Alignment$60,000Jan 2021
A research agenda rigorously connecting the internal and external views of value synthesis$30,000Apr 2019
BERI will support SERI when university systems are unable to help$60,000Jan 2021
Financial support for work on a biosecurity research project and workshop, and travel expenses$15,000Jan 2022
3-12 month stipend for pursuing longtermist research and/or upskilling in things such as ML, the Chinese language, and cybersecurity$15,000Jan 2022
Support to create language model (LM) tools to aid alignment research through feedback and content generation$40,000Jan 2022
Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD$10,000Apr 2019
Longtermist lessons from COVID$5,625Jan 2022
Writing preliminary content for an encyclopedia of effective altruism$17,000Jan 2020
Understanding the Impact of Lifting Government Interventions against COVID-19 Transmission$9,798Oct 2020
Unrestricted donation$50,000Apr 2019
An offline community hub for rationalists and EAs$50,000Apr 2019
Upskilling investigation of AI Safety via debate and ML training$10,000Oct 2019
Computing resources and researcher salaries at a new deep learning + AI alignment research group at Cambridge$200,000Jan 2021
Funding to pay participants to test a forecasting training program$3,200Oct 2021
Building infrastructure for the future of effective forecasting efforts$70,000Apr 2019
Subsidized therapy/coaching/mediation for rationalists, EA, and startups that are working on things like x-risks.$40,000Oct 2019
8-month salary to work on technical AI safety research, working closely with a DPhil candidate at Oxford/FHI$28,320Jul 2021
6-month salary to work with Dan Hendrycks on research projects relevant to AI alignment$50,000Jan 2022
12-month funding for personal career reorientation and helping individuals and organizations in the existential risk community orient towards and achieve their goals$20,000Apr 2019
Conducting postdoctoral research at Harvard on the psychology of EA/long-termism$50,000Apr 2019
12-month salary to provide runway after finishing RSP$55,000Jan 2021
Educational Scholarship in AI Alignment$22,000Jan 2022
Fund 2 FTE of longtermism-focused researchers for 1 year to do policy/security, forecasting, & message testing research$70,000Jan 2021
Funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)$162,537Jul 2018
Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter$1,050Jan 2022
Unrestricted donation$50,000Apr 2019
Support David Reber -9.5 months of strategic outsourcing to read up on AI Safety and find mentors$20,000Oct 2021
12-month salary for independent research, upskilling, and finding a stable position in AI-Safety$24,000Jan 2022
A major expansion of the Metaculus prediction platform and its community$70,000Apr 2019
Research project on the longevity and decay of universities, philanthropic foundations, and catholic orders$3,579Oct 2020
Organising immersive workshops on meta skills and x-risk for STEM students at top universities.$32,660Oct 2020
Support for alignment theory agenda evaluation$25,000Jul 2022
AI safety dinners$10,000Jul 2022
AI safety research$1,500Oct 2022
Compensation for a non-fiction book on threat of AGI for a general audience$50,000Jul 2022
Funding to perform human evaluations for evaluating different machine learning methods for aligning language models$10,0002022
Travel Support to BWC RevCon & Side Events$3,500Oct 2022
travel funding for participants in a workshop on the science of consciousness and current and near-term AI systems$10,840Jan 2023
Funding to host additional fellows for PIBBSS research fellowship (currently funded: 12 fellows; desired: 20 fellows)$100,000Jan 2023
Neural network interpretability research$12,990Jul 2022
Flight and accomodation costs to spend a month working with Will Bradshaw's team at the NAO$4,910Jan 2023
6 months of independent alignment research and upskilling$30,0002022
Research into the international viability of FHI's Windfall Clause$3,000Jul 2022
6-month salary for research into preventing steganography in interpretable representations using multiple agents$20,000Oct 2022
Research on EA and longtermism$70,000Jul 2022
6-month salary to interpret neurons in language models & build tools to accelerate this process. The aim is to understand all features and circuits in a model and use this understanding to predict out of distribution performance in high-stake situations.$40,000Jan 2023
1-year stipend and compute for conducting a research project focused on AI safety via debate in the context of LLMs.$50,1822022
6-month part-time (20h/week) salary to further develop and refine the feature visualization library Lucent$23,000Jan 2022
This grant will support Naoya Okamoto upskill in AI Safety research. Naoya will take the Mathematics of Machine Learning course offered by the University of Illinois at Urbana-Champaign.$7,500Jan 2023
Support to maintain a copy of the alignment research dataset etc in the Arctic World Archive for 5 years$3,000Jan 2023
Support for Marius Hobbhahn for piloting a program that approaches and nudges promising people to get into AI safety faster$50,000Jul 2022
12-month salary to study and get into AI Safety Research and work on related EA projects$14,000Oct 2022
4 month salary to support an early-career alignment researcher, who is taking a year to pursue research and test fit$20,0002022
Exploratory grant for preliminary research into the civilizational dangers of a contemporary nuclear strike$5,000Jul 2022
6 months funding for supervised research on the probability of humanity becoming interstellar given non-existential catastrophe$36,000Jul 2022
6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper$32,650Jan 2023
Financial support to help productivity and increase time of early career alignment researcher$7,000Jul 2022
5-month part time salary for collaborating on a research paper analyzing the implications of compute access$2,5002022
Support for living expenses while doing PhD in AI safety - technical research and community building work$2,3052022
6-month salary for self-study to be more effective at AI alignment research$15,000Jul 2022
The Alignable Structures workshop in Philadelphia$9,000Oct 2022
New laptop for technical AI safety research$4,099Jul 2022
10-month funding to study ML at university and AIS independently$500Jan 2023
6 month salary to improve the US regulatory environment for prediction markets$138,000Jul 2022
Develop and market video game to explain the Stop Button Problem to the public & STEM individuals$100,000Jul 2022
A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan$72,8272022
Paid internships for promising Oxford students to try out supervised AI Safety research projects$60,000Jul 2022
Starting funds and moving costs for a DPhil project in AI that addresses safety concerns in ML algorithms and positions$3,950Jul 2022
Funds to cover speaker fees and event costs for EA community building tied in with my MA course on longtermism in 2022$22,570Jan 2022
Website visualizing x-risk as a tree of branching futures per Metaculus predictions: EA counterpoint to Doomsday Clock$3,500Jul 2022
2 months of part-time salary for a trial + developer costs to maintain and improve the AI governance document sharing hu$15,0002022
Organize the third Human-Aligned AI Summer School, a 4-day summer school for 150 participants in Prague, summer 2022$110,000Jul 2022
8 weeks scholars program to pair promising alignment researchers with renowned mentors$316,000Oct 2022
Stanford Artificial Intelligence Professional Program tution$4,785Jul 2022
(professional development grant) New laptop for technical AI safety research$2,5002022
Year-long salary for shard theory and RL mech int research$220,000Jan 2023
Stipend to produce a guide about AI safety researchers and their recent work, targeted to interested laypeople$5,000Jul 2022
Support to further develop a branch of rationality focused on patient and direct observation$80,000Jul 2022
1-year salary and costs to connect, expand and enable the AGI governance and safety community in Canada$87,000Jul 2022
3-month salary to skill up in ML and Alignment with goal of developing a streamlined course in Math/AI$5,5002022
6-month salary for two people to find formalisms for modularity in neural networks$72,5602022
One-course teaching buyout for Steve Peterson for two academic semesters to work on the foundational issue of *agency* for AI safety$20,815.2Oct 2022
6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper$167,480Jan 2023
European Summer Research Program focused on building the talent pipeline for global catastrophic risk researchers$169,947Jan 2022
4 month salary to set up AI safety groups at 2 groups covering 3 universities in Sweden with eventual retreat$10,000Oct 2022
Make 12 more AXRP episodes$23,5442022
12-month stipend and research fees for completing dissertation research on public ethical attitudes towards x-risk$60,000Jul 2022
1-year salary for research in applications of natural abstraction$180,000Oct 2022
Financial support to work part time on an academic project evaluating factors relevant to digital consciousness$11,000Oct 2022
6 month salary & operational expenses to start a cybersecurity & alignment risk assessment org$98,000Jan 2023
6-month salary to dedicate full-time to upskilling/AI alignment research tentatively focused on agent foundations$6,000Jan 2023
3-month salary for upskilling in PyTorch and AI safety research.$19,200Jan 2023
6-month salary for SERI MATS scholar to continue working on theoretical AI alignment research, trying to better understand how ML models work to reduce X-risk from future AGI$50,000Oct 2022
Funding for coaching sessions to become a more productive researcher (I research the effect of creatine on cognition)$4,000Oct 2022
Funding to cover 4-months of rent while attending a research group with the Cambridge AI Safety group$5,6132022
6-month salary to conduct AI alignment research circuits in decision transformers$50,0002022
6-week salary to publish a series of blogposts synthesising Singular Learning Theory for a computer science audience$8,0002022
Funding for a one year machine learning and computational statistics master’s at UCL$38,101Oct 2022
Funding for project transitioning from AI capabilities to AI Safety research.$8,2002022
Twelve month salary to work as a global rationality organizer$130,000Oct 2022
Support to work on Aisafety.camp project, impact of human dogmatism on training$2,000Jul 2022
Funding for additional fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety$54,962Jan 2023
6-month salary to research AI alignment, specifically the interaction between goal-inference and choice-maximisation$47,074Oct 2022
5-month salary plus expenses to support civilizational resilience projects arising from SHELTER Weekend$27,248Oct 2022
One year of funding to improve an established community hub for EA in London$50,000Jul 2022
Support for AI-safety related CS PhD thesis on enabling AI agents to accurately report their actions$90,000Jan 2022
Financial support for career exploration and related project in AI alignment upon completion of Masters in Computer Science$26,077Oct 2022
6-month salary to: 1) Carry out independent research into risks from nuclear weapons, 2) Upskill in AI strategy$40,250Oct 2022
6 months salary for independent work centered on distillation and coordination in the AI governance & strategy space$69,9402022
Support to cover the costs of leaving employment in order to pursue AI safety research.$4,0002022
6-month salary for Fabian Schimpf to upskill into AI alignment research and conduct independent research on limits of predictability$28,8752022
PhD Stipend Top Up for CHAI PhD Student.$6,675Jan 2022
Purchasing a technologically adequate laptop for my AI Policy studies in the ML Safety Scholars program and at Oxford$3,640Jul 2022
One year part time spent on AI safety upskilling and concrete research projects$62,500Oct 2022
Pass on funds for Astral Codex Ten Everywhere meetups$22,000Jan 2023
Payment for part-time rationality community building$4,000Oct 2022
4-month salary for two people to find formalisms for modularity in neural networks$67,000Jan 2023
Travel support to attend the Symposium on AGI Safety in Oxford in May$1,500Jan 2023
Funding the last year of my PhD on embedded agency, to free up my time from teaching$64,000Oct 2022
Funds to support travel for academic research projects relating to pandemic preparedness and biosecurity$8,150Oct 2022
Funding for 3 months independent study to gain a deeper understanding of the alignment problem, publishing key learnings and progress towards finding new insights.$35,625Oct 2022
2 years of GovAI salary and overheads for Robert Trager$401,537Jul 2022
Support for Jay Bailey for work in ML for AI Safety$79,120Jul 2022
4 month salary for independent research and AIS field building in South Africa, including working with the AI Safety Hub to coordinate reading groups, research projects and hackathons to empower people to start working on research.$12,000Jan 2023
Support for working on "Language Models as Tools for Alignment" in the context of the AI Safety Camp.$10,000Jul 2022
4-month salary to work on a project finding the most interpretable directions in gpt2-small's early residual stream$16,300Jan 2023
Fine-tuning large language models for an interpretability challenge (compute costs)$11,3002022
Catalog the history of U.S. high-consequence pathogen regulations, evaluate their performance, and chart a way forward$40,0002022
12-month salary to work on alignment research!$96,000Oct 2022
Funding for Computer Science PhD$348,773Jan 2022
6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL$40,000Oct 2022
4-month stipend to study AI Alignment,apply for ML Safety Courses and implemen it on RL models$1,0002022
12-month salary to work on ML models for detecting genetic engineering in pathogens$85,000Oct 2022
2 months rent and living cost to attend MLSS in Indonesia because I need to move closer to my workplace to make time$745Oct 2022
Piloting an EA hardware lab for prototyping hardware relevant to longtermist priorities$44,000Oct 2022
Retroactive grant for managing the MATS program, 1.0 and 2.0$27,000Oct 2022
Enabling prosaic alignment research with a multi-modal model on natural language and chess$25,000Jul 2022
2-6 months' stipend to financially cover my self-development in Machine Learning for alignment work$16,000Oct 2022
3-month funding for upskilling in technical AI Safety to test personal fit and potentially move to a career in alignment$1,000Oct 2022
Top-up grant to run the PIBBSS fellowship with more fellows than originally anticipated and to realize a local residency$180,200Jul 2022
6-months salary for researching “Framing computational systems such that we can find meaningful concepts." & Upskilling$24,000Oct 2022
6 months’ salary to upskill on technical AI safety through project work and studying$50,000Jan 2023
6-month salary for an AI alignment research project on the manipulation of humans by AI$25,3832022
6-month salary for 2 people working on modularity, a subproblem of Selection Theorems and budget for computation$26,342Oct 2022
Support for research into applied technical AI alignment work$10,000Jul 2022
A 10-12 week summer research fellowship program to facilitate interdisciplinary AI alignment research$305,000Jan 2022
Increase of stipends for living expenses coverage and higher travel allowance for students of 2022 CHERI’s summer residence$134,532Jul 2022
5-month salary and compute expenses for technical AI Safety research on penalizing RL agent betrayal$14,300Jul 2022
12-Month Salary and Compute Expenses to do AI Safety Research with LLMs$70,000Jan 2023
I am looking for a career transition grant to give me more time for job hunting & networking$3,618Jan 2023
Research and a report/paper on the the role of emergency powers in the governance of X-Risk$26,000Jul 2022
Equipment to improve productivity while doing AI Safety research$3,900Jul 2022
3 months exploring career options in AI governance, upskilling, networking, producing work samples, applying for jobs$20,0002022
One-year funding of Astral Codex Ten meetup in Philadelphia$5,000Jan 2023
Reconstruction attacks in federated learning$5,000Jul 2022
This grant is funding a 6-month stipend for Bilal Chughtai to work on a mechanistic interpretability project$47,500Jan 2023
Retrospective funding for research retreat on a decision-theory / cause-prioritization topic.$10,0002022
Funding for the AI Safety Nudge Competition$5,200Oct 2022
Support to work on AI alignment research$16,341Jan 2022
9 months of funding for an early-career alignment researcher, to work with Owain Evans and others.$45,0002022
Office rent, setup, and food for studying causal scrubbing in compiled transformers, supported by Redwood Research$4,3002022
One year grant for a project to reverse-engineer human social instincts by implementing Steven Byrnes' brain-like AGI$16,600Oct 2022
I am seeking funding to attend a Center for the Advancement of Rationality (CFAR) workshop in Prague during the Fall$1,800Oct 2022
Funding 2 years of technical AI safety research to understand and mitigate risk from large foundation models$209,501Oct 2022
Independent research and upskilling for one year, to transition from academic philosophy to AI alignment research$60,000Oct 2022
Part-time work buyout and equipment funding for PhD developing computational techniques for novel pathogen detection$20,000Jul 2022
6-months salary to accelerate my plans of upskilling in order to work on the issue of AI safety$26,1502022
Support funding during 2 years of an AI safety PhD at Oxford$11,579Jul 2022
1-year stipend (and travel and equipment expenses) for support for work on 2 AI safety projects: 1) Penalising neural networks for learning polysemantic neurons; and 2) Crowdsourcing from volunteers for alignment research.$150,000Jul 2022
Organizing OPTIC: in-person, intercollegiate forecasting tournament. Boston, Apr 22. Funding is for prizes, venue, etc.$2,100Jan 2023
Developing and maintaining projects/resources used by the EA and rationality communities$60,000Jan 2023
General support for Alexander Turner and team research project - Writing new motivations into a policy network by understanding and controlling its internal decision-influences$115,411Jan 2023
Funding a new computer for AI alignment work, specifically a summer PIBBSS fellowship and ML coding$2,500Jul 2022
6-month salary to explore biosecurity policy projects: BWC/ European early detection systems/Deep Vision risk mitigation$27,800Jul 2022
4 month extension of SERIMats in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism$32,000Jan 2023
Top-up funding for a 3-month new hire trial to help me connect, expand and enable the AGI gov/safety community in Canada$17,0002022
4 month grant to upskill for AI governance work before starting Science and Technology Policy PhD$17,220Jul 2022
9-month part-time salary for Magdalena Wache to self-study AI safety, test fit for theoretical research$62,040Oct 2022
300-hour salary for a research assistant to help implement a survey of 2,250 American bioethicists to lead to more informed discussions about bioethics.$4,500Oct 2022
≤1-year salary for alignment work: assisting academics, skilling up, personal research and community building$35,0002022
Tuition to take one Harvard economics course in the Fall of 2022 to be a more competitive econ graduate school applicant$6,557Jul 2022
6-month salary to study China’s views & policy in biosecurity for better understanding and global response coordination$25,000Jul 2022
Research (and self-study) project designed to map and offer preliminary assessment of AI ideal governance research$2,000Jul 2022
| Grant | Amount | Date |
|---|---|---|
| Fund a research fellow to identify island societies that are likely to survive sun-blocking catastrophes and research ways to optimise their chances of survival | $27,000 | Jan 2022 |
| 6-month salary to develop an overview of the current state of AI alignment research, and begin contributing | $70,000 | Jul 2022 |
| Grant to cover 1 year of tuition fees and living expenses to pursue a CS PhD at the University of Oxford. Accelerate alignment research by building alignment research tools using expert-iteration-based amplification from human-AI collaboration | $63,000 | Jan 2023 |
| 7-month salary to study a Graduate Diploma of International Affairs at The Australian National University | $9,000 | Jan 2023 |
| Funding to start a longtermist org and support research | $494,510 | Oct 2022 |
| Slack money for increased productivity in AI alignment research | $17,355 | Jan 2022 |
| 2-year salary for work on the learning-theoretic AI alignment research agenda | $100,000 | Jan 2023 |
| Support to conduct work in AI safety | $5,000 | 2022 |
| Funding to support a PhD in AI safety at Imperial College London, technical research, and community building | $6,350 | Jul 2022 |
| 3-month salary for a SERI MATS extension | $24,000 | Jan 2023 |
| A relocation grant to help me move and settle into a PhD program and cover initial expenses | $6,500 | Oct 2022 |
| Funding for labour to expand content on Wikiciv.org, a wiki for rebuilding civilizational technology after a catastrophe. Project: writing fully specified and verified instructions for recreating one critical technology in a post-disaster scenario | $16,000 | 2022 |
| 6-month salary plus expenses for Jay Bailey to work on Joseph Bloom's Decision Transformer interpretability project | $50,000 | Jan 2023 |
| 1-year salary for upskilling in technical AI alignment research | $96,000 | Oct 2022 |
| 6-month budget to self-study ML and research possible applications of a neuro/cognitive-science perspective for AGI safety | $4,524 | Oct 2022 |
| 4-month salary for conceptual/theoretical research towards perfect world-model interpretability | $30,000 | 2022 |
| 6-month salary to skill up and gain experience to start working on AI safety full-time | $14,136 | 2022 |
| 3-week salaries for Sam, Eric, and Drake to review various AI alignment agendas | $26,000 | 2022 |
| 6-month salary to do independent AI alignment research focused on formal alignment and agent foundations | $30,000 | 2022 |
| Funding for salary and living expenses while continuing to develop a framework of optimisation | $8,000 | 2022 |
| Retrospective funding of salary for upskilling in infra-Bayesianism prior to the start of the SERI MATS program | $4,400 | Oct 2022 |
| A weekend organised as part of the co-founder matching process of a group to found a human data collection org | $2,300 | Oct 2022 |
| 1-year salary to research a new alignment strategy to analyze and enhance collective human intelligence in 7 pilot studies | $90,000 | Jan 2023 |
| 3-month salary to set up a distillation course helping new AI safety theory researchers distill papers | $14,600 | Jul 2022 |
| 24-month salary for a postdoc in economics to research mechanisms to improve the provision of global public goods | $102,000 | Jan 2022 |
| 6-month salary to research geometric rationality, ergodicity economics, and their applications to decision theory and AI | $11,000 | 2022 |
| Support for AI alignment outreach in France (video/audio/text/events) and field-building | $24,800 | Oct 2022 |
| 3-month stipend for upskilling in AI safety and potentially transitioning to a career in alignment | $5,000 | 2022 |
| 4-month salary for a research visit with David Krueger on evaluating non-myopia in language models and RLHF systems | $12,321 | 2022 |
| Scholarship for a PhD student working on research related to AI safety | $8,000 | 2022 |
| 12-month salary to transition career into technical alignment research | $25,000 | Oct 2022 |
| 6-month salary for continued work on shard theory: studying how inner values are formed by outer reward schedules | $40,000 | Oct 2022 |
| A new laptop for remote work as a Summer Research Fellow at CERI and organizing the virtual Future of Humanity Summit | $2,500 | Oct 2022 |
| 8-month salary for three people to investigate the origins of modularity in neural networks | $125,000 | Jul 2022 |
| 12-month salary to research AI alignment, with a focus on technical approaches to value lock-in and minimal paternalism | $81,402.42 | 2022 |
| A research and networking retreat for winners of the Eliciting Latent Knowledge contest | $72,000 | Oct 2022 |
| 6-month salary to turn intuitions like goals, wanting, and abilities into concepts applicable to computational systems | $24,000 | Oct 2022 |
| Support to conduct a research project collaboration on compute governance | $67,800 | Jan 2022 |
| 4-month funding for independent alignment research and study | $15,478 | Oct 2022 |
| EU Tech Policy Fellowship with ~10 trainees | $68,750 | Jul 2022 |
| Funding to increase my impact as an early-career biosecurity researcher | $6,000 | Oct 2022 |
| ~3-month funding for a project analysing fast/slow AI takeoffs and upskilling in AI safety | $4,800 | Jan 2022 |
| Economic stipend for an MLSS scholar to set up a proper working environment to do technical AI research | $2,000 | Oct 2022 |
| One year of seed funding for a new AI interpretability research organisation | $195,000 | Jan 2023 |
| Travel support to attend the Biological Weapons Convention in Geneva from 28 November to 16 December 2022 | $1,500 | 2022 |
| One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS | $100,000 | Oct 2022 |
| 6-month salary to upskill for AI safety | $54,250 | 2022 |
| 12-month salary to continue developing a research agenda on new ways to make LLMs directly useful for alignment research without advancing capabilities | $120,000 | Jan 2023 |
| 3-month salary to continue working on an AISC project to build a dataset for alignment and a tool to accelerate alignment | $22,000 | Jul 2022 |
| Participant stipends for AI Safety Camp Virtual 2023 | $72,500 | 2022 |
| Developing weight-based decomposition methods for interpretability (MATS extension); 6-month stipend for 2 people | $80,000 | Jul 2024 |
| 6-month stipend for transitioning to independent research on AI safety | $40,000 | Apr 2024 |
| 3 months (part-time) assessing plausible pathways to slowing AI | $5,000 | Apr 2024 |
| 4-month part-time salary to work on interpretability projects with David Bau and Logan Riggs | $10,000 | Jul 2024 |
| 6 months of funding (salaries and ops costs) for AI safety talent incubation through research sprints and fellowships | $272,800 | Oct 2023 |
| 1-year stipend to make accessible-yet-rigorous explainers on AI alignment/security, in the form of games/videos/articles | $80,000 | Jan 2025 |
| A small, short workshop focused on coordinating, planning, and applying the "boundaries" idea to safety | $5,000 | Oct 2023 |
| 3-month stipend to support research on the state of AI safety in China and implications for AI existential risk | $12,000 | Apr 2024 |
| 3-month stipend for a MATS extension establishing a benchmark for LLMs' tendency to influence human preferences | $80,000 | Jul 2024 |
| Most of the ops expenses of the research phase (June-Sep 2024) of WhiteBox's AI Interpretability Fellowship | $10,120 | Apr 2024 |
| 1-year funding for PIBBSS (incl. several programs, e.g. fellowship 2024, affiliate program, reading group) | $102,500 | Oct 2023 |
| 6 months for Nathaniel Monson to study to transition to AI alignment research, with a focus on methods for mechanistic interpretability and resolving polysemanticity | $70,000 | Apr 2023 |
| 6 months of funding for a MATS 5.0 extension, with projects on latent adversarial training and persona explainability | $52,118.50 | Jan 2024 |
| 6-month stipend to work on an ML safety project, with the aim of joining an ML safety team full-time after | $40,000 | Jan 2024 |
| Data collection for a new paradigm for AI alignment based on downstream outcomes and human flourishing | $50,000 | Jan 2024 |
| 4-month salary for finding and characterising provably hard cases for mechanistic anomaly detection | $40,000 | Jul 2024 |
| 3-month stipend during a post-SERI MATS alignment job search and wrapping up a paper with mentor | $22,500 | Jan 2024 |
| Support for Yanni Kyriacos (through Good Ancestors Project) with one year of costs for AI safety movement building in Australasia | $77,000 | Jan 2024 |
| Exploring the feasibility of circuit-style analysis at the level of SAE features (MATS extension) | $41,000 | Jan 2024 |
| 6-month scholarship to support Amritanshu Prasad's upskilling in technical AI alignment: studying the AGI Safety Fundamentals Alignment Curriculum and creating an accessible, informative summary of it | $8,000 | Apr 2023 |
| 4-month stipend for a career transition period to explore roles in AI safety communications | $10,120 | Apr 2024 |
| 12-week 0.6-FTE upskilling stipend for technical governance research management | $11,244 | Apr 2024 |
| 3-month salary for a SERI MATS extension to work on internal concept extraction | $27,260 | Jul 2023 |
| 6 months of part-time stipend to launch a new science journalism outlet focused on AI safety | $50,000 | Jan 2025 |
| 6-12 months of funding to continue working on model psychology and evaluation | $42,000 | Jul 2023 |
| 4-month salary and office for a MATS 5.0 extension on adversarial circuit evaluation, plus 2-month runway for a career switch | $62,000 | Jan 2024 |
| A project exploring debate as a tool to verify the output of agents with more domain knowledge than their human counterparts | $55,000 | Apr 2023 |
| Ten field-building events for global catastrophic risks in the greater Boston area for Spring/Summer 2025 | $7,118 | Apr 2025 |
| A megaproject proposal: building a longtermist industrial conglomerate aligned via a reputation-based economy | $36,000 | Jul 2023 |
| Developing noise-injection methods to reveal and reduce deceptive behaviors in language models prior to deployment | $40,000 | Jul 2024 |
| 6-month salary and compute budget for continuing work on mechanistic interpretability for attention layers | $37,000 | Jul 2024 |
| 12-month support for independent AI alignment research | $45,000 | Apr 2024 |
| 4-month stipend: research on agent scaling laws, i.e. relationships between training compute and agent capabilities of LLMs | $70,000 | Jul 2024 |
| Summer stipends plus a research budget for Josh Clymer and collaborators to execute technical safety standards projects | $32,000 | Jan 2024 |
| 4-month funding for full-time AI safety technical and/or governance research | $10,750 | Apr 2023 |
| 3-month part-time stipend for Carson Ezell to conduct 2 research projects related to AI governance and strategy | $8,673 | Apr 2023 |
| 4-month stipend to continue AI safety projects | $25,216 | Jan 2024 |
| Part-time salary for independent AI safety research | $40,000 | Jul 2023 |
| Grant to attend ICLR 2024 for an accepted alignment-related paper as an undergraduate student | $1,875 | Apr 2024 |
| Mentored independent research and upskilling to transition from a theoretical physics PhD to AI safety | $50,000 | Jul 2024 |
| 6-month stipend to work on a research project on AI liability insurance as an additional lever for AI safety | $77,544 | Apr 2024 |
| 2-month salary to test suitability for technical AI alignment research and identify a research direction | $8,800 | Apr 2023 |
| Meta-level adversarial evaluation of debate (a scalable oversight technique) on simple math problems (MATS 5.0 project) | $62,150 | Jan 2024 |
| Organizing the fourth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague/Vienna for 130 participants | $160,000 | Jan 2024 |
| 1-month full-time plus 3-month part-time salary to work on two research projects during the MATS 5.0 extension program | $15,075 | Jan 2024 |
| 1-year PhD funding and compute funding to research a novel method for training prosociality into large language models | $10,000 | Apr 2023 |
| 1-year stipend to continue building and maintaining key digital infrastructure for the AI safety ecosystem | $99,330 | Oct 2023 |
| 6-month salary for independent alignment research in interpretability or control | $95,000 | Jul 2023 |
| Funding for 14 weeks of research on understanding search in transformers at AI Safety Camp | $6,636 | Apr 2023 |
| One-year stipend and compute budget for full-time technical AI alignment research | $80,000 | Jul 2023 |
| 6-month stipend to continue research on benchmarks for interpretability and on characterizing Goodhart's Law | $60,000 | Apr 2024 |
| 6-month salary for further pursuing sparse autoencoders for automatic feature finding | $40,000 | Jul 2023 |
| 5 months of funding for RA work with Seán Ó hÉigeartaigh on AI governance research assistance / AI:FAR admin assistance | $16,698 | Jan 2025 |
| 3-month stipend and cloud credits to research AI collusion mitigation strategies and develop secure steganography | $12,600 | Apr 2024 |
| 6-month stipend for evaluating the robustness of AI agents' safety guardrails and running an AI spear-phishing study | $36,000 | Apr 2024 |
| In MentaLeap, cybersecurity experts, AI researchers, and neuroscientists collaborate to reverse-engineer neural networks | $40,000 | Jul 2023 |
| Funding to attend a BWC meeting to discuss transparency with country representatives and work on a research project | $1,700 | Jul 2023 |
| 2 months of living expenses while I try to establish a broad-spectrum antiviral research organization | $5,000 | Jan 2024 |
| 6-month stipend to work on AI alignment research (automated red-teaming, interpretability) | $30,000 | Apr 2024 |
| 12-month salary to continue working on tools for accelerating alignment and the Supervising AIs Improving AIs agenda | $27,108 | Apr 2023 |
| 1-year stipend to continue research on agency, focused on natural abstraction | $200,000 | Jul 2023 |
| A $35,000 stipend plus $10,000 in compute costs for Yuxiao Li's independent inference-based AI interpretability research | $45,000 | Apr 2023 |
| A 4-5 day workshop for agent foundations researchers at Carnegie Mellon University in March 2025 | $20,700 | Oct 2024 |
| Undergrad buyout to teach AI safety in Hong Kong's new MA program on AI; China-West AI safety workshop | $33,000 | Jul 2023 |
| Monthly seminar series on Guaranteed Safe AI, from July to December 2024 | $6,000 | Apr 2024 |
| 6-month stipend for Sviatoslav Chalnev to work on independent interpretability research, specifically mechanistic interpretability and open-source tooling for interpretability research | $35,000 | Apr 2023 |
| 5-month salary to continue work on evaluating agent self-improvement capabilities | $23,360 | Apr 2024 |
| 12-week part-time stipend to research specialized AI hardware requirements for large AI training, with IAPS mentor Asher Brass | $6,000 | Apr 2024 |
| 4-month stipend to complete a mechanistic interpretability research project on how neural networks perform co | $22,324.50 | Jul 2024 |
| Funds to present "Benchmark Inflation," a paper on making AI progress measures more accurate, at ICML 2024 | $2,500 | Apr 2024 |
| 1-month part-time stipend for 4 MATS scholars working on autonomous web-browsing LLM agents that can hire humans, plus safety evals | $19,000 | Jan 2024 |
| 3-month stipend to continue working on a goal misgeneralisation project for the ICLR deadline, plus travel funding | $20,000 | Apr 2024 |
| Six-month study grant to speed up a career pivot into AI safety and alignment research, with specific deliverables | $61,000 | Oct 2023 |
| 6-month salary for part-time independent research on LM interpretability for AI alignment | $7,700 | Jul 2023 |
| 6-month salary to produce 2 AI governance white papers and a series of case studies, with additional research costs | $31,600 | Apr 2023 |
| 3-month SERI MATS extension to study knowledge removal in language models | $12,000 | Jul 2023 |
| 6-month salary to transition to a career in AI safety while working on AI safety projects | $30,000 | Jan 2024 |
| Compute funds for a research paper introducing an instruction-following generalization benchmark | $1,500 | Apr 2023 |
| 9-month programme to help language and cognition scientists repurpose their existing skills for longtermist research | $5,000 | Jul 2024 |
| 11-month stipend for 1.5 FTEs and funding for other costs for TUTKE, an AI safety field-building organization in Finland | $73,333.33 | Jul 2024 |
| Compute costs for experiments to evaluate different scalable oversight protocols | $86,600 | Jan 2024 |
| 6-month salary to finish writing a book on international AI governance and three other smaller AI governance projects | $33,700 | Apr 2024 |
| Funding for Tristan to attend EAG and apply for grad school, aiming for an impactful policy role targeting x-risk reduction | $2,000 | Jan 2024 |
| 6-month salary for an AISC project and continuing independent mechanistic interpretability projects | $28,000 | Apr 2023 |
| 3-month (plus buffer to prepare project reports) part-time salary to upskill in biosecurity research and prioritisation. Courses include Infectious Disease Modelling by Imperial College London; projects include UChicago's Market Shaping Accelerator challenge | $3,138 | Apr 2023 |
| 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program | $30,000 | Jan 2024 |
| Retroactive funding for the GameBench paper | $9,072 | Apr 2024 |
| A podcast mainly themed around AI x-risk, aimed at a non-technical audience | $5,000 | Jan 2024 |
| ~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of its AI Interpretability Fellowship in Manila | $86,400 | Apr 2024 |
| 4-month stipend for upskilling in the economic governance of AI | $7,000 | Oct 2023 |
| 4 weeks of dev time to build a cryptographic tool enabling anonymous whistleblowers to prove their credentials | $15,000 | Apr 2023 |
| 6-month stipend for conducting AI safety research during the MATS 5.0 extension program and beyond | $38,688 | Jan 2024 |
| 5-month funding to continue upskilling in mechanistic interpretability post-SERI MATS, and to continue open projects | $21,989 | Jul 2023 |
| 6-month stipend to work on technical alignment research as part of the MATS 5.0 extension program | $40,000 | Jan 2024 |
| Retroactive grant to study Goodhart effects on heavy-tailed distributions | $29,760 | Jul 2023 |
| 6-month stipend for an unpaid internship focused on using theory/interpretability to increase the safety of AI systems | $37,120 | Jan 2024 |
| 9-month support for an in-depth YouTube channel about AI safety and how AI will impact us all | $27,000 | Jul 2024 |
| Funding for a 6-month AI strategy/policy research stay at CSER with Seán Ó hÉigeartaigh and Matthew Gentzel | $31,650 | Apr 2024 |
| 4-month stipend and expenses to create a benchmark for evaluating goal-directedness in language models | $60,000 | Jul 2024 |
| 6-month career transition and independent research in AI safety and risk mitigation | $85,000 | Jul 2024 |
| 4-month stipend for Cindy Wu to work on AI safety research | $5,000 | Apr 2023 |
| Two workshops on strategic communications around AI safety, focused on the AI safety community | $5,720 | Jul 2024 |
| 6-month salary to work on mech interp research with mentorship from Prof. David Bau | $41,000 | Jul 2023 |
| 6-month salary to scalably verify neural networks for RL and produce a human-to-superhuman scalable oversight benchmark | $35,000 | Jan 2024 |
| Research on how much language models can infer about their current user, and interpretability work on such inferences | $55,000 | Jan 2024 |
| 4-month stipend to research the mechanisms of refusal in chat LLMs | $40,000 | Jan 2024 |
| Virtual AI Safety Unconference 2024, a collaborative online event by and for researchers of AI safety | $10,000 | Jan 2024 |
| 4-month grant to conduct deceptive alignment evaluation research and explore control and mitigation strategies | $27,000 | Jul 2024 |
| Developing proposals for off-switch designs for AI, including policy games, rigorously evaluated for effectiveness, technical feasibility, and political viability | $40,000 | Jul 2024 |
| A fellowship for 3 fellows in synthetic biology, artificial intelligence, and neurotechnology to bridge policy and tech | $120,000 | Apr 2024 |
| One year of funding for the ACX meetup in Atlanta, Georgia | $5,000 | Apr 2023 |
| 7 months of continued coworking-space funding during an interpretability research project | $10,500 | Jan 2024 |
| Stipend for a master's thesis and paper on technical alignment research: mechanistic interpretability of attention | $25,491 | Apr 2023 |
| Organizing AI x-risk events with experts (e.g. Stuart Russell), politicians, and journalists to influence policymaking | $24,339 | Oct 2023 |
| 7-month stipend for organising AI Alignment Irvine (AIAI) | $16,337 | Jul 2024 |
| 6-month stipends to develop and apply a novel method for localizing information and computation in neural networks | $160,000 | Jul 2024 |
| 9-week stipend for two part-time researchers to write and publish a policy proposal: mandatory AI safety "red bonds" | $7,200 | Jul 2024 |
| 6-month stipend to continue independent interpretability research | $40,000 | Jan 2024 |
| 4-month stipend for a MATS extension on a mechanistic interpretability benchmark, plus a 2-month stipend for a career switch | $67,000 | Jan 2024 |
| WhiteBox Research: 1.9 FTE for 9 months to pilot a training program in Manila focused on mechanistic interpretability | $61,460 | Jul 2023 |
| 8-week stipend for a research project supervised by John Halstead, PhD, on US regulatory decision-making and frontier AI | $6,230 | Apr 2024 |
| 1-year stipend for independent research primarily on high-level interpretability | $70,000 | Apr 2024 |
| Stipend and expenses to run the second Athena mentorship program for gender-minority researchers in technical AI alignment | $80,000 | Jul 2024 |
| Conference publication of interpretability and LM-steering results | $40,000 | Apr 2023 |
| 1-year stipend to make videos and podcasts about AI safety/alignment, and build a community to help new people get involved | $121,575 | Jul 2023 |
| 12-month salary to set up a new org doing research and creating interventions to minimise lock-in risk | $10,000 | Oct 2024 |
| 1.5-year stipend for thorough investigation and analysis of AI lab scaling policies | $100,000 | Jan 2025 |
| 6-month SERI MATS London extension phase for continuing and scaling up the sparse coding project | $35,300 | Jul 2023 |
| 4 months of stipend for MATS extension work in London studying the safety implications of LLM self-recognition | $34,100 | Jan 2024 |
| Studying extensions of the AIXI model to reflective agents to understand the behavior of self-modifying AGI | $50,000 | Apr 2023 |
| Organizing the fifth Human-Aligned AI Summer School (HAAISS), a 4-day conference in Prague for 120 participants | $115,000 | Apr 2025 |
| 3-month MATS stipend to use singular learning theory to explain and control the development of values in ML systems | $17,500 | Jan 2024 |
| 6-month researcher stipend to explore the effect of chat fine-tuning on LLM capability elicitation | $55,660 | Jan 2024 |
| One year of operating expenses for a nonprofit that facilitates and amplifies Nick Bostrom's work | $150,000 | Jan 2025 |
| 6-month stipend for a small group of collaborators to continue research on the Agent Structure Problem | $60,000 | Jan 2024 |
| 4-month stipend for 3 people to create demonstrations of provably undetectable backdoors | $50,336 | Jan 2024 |
| Development of mathematical language for highly adaptive entities (having stable commitments, not stable mechanisms) | $30,000 | Apr 2024 |
| Researching neural net generalization on algorithmic tasks; upskilling in math relevant to singular learning theory | $20,000 | Jul 2024 |
| 4-month salary to continue work on AI control as a MATS extension | $30,000 | Jul 2024 |
| 6-month salary to build experience in AI interpretability research before PhD applications | $40,000 | Apr 2023 |
| 2-month funding to get into mechanistic interpretability and do 2-3 projects, then briefly learn related fields | $5,000 | Jul 2024 |
| Salary top-up for Timaeus employees and contractors | $100,000 | Jan 2024 |
| 6-month project (description pending) | $10,000 | Apr 2023 |
| 3-month relocation from Chad to London to work on Eliciting Latent Knowledge with Jake Mendel from Apollo Research | $8,500 | Jan 2024 |
| 6-month stipend for sparse autoencoder mech interp projects | $40,000 | Jan 2024 |
| 4-month stipend to continue work on AI control as a MATS extension | $30,000 | Jul 2024 |
| 12-month stipend and expenses for AI safety research (unlearning; modularity; probing long-term behaviour) | $80,000 | Apr 2024 |
| 6-month support for self-study and development in ML and AI safety, with the goal of producing an academic paper while working on the "Inducing Human-Like Biases in Moral Reasoning LMs" project run by AI Safety Camp | $1,739 | Apr 2023 |
| 6-month research extending the landmark Betley et al. emergent misalignment paper through fine-tuning experiments on bas | $5,200 | Apr 2025 |
| 1-year stipend to develop materials demonstrating an investigative procedure for advancing the art of rationality | $80,000 | Apr 2023 |
| Funding for having written AI safety distillation posts on the topic of membranes/boundaries | $4,500 | Oct 2023 |
| 4-6 month salary to do circuit-based mech interp on Mamba, as part of the MATS extension program | $60,000 | Jan 2024 |
| 4-month expenses for AI safety research on personas and sandbagging during the MATS 5.0 extension program | $30,087 | Jan 2024 |
| General support for a forecasting team | $6,000 | Oct 2023 |
| Support for Daniel Filan to produce 18 episodes of AXRP, the AI X-risk Research Podcast, which aims to increase in-depth understanding of potential risks from artificial intelligence | $44,802 | Apr 2024 |
| Year-long stipend to work as the primary maintainer of TransformerLens and implement large changes to the codebase | $90,000 | Apr 2024 |
| 5 months of funding for office space for collaboration on interpretability/model-steering alignment research | $30,000 | Apr 2023 |
| Travel support for research with the Nucleic Acid Observatory relating to biosecurity and GCBRs | $5,090 | Jul 2023 |
| 4-month wage for alignment upskilling: gaining research engineering skills (projects) and understanding current alignment agendas | $7,200 | Apr 2023 |
| 6-month stipend to continue independent AI alignment research from MATS 5.0 on situational awareness and deception | $55,000 | Jan 2024 |
| 6-month salary to continue developing as an AI safety researcher, including writing a review paper on goal misgeneralisation from the perspective of active inference and pursuing collaborative projects on collective decision-making systems | $6,500 | Apr 2023 |
| 6-month stipend to work on safe and robust reasoning via mechanistically interpreting representations | $30,000 | Apr 2024 |
| Developing a short fiction film at a top film school to spread accurate and emotive understanding of AI x-risk | $25,000 | Jan 2025 |
| 4-month stipend to continue work on AI control as a MATS extension | $30,000 | Jul 2024 |
| Funding to run a trial of a longtermist mentorship program similar to Magnify Mentoring but unrestricted | $10,500 | Apr 2023 |
| 8-month stipend during a job transition, to finish current projects (AI Goodharting, cooperative AI) and find a suitable next topic | $49,333.33 | Jul 2024 |
| 1-month literature review on in-context learning and its relevance to AI alignment | $6,000 | Jan 2024 |
| 4 weeks of expenses for a FAR Labs residency for a research group focusing on goal-directedness in transformer models | $13,000 | Apr 2024 |
| 6-month stipend to remove conditional bad behaviors from LLMs via a learned latent-space intervention | $40,000 | Jul 2024 |
| Creating an animated video essay explaining how AI could accelerate AI R&D and its implications for AI governance | $5,000 | Oct 2024 |
| A private online platform for research-sharing amongst the AI governance community | $125,000 | Jul 2024 |
| 6-month salary to build and enhance open-source mechanistic interpretability tooling for AI safety researchers | $50,000 | Apr 2023 |
| Support for Viktor Rehnberg's project identifying key steps in reducing risks from learned optimisation and working towards the solutions that seem most important, starting as part of the SERI MATS program | $19,248 | Apr 2023 |
| Four months of funding for a MATS 5.0 extension, working on improving methods in latent adversarial training | $23,100 | Jan 2024 |
| 6-month incubation program for technical AI safety research organizations | $122,507 | Oct 2023 |
| 4-month stipend to apply mechanistic interpretability to a real-world application: hallucinations | $60,000 | Jul 2024 |
| 3-month part-time salary to work on AI governance projects and activities | $6,000 | Jul 2023 |
| Funding for (academic/technical) AI safety community events in London | $8,000 | Apr 2023 |
| Cataloguing the history of US high-consequence pathogen regulations, evaluating their performance, and charting a way forward | $50,000 | Jan 2024 |
| 3-6 month stipend for a first full year as a research professor of CS at UT Austin, researching technical AI alignment | $50,000 | Apr 2024 |
| 6-month AI alignment internship stipend top-up | $10,000 | Apr 2024 |
| Travel funding for an early-career researcher to attend a workshop on biosecurity and AI safety | $1,800 | Jul 2024 |
| Experimentally testing generative AI's ability to persuade humans about hazardous topics | $115,000 | Jan 2024 |
| 6-month stipend for SAE-circuits | $40,000 | Jul 2024 |
| 6-month 1-FTE funding to train multi-objective RLAIF models and compare their safety performance to standard RLAIF | $42,000 | Oct 2023 |
| 3-month salary plus compute expenses to study and publish on shutdown evasion in LLMs and to use LLMs as tools for alignment | $13,000 | Apr 2023 |
| Compute for an experiment on how steganography in large language models might arise as a result of benign optimization | $2,000 | Oct 2023 |
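The summary statistics quoted at the top of this page (e.g. a \$25K median grant) are straightforward to sanity-check against rows like those above. A minimal illustrative sketch, using a handful of hypothetical `(description, amount)` pairs sampled from the table rather than the full dataset:

```python
from statistics import median

# Illustrative sample of grant records (description, amount in USD),
# drawn by hand from the table above; not the wiki's own tooling.
grants = [
    ("MATS extension: weight-based decomposition methods", 80_000),
    ("6-month stipend: transition to independent AI safety research", 40_000),
    ("3 months part-time: pathways to slowing AI", 5_000),
    ("Interpretability projects with mentors", 10_000),
    ("AI safety talent incubation (salaries and ops)", 272_800),
]

amounts = [amount for _, amount in grants]
print(f"n={len(amounts)}")
print(f"total=${sum(amounts):,}")
print(f"median=${median(amounts):,}")  # -> median=$40,000 for this sample
```

Note that the median is far more representative than the mean here: a single large organizational grant (like the \$272,800 incubation grant in this sample) pulls the mean well above what a typical individual researcher receives, which is why the page reports median grant size.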

Related Pages

Top Related Pages
- Analysis
- AI Watch
- Key Debates
- Technical AI Safety Research

Other
- Nick Beckstead
- Eli Lifland
- Helen Toner
- Paul Christiano
- Vipul Naik

Organizations
- QURI (Quantified Uncertainty Research Institute)
- Manifold (Prediction Market)
- Manifund
- Rethink Priorities

Concepts
- Funders Overview
- EA Funding Absorption Capacity