
Center for Applied Rationality

Type: Nonprofit organization (501(c)(3))
Founded: 2012
Location: Berkeley, California (now mostly remote)
Primary focus: Teaching rationality techniques through workshops and programs
Key people: Anna Salamon (President), Julia Galef, Andrew Critch, Michael Smith (co-founders)
Major funders: Open Philanthropy ($3.5M+), Survival and Flourishing Fund ($1.6M+)
Status: Active with reduced operations; resumed workshops in 2025 after hiatus

Official website: rationality.org
Wikipedia: en.wikipedia.org
LessWrong: lesswrong.com
EA Forum: forum.effectivealtruism.org

The Center for Applied Rationality (CFAR) is a nonprofit organization founded in 2012 that develops and teaches techniques to improve epistemic and instrumental rationality through immersive workshops, coaching programs, and curriculum development. The organization emerged from the rationality movement around Eliezer Yudkowsky’s LessWrong community and focuses on applying insights from cognitive science, behavioral economics, psychology, and decision theory to real-world decision-making and problem-solving.[1][2]

CFAR’s stated mission is “Developing clear thinking for the sake of humanity’s future,” with particular emphasis on addressing challenges related to existential risks, including AI safety.3 The organization has trained over 1,300 workshop alumni through approximately 60 flagship workshops held between 2012 and early 2020, with participants reporting an average satisfaction rating of 9.2 out of 10.[4][5] Beyond its core workshop programs, CFAR has contributed to the broader rationality and effective altruism communities through specialized programs, curriculum development, and connections to organizations working on AI alignment and existential risk reduction.

The organization has evolved significantly since its founding, transitioning from a full-time staff of approximately 12 to a mostly-remote operation with eight part-time curriculum developers and instructors as of September 2025.6 After a multi-year hiatus from regular programming, CFAR resumed mainline workshops in November 2025, testing new formats while continuing to refine its approach to teaching applied rationality.7

CFAR was founded in 2012 by Anna Salamon, Julia Galef, Michael Smith (also known as Valentine Smith), and Andrew Critch.[1][8] The organization’s origins trace to Anna Salamon’s work at the Machine Intelligence Research Institute (MIRI) in 2011, where she began experimenting with teaching rationality techniques.9 CFAR initially functioned as an extension of MIRI before becoming an independent nonprofit organization.9

The founders shared a common observation that intelligence, education, and good intentions were insufficient to guarantee sound decision-making. According to co-founder Julia Galef, they were motivated by the realization that “being smart, and being well educated and even being really well intentioned was far from a guarantee against making what turned out to be really stupid decisions.”9 All four founders brought strong backgrounds in mathematics, artificial intelligence, and science to the organization’s development.

CFAR officially began offering classes in 2012, developing a flagship workshop model that charged $3,900 for multi-day intensive programs.9 The organization held monthly workshops training diverse participants including scientists, police officers, teachers, entrepreneurs, and high school students.10 Workshop activity varied by year:11

  • 2013: 7 workshops
  • 2014: 9 workshops (peak year)
  • 2015: 4 workshops
  • 2017: 8 workshops (including specialized workshops for MIRI researchers and Effective Altruism Global participants)

Julia Galef served as CFAR’s initial president until 2016, when Anna Salamon assumed the role.1 Salamon has continued as president through at least 2022, with Jesse Liptrap and Michael Blume serving on the board of directors.1 The organization’s advisors have included Paul Slovic and Keith Stanovich.1

In 2019, CFAR spun off the European Summer Program on Rationality (ESPR) into a separate organization run by Jan Kulveit, marking a shift in how the organization managed related programs.11 By September 2025, CFAR had transitioned to a mostly-remote operation with a significantly reduced staff, theorizing that part-time work might help avoid organizational pitfalls while maintaining curriculum development quality.6

After a hiatus from regular programming beginning in early 2020, CFAR conducted an experimental mini-workshop in June 2025 at Arbor Summer Camp and resumed mainline workshops in November 2025, marking the beginning of a pilot program testing new workshop formats.[7][12]

CFAR’s primary educational delivery mechanism has been immersive multi-day workshops, typically running 4.5 days and costing $3,900 to $4,000.[4][13] These workshops emphasize three core pillars: applying critical thinking to real-world problems, having students practice skills rather than merely learn concepts, and building lasting habits.10

The organization has offered scholarship programs, including funding from Jaan Tallinn (Skype co-founder) for Estonian students.1 CFAR has also provided specialized training for organizations including Facebook and participants in the Thiel Fellowship.1

CFAR develops and refines rationality techniques by synthesizing insights from cognitive science, psychology, neuroscience, behavioral economics, mathematics, statistics, and game theory.14 The organization conducts empirical testing of techniques and invents new methods when academic research proves insufficient for practical application.14

Key techniques taught at CFAR workshops include:[14][15]

  • Double Crux: Improves collaboration and conceptualizing research questions
  • Goal Factoring: Addresses mismatches between goals and plans
  • Focusing: Models second-to-second cognition
  • Resolve Cycles: Builds motivation and action
  • Murphy-Jitsu: Prepares for obstacles through pre-mortems
  • Trigger-Action Plans: Improves habit formation and research practices
  • Urge Propagation: Enhances understanding of motivation and decision-making

According to CFAR, these techniques aim to bridge System 1 and System 2 cognition, helping participants work with emotions, reframe problems, and understand the modular nature of the mind.16

Beyond flagship workshops, CFAR has developed targeted programs for specific communities:

  • SPARC (Summer Program on Applied Rationality and Cognition): Annual summer programs, including events in 2019 and funding from Open Philanthropy ($304,000 over two years in 2016).11
  • MIRI-focused workshops: Specialized sessions for AI safety researchers, including the 2015 MIRI Summer Fellows Program (a 3-week training program for AI safety researchers).14
  • Alumni support: Six-week official follow-up programs and ongoing coaching/mentorship for workshop graduates.17

CFAR has collected extensive survey data from workshop participants, reporting several positive outcomes:14

  • Neuroticism decrease: statistically significant (measured post-workshop)
  • Self-efficacy increase: marked, though less significant than the neuroticism decrease (post-workshop)
  • Life satisfaction increase: 0.17σ (one year after participation)
  • Overall satisfaction: 9.2/10 average rating (post-workshop)

The organization’s 2017 Impact Report indicated that alumni reported higher impact through better motivation navigation and clearer thinking.18 However, these results are primarily based on internal surveys without independent replication or external validation.19
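To put the reported 0.17σ life-satisfaction figure in context, a quick standard-normal calculation shows how far such a shift would move a median respondent. This sketch assumes life satisfaction scores are roughly normally distributed, which CFAR's surveys do not establish:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# A 0.17 standard-deviation upward shift moves a median respondent
# from the 50th percentile to roughly the 57th.
percentile_after_shift = normal_cdf(0.17)  # ≈ 0.567
```

Under this assumption, the effect is real but modest: about a 7-percentile-point move, which is consistent with the article's caveat that the evidence comes from internal surveys rather than controlled studies.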

Contributions to AI Safety and Effective Altruism


CFAR has documented case studies of alumni contributions to existential risk reduction and AI safety work. Notable examples include:20

  • Scott Garrabrant: Joined MIRI in 2015 after participating in CFAR/MIRI programs; contributed to logical induction research
  • Nate Soares: MIRI researcher who credited techniques like Double Crux with improving research collaboration
  • Ben Hoffman: Contributed to effective altruism and rationality writing, as well as AI risk modeling

The organization has positioned itself as contributing to the social infrastructure and individual skill development of communities working on AI safety and existential risk reduction, though direct causal impact is difficult to attribute.6

Limitations and Criticisms of Impact Claims


CFAR’s impact evidence relies heavily on self-reported participant surveys and case studies rather than rigorous experimental design with control groups. A 2016 article in VICE noted that while CFAR’s workshops had measurable effects on participants, the organization’s approach had both strengths and flaws.1 The lack of recent external evaluations and reliance on internal data collection methods limit the strength of effectiveness claims.19

As of June 2022, CFAR had received substantial funding from organizations aligned with effective altruism and existential risk reduction:1

  • Open Philanthropy: Over $3.5 million
  • $1,035,000 grant in 2016 (two years) for operational improvements and research.21
    • Multi-year institutional grant renewed in 2018, covering 2018–2019.22
  • Survival and Flourishing Fund: Over $1.6 million
  • Effective Altruism Funds: Over $300,000
    • $150,000 from Long-Term Future Fund in April 2019 (addressing funding shortfall after 2018 controversy)11
    • Additional $150,000 recommended in August 201911

CFAR received substantial funding from FTX before the cryptocurrency exchange’s collapse in November 2022. Between March and September 2022, $3,405,000 was transferred from the FTX Foundation to CFAR, with an additional $1,500,000 transferred on October 3, 2022, in ten separate transactions.23 Following FTX’s collapse, FTX debtors required CFAR to return approximately $5 million.23

As of December 2019 or 2020, CFAR reported approximately $1.4 million in total liquidity, including $650,000 in cash, $575,000 in expected grants, and $215,000 in accounts receivable.24 The organization’s total assets have been reported at $23,039,646, though this figure may include the venue property acquired in 2018.25

In 2018, CFAR’s fundraising efforts (including the Open Philanthropy renewal) totaled over $2.5 million and enabled the organization to make a down payment on a permanent venue, reducing costs and expanding program capacity.22

Connection to AI Safety and Existential Risk


CFAR is categorized as an existential risk organization with explicit connections to existential risk from artificial intelligence.1 The organization originated from the rationality movement around LessWrong, which pioneered discussions of AI alignment and existential risks.1

While CFAR does not operate explicit AI alignment research programs, the organization’s focus on improving individual and collective reasoning capacity is positioned as supporting the broader AI safety ecosystem. The rationality techniques taught at CFAR workshops—such as Bayesian reasoning, debiasing methods, and systematic decision-making frameworks—are presented as relevant to challenges in AI safety work, including addressing scope insensitivity and improving collaborative research practices.[26][27]

Co-founder Andrew Critch has been involved in multiple AI safety organizations beyond CFAR, including the Center for Human-Compatible AI and FAR.AI, illustrating the overlap between CFAR’s rationality-focused mission and AI safety work.1 The organization has conducted specialized workshops for MIRI researchers and other AI safety communities, though the scale and ongoing nature of these programs is unclear given CFAR’s reduced operations.14

Handling of Abuse Allegations (Brent Case)


In 2018-2019, CFAR faced significant criticism for its handling of allegations against a community member known as Brent (likely Brent Hildebrand). In January 2018, CFAR’s semi-independent investigation panel, ACDC, received allegations of physical, sexual, and emotional abuse from one of Brent’s former partners.28 Despite these allegations, ACDC recommended against banning the individual in April 2018, and CFAR leadership followed this recommendation.28

CFAR later publicly acknowledged serious failures in this case, stating they “had sufficient evidence long beforehand to notice that Brent might be harmful” and that their “failure to investigate this hypothesis explicitly was a mistake.”28 The organization allowed Brent to attend and assist at several CFAR events, which “afforded him social legitimacy and caused significant harm in expectation.”28 Staff members reported feeling manipulated by Brent, and CFAR acknowledged that as an organization “which exists to promote epistemic integrity,” they should be held to “an especially high bar” on such matters.28

Following public criticism in September 2018, CFAR disbanded ACDC, determining that “the panelists were in over their heads.”28 The organization issued a detailed public apology acknowledging that their safety procedures and investigation processes were inadequate and that the harm was preventable.28

The Brent controversy led to CFAR not holding a fundraiser in 2018, creating a funding shortfall that was later addressed by emergency grants from the Long-Term Future Fund in 2019.11

“Zizians” Incident and Associated Violence


On November 15, 2019, four individuals identifying as “Zizians” (a rationalist splinter group wearing Guy Fawkes masks) were arrested for blockading a CFAR wooded-retreat event in Sonoma County.1 The protesters accused CFAR’s leader of discriminating against trans women and failing to develop novel rationality techniques.1

According to Wikipedia, alleged Zizian members were later linked to serious violent crimes: an attempted murder in November 2022, and four murders in 2022 and 2025, including the killing of a U.S. Border Patrol officer in a shootout, with two of the alleged members dying violently.1 While these crimes occurred after the 2019 protest and the connection between the protesters and the later violent incidents is not fully detailed in available sources, the association has contributed to negative perceptions of elements within the broader rationality community.

In a 2025 interview with NBC News, CFAR president and co-founder Anna Salamon stated: “We didn’t know at the time, but in hindsight we were creating conditions for a cult.”23 Salamon characterized the organization’s implicit messaging as suggesting that “human thinking is flawed and biased,” with the exception that “ours [is unique]. We have a unique method of seeing things clearly”—representing what she described as an overestimation of their own competence.23

Critics have described CFAR workshops as “vaguely culty,” blending cognitive science with elements reminiscent of self-help movements and featuring rhetoric about humans as fundamentally flawed beings needing systematic fixes.13 The high cost of workshops ($3,900 for 4 days) has also drawn criticism, with some comparing CFAR’s claims about cognitive improvement to Lumosity’s claims that resulted in an FTC fine for making unfounded assertions.13

Effectiveness and Institutional Incentives


Some rationality community members have criticized CFAR for limited progress in creating a “real art of rationality” that aids meaningful intellectual advancement. Critics cite “model gaps,” impure motives (such as prioritizing AI safety career placement and deference to organizations like MIRI over pure rationality development), and institutional incentives that may distort the organization’s efforts.29 A perceived tension also exists between epistemic rationality (exemplified by the LessWrong principle “Politics is the Mind-Killer”) and real-world decision-making contexts in which instructors may hold preconceptions about desirable outcomes.29

Broader Effective Altruism Funding Structure


CFAR’s funding has been part of a larger effective altruism ecosystem characterized by centralized funding from major grantmakers like Open Philanthropy. According to a 2024 analysis, this structure created “strong incentives to align with funder worldviews to get money,” turning what was originally a niche intellectual community into a career track with concentrated power among funders while providing “thin governance/guardrails.”23 Bloomberg characterized the broader EA movement as featuring “a culture of free-flowing funding with minimal accountability focused on selective universities.”23

Relationship to Rationality and EA Communities


CFAR occupies a central position in the rationality movement and has strong ties to the effective altruism community. The organization originated directly from the LessWrong community and the broader rationality movement associated with Eliezer Yudkowsky and the blog Overcoming Bias.[1][27]

CFAR has contributed to building social infrastructure within rationality and EA communities through workshops, alumni networks, and partnerships with organizations like Effective Altruism Global.30 The organization has served as a gathering point for “rationality geeks” to exchange ideas on cognitive improvement and has helped connect individuals interested in existential risk reduction, AI safety, and effective altruism.10

Community-led initiatives have emerged from CFAR’s alumni network. The Applied Rationality Unconference series, organized by CFAR alumni, began with a first retreat in 2023, followed by a second in November 2024 that opened participation beyond CFAR alumni.31 Multiple additional events were organized throughout 2025, including the “Blackpool Applied Rationality Unconference,” representing unofficial, participant-organized versions of CFAR’s official workshops.31

As of recent reports, CFAR appears to have reduced its role as a central organizing force within the rationality community, though “shards of the organisation still help with AIRCS workshops.”32 According to LessWrong discussions, there is no longer “a concentration of force working towards a public accessible rationality curriculum.”32 The organization’s shift to part-time remote operations and multi-year hiatus from regular programming has reduced its direct community presence, though the resumption of workshops in 2025 may signal renewed engagement.[6][7]

CFAR has publicly acknowledged several organizational failures and challenges throughout its history:

In 2012-2013, CFAR’s early attempts at exponential growth of workshops failed due to small-organization logistics challenges.17 Expanding the instructor base risked what the organization termed “failure mode #1”—enthusiastic but inadequately trained instructors teaching poor-quality classes and potentially damaging rationality’s reputation.17 Initial scaling efforts with 2011-era knowledge produced subpar lessons that required significant iteration and refinement.17

Early workshops (circa May 2012) were criticized as too short and intense, overwhelming participants’ capacity to digest ideas.17 CFAR responded by developing informal Skype chat follow-ups, which were later formalized into a six-week official support program.17

As of 2012 and continuing forward, CFAR has faced challenges in scaling “heavy-duty rationality skills” into critical fields like medicine, education, government, or software development.17 With capacity for approximately 20 participants per month in workshop programs, the organization’s direct reach has remained limited relative to the scale of impact in these sectors.17 Health care professionals have been notably absent from workshops despite the sector’s potential need for improved decision-making frameworks.17

In 2013, CFAR highlighted the Effective Fundraising Project, a two-person nonprofit startup founded that year to write grants for effective charities, as an example of “wise failure”; the “impressively failed” project shut down gracefully after six months.[33][34] While CFAR presented this as a case study in productive failure, it illustrates challenges in expanding the organization’s impact model beyond direct workshop delivery.

The organization’s transition from approximately 12 full-time staff to eight very-part-time instructors and curriculum developers represents a significant scaling back of operations.6 While CFAR has theorized that part-time work may help avoid organizational pitfalls like burnout or “going off the rails,” this shift also reflects challenges in maintaining a sustainable full-time organizational model.6

As of September 2025, CFAR operates primarily as a remote organization with eight part-time curriculum developers and instructors, plus part-time staff for accounting and venue management.6 This represents a significant reduction from the organization’s earlier staffing levels of approximately 12 full-time employees.6

After a hiatus from regular programming beginning in early 2020, CFAR conducted several experimental programs in 2025:[7][12]

  • June 2025: Experimental mini-workshop at Arbor Summer Camp, teaching rationality material alongside other content. This pilot project expanded from a single instructor to multiple instructors and marked the first “workshop-like” content CFAR had presented in several years.12
  • November 2025: First mainline workshop since 2022, representing the beginning of a pilot program testing new workshop formats.7

According to CFAR’s updates, the June 2025 mini-workshop energized staff to pursue further programming and provided valuable learning experiences for curriculum refinement.12

Anna Salamon continues to serve as CFAR’s president as of the most recent updates, with Jesse Liptrap and Michael Blume on the board of directors.1 The organization maintains its 501(c)(3) nonprofit status and continues to operate with a focus on curriculum development and testing techniques through workshops and online sessions.6

CFAR has described itself as “low-key” in its hiring approach, being selective about adding new part-time team members while focusing on iterative refinement of rationality techniques.6 The organization runs seminars, visitor programs, and tests techniques on volunteers as part of its ongoing curriculum development work.30

Several important questions about CFAR’s effectiveness, impact, and future remain uncertain or inadequately documented:

  1. Causal impact on AI safety: While CFAR has documented case studies of alumni who work in AI safety, the causal contribution of CFAR training to their work is difficult to isolate from other factors like selection effects and broader community influences.

  2. Effectiveness of specific techniques: The organization’s reported outcomes rely primarily on self-reported participant surveys without independent replication or rigorous experimental designs with control groups. The extent to which specific rationality techniques produce lasting behavioral and cognitive changes remains empirically undervalidated.

  3. Scalability limitations: CFAR’s challenges in expanding beyond workshop-based delivery and reaching critical professional domains (medicine, policy, education) raise questions about whether the organization’s model can achieve broad societal impact.

  4. Organizational sustainability: The transition to part-time remote operations and multi-year hiatus from regular programming may reflect either a strategic refinement of approach or ongoing challenges in maintaining a sustainable organizational model.

  5. Community influence measurement: While CFAR clearly influenced the rationality and EA communities, the magnitude and longevity of this influence—particularly after reduced operations—is difficult to quantify.

  6. Future trajectory: Whether the 2025 workshop resumption represents a sustained return to programming or a limited pilot effort remains to be seen, as does the organization’s long-term strategy for curriculum development and community engagement.

  1. Wikipedia - Center for Applied Rationality
  2. CFAR Press Kit
  3. CFAR - About Mission
  4. CFAR Official Website
  5. VICE - Center for Applied Rationality
  6. CFAR - About Mission (September 2025)
  7. CFAR Resources Updates
  8. EA Forum - Center for Applied Rationality
  9. VICE - Center for Applied Rationality (Founding)
  10. YouTube - CFAR Overview
  11. Timeline of Center for Applied Rationality
  12. CFAR - June 2025 Experimental Miniworkshop
  13. SFist - Debug My Brain
  14. CFAR 2016 Case Studies
  15. Effective Altruism - EA Global 2018 CFAR Workshop
  16. Ben Kuhn - CFAR Workshop Review
  17. LessWrong - CFAR A Year Later
  18. CFAR 2017 Impact Report
  19. VICE - Center for Applied Rationality (Impact)
  20. CFAR 2016 Case Studies
  21. CFAR - Grant from Open Philanthropy Project
  22. CFAR - Fundraising and Leadership Updates
  23. AI Panic News - The Rationality Trap
  24. CFAR Financial Overview
  25. Instrumentl - CFAR 990 Report
  26. LessWrong - CFAR’s New Focus and AI Safety
  27. Andrew Critch - CFAR
  28. CFAR - Mistakes Regarding Brent
  29. LessWrong - Comment on Why CFAR Didn’t Get Far
  30. Cause IQ - Center for Applied Rationality
  31. LessWrong - Blackpool Applied Rationality Unconference 2025
  32. LessWrong - What is Going on with CFAR
  33. CFAR - An Impressive Failure
  34. CFAR Updates Archive