Longterm Wiki

Author

AnnaSalamon

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

This 2016 announcement marked CFAR's strategic pivot toward AI safety, reflecting the broader rationalist community's growing concern about existential risk and the belief that better collective reasoning is a key bottleneck for the field.

Forum Post Details

Karma
51
Comments
88
Forum
lesswrong
Forum Tags
Center for Applied Rationality (CFAR) · Project Announcement

Metadata

Importance: 45/100 · blog post · primary source

Summary

CFAR announced a strategic pivot to focus on AI safety and existential risk reduction, arguing that progress is bottlenecked by collective epistemology rather than awareness. The organization aims to improve individual reasoning and collaborative thinking among AI safety researchers, effective altruists, and rationalists, believing this offers the highest leverage for improving humanity's survival odds.

Key Points

  • CFAR reoriented its mission toward AI safety and existential risk, viewing rationality training as a high-leverage intervention for the field.
  • Progress on existential risk is seen as bottlenecked by collective epistemology—how well people reason together—rather than by lack of awareness or advocacy.
  • CFAR aims to serve AI safety researchers, effective altruists, and rationality-focused individuals by improving their individual and collaborative thinking skills.
  • The organization explicitly chose not to signal-boost existing AI safety models, instead focusing on helping people reason more rigorously about the problems.
  • This represents an indirect approach to AI safety: improving the cognitive infrastructure of the people working on it rather than directly producing technical research.

Cited by 1 page

Page | Type | Quality
Center for Applied Rationality | Organization | 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 7 KB
# CFAR’s new focus, and AI Safety
By AnnaSalamon
Published: 2016-12-03
A bit about our last few months:

*   We’ve been working on getting a simple clear mission and an organization that actually works.  We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
*   As part of that, we’ll need to find a way to be intelligible.
*   This is the first of several blog posts aimed at causing our new form to be visible from outside.  (If you're in the Bay Area, you can also come meet us at [tonight's open house](https://www.facebook.com/events/227971164302371/).) (We'll be talking more about the causes of this mission-change; the extent to which it is in fact a change, etc. in an upcoming post.)


Here's a short explanation of our new mission:

*   We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.
    
*   Also, we[1] believe such efforts are bottlenecked more by our collective epistemology, than by the number of people who [verbally endorse or act on](/lw/jb/applause_lights/) "AI Safety", or any other "[spreadable viewpoint](https://wiki.lesswrong.com/wiki/Guessing_the_teacher's_password)" [disconnected](/lw/l9/artificial_addition/) [from](/lw/sp/detached_lever_fallacy/) its [derivation](/lw/la/truly_part_of_you/).
    
*   Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think _[together](/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/)_.  And to do this among the relatively small sets of people tackling existential risk. 
    


To elaborate a little:


Existential wins and AI safety
------------------------------

By an “existential win”, we mean humanity creates a stable, positive future.  We care a heck of a lot about this one.


Our working model here accords roughly with the model in Nick Bostrom’s book [Superintelligence](https://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/ref=sr_1_1_twi_kin_1?s=books&ie=UTF8&qid=1480715153&sr=1-1&keywords=nick+bostrom+superintelligence).  In particular, we believe that if general artificial intelligence is at some point invented, it will be an [enormously big deal](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html).


(Lately, AI Safety is being discussed by everyone from [The Economist](http://www.economist.com/news/leaders/21650543-powerful-computers-will-reshape-humanitys-future-how-ensure-promise-outweighs) to [Newsweek](http://www.newsweek.com/artificial-intelligence-coming-and-it-will-wipe-us-out-if-were-not-careful-433506) to [Obama](https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/?mbid=social_fb) to an [open letter from eight thousand](http://futu

... (truncated, 7 KB total)
Resource ID: bb93f09b90d6582c | Stable ID: sid_z0INKWzwme