Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Giving What We Can

This page is aimed at donors and general audiences rather than technical researchers, making it a useful introductory resource for understanding why AI safety is considered a philanthropic priority within the effective altruism community.

Metadata

Importance: 42/100 · homepage · educational

Summary

Giving What We Can presents AI as a major global catastrophic risk and makes the case for charitable giving toward AI safety organizations. The page outlines why advanced AI poses existential and catastrophic risks, highlights leading organizations working on the problem, and provides guidance for donors interested in supporting AI safety efforts.

Key Points

  • Frames AI risk within the broader context of global catastrophic risks, arguing misaligned or misused AI could cause large-scale or existential harm
  • Highlights top recommended organizations for AI safety donations, including technical and policy-focused groups
  • Explains the difference between near-term AI harms and longer-term existential risk scenarios to help donors understand the landscape
  • Argues that AI safety is relatively neglected and tractable, making philanthropic contributions especially high-impact
  • Provides an accessible, non-technical introduction to AI risk for a general philanthropic audience

Cited by 2 pages

Page               | Type         | Quality
Giving Pledge      | Organization | 68.0
Giving What We Can | Organization | 62.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 17 KB
Beneficial artificial intelligence · Giving What We Can


 Why is ensuring beneficial AI important? 
 What is the potential scale of AI's impact? 
 Transformative AI 
 AI as an existential risk 
 Is promoting beneficial AI neglected? 
 Is promoting beneficial AI tractable? 
 How can we promote beneficial AI? 
 Technical challenge: Ensuring AI systems are safe 
 Political challenge: Promoting beneficial AI governance 
 Why might you not prioritise promoting beneficial AI? 
 What are some charities, organisations, and funds trying to promote beneficial AI? 
 How else can you help? 
 Learn more 
 Our research 
 Your feedback 
 Artificial intelligence (AI) might be the most important technology we ever develop. Ensuring it is safe and used beneficially is one of the best ways we can safeguard the long-term future.

 

 AI is already incredibly powerful: it's used to decide who receives welfare, whether a loan is approved, or whether a job applicant receives an interview (which may even be conducted by an AI).

 It's also a research tool. In 2020, an AI system called AlphaFold made a "gargantuan leap" towards solving problems in protein folding that scientists have been working on for decades.

 Despite these impressive accomplishments, AIs don't always do what we want them to. When OpenAI trained an agent to play CoastRunners, they rewarded it for increasing its points, expecting to incentivise it to finish the race as fast as possible. As you can see in the video below, the AI instead realised it could achieve a higher score by repeatedly hitting the same target, never crossing the finish line.

 Unfortunately, unintended consequences like these are not limited to amusing failures in outdated video games. [0] Amazon used an AI to screen resumés, thinking this would increase the fairness and efficiency of their hiring process. Instead, they discovered the AI was biased against women. It penalised resumés containing words like "women's" and "netball," while favouring language more frequently used by men, such as "executed" and "captured." This was not intended, but that may be of little comfort to the women whose applications were rejected because of their gender.

 Ensuring AI is used to benefit everyone is already a challenge, and it's critical we get it right. As AI becomes more powerful, so does its scope for affecting our economy, politics, and culture. This has the potential to be either extremely good, or extremely bad. On the one hand, AI could help us make advances in science and technology that allow us to tackle the world's most important problems. On the other hand, powerful but out-of-control AI systems ("misaligned AI") could result in disaster for humanity. Given the stakes, working towards beneficial AI is a high-priority cause that we recommend supporting, especially if you care about safeguardin

... (truncated, 17 KB total)
Resource ID: b5e1d26038b00571 | Stable ID: sid_TRLO3OxGn5