Longterm Wiki

AlgorithmWatch – NGO for Algorithmic Accountability and AI Ethics

web · algorithmwatch.org/en/

AlgorithmWatch is a prominent European civil society voice on AI accountability; useful for tracking NGO perspectives on AI governance, EU regulation, and near-term harms as a counterpoint to safety-focused longtermist discourse.

Metadata

Importance: 42/100

Summary

AlgorithmWatch is a Berlin/Zurich-based NGO that investigates the societal impact of algorithms and AI, focusing on justice, human rights, democracy, and sustainability. It publishes research, position papers, and investigative reporting on topics such as AI discrimination, platform accountability, and the risks of generative AI. The organization advocates for regulatory frameworks and responsible AI use over speculative AGI narratives.

Key Points

  • Publishes investigative research and guidelines on responsible use of generative AI tools like ChatGPT, Claude, and Gemini.
  • Advocates for algorithmic accountability grounded in present harms (discrimination, rights violations) rather than speculative longtermist or AGI framings.
  • Critiques platforms like X for enabling AI-generated non-consensual sexual imagery and calls on EU regulators to enforce the Digital Services Act.
  • Explicitly positions against both uncritical AI optimism and existential-risk framings, arguing real accountability questions are being sidelined.
  • Provides public-facing policy analysis and civil society perspectives on AI governance in the EU and globally.

Cited by 1 page

Page | Type | Quality
AI-Induced Cyber Psychosis | Risk | 37.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 7 KB
AlgorithmWatch

 AlgorithmWatch is a non-governmental, non-profit organization based in Berlin and Zurich. We fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, human rights, democracy, and sustainability but strengthen them.

 

 
 

 
 
 
 Publication 

 January 14, 2026

 AlgorithmWatch’s guidelines to use generative AI responsibly

Whether you use ChatGPT, Claude, Gemini, Copilot, or Perplexity, generative AI poses massive problems: many results are inaccurate or politically problematic, and the systems' energy and water consumption is enormous. At the same time, these tools have become an integral part of everyday life. AlgorithmWatch has developed guidelines to help people use generative AI responsibly.

 Read more 

 Read in German 

 Hanna Barakat & Archival Images of AI + AIxDESIGN / Textiles and Tech 1 / Licenced by CC-BY 4.0 
 
 Blog 

 January 8, 2026

 #discrimination #dsa #eu #platforms 

 Sexualized images on X: What we are doing to stop them and what we expect from the EU

 X’s Grok chatbot is at the center of yet another scandal after generating non-consensual pictures of real people, including children, in bikinis. But the problem of non-consensual AI-generated sexual images on X goes much further than Grok — and X blocked our research into the problem. The EU Commission needs to step up its game to protect people from this kind of violence.

 Read more 

 Read in German 

 Dominika Čupková & Archival Images of AI + AIxDESIGN / 19th Century Shallowfake P0rn / Licenced by CC-BY 4.0 
 
 Position 

 September 29, 2025

 Position Paper

 Focus Attention on Accountability for AI – not on AGI and Longtermist Abstractions

 Many tech CEOs and scientists praise AI as the savior of humanity, while others see it as an existential threat. We explain why both framings fail to address the real questions of responsibility.

 Read more 

 Rose Willis & Kathryn Conrad / A Rising Tide Lifts All Bots / Licenced by CC-BY 4.0 
 
 Story 

 August 9, 2025

 #ai #dataworkers 

 The AI Revolution Comes With the Exploitation of Gig Workers 

 Business process outsourcing (BPO) companies manage the human work behind AI development. However, they face accusations of worker exploitation, underpayment and wage theft. Big tech companies benefit from this work model.

 Read more 

 Read in German 

 Kathryn Conrad & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/ 
 
 Blog 

 #discrimination 

 Report algorithmic discrimination!

 When we apply for credit, apartments, or jobs online, companies increasingly use automated systems to process our data and make decisions that impact our daily lives. The problem: Such systems are not neutral and can reproduce inequalities and assumptions about people that already exist in society. What can we do to ensure that the use of non-transparent automated systems does not lead to peopl

... (truncated, 7 KB total)
Resource ID: 598754bad5ccad69 | Stable ID: N2UzNzQ0ZD