Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: European Union

This is a key EU policy document that shaped European AI regulation and influenced global AI ethics frameworks; relevant for understanding how safety principles translate into governance structures and regulatory requirements.

Metadata

Importance: 72/100 · guidance document · primary source

Summary

The EU High-Level Expert Group on AI published Ethics Guidelines for Trustworthy AI, establishing a framework for AI systems that are lawful, ethical, and robust. The guidelines introduce seven key requirements for trustworthy AI including human agency, privacy, transparency, and accountability. This document became foundational to the EU's broader AI regulatory agenda, influencing the EU AI Act.

Key Points

  • Defines 'Trustworthy AI' through three pillars: lawful, ethical, and robust AI systems.
  • Introduces seven requirements: human agency, technical robustness, privacy, transparency, diversity, societal wellbeing, and accountability.
  • Provides a practical assessment checklist for organizations to evaluate their AI systems against these requirements.
  • Represents the EU's soft-law approach to AI governance before the binding EU AI Act was developed.
  • Influential in shaping international norms around responsible AI development and deployment.
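
The "practical assessment checklist" mentioned above can be pictured, very loosely, as a per-requirement yes/no review. The sketch below is purely hypothetical (the names and structure are illustrative, not taken from the document); the guidelines' actual assessment list, later formalized as the ALTAI tool, is a detailed questionnaire rather than code:

```python
# Illustrative sketch only: not the official assessment list.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def unmet_requirements(answers: dict) -> list:
    """Return the requirements a reviewed system has not yet satisfied.

    `answers` maps a requirement name to True once the organization judges
    it addressed; missing entries count as unmet.
    """
    return [r for r in REQUIREMENTS if not answers.get(r, False)]

# Example: a system that has addressed everything except transparency.
answers = {r: True for r in REQUIREMENTS}
answers["Transparency"] = False
print(unmet_requirements(answers))
```

The point of the structure is that trustworthiness is assessed requirement by requirement, with any unmet requirement flagged for remediation, rather than as a single pass/fail score.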

Cited by 1 page

Page                        Type   Quality
AI Governance and Policy    Crux   66.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 5 KB
Ethics guidelines for trustworthy AI | Shaping Europe’s digital future 
 Ethics guidelines for trustworthy AI 

 REPORT / STUDY
 Publication 08 April 2019
 
 On 8 April 2019, the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. This followed the publication of the guidelines' first draft in December 2018, which received more than 500 comments through an open consultation.

 According to the Guidelines, trustworthy AI should be:

 (1) lawful - respecting all applicable laws and regulations

 (2) ethical - respecting ethical principles and values

 (3) robust - both from a technical perspective and with regard to its social environment

 Download the Guidelines in your language below: 

 BG | CS | DE | DA | EL | EN | ES | ET | FI | FR | HR | HU | IT | LT | LV | MT | NL | PL | PT | RO | SK | SL | SV 

 The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

 
 Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches

 Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, with a fallback plan in case something goes wrong, as well as accurate, reliable and reproducible. Only in this way can unintentional harm be minimized and prevented.

 Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.

 Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

 Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. To foster diversity, AI systems should be accessible to all, regardless of any disability, and should involve relevant stakeholders throughout their entire life cycle.

 Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured

... (truncated, 5 KB total)
Resource ID: a306d20a4548ab9a | Stable ID: NTQxZDViOT