AI & Human Rights – EPIC – Electronic Privacy Information Center
epic.org · epic.org/issues/ai/
EPIC's AI & Human Rights hub covers advocacy for transparent, accountable AI policy, relevant to AI safety governance discussions around bias, surveillance, and regulatory frameworks.
Metadata
Importance: 38/100 · homepage
Summary
EPIC's AI & Human Rights page serves as a hub for the organization's advocacy on AI transparency, accountability, and regulation. It covers areas including AI in law enforcement, commercial and government AI use, chatbots, risk assessments, and screening tools. EPIC pushes for protective regulations with private rights of action to address harms from opaque AI systems.
Key Points
- AI systems are deployed opaquely across government and private sectors, often without public awareness of their decision-making role.
- EPIC advocates for transparent, equitable AI policy including oversight mechanisms and private rights of action.
- Key focus areas include AI in law enforcement, chatbot accountability, risk assessments, and discriminatory screening/scoring tools.
- U.S. AI regulation is described as insufficient, with more progress seen at state and international levels.
- EPIC critiques frameworks like the White House AI policy for protecting companies rather than individuals.
1 FactBase fact citing this source
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 4 KB
AI & Human Rights
Artificial intelligence and machine learning systems are being deployed in opaque and unaccountable ways that can harm individuals and exacerbate biases. EPIC advocates for transparent, equitable, and commonsense AI policy and regulations.
Background
Artificial Intelligence (AI) systems are used across private sectors and government entities. Defining AI is a challenge given the huge variance in technological sophistication across systems, but there is no doubt that AI use and development are expanding rapidly.
The use of AI is poorly regulated in the United States, and the inner workings of the systems are often opaque. In many cases, members of the public are not even aware that AI systems are being used to make decisions that impact their lives.
It is essential to establish regulations that recognize the harms posed by AI systems and require transparency, oversight, and accountability for both commercial and government uses of AI. Further, regulations must create opportunities for individuals to enforce protective rules with private rights of action.
Areas of Focus Within AI & Human Rights
Get more detail on issues related to AI & Human Rights
AI in Law Enforcement
AI is used widely and opaquely by federal, state, and local law enforcement, both directly in the criminal legal system and in ways that feed the criminal justice cycle in the U.S.
AI Policy
Substantial protective AI regulation is still sorely lacking in the U.S., but there has been movement in state legislatures and internationally, as well as policies and frameworks embraced by the federal government.
Chatbots
Chatbots are products, not people. They are created and released by companies that must be held accountable when their products harm people, just like a charger that starts a fire or faulty airbags that fail to deploy in a car crash.
Commercial AI Use
AI and automated decision-making systems (ADS) are used widely in commerce – in health care, education, hiring, housing, and more.
Government AI Use
Governments at every level have adopted AI and ADS to assist in law enforcement, benefit distribution, education, and more.
Risk Assessments
Risk assessments are a key accountability
... (truncated, 4 KB total)
Resource ID:
kb-f11ac1d05fca5207