Longterm Wiki

Authenticated Delegation and Authorized AI Agents

paper

Authors

Tobin South·Samuele Marro·Thomas Hardjono·Robert Mahari·Cedric Deslandes Whitney·Dazza Greenwood·Alan Chan·Alex Pentland

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Relevant to AI governance and safety practitioners designing infrastructure for agentic AI systems; proposes concrete technical standards for accountability and access control as autonomous agents proliferate.

Paper Details

Citations: 33 (6 influential)
Year: 2025

Metadata

Importance: 62/100 · arXiv preprint · primary source

Abstract

The rapid deployment of autonomous AI agents creates urgent challenges around authorization, accountability, and access control in digital spaces. New standards are needed to know whom AI agents act on behalf of and guide their use appropriately, protecting online spaces while unlocking the value of task delegation to autonomous agents. We introduce a novel framework for authenticated, authorized, and auditable delegation of authority to AI agents, where human users can securely delegate and restrict the permissions and scope of agents while maintaining clear chains of accountability. This framework builds on existing identification and access management protocols, extending OAuth 2.0 and OpenID Connect with agent-specific credentials and metadata, maintaining compatibility with established authentication and web infrastructure. Further, we propose a framework for translating flexible, natural language permissions into auditable access control configurations, enabling robust scoping of AI agent capabilities across diverse interaction modalities. Taken together, this practical approach facilitates immediate deployment of AI agents while addressing key security and accountability concerns, working toward ensuring agentic AI systems perform only appropriate actions and providing a tool for digital service providers to enable AI agent interactions without risking harm from scalable interaction.

Summary

This paper introduces a framework for secure, auditable delegation of authority to autonomous AI agents, extending OAuth 2.0 and OpenID Connect with agent-specific credentials. It addresses authorization, accountability, and access control challenges by translating natural language permissions into formal access control configurations, enabling organizations to deploy AI agents with verifiable, restricted scopes of action.
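As a rough sketch of what such an extended credential might look like, the snippet below builds an HS256-signed JWT whose payload adds agent-delegation claims on top of standard OpenID Connect ID token fields. The claim names (`agent_id`, `delegator`, `scope`) and all values are illustrative assumptions for this wiki page, not fields specified by the paper.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_delegation_token(secret: bytes) -> str:
    """Build an HS256-signed JWT carrying hypothetical agent-delegation claims.

    The paper proposes extending OAuth 2.0 / OpenID Connect tokens with
    agent-specific credentials and metadata; the exact claims below are
    made up for illustration.
    """
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": "https://idp.example.com",        # identity provider (assumed)
        "sub": "user-1234",                      # the human principal
        "agent_id": "agent-5678",                # delegated AI agent (hypothetical claim)
        "delegator": "user-1234",                # anchor for the chain of accountability
        "scope": "flights:search flights:book",  # restricted permissions
        "iat": now,
        "exp": now + 900,                        # short-lived: 15 minutes
    }
    signing_input = (
        b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)


token = sign_delegation_token(b"demo-secret")
```

Because the result is an ordinary JWT, existing OAuth 2.0 resource servers could verify and inspect it with standard tooling, which is the compatibility property the paper emphasizes.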

Key Points

  • Extends existing OAuth 2.0 and OpenID Connect protocols with agent-specific credentials and metadata to maintain compatibility with current web infrastructure.
  • Proposes a method for translating natural language permissions into auditable access control configurations for flexible yet verifiable agent scoping.
  • Maintains clear chains of accountability so human principals can track and verify what actions AI agents perform on their behalf.
  • Aims for immediate practical deployability while addressing key security concerns for digital service providers interacting with autonomous agents.
  • Addresses the risk of scalable harm from AI agents by enabling service providers to gate and restrict agentic interactions at the protocol level.
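The translation step in the second bullet can be sketched as follows. The paper leaves the translation mechanism open (for example, model-assisted); this deterministic keyword mapper and the scope names are illustrative assumptions, meant only to show the shape of an auditable natural-language-to-scope pipeline with deny-by-default enforcement.

```python
# Toy sketch: map a natural-language delegation into an auditable scope list,
# then enforce it at request time. The phrase table and scope names are
# hypothetical, not the paper's specification.

PHRASE_TO_SCOPES = {
    "search for flights": ["flights:search"],
    "book flights": ["flights:search", "flights:book"],
    "read my email": ["mail:read"],
}


def translate_permission(instruction: str) -> list[str]:
    """Return the scopes granted by the instruction (kept for the audit log)."""
    scopes: list[str] = []
    text = instruction.lower()
    for phrase, granted in PHRASE_TO_SCOPES.items():
        if phrase in text:
            scopes.extend(s for s in granted if s not in scopes)
    return scopes


def is_authorized(granted: list[str], requested_action: str) -> bool:
    """Deny-by-default check: the agent may only act inside its granted scopes."""
    return requested_action in granted


granted = translate_permission("You may search for flights for my holiday")
# granted covers searching only, so a booking attempt is refused:
booking_allowed = is_authorized(granted, "flights:book")
```

The point of the intermediate scope list is auditability: a free-form instruction is reduced to a small, reviewable configuration before any agent action is permitted.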

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Multi-Agent Safety | Approach | 68.0 |

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 98 KB
Authenticated Delegation and Authorized AI Agents

Tobin South · Samuele Marro · Thomas Hardjono · Robert Mahari · Cedric Deslandes Whitney · Dazza Greenwood · Alan Chan · Alex Pentland
 
 Machine Learning, ICML
 
 
 
 
 
 
 1 Introduction

 
 Agentic AI systems, also referred to as AI assistants or simply ‘agents’, are AI systems that can pursue complex goals with limited direct supervision on behalf of a user (Gabriel et al., 2024; Chan et al., 2024a; Shavit et al., 2023; Chan et al., 2023; Kenton et al., 2023), including by interacting with a variety of external digital tools and services (Nakano et al., 2021; Lieberman, 1997; Fourney et al., 2024). For example, AI agents given a prompt to book travel arrangements for a holiday may browse the web for recommendations, search for flights via APIs, or message an airline agent in natural language via chat services to arrange a booking. Such communications could even extend to AI agent negotiations (Abdelnabi et al., 2023) and other multi-agent contexts.

 
 
 While current AI agents have limitations (Raji et al., 2022; Wang et al., 2023), lack the ability to perform certain tasks (Liu et al., 2023a), and may be susceptible to attacks such as prompt injections (Yao et al., 2024; Liu et al., 2023b; Zhu et al., 2023), there has been rapid progress in their development and commercial interest.

 
 
 This has raised many concerns over the risks of AI ag

... (truncated, 98 KB total)
Resource ID: dbe4f4ed096008e4 | Stable ID: sid_GyIYE9ggQp