Back
EchoLeak exploit (CVE-2025-32711)
webunit42.paloaltonetworks.com·unit42.paloaltonetworks.com/agentic-ai-threats/
A Unit 42 security research disclosure detailing a concrete agentic AI exploit; highly relevant for practitioners building or auditing AI agent systems that interact with external tools and data sources.
Metadata
Importance: 62/100blog postanalysis
Summary
Unit 42 (Palo Alto Networks) analyzes EchoLeak (CVE-2025-32711), a vulnerability in agentic AI systems that allows adversarial prompt injection via tool/function calls and API integrations, enabling data exfiltration and unauthorized actions. The research demonstrates how multi-step AI agents can be compromised through malicious content in external data sources, highlighting systemic risks in agentic architectures. It serves as a concrete case study in real-world AI security vulnerabilities.
Key Points
- •CVE-2025-32711 (EchoLeak) exploits prompt injection in agentic AI pipelines where AI agents process untrusted external content via function calls and API integrations.
- •Attackers can embed malicious instructions in documents or web content that AI agents retrieve, causing the agent to exfiltrate data or perform unauthorized actions.
- •The vulnerability demonstrates how agentic systems that chain multiple tool calls are especially susceptible to indirect prompt injection attacks.
- •The research underscores that current AI agent frameworks lack robust input sanitization and trust boundary enforcement between internal and external data.
- •Mitigations include output filtering, strict tool-use policies, sandboxing agent actions, and treating all external content as untrusted input.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| Tool Use and Computer Use | Capability | 67.0 |
| Sandboxing / Containment | Approach | 91.0 |
| Tool-Use Restrictions | Approach | 91.0 |
Cached Content Preview
HTTP 200Fetched Apr 9, 202655 KB
AI Agents Are Here. So Are the Threats.
Threat Research Center
Threat Research
Malware
Malware
AI Agents Are Here. So Are the Threats.
21 min read
Related Products Prisma SASE Secure Access Service Edge (SASE) Unit 42 AI Security Assessment Unit 42 Incident Response
By: Jay Chen
Royce Lu
Published: May 1, 2025
Categories: Malware
Threat Research
Tags: Agentic AI
AI
BOLA
GenAI
Prompt injection
Share
Executive Summary
Agentic applications are programs that leverage AI agents — software designed to autonomously collect data and take actions toward specific objectives — to drive their functionality. As AI agents are becoming more widely adopted in real-world applications, understanding their security implications is critical. This article investigates ways attackers can target agentic applications, presenting nine concrete attack scenarios that result in outcomes such as information leakage, credential theft, tool exploitation and remote code execution.
To assess how widely applicable these risks are, we implemented two functionally identical applications using different open-source agent frameworks — CrewAI and AutoGen — and executed the same attacks on both. Our findings show that most vulnerabilities and attack vectors are largely framework-agnostic, arising from insecure design patterns, misconfigurations and unsafe tool integrations, rather than flaws in the frameworks themselves.
We also propose defense strategies for each attack scenario, analyzing their effectiveness and limitations. To support reproducibility and further research, we’ve open-sourced the source code and datasets on GitHub .
Key Findings
Prompt injection is not always necessary to compromise an AI agent. Poorly scoped or unsecured prompts can be exploited without explicit injections.
Mitigation : Enforce safeguards in agent instructions to explicitly block out-of-scope requests and extraction of instruction or tool schema.
Prompt injection remains one of the most potent and versatile attack vectors , capable of leaking data, misusing tools or subverting agent behavior.
Mitigation : Deploy content filters to detect and block prompt injection attempts at runtime.
Misconfigured or vulnerable tools significantly increase the attack surface and impact.
Mitigation : Sanitize all tool inputs, apply strict access controls and perform routine security testing, such as with Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST) or Software Composition Analysis (SCA).
Unsecured code interpreters expose agents to arbitrary code execution and unauthorized access to host resources and netwo
... (truncated, 55 KB total)Resource ID:
d6f4face14780e85 | Stable ID: sid_YUSrXbgWJ8