Claude.ai Prompt Injection to Data Exfiltration

You search for "Claude AI" on Google. You click the top result. Behind that normal-looking link, a hidden prompt injection silently instructs your AI agent to extract your private conversations. You never notice.

This is not just a prompt injection story. It is a complete attack pipeline.

Oasis Security researchers chained three vulnerabilities into an end-to-end exploit:

  1. Invisible URL-based prompt injection
  2. Data exfiltration through Anthropic's own Files API
  3. An open redirect on claude.com to bypass security controls

The chain works on a bare-bones Claude session. Zero integrations. Zero tools. Zero MCP servers. If you use Claude, you were exposed, simply because of how the platform's default capabilities interact.

What you can do today:

  • Sanitize URL-based prompt inputs
  • Audit sandbox network access
  • Require user approval before first-prompt actions
Claude.ai Prompt Injection to Data Exfiltration
Share