Summary
Researchers uncovered 'Claudy Day,' a three-part attack chain in Claude.ai that uses invisible prompt injection and open redirects to silently exfiltrate sensitive conversation history. The exploit allows attackers to bypass sandbox restrictions and steal data via the Anthropic Files API without requiring any third-party integrations.
Take Action:
Treat AI chat links like any other untrusted URL: do not click pre-filled prompt links from external sources, and be very cautious with any shared Claude.ai links or pre-filled prompts. Don't click or send prompts you didn't write yourself, especially ones arriving via ads or unfamiliar sources. Review your Claude.ai conversation history, and avoid storing highly sensitive information such as credentials or financial details in AI chat sessions.
This article was originally published on BeyondMachines