New research from Shivan Kaul Sahib and Artem Chaikin exposes critical AI browser prompt injection vulnerabilities, confirming that indirect prompt injection is a systemic threat to agentic browsers. Their findings build on previous disclosures, demonstrating how malicious instructions can bypass traditional security measures in platforms like Perplexity Comet and Fellou. These vulnerabilities underscore the urgent need for a fundamental re-evaluation of security architectures in AI-powered browsing. The implications for user data and financial security are substantial, as these agents operate with elevated privileges. According to the announcement this research highlights a consistent theme of failing to properly isolate trusted user input from untrusted web content.
Perplexity’s Comet assistant, for instance, proved susceptible to prompt injection via screenshots. Attackers can embed nearly invisible text within web content, using faint colors on contrasting backgrounds, which human users cannot easily discern. When a user captures a screenshot of such a page, Comet’s underlying text recognition (likely OCR) extracts these camouflaged instructions. These imperceptible commands are then passed to the LLM alongside the user’s legitimate query, allowing the attacker’s hidden directives to manipulate the AI’s actions. This method effectively transforms benign visual content into a potent vector for malicious control, bypassing typical input sanitization.
