• StartupHub.ai
    StartupHub.aiAI Intelligence
Discover
  • Home
  • Search
  • Trending
  • News
Intelligence
  • Market Analysis
  • Comparison
  • Market Map
Workspace
  • Email Validator
  • Pricing
Company
  • About
  • Editorial
  • Terms
  • Privacy
  • v1.0.0
  1. Home
  2. News
  3. New Ai Browser Prompt Injection Attacks Revealed
Back to News
Ai research

New AI Browser Prompt Injection Attacks Revealed

S
StartupHub Team
Oct 21, 2025 at 10:17 PM3 min read
New AI Browser Prompt Injection Attacks Revealed

New research from Shivan Kaul Sahib and Artem Chaikin exposes critical AI browser prompt injection vulnerabilities, confirming that indirect prompt injection is a systemic threat to agentic browsers. Their findings build on previous disclosures, demonstrating how malicious instructions can bypass traditional security measures in platforms like Perplexity Comet and Fellou. These vulnerabilities underscore the urgent need for a fundamental re-evaluation of security architectures in AI-powered browsing. The implications for user data and financial security are substantial, as these agents operate with elevated privileges. According to the announcement this research highlights a consistent theme of failing to properly isolate trusted user input from untrusted web content.

Perplexity’s Comet assistant, for instance, proved susceptible to prompt injection via screenshots. Attackers can embed nearly invisible text within web content, using faint colors on contrasting backgrounds, which human users cannot easily discern. When a user captures a screenshot of such a page, Comet’s underlying text recognition (likely OCR) extracts these camouflaged instructions. These imperceptible commands are then passed to the LLM alongside the user’s legitimate query, allowing the attacker’s hidden directives to manipulate the AI’s actions. This method effectively transforms benign visual content into a potent vector for malicious control, bypassing typical input sanitization.

Another significant vulnerability was identified in the Fellou browser, where simply navigating to a website could trigger an AI browser prompt injection. While Fellou showed some resistance to hidden instructions, it still treats visible webpage content as implicitly trusted input for its LLM. An attacker can embed malicious, yet visible, instructions directly onto their website. When a user asks the AI assistant to visit this page, the browser automatically sends the website’s content to its LLM, even without an explicit summarization request. This allows the attacker’s webpage text to override or modify the user’s original intent, leading to unauthorized actions.

A Systemic Challenge for Agentic Browsers

These discoveries reinforce a critical point: agentic browsers fundamentally break long-standing web security assumptions. Traditional protections like the same-origin policy become irrelevant when an AI assistant, operating with the user’s authenticated privileges, can be instructed by untrusted webpage content. This means a seemingly innocuous Reddit comment or a malicious website could trigger cross-domain actions, potentially accessing sensitive accounts like banks, email providers, or corporate systems. The core issue remains a failure to establish clear boundaries between what the user intends and what untrusted web content dictates.

The consistent pattern across these AI browser prompt injection attacks points to a deep architectural flaw: the LLM’s inability to reliably distinguish between user-initiated commands and malicious external content. While this is a complex problem, the current approach of treating all input equally, especially when powerful browser tools are at the AI’s disposal, is inherently dangerous. Until categorical safety improvements are implemented across the agentic browser landscape, these platforms will remain high-risk. Immediate mitigations should include strict isolation of agentic browsing sessions and requiring explicit user invocation for any agentic action that involves opening websites or accessing sensitive data.

The ongoing research and public disclosures are vital for pushing the industry toward more secure agentic AI. Brave’s commitment to detailing its plans for secure agentic browsing, as mentioned in the source, indicates a growing recognition of these challenges. Ultimately, the future of AI-powered browsers hinges on developing robust security architectures that prioritize user safety and data integrity above all else, ensuring that convenience does not come at the cost of control.

#AI
#AI Agents
#ai security
#Artem Chaikin
#LLM
#Perplexity AI
#Research
#Shivan Kaul Sahib

AI Daily Digest

Get the most important AI news daily.

GoogleSequoiaOpenAIa16z
+40k readers